
Colleen Dailey / Link AI

AI Product Design · Legaltech

Designing AI that legal professionals actually trust

Link AI lets legal teams ask natural-language questions across their entire contract repository. I designed the end-to-end experience as the sole designer, from research through interactive prototyping.

"Disguised as something simple, it requires the skill of taking something impossibly complicated under the hood and making it extremely obvious to the person using it."
โ€” Jenny Wen
See it in action
Explore the full deep analysis flow — streaming results, save, and re-run
Role
Senior Product Designer
Duration
2025–2026
Scope
Research, Interaction Design, Prototyping, Design System

01 — The Problem

Legal teams don't trust black boxes

Legal professionals spend hours manually searching through hundreds of contracts to answer questions like "Which vendors allow us to use their logo?" LinkSquares built an AI that could read, reason, and answer these questions across thousands of contracts.

My job was to design the interface. The core challenge wasn't the AI — it was trust. Legal professionals make high-stakes decisions. If they can't verify how the AI reached its conclusion, they won't use it.

2–3: trust rating out of 5, from user testing
100%: of participants cross-referenced AI output against the table
3: different mental models for what "save" means

02 — Understanding the Users

Transparency isn't universally good

I mapped how different user types relate to AI transparency. For some users, showing the AI's reasoning builds trust. For others, it's noise that erodes confidence. Legal professionals land in the top half — they want to understand. But even within legal teams, an in-house counsel wants plain-language summaries while a legal engineer wants sources and reasoning.

AI transparency needs by user type (2×2 quadrant: vertical axis from "wants to understand" to "just wants the answer"; horizontal axis from non-technical to technical)

Curious non-expert (non-technical, wants to understand)
Wants explanation but not raw mechanics. Benefits from plain-language summaries of reasoning.
e.g. in-house counsel, compliance manager

Developer / power user (technical, wants to understand)
Wants to see reasoning, sources, and confidence. Will interrogate the output.
e.g. in-house legal eng, IT admin, data analyst

Outcome-first user (non-technical, just wants the answer)
Just wants the answer. AI process is noise. Transparency can erode rather than build trust.
e.g. executive, business owner, occasional user

Efficient expert (technical, just wants the answer)
Trusts the AI, wants clean answers fast. Will dig in only when something looks off.
e.g. paralegal, ops lead, contract specialist

This framework shaped every transparency decision in the product. It also surfaced a key research finding: users didn't just want to see results — they wanted to interrogate the AI's reasoning. And the system already had semantic matching capabilities that users couldn't see, creating a gap between what the product could do and what users believed it was doing.

Research finding
Users want to ask the AI "why"
Participants wanted to understand why the AI included specific contracts and how it classified them — not just see the output.
"Is there a way to ask why it pulled it, or what stood out about that contract?" — P3

Research finding
The system's smarts were invisible
Users worried about literal keyword matching. The system already does semantic matching during deep analysis, but users couldn't see it.
"My company uses 'branding guidelines' and 'trademarks.' We don't use references to logos." — P3

03 — The Response Model

Giving the AI human language

I identified 8 distinct intent categories for how legal professionals use AI chat. The key finding: 56% of queries fall into "exhaustive retrieval," but users don't actually want lists — they need comprehensive data for downstream workflows like compliance audits and report generation.

Engineering called the feature "creating a prompt-based value." I reframed it into language users would understand: basic search and deep analysis. These names gave users a mental model for what the AI was doing and how long it would take.

Basic search
Fast, metadata-driven. Returns results from already-extracted fields. Answers in seconds.
Deep analysis
Reads every matching document. AI reasons through each contract to classify information. Takes minutes, with a confirmation step first.
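The two modes boil down to a routing decision the UI makes before anything runs. A minimal sketch of that decision in TypeScript, with hypothetical names (nothing here is the product's real API):

```typescript
// Illustrative sketch of the two response modes. All names are
// hypothetical; the real Link AI services are not public.

type Mode = "basic" | "deep";

interface RoutedQuery {
  mode: Mode;
  // Deep analysis reads every matching document, so it asks for
  // confirmation before spending minutes of compute.
  requiresConfirmation: boolean;
  expectedLatency: "seconds" | "minutes";
}

// Basic search answers from already-extracted metadata fields;
// deep analysis is needed when the answer requires reading clauses
// that were never extracted as structured fields.
function routeQuery(answerableFromMetadata: boolean): RoutedQuery {
  if (answerableFromMetadata) {
    return { mode: "basic", requiresConfirmation: false, expectedLatency: "seconds" };
  }
  return { mode: "deep", requiresConfirmation: true, expectedLatency: "minutes" };
}
```

The point of the split is that the mode names carry the latency expectation: a user who picks "deep analysis" has already been told it reads everything and takes minutes.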

Within deep analysis, I identified a further branch the team hadn't considered: two fundamentally different intents hiding inside what looks like the same question.

"Show me contracts with this clause"
User only wants matches. No blank rows, no "not specified." If a contract doesn't have the clause, don't show it.
"Show me all contracts, with or without"
User wants the full picture — every vendor, including those without. Blank values are expected and useful here.
Design decision
"Blank rows aren't a bug — they're the wrong intent. When users ask for matches and get blanks, trust drops. When users ask for a full audit, blanks are the answer. Distinguishing these two intents solved the 'not specified' problem from user testing."
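In implementation terms, the two intents differ only in how result rows are filtered before display. A small sketch, with illustrative types rather than the product's real data model:

```typescript
// Illustrative: the same deep-analysis output, presented two ways
// depending on user intent. Types are hypothetical.

interface Row {
  contract: string;
  clauseValue: string | null; // null = clause not found ("not specified")
}

type Intent = "matches-only" | "full-audit";

// "Show me contracts with this clause" -> drop blank rows entirely.
// "Show me all contracts, with or without" -> blanks are the answer.
function presentRows(rows: Row[], intent: Intent): Row[] {
  if (intent === "matches-only") {
    return rows.filter((r) => r.clauseValue !== null);
  }
  return rows; // full audit: keep every contract, blanks included
}
```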

04 — Streaming, Not Gating

Show real results, not previews

The original engineering flow required users to jump through several gates before seeing any results: review 5 example extractions, confirm accuracy with yes/no for each, edit the AI instructions, then click "Create Value" — which would run the full analysis with no way to stop it.

Original flow (engineering POC)
Review 5 examples → Rate each yes/no → Edit AI instructions → Create Value → Wait (no stop button)
My redesign
Confirm scope → Results stream in → Stop anytime → Edit instructions if needed

I flipped the flow. Skip the gates. Show the AI instructions if users want to edit them, but don't make it a required step. Instead, go straight into streaming real results — not previews, not examples, actual contract data filling in row by row. Users can see what the AI is producing and stop at any point if it's not right.

The streaming table syncs with a thought process component in the chat: the "n of m" counter matches the filled rows. A stop button preserves partial results. Error states show which step failed with retry that only re-runs the failed step. The split-view layout gives the table 60% width while chat stays accessible on the left.
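The stream-and-stop behavior reduces to one loop: append each row as it arrives, report progress so the chat counter stays in sync with the table, and return whatever has accumulated if the user stops. A minimal async sketch with illustrative names:

```typescript
// Illustrative streaming loop: real rows fill the table one by one,
// and stopping preserves partial results instead of discarding them.

interface ResultRow {
  contract: string;
  value: string;
}

async function streamAnalysis(
  source: AsyncIterable<ResultRow>,
  total: number,
  onProgress: (filled: number, total: number) => void, // drives "n of m"
  isStopped: () => boolean, // wired to the stop button
): Promise<ResultRow[]> {
  const rows: ResultRow[] = [];
  for await (const row of source) {
    if (isStopped()) break; // stop keeps what we already have
    rows.push(row);
    onProgress(rows.length, total); // counter matches filled rows
  }
  return rows; // partial or complete, either way real data
}
```

Because the same `rows` array backs both the table and the return value, the stop button needs no special "save partial results" path; breaking out of the loop is enough.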

Design decision
"Don't make users prove they understand the AI before they can use it. Show them real results and let them course-correct. The stop button is their safety net, not a pre-flight checklist."

05 — From Conversation to Artifact

Three users, three different expectations for "save"

In research, every participant wanted save — but each had a different mental model. One expected chat history. Another expected results for reuse. A third expected a file download.

Key finding
Users are worried about losing work, not accidental clicks
The dev POC had an "Are you sure?" confirmation before updating. Users didn't care about that — they cared about whether editing would overwrite their previous results.
"I will actually lose the first set of results. I'm not quite sure how I feel about that." — P1

I redesigned the save flow: one-click save from the artifact card, no naming step, auto-generated title. The card updates from "Save" to "View in Library." Editing happens on the saved page, never in chat.

Before — Dev POC
Link AI: "Found Logo Usage Permissions in 5 documents. Review each result below:"
MSA_Acme_2023.pdf: "Licensee may display Licensor's logo solely for co-marketing..." [Yes / No / Prompt]
Vendor_NDA_GlobalCo.pdf: "No party shall use the other's trademarks without prior written..." [Yes / No / Prompt]
SLA_TechPartner_Q2.pdf: No relevant language found in this document. [Yes / No / Prompt]
Enterprise_Renewal_2024.pdf: "Client grants Company a limited license to use Client's name and logo..." [Yes / No / Prompt]
+ 11 more documents pending review... (4 of 15)
[Activate Value] [Run 5 More] [Re-analyze]

After — Final Design
Logo Usage Permissions
Deep Analysis · View in Library
Design philosophy
"Chat is for conversation. The saved page is for configuration. If you let people edit analysis instructions in the chat, you're turning a conversational interface into a configuration tool. Those are two completely different user mindsets."

06 — Naming the Thing

What are users actually saving?

The naming evolved multiple times. Each rename reflected a real shift in product thinking, not just terminology.

Reports (implied static documents) → Smart Values (too abstract) → Prompts (tested poorly) → AI Values (current direction)

"Prompts" tested poorly — users didn't connect the word with what they were getting. I was heading toward "AI Value" because it describes what the saved output actually is: a persistent, org-wide data field that the AI extracts from every contract going forward.

To bridge the conceptual gap, I designed a one-time explanatory modal for the first save. Instead of assuming users would understand, the modal explicitly states: "Transform this one-time analysis into a persistent data field that runs across all your contracts." It highlights two key behaviors — the value becomes available org-wide for filtering and reporting, and it automatically runs on new contracts as they're uploaded.
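A one-time modal like this is a single persisted flag. A sketch assuming a generic per-user flag store (the interface and key name are hypothetical, not the product's storage layer):

```typescript
// Illustrative one-time modal gate: the explanation shows on the
// user's first save, then never again.

interface FlagStore {
  get(key: string): boolean;
  set(key: string): void;
}

const SEEN_KEY = "aiValue.firstSaveExplainerSeen"; // hypothetical key

// Returns true exactly once per store: the first save opens the
// "what is an AI Value?" modal; subsequent saves skip straight through.
function shouldShowFirstSaveModal(store: FlagStore): boolean {
  if (store.get(SEEN_KEY)) return false;
  store.set(SEEN_KEY);
  return true;
}
```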

Design decision
"Users don't need to learn a new concept to use save. They just need to understand what happens when they click it. The modal does that job once, then gets out of the way."

07 — Design Principles

What I took away from designing for AI

1. Show the work. Transparency isn't a feature — it's a requirement for trust. But calibrate it to the user. Too much reasoning for an outcome-first user erodes confidence.

2. Let users control scope. A confirmation step isn't friction — it's respect for the user's time.

3. Make uncertainty visible. The AI sounds confident whether it's 95% sure or 40% sure. The UI must communicate what the AI can't.

4. Separate conversation from configuration. Chat is where analysis is born. The saved page is where it lives.

5. Stream, don't gate. Show real results and let users course-correct, rather than making them prove they understand before they can use the tool.