AI Product Design · Legaltech
Link AI lets legal teams ask natural-language questions across their entire contract repository. I designed the end-to-end experience as the sole designer, from research through interactive prototyping.
01 – The Problem
Legal professionals spend hours manually searching through hundreds of contracts to answer questions like "Which vendors allow us to use their logo?" LinkSquares built an AI that could read, reason, and answer these questions across thousands of contracts.
My job was to design the interface. The core challenge wasn't the AI – it was trust. Legal professionals make high-stakes decisions. If they can't verify how the AI reached its conclusion, they won't use it.
02 – Understanding the Users
I mapped how different user types relate to AI transparency. For some users, showing the AI's reasoning builds trust. For others, it's noise that erodes confidence. Legal professionals land in the top half – they want to understand. But even within legal teams, an in-house counsel wants plain-language summaries while a legal engineer wants sources and reasoning.
This framework shaped every transparency decision in the product. It also surfaced a key research finding: users didn't just want to see results – they wanted to interrogate the AI's reasoning. And the system already had semantic matching capabilities that users couldn't see, creating a gap between what the product could do and what users believed it was doing.
03 – The Response Model
I identified 8 distinct intent categories for how legal professionals use AI chat. The key finding: 56% of queries fall into "exhaustive retrieval," but users don't actually want lists โ they need comprehensive data for downstream workflows like compliance audits and report generation.
Engineering's working name for the feature was "creating a prompt-based value." I reframed it in language users would understand: basic search and deep analysis. These names gave users a mental model for what the AI was doing and how long it would take.
Within deep analysis, I identified a further branching the team hadn't considered: two fundamentally different intents hiding inside what looks like the same question.
04 – Streaming, Not Gating
The original engineering flow required users to jump through several gates before seeing any results: review 5 example extractions, confirm accuracy with yes/no for each, edit the AI instructions, then click "Create Value" – which would run the full analysis with no way to stop it.
I flipped the flow. Skip the gates. Show the AI instructions if users want to edit them, but don't make it a required step. Instead, go straight into streaming real results – not previews, not examples, actual contract data filling in row by row. Users can see what the AI is producing and stop at any point if it's not right.
The streaming table syncs with a thought process component in the chat: the "n of m" counter matches the filled rows. A stop button preserves partial results. Error states show which step failed with retry that only re-runs the failed step. The split-view layout gives the table 60% width while chat stays accessible on the left.
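The sync behavior is simple enough to sketch. Here is a minimal, hypothetical model of the streaming-table state – none of these names come from the actual codebase – showing why the "n of m" counter can never drift from the filled rows, why stop preserves partial results, and why retry touches only the failed rows:

```typescript
// Hypothetical sketch, not LinkSquares' implementation.
type RowState =
  | { status: "pending" }
  | { status: "done"; value: string }
  | { status: "error"; step: string };

interface StreamState {
  rows: RowState[];
  stopped: boolean;
}

// The "n of m" counter in the chat's thought-process component is
// derived from the same rows the table renders, so the two views
// cannot disagree.
function counter(state: StreamState): string {
  const n = state.rows.filter(r => r.status === "done").length;
  return `${n} of ${state.rows.length}`;
}

// Stop halts streaming but keeps every row that already arrived.
function stop(state: StreamState): StreamState {
  return { ...state, stopped: true };
}

// Retry re-queues only the rows whose step failed; completed rows
// are left untouched instead of re-running the whole analysis.
function retryFailed(state: StreamState): StreamState {
  return {
    ...state,
    rows: state.rows.map(r =>
      r.status === "error" ? { status: "pending" as const } : r
    ),
  };
}
```

Deriving the counter from row state, rather than incrementing it separately, is what keeps the chat component and the table in lockstep.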
05 – From Conversation to Artifact
In research, every participant wanted save – but each had a different mental model. One expected chat history. Another expected results for reuse. A third expected a file download.
I redesigned the save flow: one-click save from the artifact card, no naming step, auto-generated title. The card updates from "Save" to "View in Library." Editing happens on the saved page, never in chat.
06 – Naming the Thing
The naming evolved multiple times. Each rename reflected a real shift in product thinking, not just terminology.
"Prompts" tested poorly – users didn't connect the word with what they were getting. I was heading toward "AI Value" because it describes what the saved output actually is: a persistent, org-wide data field that the AI extracts from every contract going forward.
To bridge the conceptual gap, I designed a one-time explanatory modal for the first save. Instead of assuming users would understand, the modal explicitly states: "Transform this one-time analysis into a persistent data field that runs across all your contracts." It highlights two key behaviors – the value becomes available org-wide for filtering and reporting, and it automatically runs on new contracts as they're uploaded.
07 – Design Principles
Show the work. Transparency isn't a feature – it's a requirement for trust. But calibrate it to the user. Too much reasoning for an outcome-first user erodes confidence.
Let users control scope. A confirmation step isn't friction – it's respect for the user's time.
Make uncertainty visible. The AI sounds confident whether it's 95% sure or 40% sure. The UI must communicate what the AI can't.
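One way to make that calibration concrete: map the model's numeric confidence to explicit UI language. This is an illustrative sketch only – the thresholds and labels are assumptions, not values from the product:

```typescript
// Hypothetical confidence-to-label mapping; thresholds are
// illustrative, not taken from the case study.
function confidenceLabel(score: number): string {
  if (score >= 0.9) return "High confidence";
  if (score >= 0.6) return "Needs review";
  return "Low confidence – verify sources";
}
```

The point is that the UI, not the AI's fluent prose, carries the uncertainty signal: a 40%-sure answer gets visibly different treatment from a 95%-sure one.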
Separate conversation from configuration. Chat is where analysis is born. The saved page is where it lives.
Stream, don't gate. Show real results and let users course-correct, rather than making them prove they understand before they can use the tool.