LinkSquares · AI
Legal teams dug through contracts one by one to find answers. I led design on a new AI feature inside LinkSquares: one that reads through hundreds of contracts, answers natural-language questions, and sources every claim back to the exact clause.
01 · Overview
Legal teams used to read contracts one at a time to answer a single question. Link AI lets them ask once and get the answer, with the exact clause shown. I led design end-to-end, from reframing a vague brief, through reworking what engineering had built, to a redesign that shipped to production.
02 · Problem
I was asked to design a feature called "Custom AI Values," a way for LinkSquares' AI to extract answers from contracts that didn't exist as structured data. The brief was vague. The team knew it involved AI extraction and that the results would become searchable, but nobody could articulate what the top use case actually was.
03 · Discovery
Without a clear use case, engineering built around their own guess: users would want to save custom AI values as permanent searchable fields. The version they built put saving first. Six steps stood between a user and an answer, all before a single result hit the screen: ask, search, confirm extraction, edit instructions, review five previews, give feedback on each.
I ran a demo of the POC with our internal legal team. The tool had plenty of usability issues, but buried in that meeting was the moment that changed everything: someone on the legal team asked about logo usage permissions across their contracts, and their frustration with the multi-step flow was immediate.
That reframed the whole problem. Engineering had designed for saving. Users needed the answer first. The save could come after, if they even wanted it.
04 · Process
My audit of the existing build surfaced three changes worth making.
The team had invested months in that build, and scrapping it would slip the AI roadmap. So the redesign had to keep the backend and only change what users saw. I mapped what was movable with the lead engineer and data scientist.
Two changes drove the most discussion. We cut the "preview five examples" step: the data scientist confirmed the model could classify scope on its own, which made the gate redundant. And we moved from batch results to streaming with a stop button, which addressed engineering's cost concern and gave users a sense of progress.
Answer first, save later. Flipped the flow so users see results before being asked to configure anything.
Cut the 5-preview review step. The model could handle scope on its own; the user shouldn't have to.
Streaming results with a stop button. Users see progress as it runs; engineering keeps its cost control.
05 · Outcome
With the redesigned prototype, I ran usability tests with three external legal professionals. The scenario: marketing pings legal because the website still shows logos from companies that are no longer customers. Legal needs to know which logos they can use, by end of day.
The good news: the renamed two-mode search landed. All three participants understood the distinction between "basic search" and "deep analysis" immediately, and saw the value in having both options.
But I also learned things I wasn't expecting.
06 · Reflection
From the usability findings, I designed a second iteration. I added a thought process component showing what the AI searched for in plain language, renamed "save prompt" to "save AI Value" with a first-time explainer modal, and pushed the PM to prioritize clause-level source linking, which all three participants had asked for independently. I also designed the library screens with a live table of results.
The biggest takeaway: a vague brief is a design opportunity, not a blocker. The team had built a POC around an unverified assumption, and one internal demo flipped the framing of the entire problem. I'd take the same approach into any AI product: pressure-test what users actually need from the system before designing around what the system can do.