
Half of My Legal Writing Is Now Done by AI — and I’ve Redefined What “Legal Tools” Mean
In August, I wrote an article sharing my experiences with legal research. After finishing the first part, the second one was delayed for quite a while. The main reason was that I kept feeling my approach to using legal AI tools was almost the opposite of most users’, to the point where I doubted whether these experiences were even worth sharing.
This week, however, I started to sense what had been off. The problem, I realized, might lie in my misunderstanding of the functional roles of today’s different AI tools. So instead of focusing on research experiences, I’d like to organize and reflect on my small workflow for conducting legal research and drafting legal documents — and through that, think about what has truly changed for lawyers today.
Configuration and Experience Sharing
My most frequently used research setup consists of two large language model (LLM) chat windows and one legal AI product (currently, the one I use most often is PKULaw’s LüAiduo). The roles of these three components are as follows:
- LLM 1: The main force — used to construct the framework and draft the initial version of legal documents. This process feels somewhat like “vibe coding” for programmers: let the AI generate a basic document structure first, then keep what looks right, delete what doesn’t, input new prompts, and keep iterating until the document is complete.
- LLM 2: Used to sort out related, smaller, and more detailed issues that arise during research or drafting.
- Legal AI tools (LüAiduo, Yuandian Q&A, Xiaowei AI+, etc.): Used to verify content accuracy.

This somewhat elaborate setup corresponds to the three core stages of my work: drafting documents, organizing ideas, and fact-checking. Below, I’ll explain why this division of roles is necessary.
Why Do I Need Two Chat Windows to Complete the Task?
Quick Answer:
- In most practical legal documents, there’s an inherent position or stance, which tends to bias the large model toward giving answers favorable to my side. This is detrimental when dealing with contentious or uncertain issues. Therefore, whenever my reasoning gets tangled, I prefer to open a new chat window to discuss it separately — preventing confused thoughts from interfering with the main document.
- Moreover, AI still has a limited context length. When both the document and the disputed issues are handled in the same chat, the model’s performance declines significantly as the text grows.
Detailed Explanation:
Legal documents inherently carry a stance, which I believe is the biggest difference between workplace legal writing and academic papers or reports.
Why emphasize stance? Because one of the most criticized traits of large models is that they hallucinate and tend to please users, often giving answers that align with what the user seems to want. This is an objective property of the model — it can be both an advantage and a drawback.
Let’s look at both sides of this characteristic:
- The good side: A legal document needs to find subjectivity within objectivity — it must lean toward a particular position while remaining factual.
Most of the time, a lawyer’s job is to extract subjectivity from objectivity, and identify favorable arguments within disputes. If you’re lucky, you’ll find objective evidence (like precedents or statutes) that supports your stance — perfect! If not, you still need to construct arguments based on general principles to defend your side (>o<).
How do you explain it? How do you structure the explanation? Which words do you choose, and how do you use them? There’s a lot of room here for those so-called “hallucinations” to actually become creative reasoning.
- The bad side: When you’re still in the middle of figuring something out, and your conversation already assumes a stance, you’re very likely to be led astray.
This “bad” side becomes especially obvious in a specific situation: when I haven’t yet clarified the feasibility of an argument or issue, and I need to think it through. Because of my own uncertainty, if someone (or something) provides me with a seemingly reasonable direction and analysis, I’m easily swayed by that reasoning.
In such cases, I need a more neutral discussion to make sure my reasoning doesn’t drift off course.
That’s exactly why I need two chat windows for research. By moving uncertain topics into a separate chat, I ensure that the model isn’t influenced by the predetermined stance of my main document (for instance, if I’m representing the defendant, the model tends to answer in ways that favor the defendant).
I’ve tested this — when a stance is present vs. when it isn’t, the model’s responses are completely different.
What’s the Difference Between the “Thinking Window” and a Legal AI Tool? Can’t the Content in the “Thinking Window” Be Handled Directly by a Legal AI Product?
I regard the “Thinking Window” as a preprocessing stage for questions.
Preprocessing means that I don’t yet have a fully formed question — but I need to raise one.
The process of moving from a vague sense of “something’s there” to articulating a clear and structured question used to rely on wandering through materials, reading papers, and collecting references. Now, this stage can be accelerated with AI.
In my workflow of drafting legal documents, I generally need to sort through three types of thought processes:
1. Organizing thoughts – clarifying fragmented ideas and identifying the next direction for reasoning (finding the core legal issue);
2. Discussing controversial content – issues that might not have explicit legal authority but require careful phrasing or argumentative balance;
3. Preprocessing for legal research – extracting keywords and identifying substantive legal questions.
The third type is easy to understand: essentially asking the model to generate more search keywords for a given issue. For (1) and (2), I’ll give one example each:
Example 1: Organizing fragmented thoughts and identifying the next step of reasoning
I currently have a question about rent-free periods in a lease contract that I need to study.
Our side is the lessee. The contract states: “The lease term is ten years, from [date] to [date]. The rent-free period is three months; rent is calculated starting from March 16, 2024.” The lease was terminated on [date] in 2025. What I want to explore is whether we can argue that:
- According to the contract, rent should be calculated from March 16, 2024, and rent during the rent-free period is not payable;
- If rent for the rent-free period must be paid, what is the legal nature of such rent?
- If we have already paid rent for the rent-free period, do we still need to pay liquidated damages?
I would like you to help me clarify the core legal issues I need to focus on, based on my description.
Example 2: Discussing contentious issues
The primary function of a security deposit is to offset debts that have been confirmed by a judgment. Its legal nature is to provide security for contract performance. The “occupancy fee” claimed by the respondent, however, is a debt not yet adjudicated, and therefore cannot override the confirmed amount already paid by the appellant. First… Next…
(Details omitted here for brevity.)
I find both arguments enlightening, but my follow-up questions are these:
Does an unadjudicated occupancy fee truly fall outside the damages resulting from contract termination? From the respondent’s standpoint, how might they refute this position?
And how should we present this reasoning in the relevant section of our appeal brief?
What Does “Verification” Mean When Using a Legal AI Tool?
- When a legal issue is clearly defined, I directly use legal AI tools to search for supporting materials — such as statutes, cases, or scholarly articles.
- For uncertain parts of a large language model’s conclusion, I use legal AI to verify whether the conclusion is correct. For example, Gemini once gave me the following response:
The basic principle of law is that no party may use an unadjudicated, uncertain, or contingent debt to offset or withhold a payment that the other party has already made and that has a definite amount.
It sounded somewhat reasonable, but I wasn’t aware of any such “basic legal principle” (scratches head). In such cases, I use LüAiduo (律爱多) to run a search and translate the idea into established legal language, to see if I can find more precise or appropriate wording.
- When the issue involves specific legal provisions or case numbers, I go directly to check the statutes or case law.
*This is just one way I use LüAiduo in legal research. In fact, I also use it for other purposes, such as directly drafting procedural documents that don’t require much thought (e.g., an application for suspension of proceedings); it’s more specialized than general-purpose large models for such tasks. I’ll explore this part another time.
I Use Existing Legal AI Tools in a Very Traditional and Proper Way (LOL)
After organizing everything above, I’ve realized there’s nothing wrong with how I use legal AI; in fact, I use it in a very proper, almost old-fashioned way (?). The reason I once thought I was “using it backwards” is that my expectations of what a tool should do have fundamentally changed.
In the past, when lawyers used tools, their expectation was usually a single, clear one: to obtain information. We searched databases for cases, statutes, and articles using keywords.
In that sense, a legal research tool functioned like the librarian of an infinitely large library: it responded to instructions and brought the materials we might need to our desk. But after this “delivery,” it was entirely up to the lawyer to filter, extract, and structure those materials into logically sound and compliant legal documents. The endpoint of the tool was the starting point of our work.

However, with the rise of large language models, this has completely changed.
The technology we have now doesn’t just retrieve information — it can also analyze and generate text. This means the boundaries of what a tool can do have expanded from “providing raw materials” to “processing semi-finished or even finished products.” Once I realized this, it was only natural that I began to expect more from my tools — hoping they could take on a larger role in my workflow.
Because of this, I once found myself confused, feeling that I might be “using the tool the wrong way.” The root of that confusion was that I had subconsciously projected the powerful analytical and generative capabilities I saw in general-purpose large models onto AI-powered legal tools, expecting them to become an all-encompassing “do-it-all” assistant.
Now I understand that wasn’t the case: it wasn’t that I used them wrong, but that I misunderstood the division of labor between tools. The core function of LüAiduo (the legal AI I use most often) is still retrieval + analysis; it’s just a more responsive, more interactive, and more advanced retrieval system.
This intrinsic retrieval-based nature defines its core value: traceability. Every conclusion it gives must be accompanied by references to cases or statutes — a hallmark of rigor for any legal tool. However, in many practical scenarios — especially during the iterative and exploratory phase of drafting legal documents — what I need is not a constant fact-checker asking “where’s your authority,” but rather a clean, unconstrained creative partner who can freely generate and analyze text.
Therefore, the two core tasks I used to handle manually — analyzing information and drafting documents — are now the parts I prefer to delegate to large models. These days, a large model’s chat window has practically become my second Word panel; at least half of my legal documents are completed through conversations, revisions, and iterations with the model.
I no longer start from scratch, constructing sentences one by one. Instead, I work like a programmer doing vibe coding: First, I let the AI generate a basic framework or paragraph; then I review and filter it — keeping what’s right, deleting or fixing what’s wrong, and entering new instructions to rewrite. Through this process of repeated tuning, the document gradually gains structure and substance until it’s complete.
That’s all for today’s rambling thoughts. Just as I finished writing this, I saw that Wenma Laoshi published a new article — “Legal AI Technical Pathways and Training Data: Thoughts Triggered by Harvey and Legora” — highly recommended.
That’s it for now. Have a great weekend, everyone!