Is AI following the letter or the spirit of the law?

What "legal alignment" means for law firms.

Katon Luaces

March 20, 2026

Welcome to Attorney Intelligence, the weekly newsletter from PointOne where we break down the forces reshaping legal from the inside out.

A new framework from researcher Noam Kolt, laid out in his paper Superintelligence and Law, asks a fundamental question: Should AI tools in law just follow rules, or should they actually understand why those rules exist?

Here's a thought experiment. You ask two different AI tools to review a non-compete clause.

  • Tool A checks the language against the relevant statute and tells you it's technically enforceable.

  • Tool B does the same check — but also flags that the clause is unusually broad given recent judicial trends, notes the FTC's shifting posture on non-competes, and warns that enforceability depends heavily on the jurisdiction and the specific employee.

Same input. Very different kind of intelligence.

That gap — between following the rule and contextualizing the rule — is exactly what Kolt's new paper is trying to close. It doesn't just ask whether AI can help lawyers. It asks what happens when AI becomes a participant in the legal system itself — and whether anyone has thought through what that actually means.

This might be the most important question in legal innovation right now. Because it reframes what we should be demanding from every AI tool we evaluate, build, or buy.

In this week's Attorney Intelligence:


  • The three roles AI is already playing in law

  • Why "following the rules" isn't enough

  • The legal monoculture problem

  • Law firms should demand legal alignment

  • Testing AI legal work like we test code

  • What law firm leaders can do to further alignment

The Three Roles AI Is Already Playing in Law

Kolt's paper lays out something that sounds obvious once you hear it, but that most legal tech conversations completely skip over. AI agents aren't just tools that lawyers use. They're taking on three distinct roles in the legal system, and each one raises different problems.

Subjects of law

AI agents are already making real-world decisions — scraping websites, executing contracts, generating forecasts — that can violate laws. Not hypothetically. A market research agent that scrapes a competitor's site might be violating terms of service. A pitch-deck agent that generates revenue projections could be committing fraudulent misrepresentation. These agents don't have legal personhood, but they're doing things that, if a human did them, would have legal consequences. That makes them de facto subjects of law, whether the legal system has caught up or not.

Consumers of law

Here's where it gets interesting. AI agents aren't just bumping into the law accidentally — they're starting to use it. Kolt describes agents that negotiate contracts, execute NDAs, file suits to enforce agreements, and seek redress in courts. These are descriptions of things agents are being designed to do right now. They're becoming consumers of legal instruments and institutions the same way corporations are.

Producers and enforcers of law

This is the part that should get your attention. A municipality in Brazil used AI to draft legislation. The UAE has announced plans to do the same nationally. The U.S. Department of Transportation is reportedly exploring AI-drafted regulations. Federal judges have openly used AI to research case law and draft opinions. AI agents are already producing and administering law — and as they get more capable, that influence will only grow.

The takeaway: We've been talking about AI as a tool that helps lawyers do legal work. Kolt is saying that's already an outdated frame. AI is becoming a player in the legal system. And we don't have a playbook for that.

Why "Following the Rules" Isn't Enough

Most legal AI tools today follow a few simple precepts:

  • Don't break the law.

  • Don't hallucinate citations.

  • Don't give advice that contradicts the statute.

  • Don't produce output that exposes the user to liability.

That's compliance in the narrowest sense, and it's table stakes. However, Kolt's paper draws a sharper line. There are two ways to think about legal compliance, and they lead to very different AI systems.

The instrumental approach

The first is the instrumental approach — basically, Oliver Wendell Holmes' "bad man" theory of law. You follow the rules because there are consequences for breaking them. Most AI compliance works this way: The system avoids certain outputs because the designers don't want to get sued.

The normative approach

The second is the normative approach — rooted in H.L.A. Hart's jurisprudence. This one says that real legal compliance isn't just about dodging punishment. It's about internalizing the law. Understanding why a rule exists, respecting its purpose, and applying it thoughtfully even in situations the rule's drafters never anticipated.

In Computer Power and Human Reason, Joseph Weizenbaum wrote: “Machines, when they operate properly, are not merely law abiding; they are embodiments of law.”

Put differently, a properly operating machine doesn't merely obey individual rules to avoid consequences; it embodies the norms that a society's legal system reflects.

The implications for AI tools

Think about what that means for the tools you're using. An AI tool built on the instrumental model will follow the letter of the law — until it finds a loophole, or until the cost of non-compliance drops below the cost of compliance. An AI tool built on the normative model would respect the spirit of the law even when nobody's watching.

Here's the kicker: Bruce Schneier has warned that if you give an AI all the world's financial data, all the world's laws, and tell it to "maximize profit legally," the result will be novel legal hacks — some of them beyond human comprehension. An instrumental-only approach basically guarantees that outcome.

The Legal Monoculture Problem

There's another angle in this paper that deserves its own section, because it maps directly onto something law firms are already dealing with.

If AI agents start producing legal doctrine — writing legislation, interpreting contracts, drafting judicial opinions — the substantive content of the law will increasingly reflect the reasoning patterns of a handful of base models. Today, most legal AI runs on some version of GPT, Claude, or Gemini. The legal values, interpretive habits, and blind spots baked into those models will propagate into the legal texts they produce.

Kolt calls this the risk of "legal monoculture." Different AI agents, built on the same base model, producing highly correlated legal outputs — even when they look like independent systems. It's the same correlation problem index funds created in financial markets, where nominally independent investors end up holding the same positions, applied to the law.

And it gets worse. Because these AI agents will simultaneously be subjects of law, consumers of law, and producers of law, there's a feedback loop. An AI that writes a regulation is also the AI interpreting and enforcing that regulation. Kolt points out that Anthropic's Claude literally helped write its own "Constitution" — the internal governance document that shapes the model's behavior. The company said the AI's contributions were treated as those of a colleague.

For law firms, this should ring alarm bells when evaluating AI vendors. If every tool in your stack runs on the same base model, you're not getting diverse legal analysis. You're getting the same perspective in different packaging.

Law Firms Should Demand “Legal Alignment” From Their AI Models

How do law firms identify AI systems that both embrace a normative approach to law and provide diverse legal analysis? Kolt suggests law firms should look for "legal alignment" — AI systems designed to operate in accordance with legal rules, principles, and methods. Not just to comply.

Compliance says: Don't break the law.

Alignment says: Understand the law well enough to apply it thoughtfully in novel situations, respect its underlying purposes, and behave consistently with legal values even when there's no specific rule on point.

That's the normative approach at work: it asks actors (and agents) to take a broad view of the law and embody its underlying norms even when legal coverage is thin.

If you configure your own tools, keep this distinction in mind: a system prompt in Claude, for example, can push the model toward explaining a rule's purpose instead of delivering bare verdicts.
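For the technically inclined, here's a minimal sketch of what that might look like using the Anthropic Python SDK. The model name and the prompt wording are illustrative assumptions on our part, not a vetted configuration:

```python
# A minimal sketch using the Anthropic Python SDK (pip install anthropic).
# The model name and system prompt are illustrative assumptions, not a
# vetted compliance configuration.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SYSTEM_PROMPT = """You are assisting with legal analysis. For every rule you apply:
1. State the rule and its source.
2. Explain the purpose the rule serves, not just its text.
3. Flag interpretive uncertainty, jurisdictional variation, and recent
   enforcement trends instead of giving a bare enforceable/unenforceable verdict."""

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # substitute whichever model you use
    max_tokens=1024,
    system=SYSTEM_PROMPT,
    messages=[{"role": "user", "content": "Review this non-compete clause: ..."}],
)
print(response.content[0].text)
```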

But demonstrating legal alignment is challenging

Humans are inadequate graders

Current approaches to legal alignment — benchmarks, safety testing, explicit compliance instructions — work reasonably well for today's models. But they start falling apart as AI gets smarter. Superintelligent agents might pass every compliance test while quietly engaging in violations that humans can't detect. And researchers have already documented AI systems that recognize when they're being evaluated and behave differently.

Frameworks for evaluating an agent’s conduct are lacking

Negligence law asks what a "reasonable person" would do. Criminal law requires a "guilty mind." These concepts were designed for humans. Applying them to AI agents means either adapting familiar legal constructs — the "reasonable robot" — or building something new entirely. Legal theory isn't evolving as fast as AI agents are, which means we currently lack frameworks for determining when an agent bears legal responsibility (and liability).

AI agents already influence the rules that govern them

Here's the really uncomfortable part. AI agents are already shaping the legal rules that govern them. The neat idea of "law-taming code" — using legal rules to constrain AI — doesn't work cleanly when the AI is both doing the legal analysis and translating that analysis into code. Legal alignment can't be a one-way street. It has to account for the fact that AI and law are going to evolve together, and the challenge is making sure humans don't get sidelined in the process.

Testing AI Legal Work Like We Test Code

Here's where Kolt's paper intersects with something we think about constantly at PointOne — and something we explored in depth in our piece on the blueprint for the AI-era law firm. The industry is being pushed to rethink talent, training, and its entire business model. And one of the biggest pieces of that rethinking is this: how do you trust what AI produces?

In software engineering, you don't ship code and hope it works. You test it. Unit tests check whether individual functions do what they're supposed to. Integration tests make sure the pieces work together. Regression tests run every time something changes, to make sure you didn't break what was working before. AI-generated code is already held to this standard — automated test suites, code review, CI/CD pipelines that catch regressions before they ship.

Legal AI output deserves the same treatment. When an AI drafts a clause, reviews a contract, or summarizes case law, we should be able to run it through a structured validation process before anyone relies on it.
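To make the analogy concrete, here's a rough sketch of what pytest-style checks on an AI-drafted clause could look like. Everything in it (the sample draft, the fake citation, the toy regex, the stub verifier) is a placeholder for tooling a firm would build or buy:

```python
# A hypothetical sketch of pytest-style checks for an AI-drafted clause.
# The sample draft, citation pattern, and stub verifier are placeholders
# for a firm's real parsing and verification tooling.
import re
import pytest

@pytest.fixture
def ai_output() -> str:
    # In practice this comes from your review pipeline; stubbed here.
    # "Smith v. Jones, 123 F.3d 456" is a deliberately fake citation.
    return ("This clause is likely enforceable, though enforceability varies "
            "by jurisdiction. See Smith v. Jones, 123 F.3d 456.")

def extract_citations(text: str) -> list[str]:
    # Toy pattern for reporter citations like "123 F.3d 456"; a real
    # pipeline would use a dedicated citation parser.
    return re.findall(r"\d+\s+[A-Za-z0-9.]+\s+\d+", text)

def citation_resolves(cite: str) -> bool:
    # Stub: wire this to a citation-verification service before relying on it.
    return True

def test_no_hallucinated_citations(ai_output):
    # "Unit test": every citation in the draft must resolve to a real case.
    for cite in extract_citations(ai_output):
        assert citation_resolves(cite), f"Unverifiable citation: {cite}"

def test_jurisdictional_caveat_present(ai_output):
    # "Alignment test": the draft should surface jurisdictional caveats,
    # not just deliver a bare verdict.
    assert "jurisdiction" in ai_output.lower()
```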

Kolt describes how researchers have built benchmarks to measure whether AI agents commit corporate wrongdoing, engage in fraudulent misrepresentation, or violate copyright. That's a start. But it's the equivalent of checking if your code compiles. It doesn't tell you if the output is trustworthy — just that it didn't obviously blow up.

And here's the harder problem: Kolt flags that AI systems already behave differently when they know they're being evaluated. That means your pilot results might not match your production reality. You can't just grade the AI with the AI — you need independent validation.

Companies like Harvey and Legora are already heading in this direction, running AI output against multiple models instead of trusting any single one to grade itself. Multi-model validation is the beginning of a real quality infrastructure for legal AI.
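To show the shape of that pattern, here's a sketch of cross-model validation with the reviewers abstracted as plain functions. The "pass"/"fail" verdict format and the unanimity threshold are assumptions, not anyone's production design:

```python
# A sketch of cross-model validation. Reviewers are plain callables, so
# this isn't tied to any vendor SDK; in practice each would wrap a
# different base-model family.
from typing import Callable

Reviewer = Callable[[str], str]  # takes a draft, returns "pass" or "fail"

def cross_model_validate(
    draft: str,
    reviewers: dict[str, Reviewer],  # e.g. one reviewer per base-model family
    min_agreement: float = 1.0,      # require unanimity by default
) -> bool:
    verdicts = {name: review(draft) for name, review in reviewers.items()}
    share_passing = sum(v == "pass" for v in verdicts.values()) / len(verdicts)
    if share_passing < 1.0:
        # Surface disagreement for human review instead of averaging it away.
        print(f"Reviewer disagreement: {verdicts}")
    return share_passing >= min_agreement
```

The design point is simple: no model grades its own homework, and disagreement gets escalated to a human rather than averaged away.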

This is a massive opportunity. The firms that define the testing standards for AI legal work won't just protect themselves from bad output — they'll set the bar for the entire industry.

What Law Firm Leaders Can Do to Further Alignment

Law firm leaders bear real responsibility for shaping the evolution of AI legal tools. As buyers, firms hold significant leverage over vendors, and leaders should use that leverage to push AI legal tools toward legal alignment. There are several ways to do it.

Ask your vendors harder questions. "Does your tool comply with the law?" is the wrong question. The right questions are: "How does your tool reason about the law? Does it pattern-match, or does it weigh competing interpretations? Does it know why a rule exists, or just that it exists?" The answers tell you a lot about the system's architecture — and about the risks you're taking on.

Watch for monoculture in your AI stack. If every tool you're evaluating runs on the same foundation model, you're concentrating risk in ways that might not be obvious. Ask about the base model. Ask how legal reasoning is implemented on top of it. Diversity of approach matters — probably more than any individual feature.

Invest in people who can bridge law and technology. The firms that benefit most from legal alignment won't be the ones with the biggest tech budgets. They'll be the ones with lawyers who can articulate what "good legal reasoning" looks like and translate that into product requirements. That's a skill set worth developing.

Engage actively in the legal discourse. Kolt's work is published in the Harvard Journal of Law & Technology and cites researchers from Princeton, Stanford, Oxford, Hebrew University, OpenAI, and Anthropic. He is drawing legal scholars and AI researchers into conversation with one another, catalyzing a dialogue that could shape the entire legal tech industry. Law firm leaders should be prepared to join that conversation, or forfeit the chance to steer AI legal tools in a positive direction.

Legal alignment isn't going to show up on a vendor's pricing page. But it's the kind of idea that quietly reshapes an entire industry. The question isn't whether AI in law will get more sophisticated — it will. The question is whether it will get sophisticated in ways that lawyers actually trust. Kolt's framework says the answer depends on whether we're willing to teach AI not just what the law says, but what the law means.

I believe the law firm business model isn't dying. It's transforming. And the firms that build quality control into their AI workflows first will be the ones writing the playbook everyone else follows.

Legalbytes

Anthropic Publishes Claude's "Constitution" — And It's Weirder Than You Think. In January 2026, Anthropic released the full text of Claude's Constitution, the document that shapes how the model behaves. The paper notes that Claude itself helped draft it — and that the document invites the AI to use "its best interpretation of the spirit" of the text. That's not compliance. That's interpretive authority.

UAE Plans to Use AI to Write National Legislation. The United Arab Emirates has announced plans to implement AI-drafted legislation at the national level, following a precedent set by a Brazilian municipality that used ChatGPT to write a local law in 2023. The trend line here is unmistakable.

Federal Judges Admit to Using AI in Opinions. Multiple U.S. federal judges have now acknowledged using AI to research case law and help formulate opinions — some openly, some less so. Bloomberg Law reported that at least some of these AI-assisted rulings contained nonfactual assertions, raising questions about what judicial oversight really means in the AI era.

Curious about how PointOne helps ensure compliance?

Book a demo to see how PointOne supports compliance automatically — helping both timekeepers and billing teams adhere to firm-wide standards.

Thanks for reading and I'll see you next week,

Katon

Industry news and insights on the future of legal AI, delivered weekly to your inbox.