The AI Adoption Illusion
How vanity metrics are distracting firms from measuring what really matters


Adrian Parlow
August 21, 2025
Welcome to Attorney Intelligence, where we break down the biggest advancements in AI for legal professionals every week.
Read any Am Law 100 press release and you'll see impressive statistics. “We bought 500 Harvey licenses.” “80% of associates use AI weekly.”
But there's a problem lurking beneath these numbers: firms are measuring usage, not impact.
When I ask innovation leaders how they measure the impact of their AI investments, they almost always point to usage metrics. But usage and impact are not the same thing. And until firms start measuring what actually matters, they're not just wasting money - they're missing the transformation entirely.
In this week's Attorney Intelligence, we'll explore:
Why current AI metrics create an illusion of progress
What lawyers are actually doing with these expensive tools
The two questions that would change everything
How to measure transformation, not just adoption
Let's get into it.
The Current State: Measuring Usage, Not Impact
The pattern is remarkably consistent across Big Law. Innovation teams proudly share their metrics in press releases and at conferences: number of licenses purchased, weekly login rates, monthly active users. They create dashboards showing usage trends heading up and to the right.
There are two core problems with this approach.
First, the bar for “adoption” is way too low. In many firms, an “active user” is someone who logs into the tool once per week. A top 10 firm I spoke with defines someone as a "Harvey power user" if they run one query per day. As someone who works with developers every day, I can tell you: a power user is someone who uses AI all day, every day. Once per day is table stakes.
Second, measuring usage says nothing about actual impact. Login counts don't tell you what attorneys used AI for or whether it's changing how they work. Just: did they log in and run a query?
This is like measuring the success of a gym membership by counting who swiped their card at the door, regardless of whether they worked out or just used the WiFi.
But the dashboards look great in partner meetings.
The Reality Check: What Users Are Actually Doing
Here's what practitioners tell me in private.
The vast majority of usage mirrors what any professional might do with ChatGPT. Draft an email. Rewrite this paragraph to sound more formal. Write a client alert article. Summarize this case.
These are useful tasks, for sure. But they're not transforming legal practice. They're not changing how deals get done or how litigation strategies develop. They're productivity tweaks on the margins.
That’s not to say there aren’t some actual “power users” or that real industry transformation isn’t coming - it definitely is.
But right now, we’re barely scratching the surface.
The Questions Firms Should Be Asking
If firms want to understand whether AI is actually working - and whether their investments are paying off - they need to ask different questions.
What Percentage of Work Is Done Using AI?
Stop asking "Did you use AI?" and start asking "What percentage of your work was done with AI?"
There's a massive difference between someone who opens Legora for 5 minutes to draft an email and someone who automates 10 hours of due diligence with it. Yet most firms would count both as "AI adoption."
Time-based measurement is the ground truth. If associates are doing 50% of their work with AI tools, that's real adoption. If they're doing 2%, that's experimentation.
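To make that concrete, here's a minimal sketch of the calculation. Everything in it is hypothetical - the TimeEntry records stand in for whatever a firm's time-tracking or workflow-analytics system actually captures, and the 50% and 2% cutoffs just echo the thresholds above.

```python
# Minimal sketch: AI-assisted share of tracked time per attorney.
# All data here is hypothetical -- real entries would come from a
# time-tracking or workflow-analytics system.
from dataclasses import dataclass

@dataclass
class TimeEntry:
    attorney: str
    hours: float
    ai_assisted: bool  # was an AI tool in the loop for this work?

def ai_share_by_attorney(entries: list[TimeEntry]) -> dict[str, float]:
    """Return each attorney's AI-assisted hours as a fraction of total hours."""
    totals: dict[str, float] = {}
    ai_hours: dict[str, float] = {}
    for e in entries:
        totals[e.attorney] = totals.get(e.attorney, 0.0) + e.hours
        if e.ai_assisted:
            ai_hours[e.attorney] = ai_hours.get(e.attorney, 0.0) + e.hours
    return {a: ai_hours.get(a, 0.0) / t for a, t in totals.items() if t > 0}

entries = [
    TimeEntry("associate_a", 6.0, ai_assisted=True),   # due diligence run through an AI tool
    TimeEntry("associate_a", 4.0, ai_assisted=False),
    TimeEntry("associate_b", 0.2, ai_assisted=True),   # one quick AI-drafted email
    TimeEntry("associate_b", 9.8, ai_assisted=False),
]

for attorney, share in ai_share_by_attorney(entries).items():
    label = ("real adoption" if share >= 0.5
             else "experimentation" if share < 0.05
             else "in between")
    print(f"{attorney}: {share:.0%} of work AI-assisted ({label})")
# associate_a: 60% of work AI-assisted (real adoption)
# associate_b: 2% of work AI-assisted (experimentation)
```

Under a usage-only metric, both associates count identically as "AI adopters." A time-based one separates them immediately.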
What Impact Is AI Actually Having?
Usage without impact is just innovation theater. The metrics that matter are:
Efficiency improvements: Is legal work actually getting done faster? By how much? Not anecdotally - measured. (A sketch of one such measurement follows this list.)
Quality improvements: Are we producing higher quality work product when AI is involved?
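For the efficiency piece, a measured comparison can be as simple as matching task types and comparing medians. Here's a minimal sketch with hypothetical task records; real numbers would come from a firm's own time data, not anecdotes.

```python
# Minimal sketch: quantifying efficiency impact by comparing turnaround
# times for the same task type with and without AI. Data is hypothetical.
from statistics import median

# (task_type, hours_taken, ai_used) -- in practice, pulled from time records
tasks = [
    ("due_diligence_review", 10.0, False),
    ("due_diligence_review", 12.0, False),
    ("due_diligence_review", 4.0, True),
    ("due_diligence_review", 5.0, True),
]

def efficiency_gain(tasks, task_type: str) -> float:
    """Median hours saved with AI, as a fraction of the non-AI median."""
    with_ai = [h for t, h, ai in tasks if t == task_type and ai]
    without_ai = [h for t, h, ai in tasks if t == task_type and not ai]
    if not with_ai or not without_ai:
        raise ValueError("need both AI and non-AI samples to compare")
    return 1 - median(with_ai) / median(without_ai)

gain = efficiency_gain(tasks, "due_diligence_review")
print(f"due diligence reviews: {gain:.0%} faster with AI")  # -> 59% faster
```

Nothing about this is sophisticated. The point is that it's a measurement, not a login count, and it produces a number a firm can actually act on.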
The current reality is that most firms aren’t even trying to measure these things. But clients are asking firms about their AI strategy, and every competitor is announcing sky-high “adoption” metrics. So the default talk track becomes “70% of our attorneys use AI.”
Why This Measurement Gap Matters
This isn't just about justifying IT spend or making innovation teams look good. The measurement gap has real consequences.
First, you can't optimize what you don't measure. If firms don't know which workflows are actually being transformed and which are just being decorated with AI garnish, they can't focus their training, development, and investment where it matters.
Second, there's a growing risk of AI fatigue. Associates are being pushed to use tools, but don’t know how to do so in ways that actually help them. Clients are being promised AI-powered efficiency that doesn't show up in outcomes or pricing.
Third, and most importantly, some firms are figuring this out. While most of Big Law is celebrating vanity metrics, a handful of firms are quietly moving the needle in a very real way.
The competitive advantage won't go to firms with the most AI licenses. It will go to firms that know how to put that AI to work.
The Path Forward: Building Real Measurement
The solution isn't complicated in theory, but there's a practical bottleneck: the tooling simply doesn't exist in most firms.
It's almost impossible to get attorneys to manually log where and how they’re using AI. They're already resistant to regular time tracking, and asking them to add another layer of categorization is a non-starter. The tracking has to be passive: tools like PointOne can automatically capture where people are actually spending their time and which AI tools were used in their workflows.
The other main challenge is measuring quality. Speed improvements are easy to quantify, but how do you measure whether AI is making legal work better, not just faster? Firms need better quality metrics that go beyond partners’ gut feelings. Things like case and deal outcomes and feedback from clients could become valuable signals.
The Bottom Line
The current metrics create an illusion of progress that's shielded firms from scrutiny so far. Firms can show steep adoption curves to justify their positioning to themselves and their clients. But that “adoption” often represents shallow, marginal use that doesn't change how law is actually practiced.
Real transformation requires honest assessment. It requires asking uncomfortable questions about what's actually happening versus what we wish was happening. It requires measuring impact, not activity.
The firms that figure this out - that start optimizing based on real impact - will pull ahead. Those still celebrating login statistics will wonder why their "transformation" never showed up in their results.
The question for every firm is simple: Are you measuring what matters, or just what's easy to count?
Legal Bytes
Harvey and Ironclad announced a strategic partnership to fuse Harvey’s legal reasoning copilot with Ironclad’s CLM workflows, aiming to let in-house teams draft, review, and route approvals inside the system where contracts already live.
LexisNexis introduced “Protégé General AI,” a secure, multi-model layer inside Lexis+ AI that lets lawyers tap GPT-5 and other top models alongside Lexis-tuned agents—bringing agentic, citation-validated workflows and general-purpose AI into one environment.
The American Lawyer spotlighted Ropes & Gray’s “Tech Jams,” structured GenAI workshops where summer associates used the firm’s AI tools on real matters—signaling AI fluency as part of standard early-career training.
Looking to improve your firm's efficiency?
Book a demo to see how PointOne can help maximize billable time and measure what actually matters.
Thanks for reading and I'll see you next week,
Adrian