Podcast Ep #60: Practical Uses for Legal AI - No Hype, No Fear with Damien Riehl

March 11, 2025
Have you been wondering how AI can help your law practice today? In this episode, I talk with Damien Riehl, a lawyer and legal technologist who has been at the forefront of AI in law for years. We skip past the usual AI ethics debates and hallucination horror stories to focus on the most important question: how can small and mid-sized firms use AI right now to improve client service and profitability?

Damien breaks down the core functions of lawyering - assessing risk, navigating complexity, and providing wise counsel - and explains how today's AI tools can enhance each one. By automating rote tasks and augmenting human expertise, AI opens up new possibilities for efficient, effective legal service delivery. But realizing those possibilities will require most firms to evolve their business models and workflows.

Whether you're an AI skeptic or an enthusiastic early adopter, you won't want to miss Damien's practical, actionable insights on the tools that are already transforming law practice. Tune in to learn how you can leverage AI to do more high-value work for clients in less time than ever before.


I know many of you are thinking about your goals and strategies for 2025. To support this process, I've created a focused guide that I'm calling the Strategic Planning Shortcut.


This isn't just another planning template. It's a considered approach that will help you identify realistic goals while building a plan that engages your team, delights your clients, and delivers real results for your business. Click here to get actionable steps that you can implement right away in your practice.
Start your Agile transformation today! Grab these free resources, including my Law Firm Policy Template, to help you and your team develop a more Agile legal practice. 

What You'll Learn in This Episode:

  • Why AI assistance is becoming essential for efficient, effective legal service delivery.
  • How AI can automate risk assessments, legal research, and document drafting to free up lawyer time.
  • Strategies for using AI to deliver better client experiences and outcomes.
  • The business model challenges and opportunities that AI creates for law firms.
  • How to get started with safe, simple AI experiments in your practice.

Listen to the Full Episode:

Featured on the Show:

If you're a regular listener, you've probably noticed a distinct lack of me talking about AI on this podcast, and that's been intentional. As my guest today, Damien Riehl, will explain, improvement efforts require people, process, and tools, and I primarily work in the people and process parts of that equation. But there's no denying that AI tools are driving significant changes to the practice of law.

So I invited Damien on the show to give you a sense of where the technology is today and where it's headed in the not-too-distant future. Damien is a lawyer, a coder, and a legal technologist who's been at the intersection of law and technology for decades, and he currently works on the Vincent AI product for the legal technology company VLex.

Fair warning, today we are skipping over a lot of the common AI concerns. No legal ethics debates, no hallucination horror stories. So we can get straight to what really matters. How can small and mid-sized law firms actually use AI to improve their practice right now? And whether you're already an AI user or an AI skeptic, you're not going to want to miss what Damien has to say.

You're listening to The Agile Attorney Podcast, powered by Agile Attorney Consulting. I'm John Grant, and I help legal professionals of all kinds harness the tools of modern entrepreneurship to build practices that are profitable, sustainable, and scalable for themselves and the communities they serve. Ready to become a more agile attorney? Let's go.

Damien Riehl, welcome to The Agile Attorney Podcast.

Damien Riehl: Thank you so much for having me. I'm thrilled to be here.

John: Yeah, I am super excited to talk to you. Give us the high level intro of who you are and what you're up to and how you got to what you're doing today, because I think you've got a great career path.

Damien: Sure, I've been a lawyer since 2002. I worked for a large law firm, Robins Kaplan, for a bunch of years, representing Bernie Madoff victims, suing JP Morgan over the mortgage-backed security crisis. So I did high-stakes, bet-the-company litigation, but I've also been a coder since 1985. So I have the law plus technology background. So throughout my entire career, I've been thinking about how can I do things better in the law with a technological spin?

So I started going to law school in 1999, right around the time of the internet revolution. And so for my entire 15-year career, I thought about how can I use the internet to make my work better? How can I use technology to make my work better? So in 2015, I pitched Thomson Reuters. I said, "Here's legal tech that I think can change the world, you should build it and hire me." And they were dumb enough to do that.

So I led a team, a huge team of lawyers and coders building a big thing. Then I worked in cybersecurity for a while, where the biggest thing I did was Facebook hired me and my company to investigate Cambridge Analytica. So I spent a year of my life on Facebook's campus with Facebook's data scientists, and we figured out how bad guys use Facebook data.

Then I left that cool job to do my current cool job, where I have a data set of a billion legal documents: cases, statutes, regulations, motions, briefs, pleadings, orders. And that's my playground, where I'm doing data science on those things, extracting everything that matters, and running large language models across them to do things that are superhuman for VLex and a tool that we're building called Vincent.

So in the morning I'm building tools, and in the afternoon I'm talking about the stuff I've built. So I think I might have the best job in legal technology, where I'm able to both build and talk about those things.

John: That's, yeah, fantastic. You and I have some overlap in terms of the early parts of our tech careers. I was never an actual coder, but I was in business process and business operations. So it was all about how to use the tech in order to drive business outcomes, right? Delight customers, delight clients, get competitive advantages, although often not in the way that people think, right? It really is about delivering better experiences and better outcomes for the user.

Damien: That's right. There's people, process, and technology, and certainly technology is one of those, but the people and process legs of that stool are probably more important than the technology.

John: Those are certainly the parts of the sandbox I like to play in, and I appreciate folks like you that are playing at the technology end of the sandbox. And that's where we get to have a lot of fun when we chat.

So believe it or not, this is essentially my first episode about AI. I think I may have mentioned it once before, but it's amazing that I've held off from mentioning AI this far into 2025. Then again, that's partly because I play in the people and process end of the pool. But I do think there's a lot that is important, especially for the firms that I'm generally speaking to, and those tend to be your smaller and mid-sized firms, in-house legal teams, sometimes practice groups inside of bigger firms.

But I don't think, for the most part, maybe I'm selling myself short, that the COOs or CTOs of large AmLaw 100 firms are necessarily listening to me yet. Maybe they will someday, who knows. But in order to have a tight conversation about AI, I think I'd like to lay a couple of ground rules. And you and I talked about this beforehand, but for everyone who's listening, we're just gonna make some assumptions. Number one, that we're totally fine around AI and our professional duties and professional responsibilities.

So take it on yourself that you need to be comfortable with that, that you need to understand what's going on in terms of duties of confidentiality and things like that, duties around what you can do as far as timekeeping for AI and the use of AI. We're not going to hit on that very much, I don't think. The other thing that I think is really well-covered ground that I don't necessarily want to retread is this notion of hallucinations and making things up, and the fears that a lot of lawyers rightfully have and the caution that a lot of lawyers rightfully have. But I think there are other places that are already covering that.

What I want to do is use my time with Damien here to get a sense of what are some really solid use cases for that small and mid-sized legal practitioner to be using or thinking strongly about using AI in their practices today. Open-ended lead. Let me let you riff on that for a while.

Damien: Yeah. So yes, I agree with both of those. Ethics is something we shouldn't talk about. Hallucinations are things we shouldn't talk about. And one of the reasons we shouldn't talk about them is because they're an easy no button.

That is, lawyers can easily put their heads in the sand and say, no, because it's unethical, no, because of hallucinations, therefore I'm not even going to think about AI. But if we were to say, and it's a true thing, that you can ethically use tools like Vincent that are secure and not going to train on your data, and that we've reduced hallucinations to pretty close to zero, then yes, we should not talk about them today, because they are largely solved problems. And they are largely excuses for people to not use generative AI and put their heads in the sand. So let's not talk about those.

John: It reminds me of the debates that we had around cloud computing 10 and 15 years ago.

Damien: That's exactly right. You'll never put me in the cloud, because you'll take my server out of my cold, dead hands. And everybody knows that I'm never going to use eDiscovery, because every human eye needs to be on every document. And now, if you don't use eDiscovery, you're often violating your ethical duties. Even further back, in 1999, when I was going to law school, my law librarians told me, do not trust Westlaw, because there were errors in the Westlaw version that were not present in the book.

So they said, go to the stacks. And I spent a lot of time trudging to the stacks. But did that make me a better lawyer? It wasted my time. And I think we're at a similar moment right now with AI and the things that AI can do for us. Are we requiring people to trudge to the stacks and waste their time? Or are we going to practice law like we should in the 21st century?

John: I think that's great. Okay, so then setting those things aside and even marking them as solved, what are the things that lawyers can and maybe should be doing with AI today?

Damien: If you think about every single thing a lawyer does, whether you're a litigator or whether you're a transactional lawyer, every single thing that any lawyer does involves words. All we do is ingest words, analyze words, and output words. And it turns out that large language models can do all three of those things at a superhuman speed and at a postgraduate level. It's beating PhDs in physics. It's beaten 90% of humans on the bar exam.

So we are at a time where now law is words and that's different than other professions. A doctor still puts a scalpel into someone, right? We don't have any physical portion of our work. Every single thing that we do is intellectual, is word-based.

And so now that large language models can do the word-based things, what does that open up for all the work that we do? With that in mind, we as lawyers do two types of things. Most of what we do is backward looking. We apply facts to law. That is, here's my client's new facts, here's the law that applies. So let's take the law and apply the facts, or vice versa, if you prefer. And it turns out that large language models do those things really well.

So whether you're a litigator saying, what's my risk of getting sued for this, or whether I'm a transactional lawyer saying, can I include this in a contract or not? Will this violate EU antitrust law or will this violate employment law? Each one of these things is applying facts to law. And large language models can do both of those things really well. So the thing number one that it can do is backwards looking at applying facts to law.

The other is forward-looking ideation, and large language models can do that ideation very well. A friend of mine works in-house for an insurance company. And she said, Damien, what do you know about affective computing? That's A-F-F-E-C-T, affective computing. And I said, I'm a tech lawyer, and I wish I knew what it was, but I didn't. So I went to the large language model and I said, tell me about affective computing as it's being used in an insurance call center. And this is how my friend's company was thinking about doing it.

And the large language model said that affective computing is how computers interpret emotions. So from the tone of voice, from the affect, you're able to see if someone's very upset or very angry. Then I said, cool, tell me how it might be used in the call center. And the large language model said it might be used in the call center to be able to see if a customer who has just been injured is angry and that kind of thing.

I said, cool. Now tell me how it could be used. What are the legal implications of these things? And it said, have you thought about privacy law? Did somebody actually give you permission to analyze their emotions? Tell me about whether that complies with GDPR. Tell me about what jurisdictions these people are coming from. If they're from the EU, that'd be really bad. If they're from California, the CCPA might apply.

I said, cool. Now break that down into sub-tasks. And it gave me a bunch of questions that I can ask my client, the insurance company, to work through these. This is the work of the lawyer, and this is not backward looking.

I would wager that nowhere in the large language model's training set has anyone ever asked, tell me about affective computing and the law. But what it did is this forward-looking thing: it knows what the law is, and it knows what affective computing is, and it put those two things together in a way that nobody ever has in the past.
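
For anyone who wants to try this kind of forward-looking ideation themselves, here is a minimal sketch of a multi-turn exchange using the OpenAI Python SDK. The model name, prompts, and chat structure are illustrative assumptions, not a reconstruction of Damien's exact conversation.

```python
# A minimal sketch of the multi-turn ideation described above, using the
# OpenAI Python SDK. Model name and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

messages = [{"role": "system",
             "content": "You are a technology lawyer brainstorming legal issues."}]

# Each follow-up builds on the prior answers, mirroring the conversation above.
for question in [
    "Explain affective computing as it might be used in an insurance call center.",
    "What are the legal implications: privacy, consent, GDPR, CCPA, jurisdiction?",
    "Break those implications into concrete questions I should ask my client.",
]:
    messages.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print(answer)
```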

John: So there's a few things that I love about that. But before I hit them, I actually do want to back up for a second and say I maybe have a quibble with your statement that there's not a physical thing that we do as lawyers. I think that maybe historically we have put the more physical side of our work on the back burner, and maybe it's not truly physical, but one of the things that I talk about a lot is that lawyering is fundamentally a caregiving profession.

Damien: Agreed.

John: And I think that what you're talking about falls pretty squarely into what is sometimes thought of as augmented lawyering. So a lay person who was given the same sort of charge that you were with respect to your friend and affective computing is not going to be terribly effective, right? And maybe an AI tool could eventually get them closer.

But because you have a strong understanding, at least a good baseline understanding, of privacy law, GDPR, and California privacy law, plus your training and experience as an issue spotter, which is one of our favorite things to do as lawyers, you're able to work collaboratively with AI to get information that triggers some ideas or some ideation for you, and you can then push that back to the AI to get some clarification.

Maybe there's some things that you quibble with and you might re-prompt it and say, oh, no, I don't think that sounds right. Can you look at it from this other angle? But it's this sort of conversational way of approaching a topic that you maybe didn't know anything about in the first place and together using the relative strengths and skills that you bring and that the AI brings in order to come to something that ultimately is going to hopefully be a really good solution for that client.

Damien: I agree with that. Yeah, so I think that humanity aspect of lawyering, that is to be able to talk your client off the ledge to say, gosh, do you really want to sue over this? Or in a family law matter, come on, do you really want to go into an all out war with your ex-spouse, or do you want to be able to work this thing out?

That kind of humanity, that ability to lock eyes with someone, is arguably one of the most important aspects of legal practice, largely because we are truly counselors. That is, counselors both in the legal sense and in the humanity sense, in that we are counseling our people on what we think is the best thing for them to do.

So when we think about the issue spotting that you said, that is true today that I, as somebody who works with a large language model, have to say, I'm going to take my experience as an IP lawyer, or take my experience as a family law lawyer, or my experience as a bankruptcy lawyer. And then with that experience, I can prompt better and be able to ask the better questions that maybe a novice would not be able to.

That is true today, but it's becoming less and less true. Because if I at VLex am doing my job correctly, you won't have to be a prompt engineer. I will say: you are an expert issue spotter who is an expert in bankruptcy law; take this interview with my client and ask the client questions that will elicit the most important aspects of this bankruptcy case, or the most important aspects of this family law case, or the most important aspects of this whatever case.

And literally just this morning, I was building a tool to take an MP3 interview, or an audio recording or a video recording, transcribe it, and then do issue spotting to say: here are potential causes of action in a litigation matter, or here are potential family law issues that we should be thinking about in your jurisdiction, whether that's California or New York or Texas. All that's to say, today, in the year of our Lord 2025, in February, we have to do the issue spotting ourselves and work with the large language models.

But there are people like me actually building these tools to become better issue spotters. I think of Sam Altman, the CEO of OpenAI, which makes ChatGPT, when he was asked on stage: what do you think of the ascendance of prompt engineering as a field? And Sam said, I hope that's not a thing in a year. Because if I'm doing my job correctly, you won't have to be a prompt engineer. All you have to do is ask a question and I'll give you the answer.

So I agree with Sam Altman: if we at VLex, and if I, Damien Riehl, are doing our jobs correctly, you won't have to be a prompt engineer. You upload the interview. We do the issue spotting for you. And we ask, what jurisdiction is this? Maybe the breach of contract claim is under California law, but the trade secret claim is under New York law. We issue spot.

And then we say, what are the questions to ask their client to help them win? And we'll actually create that list of those questions based on the actual law. So if I'm doing my job correctly, you won't have to do any of those things. You just upload the interview and we do that issue spotting for you.

John: And that was going to be my next question, right? Presumably that interview will be incomplete in terms of the information it contains. So presumably it'll also give you some ideas about what other things you and your tool set together need to know in order to get to more definitive answers, or to rule in or out potential other issues, etc., etc.

Damien: That's exactly right. Somebody says, do I have a claim for breach of contract? It depends. Is this a claim under New York law or is this a claim under California law? And then what are the elements of that breach of contract claim under California law? Number one, is there a contract? Did you figure out that you need the sub-elements: is there an offer, is there acceptance, was there consideration?

And then number two, did the plaintiff breach that contract? Or did the defendant breach that contract? Number three, did the plaintiff abide by the contract? Number four, were there damages? So you can imagine my tool goes out and finds those elements of the claim.

And then it goes into the interview and says, which of these elements are not addressed in the interview, and then asks questions of the client to fill those gaps. So we're building these tools right now for lawyers, but you can imagine lawyers using those same types of tools to automate interviews with clients. You could have the client actually talk to a system, maybe using voice, and the system can ask those questions about offer, acceptance, consideration, all of those things, and then give you a nice transcript that can then automate the creation of a complaint that can then be filed.
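
As a rough sketch of the interview-to-issue-spotting pipeline described above, assuming the OpenAI Python SDK for both transcription and analysis: the model names, the prompt wording, and the single-pass structure are my assumptions, not Vincent's actual implementation.

```python
# Hedged sketch: transcribe a client interview, spot potential claims and their
# elements, flag the elements the interview never addressed, and draft
# follow-up questions. File name, models, and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()

# 1. Transcribe the recorded client interview.
with open("client_interview.mp3", "rb") as audio:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio).text

# 2. Issue-spot against the transcript and surface the gaps.
prompt = f"""You are an expert issue spotter for a California litigation practice.
Interview transcript:
{transcript}

1. List the potential causes of action suggested by these facts.
2. For each cause of action, list its elements under California law.
3. Flag every element the transcript does NOT address.
4. Draft the follow-up questions to ask the client for each missing element."""

analysis = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(analysis.choices[0].message.content)
```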

John: Yeah, so that's interesting. So you're taking it to a place where, I recognize that this is probably the place that we're going, I guess I wonder out loud, as I'm doing right now, how close we are, because I do think, and I've got a model that I work with, right, that's cobbled together from a few different things, and my listeners will have heard this before, that starts with the question, what is a legal service? And to answer that question, I turned to a couple different places.

One, our mutual friend Daniel Katz, who many years ago gave a presentation where he basically said, look, a legal service solves for two and only two things. It helps people mitigate risk and navigate complexity full stop. And I think that's great. I love that from a practical breakdown. But then as I was talking about that with another friend, colleague, actually a former law partner of mine, he said, that's not quite right because there are also social emotional drivers in the need to seek legal help.

And we broke that down, and the formulation I have for it is that when people encounter risk or complexity, there's a fundamental human need for seeking wisdom, for seeking advice, and for seeking consortium, right? Feeling that someone's in it with them.

Where I have my open wondering, right, is at what level the machines are going to provide a definitive enough connection. Information giving, we already know: humans trust machines for information giving, although information overload is also a problem with that. I think it's a problem that AI has actually helped solve, because Google results are more overwhelming than ChatGPT responses.

The advice is interesting, because I think we're not quite at a place where most people, most of the time, have a high degree of trust in the advice. And certainly lawyers, skeptics that we are, struggle with the trust element of the advice that they're getting from an AI. Part of what you're working on, I know, is to make it as trustworthy as possible.

Where I have my doubt is at what point the AI can actually model the consortium piece. Can we have consortium with the machine, Joaquin Phoenix movies notwithstanding, and at what level are we going to be able to do that? I think we are seeing that happening in certain subsections already. So I don't doubt that we're getting there, but I don't think that most machines are as good as most humans most of the time yet.

Damien: I think that's right. And the question is, how soon do we get there? And what is the client base's appetite for such things? So I agree with Dan Katz that really everything we do is assessing legal risk and then assessing complexity. And I would say that your former law partner's human layer would be an overarching umbrella above those two things. The way that you deliver the risk and complexity is through emotionally connecting with your client to work through it. So I would say it's not a third item, but instead an overlay across those two items.

So then when you think about that humanity overlay, we'll do two scenarios. Scenario number 1 is the bot that I was describing earlier, the interview bot, where the client comes in and the bot asks questions, the human responds, and then you crank out a complaint that is done programmatically. That's option 1.

Option 2 is you have a human, maybe a lower paid human, maybe a paralegal, maybe somebody less than a paralegal, where the bot is actually feeding questions for that human to ask, thereby giving the client the illusion that this is actually a human interview that is just really good, because the bot is feeding the right questions.

So between option 1, a completely automatic process, and option 2, option 1 is far more scalable and therefore far less expensive than hiring a human to do this. But of course, some people might want option 2. So who might prefer option 1 over option 2? Maybe somebody who is part of the 92% of legal needs that go unmet because we lawyers are too expensive. Maybe they would prefer option 1 because they can't afford option 2. And option 3, of course, is asking a human lawyer, which is far more expensive than option 2.

So they can't afford option 3, the human lawyer. They can't even afford option 2, which is the low paid paralegal. But they can afford option 1, because option 1 is $5 versus the others. So when we think about who is willing to do that thing, there's a cost to humanity, because humanity doesn't scale. So maybe people will settle for the bot.

And maybe the younger generations, the TikTok generation and others, might prefer speaking to the bot rather than speaking to the human. When we think about who would prefer option 1 versus option 2 or option 3, I think those are the two axes: number one, how much can you afford? And number two, what is your willingness to speak with an automated system rather than a human?

John: Yeah, and alternatives, right? Because I do think that right now, that 92% are going and seeking advice and consortium, they're just seeking them from people who aren't legal professionals, right? They're seeking them from friends and acquaintances and clergy or whoever else may have some information and be useful. And again, not to run through all of the potential use cases, but a well-designed legal bot could augment a clergy member, could augment a mental health counselor, in terms of giving some good baseline information about the legal impacts of things.

And of course, now we're getting into UPL, and that's another wormhole that I won't go down. Maybe should have brought that up at the top of the show too. But as far as utility to society, utility to the user, it's hard to argue with the usefulness of getting more people able to provide more and better help to people that need it.

Damien: That's right. And I would say two things to that. First, what are people doing today who can't afford legal help? They're going to Google. If they live in California, they're coming across a website that actually talks about New York law, but they're essentially relying on New York law to solve their California problem, which is an error. So the question is, for the tools that we build, will it be better than that? And the answer is probably.

If the system asks, okay, where do you live? And the user says California, and the system then restricts everything to California law, that's a much better system than today, which is just plain old Googling. So that's thing number one…

John: Or again, if they're not going to Google, they're going to their friend's aunt's cousin who had a similar thing happen five years ago, or whatever. That's a human thing that people do: who else has experienced this thing before, and how can I get information and advice from them?

Damien: That's right. Who do I trust? Often I trust my aunt, or I trust my uncle, or I trust a lawyer who's gone to school for a long time. But if I can't afford a lawyer, then I just make do. And this is where something called Jevons Paradox comes in.

That came up a lot recently if you've heard about DeepSeek; there's been a lot of talk about Jevons Paradox. Jevons Paradox is the reason we thought LED light bulbs were going to save us a lot of energy, but they're so cheap we just leave them on all the time. LED light bulbs are too cheap to meter. So the Jevons Paradox is that if you increase efficiency, you'll increase societal use of that thing.

So what if today I think, as a client, every time I call John it costs me a thousand bucks? Forget it, I'm just gonna risk it. I'm just gonna go Google and try to make do, right? But what if that thousand bucks shrinks to a hundred bucks? Then I might call John more than 10 times, maybe 15 times.

And then in the aggregate, John actually makes more money than in the old world. That's Jevons Paradox: if you shrink the unit cost of something, people will use more of it. So what if legal advice were too cheap to meter? What if John could create a John bot where I as a client could get the right questions asked of me, and it's not going to cost a thousand bucks?

Maybe it doesn't even cost a hundred bucks, maybe it costs 20 bucks. Then I'm going to use the John bot all the time and avoid all the legal problems that I would have had because I was just going to risk it. So this Jevons Paradox is a way that we as lawyers can actually dig into the 92% of legal needs that go unmet because we lawyers are too expensive. And that, as an economist would say, is a latent market that is just waiting to be exploited in the best way.
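
To make that arithmetic concrete, here is a tiny worked example using the hypothetical numbers from the conversation:

```python
# Back-of-the-envelope illustration of the Jevons Paradox point above.
# The prices and call counts are the hypothetical numbers from the conversation.
old_price, old_calls = 1_000, 1      # client risks it, calls once at most
new_price, new_calls = 100, 15       # cheaper advice, so the client calls far more

print(f"Old-world revenue per client: ${old_price * old_calls:,}")   # $1,000
print(f"New-world revenue per client: ${new_price * new_calls:,}")   # $1,500
```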

John: And I've done some research just here in Oregon. It's a few years old now, but I looked at the number of family law complaints that are either unanswered entirely or answered by someone who's not represented by counsel, just in Oregon. With some very back-of-the-envelope calculations, that's an almost $20 million annual market in Oregon alone, and we're a relatively small state. I think that latent market is very real.

I want to come around to something. If I have any sort of intuition about my listeners, at least some of them right now are freaking out, because they have a business model that is based on the effort that you put into the delivery of a legal service, and we measure that effort by time. You're now talking about effectively eliminating time, right? Not completely, but for all intents and purposes, we're taking a huge chunk of the time out of the equation in terms of what it takes to deliver a legal service.

I want to pause on that and also talk about it from my operations lens. When I talk about delivery workflows and a lot of the things that I care about, I measure flow efficiency. And flow efficiency is the ratio between the working time it takes you to deliver a unit of work and the total elapsed time it takes you to deliver that unit of work.

And I can tell you from experience, without exposing any of my individual client information, that lawyer flow efficiency is typically terrible, right? The amount of working time it takes to deliver a piece of work is generally on the order of hours, but the elapsed time is often on the order of months, if not years. So if we express that as a percentage, flow efficiencies are often down in the 5% or less range, sometimes in the less-than-1% range, for the delivery of legal work.
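
As a quick worked example of that ratio, with purely illustrative numbers rather than any client data:

```python
# Flow efficiency = working time / total elapsed time, per the definition above.
# The hours and months here are illustrative, not drawn from any client data.
working_hours = 6                  # hands-on working time for the matter
elapsed_hours = 4 * 30 * 24        # roughly four months of elapsed calendar time

flow_efficiency = working_hours / elapsed_hours
print(f"Flow efficiency: {flow_efficiency:.2%}")  # ~0.21%
```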

And the reason that's important, and I'll get off this particular soapbox, is that flow efficiency is a great proxy for client experience and customer experience. Because all things being equal, people don't want to live with uncertainty. They don't want to live with risk. They don't want to live with complexity for any longer than they have to. And so nobody enjoys having to work through a legal process, except for maybe those lawyers that like the sport of it.

Understandably, it's fun for me too, but that's not a good client experience. So the other piece to it is that in any workflow, there's working time and there's waiting time. The working time is broken up into two parts. There's the working time that is actually in the critical path for delivering the value of that work. But then there's another form of working time that is administrative overhead, right? Just the amount of working time it takes to manage the work that's in your system, in your firm. And that's things like, certainly the engagement and onboarding process, that is the legal project management work, the tracking of the system, that is sometimes the moving of papers or the filing of things with court systems, et cetera, et cetera.

And it's interesting, because my advice to my clients today is that the best way to improve flow efficiency is to reduce the gaps between the times that you work on your cases, right? Let's try to compress things, getting out of doing individual tasks and bringing them into task sets. I talk a lot about what are all of the things that you can accomplish in a single sitting in order to get your matter to its next natural resting state. And that's usually out with the client for review, or filed with the court, or things like that.

But what you're talking about is reducing the working time. And that's what I find fascinating from an operation standpoint. It's a mindset shift for me even to be like, oh, the obvious place to improve flow efficiency forever and ever has been to decrease the waiting times. You're talking about decreasing the working times and specifically the working time in that value delivery pathway.

What I worry about, from an ops standpoint, is whether that's going to create greater and greater administrative overhead, because I still have to do intake, I still have to engage these people, I still have to do all the things to process the file inside of my practice. What's that going to do to my relative delivery of work?

Damien: That's the right question to ask. Let's put this into the concrete rather than the abstract. You've talked about tasks, and those being collected into task sets. The nomenclature that Vincent and VLex use for that is there are tasks and then there are workflows, where a workflow is a combination of those tasks. So what you call a task set, we call a workflow.

So when we think about those workflows, one workflow is: I research this legal topic. As a litigator for 15 years, I would often spend eight hours or so researching cases, reading through hundreds of cases, maybe landing on the 20 that are actually relevant, and then drafting a memo that would actually be helpful to my client. But Vincent does all those things and shrinks that eight-hour task down to a minute and a half. If there were a hundred cases and only 20 of them were relevant, it reads through the other 80 so you don't have to. So now it's taken this eight-hour task and shrunk it to a minute and a half.

You might say, what does that do to my billable hours? Option number one is it shrinks them. Option number two is you can now actually go ask question number two and question number five and question number 20. Now you spend the same eight hours, but you've given the client a much better output than you would have spending eight hours answering only one question. Now you've spent eight hours answering 20 questions.

So option number one is maybe your billable hours stay fine, but you're able to give a better work product. But then the next thing that you would need to do is draft a complaint. Now that I have the research and the client's facts, maybe you can shrink the amount of time that it takes to do that complaint with the tools I'm building.

Then workflow number three is what if there's an interview of the client on intake? Is that a substantive thing that you can bill for? Or is that something that you don't bill for? Right? I would bet that firms would vacillate on whether that's a billable thing or not for sure.

So then the question is: if this is on the administrative side of the fence, that is, the non-billable side of the fence, maybe you can automate that away and shrink that administrative cost. I make more money if I automate away the interview task. And then what if I say, even for the substantive things, what if I go to a flat fee? If I do a flat fee, then once I shrink the cost of something, I increase my profit margin.

And so that's the idea with the flat fee. Some of the smartest lawyers I know today that are using AI are going flat fee, shrinking that eight-hour task to a two-minute task, and then enjoying the rest of that as profit margin, and doing it all ethically and all fine.

John: Yeah, I think that's right. And I've done a lot of episodes on flat fees. I've recently done a couple of other interviews around this idea of flat fee pricing, and I haven't decided yet if I'm going to release them before or after this interview. I actually think I might release yours first, because I think it sets a good tone for them. But one of the guys I interviewed is Jonathan Stark, who you may not have heard of because he doesn't deal with legal.

He wrote a book, probably 10 or even more years ago, called Hourly Billing Is Nuts. And he's been at the forefront of the move to alternative billing models in professional services firms. He's a software guy, but he works with…

Damien: Oh, you did a podcast with him. He's really smart. I love that episode.

John: Yeah, I aired one that I appeared on his podcast and then I actually invited him on this one too.

Damien: Took so many notes for that. Yeah, that was brilliant. I watched and listened to both of them. He's brilliant and he's got the exact right idea.

John: And the thing that I think is interesting, because you mentioned flat fee, is that once you're in that world of flat fees, there are actually a few different ways to do it. There's a flat fee that is a fixed fee, which is basically, okay, I'm going to put a price tag on this and advertise to the world that this is my price. And I think there's a lot of value in that. But the other is value-based pricing, which, in the sort of flippant way that some marketers will talk about it, is price the client, not the work, right? Or price the problem, not the work, might be the better way to say it.

So what is it worth to the client to mitigate this particular risk or navigate this set of complexities? They know better than you do how to price that. It's not like this hasn't been used in legal, right? Patrick Lamb and Valorem talked about this starting more than a decade ago, with the value adjustment line and things like that. But these are the places where I certainly think law practices will have to go if they want to have sustainable business models in, call it, the hyper-technology-assisted age. We've been technology assisted as lawyers for a long time now, but we're in the middle of a quantum shift in what's possible around that technology assistance.

Damien: I agree 100%. So number one is flat fees. Number two is value-based, which you talked about. And one of the ways I've seen lawyers do value-based pricing is in the M&A context: whatever value we provide here, if you get an extra X number of dollars, we'll take a percentage of that, right? That's demonstrating the value that I'm providing.

A third is something we've had for decades called the contingency fee. Think about a plaintiff's lawyer: I'm not billing by the hour, so if I shrink my unit cost, I increase my profit margin, right? So they're using automation all over the place. So that's number one, flat fee.

Number two, value-based fee. Number three, contingency fee. Number four, subscription fee. That's something private equity companies love; they say annual recurring revenue is the thing. In the old world, option one is to say, okay, I'm going to do a flat fee, but that's a one-off thing.

But a subscription fee, I get that every month. I get that whether they use my product or not. So now I have reliable recurring revenue that I can count on. And then if I shrink my unit cost, or maybe set up bots for my clients and say, hey, go interview this bot and you can get all the legal advice you want, think about how that can really transform your legal practice.

I would say that hourly is option A, but options B1, B2, B3, and B4 are flat fee, value-based fee, contingency fee, and subscription fee. There's the old joke that the slowest lawyer wins the race. That is, if you spend 10 hours on a thing and I spend one hour, you make more money than me. But that's only true for option A, hourly. It's not true for B1, B2, B3, or B4.

For all of them, if you shrink your time, you increase your profit margin. So I think getting us to option B is where humanity and lawyers need to go. And then the world's going to be much better as a result.

John: And I've talked about this before, and I'm sure I will again, but you don't have to go all in on any one of these, right? You can mix and match, and you can do phased flat fees that convert to hourly if you have to actually go spend hours prepping and appearing at a trial, or whatever it happens to be. Although I think you can also flat fee those things, right? But depending on people's individual comfort zones, there's a lot of possibility.

I would say the main thing is it behooves folks to do a little work, a little research, to understand what others are doing either in the legal space or outside of it, because there are lots of models outside of legal where all of these ways of pricing professional services work are working, and we are gonna be doing a lot of experiments inside of legal. I do think we're a little bit disadvantaged by our adherence to precedent, and I hesitate to even say this out loud in our modern sort of political era, because thank God for stare decisis and adherence to precedent right now.

There is a place where the point of law is to create stability in society. And I know for a lot of folks, society doesn't feel very stable at the moment. That said, it can also be a barrier to innovation on some of these things. And I know that in a lot of case law, and even in the personal injury plaintiffs' space, a lot of firms will take the better of either the contingency fee or the lodestar, right?

Which is effectively what they would have earned had they billed it hourly. I think the lodestar model is on shaky ground right now as AI assistance comes into that work, right? Better to be leaning into the contingency side of that than the lodestar side.

Damien: That's exactly right. And then really, the question is, how expensive is it to do a task? How expensive is it to go to trial? How expensive is it to do those deposition questions? How expensive is it to do intake?

There are two aspects of that. Number 1 is how expensive was it in 2020? And how expensive is it today in 2025? And those are two different costs because it was much more expensive, but using AI, it becomes much less expensive, which is an opportunity for those lawyers that want to eat their competitors' lunch.

John: And that's profit is revenue minus expenses, right? And the revenue is what will the market bear? What will the client pay? How do they value the work? And then of course the cost is what it costs you to deliver it.

And I think that there are ways to be more intelligent on both sides of that equation. Unfortunately, or just from historical accident, the billable hour model doesn't do that very well, right? It caps that profit margin once you set your hourly rate. Your potential revenue is capped, and so the only thing you have left to do is control costs, but controlling costs doesn't usually mean also reducing hours. So now we've got a problem.

Damien: And if you're an hourly lawyer, you usually only think about revenue. You don't think as much about cost. You don't think as much about how much this overhead costs, or how much it costs to pay the associate to get that revenue.

John: Yeah, or we're trying to eliminate the administrative costs but not the actual value chain delivery costs.

Damien: That's right, so there are all sorts of reasons why the hourly model is less applicable or less ideal than the flat fee, especially with AI. I've often thought recently about, isn't it amazing that a large language model, a machine, is able to do a lot of the work of a lawyer, applying facts to law, and is able to parse through thousands of statutes and cases and regulations and apply the facts to the law. So isn't it amazing that a machine can do that in 2025 at the very same time that the rule of law itself is threatened? That is, there is a real question as to whether you're even gonna follow that case or whether you're even gonna follow that statute. What's the downside of breaking that law?

If there's no downside, what are we even doing? So isn't it an amazing time in humanity where machines are able to do an analysis of what is the law at a time that the rule of law itself is threatened?

John: So before I let you go, I'm hoping you can give listeners just two to five things that the folks you're talking to are thinking about in terms of adopting AI in their practice, things you know the tools are capable of where you think, oh, I wish lawyers would think more about this. Especially, again, for the smaller firm practitioners, a lot of whom are closer to the access to justice problem, and a lot of whom are out there without much technology support or other people helping with the administration of their practice: how can they be thinking of ways to adopt AI to make them more effective in their practice?

Damien: I think there are three things that I want to focus on. Number one, there's a saying: what book do you read to figure out how to swim? There is no book to figure out how to swim. You just swim. So really, I would say the way you learn how to use AI is by using AI.

So I would say, if you don't already spend the $20 a month for the large language model of your choice, whether that be ChatGPT or Anthropic or Google's Gemini, spend the $20 for one month and start using it. Ask yourself, how can I use this in my practice in a way that would make sense?

John: Let me reflect on that for a minute, because I know, and you know this too, that in the Agile world we talk about safe-to-fail experiments. So don't necessarily play with it in the context of your client delivery work at first. Have it help you write a new landing page for your website, or a newsletter for your clients or your prospects, or things like that, just to get a feel for what it's capable of. And then once you're better at that, it'll put you in a better position to think about how you can incorporate it into legal work.

Damien: Yes, do that safe-to-fail experimentation. And I would say don't just limit yourself to administrative tasks like you described, even though it's really good for administrative tasks. If you have a publicly filed complaint and you're the defendant, put that publicly filed complaint into it and ask: what are potential legal defenses that might help me win? What are potential questions I can ask my client to help them win? This is not any client confidential information. You're merely pulling from a public document.

So I would say, yes, do the administrative things. But to the extent you can, also take public information and reason with that, or say, my client is a single mother who is in a family law dispute with her ex-husband and is dealing with some of these things, right? Query whether that kind of broad question is putting any privileged information into the system. Is that much different than Googling "family law and negotiation with former spouse"? The amount of client information there is really quite low.

So anyway, number one, work with it, whether on administrative things or others. And when you work with it, you'll find its limitations. You'll find that the $20-a-month tool doesn't have yesterday's case. It doesn't have yesterday's statute. It doesn't have yesterday's regulations. So you're going to bump up against the edges of this.

And then you have to say, okay, what if I were to actually pay for a legal-specific tool where I can essentially have yesterday's case, yesterday's statute, yesterday's regulation, along with the ideation that I've been doing on my own? So now you could say, okay, what if I use something like Vincent, which I'm building, and what if I as a lawyer don't have to do that ideation? What if Vincent already has all these buttons for you, no prompting necessary?

So I can upload a complaint, and Vincent literally gives you 15 different buttons saying things like: what are the claims being alleged? What are the facts supporting those claims? Who are the parties? What are they looking for? Are they looking for money, or are they looking for an injunction? And then we go to ideation to say things like: what are the legal defenses, based on actual cases, statutes, and regulations?

What are questions I can ask my client to help them win? We do all that prompting for you. So really, number one, spend the 20 bucks and see what the limits are. Number two, if you spend a bit more, see what more you can do when you don't have to be a prompt engineer and can really just focus on the work. And then my item number three: as you see a lot more of these work processes get cheaper because they take less time, you flip over to the flat fee, and then you have to ask, okay, what is the value that I as a lawyer am providing to my client?

And as we, I, Damien Riehl, and we at VLex, are able to automate more of these processes, all that's left is the humanity. That is that human layer up top that your former partner described. Yes, you need to know legal risk, and Vincent is gonna tell you the legal risk. It's going to tell you the complexity, and it's gonna simplify that complexity by saying, now explain this at an eighth grade reading level, or explain this to my developmentally disabled client, or to a five-year-old.

So we'll both assess the risk, number one, and also provide the simplicity to the complex. What's left is that human layer for you to be able to deliver it. And at what cost can you deliver that at scale? That's for all of us to figure out.

John: Yes. And this is a topic for another episode, but law school doesn't necessarily give us tools and training around the lawyer's equivalent of bedside manner. Some people are more naturally inclined to it. Some get training on the job, and a lot just never do. These are things that we haven't historically thought of as professional development, but they probably need to start coming into our purview as far as improving ourselves as professional caregivers at the end of the day.

Damien: That's exactly right. I've spoken to many law schools and I've told them just that. I said, you're teaching doctrinal things, and Vincent can do doctrinal questions in seconds. So if you're teaching students to read the 100 cases to land on the 20 that are most relevant and then draft a memo that turns into a complaint, I can already do that. So what are you training?

Are you training your lawyers for a 20th century workflow, or are you training them for a 21st century workflow, where the doctrinal things are largely covered and the humanity is what's left? You should be doing more of: how do I speak with a good bedside manner to my client? How can I bring work in so that I actually have a reasonable business model rather than the old model? I think the humanity and the business model are where the law schools that get it are gonna be focusing, and the lawyers that get it are gonna be making more money than their competitors who don't.

John: Let's leave it at that. There's another rabbit hole we could go down around the role of affective computing in helping lawyers do that, but I'm gonna not do that. Thank you so much, Damien. This has been great. I hope my listeners are finding this useful, and I'd love to have you back in a few months to see how much more these tools can do.

Damien: I'm really thrilled. I've always been a huge fan of yours, John, and I know that you're doing great work on the people and process side of the work and I'm happy to lend the technology layer and yeah, thank you for all the work you're doing on the people and process side.

John: Thanks for coming on.

All right, as usual, there is a lot to unpack in that episode, but let me leave you with three key takeaways. Number 1, AI is here to stay, right? Just like we weren't going back to the library stacks after the rise of Westlaw and Lexis, there is no resisting the changes that AI is bringing to the profession. As the industry moves to adopt it more and more and clients come to expect it, you and your team are going to need to keep up. As Damien hinted at, the earlier you get comfortable with the tools, the stronger a position you'll be in to evolve as the technology does.

Number 2, good AI technology is going to need to work alongside good people and good process. The tech alone is not going to fix all the things that need improving in the legal system, and at least for now, it isn't going to have the context to help you imagine the best ways of using it inside of your practice. The best results are going to come when AI supports well-structured workflows, human expertise, and ideally a culture of engagement and continuous improvement in delivering client outcomes.

Number 3, legal services business models are going to change. And I've been beating the ditch hourly drum for several episodes now. You might be a little sick of it, but hopefully you can see why. Even non-hourly billing models aren't going to be immune. For example, attorney fee awards that are supposed to be within a certain range of a lodestar amount are going to face downward pressure as AI reduces the number of hours it takes to get a result, even for plaintiffs' firms.

So in next week's episode, I'm going to talk more about techniques for workflow efficiency. So if you haven't already, be sure to subscribe to this podcast in your favorite app or player so you don't miss it. And if you have questions or topics you'd like to hear me cover in a future episode, shoot me an email at john.grant@agileattorney.com.

This podcast gets production support from the fantastic team at Digital Freedom Productions, and our theme music is the song Hello by Lunara. Thanks for listening, and I'll catch you next week.
