🗞️ Is ChatGPT The Future of Law, A Billable Hours Showdown, & AI Versus IP
This Week: Is ChatGPT coming for your job? A law firm sues its former associates over billed hours (or lack thereof), and AI-generated art raises new questions. Plus, the Bankman-Frieds have disappeared from Stanford Law, and some celebrity legal drama.
Hey there. When not providing meaningful and funny content, we’re helping busy in-house legal departments do more with less with flexible, on-demand talent. Cool right?
Want to learn more?
The story begins like this: on November 30, OpenAI (an AI research laboratory co-founded by Elon Musk, and supported by a billion-dollar investment from Microsoft) released a public version of an AI program called ChatGPT that uses a large language model (LLM) to generate text responses in a natural dialogue format. That is to say, you can have conversations with ChatGPT and it will talk to you like a person and even explain things to you.
Well, ChatGPT blew up. It reached 1 million users in a matter of days and the internet was awash with people posting their chat texts. One journalist used the program to create imagined taglines in the style of Real Housewives for TV characters. Others used it to write a “brief to the United States Supreme Court on why its decision on same-sex marriage should not be overturned,” reports Reuters. And the results are quite convincing, leading some to ask if ChatGPT—or AI chatbots in general—is coming to a firm near you.
Don't worry, writes The Atlantic. “GPT and other large language models are aesthetic instruments rather than epistemological ones.” In other words, ChatGPT isn't exactly doing any creative writing or thinking. Well, at least no more creative than a room full of monkeys with typewriters eventually banging out Hamlet. That's because LLMs are just combing through their data sets to predict what comes next. As Casey Newton describes on the Hard Fork podcast, “If I were to say to you, 'twinkle, twinkle, little star,' your brain would just say, 'how I wonder what you are.' You’re just predicting that that is how the sentence finishes.” Hard to be creative when you're just predicting from data we already have. But LLMs do give lawyers and other professions “a new instrument—that’s really the right word for it—with which to play with an unfathomable quantity of textual material,” The Atlantic continues. Reuters adds that law firms relying on ChatGPT could run into malpractice issues because, as OpenAI itself warns, the bot “sometimes writes plausible-sounding but incorrect or nonsensical answers.”
So, is ChatGPT the future of anything for the legal world? It's plausible that the program could be used to write drafts of things that require no real creativity, or are heavily templated already. But for more complex and nuanced tasks, it's best to leave it to the humans.
As with the hype over self-driving cars a few years ago, our imaginations may be getting ahead of the actual tech here. ChatGPT is definitely an exciting new toy, but don't expect it to replace your firm's associates tomorrow (or the next day).
👀 Question for You
No one wants to be the lowest biller in the office, but is it a lawsuit-worthy offense? Larson Latham Huettl, a firm out in North Dakota, seems to think so. As AboveTheLaw.com notes, the firm “sent bills to two former associates alleging 'overpayment' when the associates didn’t bill enough. The firm took both to court and won—both cases are on appeal.” The cases seem to hinge on Larson Latham Huettl's employment agreement, which states that if an “Associate bills out less than the base quota for a three month [sic] period, the Associate’s salary will be reduced appropriately at the discretion of LLH in order to make up for any discrepancy.”
The employment agreement was apparently sent in March 2020, when work dried up across the industry due to the pandemic.
While Larson Latham Huettl's lawsuit is both highly unusual and tied to the circumstances of March 2020, it does take place against a backdrop of widespread layoffs in the legal industry. With the economy tightening, Big Law has been on a hiring freeze over the last few months, even resorting to the tactic of lowering associates' billable hours and then firing them for their lack of hours.
Always read the fine print. It's a helpful tip for life, but especially if you're a new associate reading a firm's employment agreement. We don't expect other firms to start implementing this tactic, but the rule of thumb still stands.
If you've been on social media recently, you've no doubt seen people post cartoonish photos of themselves in styles ranging from anime character to Impressionist painting. These images are the product of an AI program on the app Lensa, which has been trained on real human artists' publicly available work (pulled from DeviantArt, Pinterest, Getty Images, etc.) but never credited or compensated, reports BuzzFeed News. Well, those artists aren't happy. In fact, they are calling AI art theft. “Artists dislike AI art because the programs are trained unethically using databases of art belonging to artists who have not given their consent,” an artist told BuzzFeed News.
Prisma Labs, the company behind Lensa, released a statement via tweet noting: “As cinema didn’t kill theater and accounting software hasn’t eradicated the profession, AI won’t replace artists but can become a great assisting tool. …We also believe that the growing accessibility of AI-powered tools would only make man-made art in its creative excellence more valued and appreciated, since any industrialization brings more value to handcrafted works.”
The Disney Hypothesis
While the artists who have found heavy reference to their work in Lensa's output (suggesting the AI was trained on their work) tend to be less well-known, some are beginning to ask if their copyright claims would be heard if a bigger IP-holder were on their side. “AI art isn’t theft? Pump some Disney and Nintendo in there. See what happens,” illustrator Lauren Walsh tweeted.
New technology always raises new legal issues, and AI source material is no different. Is it theft to train your AI on the intellectual property of others without compensating them, only to then generate your own IP for profit? That's the question at hand.
📤What Else We’re Forwarding
Family Matters: Joseph Bankman and Barbara Fried, Stanford Law professors and parents of infamous crypto conman Sam Bankman-Fried, have disappeared from the school's upcoming course catalog, reports The Stanford Daily. Professor Bankman canceled his upcoming tax law course set for the winter, while emeritus professor Fried has said her decision to retire has “nothing to do with anything else going on.”
Monkey Trouble: Paris Hilton, Madonna, Justin Bieber and other celebs are being sued by investors for not disclosing that they were paid to pump up the value of Bored Ape NFTs, sold in 2021 by Yuga Labs. As Bloomberg says, the SEC is currently investigating Yuga Labs for violating federal law in the sale of its NFTs, “and whether certain nonfungible tokens from the Miami-based company are more akin to stocks and should follow the same disclosure rules.”
Did you enjoy this week’s edition?
⭐️ Give us a star (or five!)
😇 Community @ Lawtrades