As a writer and typographic designer—generically, a professional artist—I have objections to generative AI. So do other artists.
These objections are frequently mischaracterized as anti-technological. For instance, a longtime tech pundit has caricatured artists as a herd of gabbling nitwits: “Wait, it works too well. We need to hobble it. Do something!” Straw man constructed, he then derides this view as “reactive”, “misguided”, and a “panic”.
This critique is also encapsulated in the evergreen sick burn Luddite, a term long used as an epithet for someone who resists technological progress out of fear, nostalgia, or intransigence. (Leaving aside how little this use has to do with the actual Luddites.)
[Though I am currently co-counsel in two lawsuits challenging generative AI, this piece is about the broader interactions of AI and the law.]
For those who haven’t been self-employed their entire adult life as a professional artist—including decades on the internet—I’m happy to share what it’s like. Your years are full of confronting:
Ripoffs of your work that are deliberate but calculated to be just barely legal, so you tolerate those;
Ripoffs that are illegal but too minor to pursue, so you tolerate those too;
Ripoffs that are illegal and large enough to pursue, but still not worth the opportunity cost, so you tolerate those too; and finally,
Ripoffs that are sufficiently large and illegal to be existential for your business, so you have to interrupt your work and engage at length with an annoying stranger, because otherwise the tiny, nerdy, arty territory you occupy is going to be incinerated. The US Supreme Court protects the habitat of the snail darter; it doesn’t protect mine.
No sympathy sought. This is the career I chose. The idea, however, that working artists are—what, riding unicorns across the internet and shooting lasers of intellectual-property law from our fingertips?—is bewildering. Maybe there are a handful of Hollywood studios or European fashion houses for whom maximal IP enforcement is worthwhile. The rest of us have to accept the rough & tumble as a cost of doing business. I’m even a motherflipping lawyer, and it’s still true for me. My online font business runs on the honor system because it has to; enforcement would be financially irrational. The key competitive differentiator of artists is that we can make new things. In most cases, making new things is a better option than the blunt force of legal action.
Let’s also note that deriding professional artists as Luddites is, on the merits, bonkers. Unless you’re Al Jaffee, you’re an artist whose tools are heavily digital, and have been for years.
For instance, to run a typography business, I have to stay conversant in at least four programming languages—Python, JavaScript, Racket, and Swift—and be able to troubleshoot technical problems on macOS, Windows, and Ubuntu Linux. I wrote a publishing system for my websites and a graphical editor for my fonts. Gosh, I’m sure I engineer more software than many software engineers.
Why? For one thing, that’s the gig. For another, unless we’re already well established, working artists usually have no choice but to embrace any technological advantage that comes our way, because that’s one of the primary ways we stay profitable. I’ve been finding ways to automate my work for decades. I’m sure many artists would love to have AI tools—the ethical and legal kind—that let us produce more in less time. I certainly would. The idea that artists are the obstacles to progress is nonsensical.
As is the idea that artists are somehow too soft to compete. Most of us are independent contractors. So there is no minimum wage. There is no healthcare. If you distribute your work on the internet, people often take it for free. Independent artists are economically adaptable and resilient because we have to be. The soft ones wash out after the first year.
Thus, to be clear: the core objection to generative AI is not that it’s fancy technology that artists find intimidating. It isn’t. Or that it will displace certain jobs. That happens during every wave of technological change. Or that market competition is intolerable. Far from it. Rather, the core objection is that so far, many generative AI products are based on massive violations of law. If generative AI companies want to compete against human artists by legal means—they’re welcome to do so. But in many cases, that’s not what they’ve chosen. As a professional artist, I’m not opposed to advancements in technology; I’m opposed to violations of the law.
So what is the “Luddite” smear really about? Why are so many so eager to shield wealthy AI companies from anything short of complete public submission? It seems to be a vehicle for a deeper claim:
An argument that copyright law in the technological age needs at least reform, at most replacement. For example, this view has been advanced by copyright lawyer William Patry in his book How to Fix Copyright. The argument has merit.
But advocates of copyright reform have never addressed where the votes will come from. In the last 50 years, US copyright legislation has moved only toward stronger, longer rights for authors. (There have been some major fair-use rulings during that time, but those have come from the courts, not legislation.) This has not been due to the rising economic and political clout of people like me, but rather that of big corporate IP owners: from the Copyright Act of 1976 through the Digital Millennium Copyright Act and the Copyright Term Extension Act, both in 1998, the latter aka the Mickey Mouse Protection Act.
But copyright reform is only the mild version of a more potent argument: that AI should not be subject to the rule of law at all. Roughly—we like its results so much that no one should be allowed to scrutinize whether it reaches them legally. The ends justify the means. No votes necessary.
On this view, if AI is capable of essentially unlimited upside, then nothing can be tolerated that may obstruct it. All dissent must be silenced. All resistance must be extinguished. All laws must be suspended. Let the golem emerge.
What do you call someone who advocates for the end of the rule of law? An anarchist? They’ve made appearances in US history, especially in times of socioeconomic upheaval. But an anarchist seeks to topple those in power. With AI, it is the wealthy & powerful who are seeking to arrogate to themselves further wealth & power by suspending the law. Those who erode the rule of law to sustain their own power are often known by a different name: authoritarians.
Suspending the law, some may be surprised to hear, will probably be the easiest part.
In Copyright for Literate Robots (2016), law professor James Grimmelmann observes that copyright law largely “ignores robots”, which has led to unintended consequences:
[T]here is something unsettling about a rule of law that regulates humans and gives robots free rein. Most immediately, it encourages people and businesses to outsource their reading. … This pressure to use robots is indifferent to whether people use robots for good or for ill. …
The paradox goes deeper. By valorizing robotic reading, copyright doctrine denigrates human reading. A transformative fair use test that categorically exempts robots means that a digital humanist can skim a million books with abandon while a humanist who reads a few books closely must pay full freight for hers. Romantic readership therefore discourages the personal engagement with a work that it claims to value. Copyright’s expressive message here—robots good, humans bad—is the exact opposite of the one it means to convey.
The rule of law is the social agreement that we will conduct ourselves according to the law, and that consequences will be imposed on those who do not. It is premised on humans as legally culpable agents. The law interrogates and evaluates human action.
But as Grimmelmann points out, machines often have “free rein” legally. This principle made sense when a machine was primarily understood as a tool wielded by a human. The distinction has gotten murkier, however, as machines have moved into roles traditionally reserved for human judgment.
Grimmelmann argues that by delegating reading to a legally impervious machine—the “literate robot”—human actors avoid the usual legal scrutiny that would apply to their own actions. In so doing, they essentially neutralize copyright law. He foresees this loophole remaining a tremendous incentive for humans to “outsource” reading to machines that are not treated as legally culpable agents. Even to the point of annihilating human reading altogether.
But the uses of AI already extend well beyond reading. Thus, we could likewise extend Grimmelmann’s argument beyond copyright law. How will human behavior change once every human activity can be delegated to machines—instead of “literate robots”, let’s call them “AI systems”—that are not legally culpable agents?
If AI companies are allowed to market AI systems that are essentially black boxes, they could become the ultimate ends-justify-the-means devices. Before too long, we will not delegate decisions to AI systems because they perform better. Rather, we will delegate decisions to AI systems because they can get away with everything that we can’t. You’ve heard of money laundering? This is human-behavior laundering. At last—plausible deniability for everything.
Moreover, let’s be clear: AI is already in widespread use in recruitment and hiring, health insurance, money lending, advertising, and ramming emergency vehicles. The generative AI tools released in the past year are maybe the first time many consumers have gotten to operate a sophisticated AI device directly. But our digital lives have long been ground up in their gears.
Of course, because the profit margins from breaking the law are pretty good, businesses in competition with each other will face market pressure to adopt similar AI tools. Suppose you’re a health insurer that finds a way to kill off your least profitable customers with AI. Your competitors will tend to do the same. Now imagine this transition happening in every industry in parallel over the next three years.
If we’re confident in the legality of these systems, what is the basis for our confidence? How do we know that, say, an AI that determines eligibility for a loan isn’t blithely violating laws that are designed to ensure fairness in lending? Unlike a human, who might have to create some kind of written record, the AI doesn’t produce a memo explaining each decision with citations. Even AI researchers concede they can’t fully explain how and why their systems work or what they’re capable of, an abyss euphemistically called “emergent behavior”.
When combined with automation bias—that is, the tendency of humans to impute too much credibility to machine outputs—we could end up with something truly novel: technology systems that deserve much higher levels of legal scrutiny (because of the consequentiality of their outputs) but simultaneously resist such scrutiny (because of the opacity of their inputs and reasoning).
Another big problem—again, at least for those of us opposed to violations of the law—is that the deep-learning techniques currently in vogue are fundamentally descriptive (= focused on what humans actually do) whereas law wants to be prescriptive (= focused on what humans should do). Meaning—when a system is trained on mass quantities of existing human data, it necessarily absorbs all the bias and other noxious crud of human existence.
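To make that concrete, here is a deliberately tiny sketch, with synthetic data invented for illustration (no real lending system is this simple), showing how a model fit to biased historical decisions reproduces the bias:

```python
# A toy model of "descriptive" machine learning. Synthetic data only;
# no real lending system works this way.
import math, random

random.seed(1)

def make_history(n=2000):
    """Fake loan decisions where group-1 applicants were historically
    held to a harsher cutoff at the same credit score."""
    rows = []
    for _ in range(n):
        group = random.randint(0, 1)    # 1 = historically disfavored
        score = random.random()         # creditworthiness, 0 to 1
        cutoff = 0.7 if group else 0.5  # the biased human rule
        rows.append((group, score, 1 if score > cutoff else 0))
    return rows

def fit_logistic(rows, lr=0.1, epochs=100):
    """Plain SGD logistic regression over (group, score)."""
    w_group = w_score = bias = 0.0
    for _ in range(epochs):
        for g, s, y in rows:
            p = 1 / (1 + math.exp(-(w_group * g + w_score * s + bias)))
            err = p - y
            w_group -= lr * err * g
            w_score -= lr * err * s
            bias    -= lr * err
    return w_group, w_score

w_group, w_score = fit_logistic(make_history())
# The model dutifully learns that group membership lowers the odds of
# approval -- the historical bias, now automated.
print(f"weight on group membership: {w_group:.2f}")  # negative
print(f"weight on credit score:     {w_score:.2f}")  # positive
```

The model never decides to discriminate. It just faithfully describes the discriminatory history it was given.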
The project of steering AI systems toward something other than the worst version of humanity is known as “alignment”. So far, the best-known technique is the blandly named “reinforcement learning from human feedback”, which has entailed—sweet cheese & crackers, I wish I were making this up—hiring low-wage foreign workers to spend hours with the AI, nudging it away from toxic results. The Mechanical Turk, reinvented—what an accomplishment.
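For the curious, here is a compressed sketch of the preference-modeling core of that technique. Real RLHF fits a neural reward model over text and then runs a reinforcement-learning loop (such as PPO) against it; in this toy, made-up feature vectors stand in for text, and best-of-n reranking stands in for the RL step:

```python
# Toy sketch of the reward-model step at the heart of RLHF. Feature
# vectors stand in for text; best-of-n reranking stands in for the
# full reinforcement-learning loop. All numbers are invented.
import math, random

random.seed(2)

def reward(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def fit_reward_model(pairs, lr=0.1, epochs=300):
    """Bradley-Terry fit: push the reward of each human-preferred
    output above the reward of the rejected one."""
    w = [0.0] * len(pairs[0][0])
    for _ in range(epochs):
        for preferred, rejected in pairs:
            # P(preferred beats rejected) under the current rewards
            p = 1 / (1 + math.exp(reward(w, rejected) - reward(w, preferred)))
            for i in range(len(w)):
                w[i] += lr * (1 - p) * (preferred[i] - rejected[i])
    return w

# Simulated labelers: feature 0 is "toxicity", and the (hypothetical)
# human raters always prefer the less toxic of two outputs.
outputs = [[random.random(), random.random()] for _ in range(100)]
pairs = []
for _ in range(300):
    a, b = random.sample(outputs, 2)
    pairs.append((a, b) if a[0] < b[0] else (b, a))

w = fit_reward_model(pairs)
candidates = [[random.random(), random.random()] for _ in range(8)]
best = max(candidates, key=lambda x: reward(w, x))
print(f"learned weights: {[round(v, 2) for v in w]}")  # w[0] < 0
print(f"chosen output's toxicity: {best[0]:.2f}")      # low
```

The part that can’t be sketched in code is the human labor: someone has to produce those preference pairs, one judgment at a time.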
“But if AI is a tool for subverting the rule of law, why bother spending money on alignment?” AI vendors have been asking themselves the same question. In the past nine months, Meta, Twitter, Microsoft, and Amazon have all disbanded their teams devoted to AI ethics and safety, even as they increase their investments in AI elsewhere. Recently, Google launched its Bard chatbot despite internal safety and ethics objections. Draw your own conclusions.
As commercial AI arrives, will the US Congress start regulating tech companies? Well, um, have they done so in the last 50 years? No jinxies. But tech stocks represent an outsize portion of the US equity market because they’re a huge driver of earnings growth (and hence, price appreciation). Many government and private pension funds, already underfunded, are counting on the tech sector to help them meet rate-of-return expectations. There is no plan B. The money, we could fairly say, has already been spent.
That’s even before we consider tech’s direct investments in lobbying and other political action. Against that backdrop, will there be sufficient political motivation to crimp AI? I doubt it—unless AI causes some unspinnably major catastrophe, thereby creating a countervailing political imperative for Congress to act.
On the other hand, though the political appetite for tech regulation is maybe higher than it’s ever been, new legislation can often be a case of “be careful what you wish for”. Horses get traded. There’s something to be said for exploring the enforcement possibilities of current law.
AI systems made available to the public, or used to make decisions affecting the public, should have some basic level of transparency and accountability to the public. Much like nutrition labels on food: you can’t make informed decisions unless you have the information.
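What might such a label contain? Here is a back-of-the-envelope sketch. The fields are my own invention, loosely inspired by the “model card” proposals circulating in AI research, not any existing regulatory standard:

```python
# A hypothetical "nutrition label" for an AI model. Field names are
# invented for illustration; they track no existing legal standard.
from dataclasses import dataclass, field

@dataclass
class ModelLabel:
    model_name: str
    vendor: str
    training_data_sources: list[str]   # provenance of the training set
    data_licensed_or_consented: bool   # were rights holders asked?
    known_limitations: list[str]       # documented failure modes
    independent_audit: str             # who audited it, if anyone
    intended_uses: list[str] = field(default_factory=list)

label = ModelLabel(
    model_name="ExampleNet-1",         # all values hypothetical
    vendor="Example Corp",
    training_data_sources=["licensed stock imagery", "public-domain text"],
    data_licensed_or_consented=True,
    known_limitations=["unreliable on non-English input"],
    independent_audit="(none yet)",
    intended_uses=["drafting assistance"],
)
print(label)
```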
Though AI researchers have historically published papers describing their methods, as market competition heats up, we will likely see the for-profit players retreat to the familiar fortress of solitude. From here, the black boxes will go Vantablack.
Though we do have a National AI Initiative—with a groovy logo—I wouldn’t be surprised if the Federal Trade Commission plays an important early role as the commercial AI market emerges. Looking over their call for “truth, fairness, and equity” in AI, I find a lot to agree with:
Start with the right foundation. … If a data set is missing information from particular populations, using that data to build an AI model may yield results that are unfair or inequitable to legally protected groups. …
Watch out for discriminatory outcomes. … How can you reduce the risk of your company becoming the example of a business whose well-intentioned algorithm perpetuates racial inequity? …
Embrace transparency and independence. … As your company develops and uses AI, think about ways to embrace transparency and independence—for example, by using transparency frameworks and independent standards, by conducting and publishing the results of independent audits, and by opening your data or source code to outside inspection. …
Don’t exaggerate what your algorithm can do or whether it can deliver fair or unbiased results. Under the FTC Act, your statements to business customers and consumers alike must be truthful, non-deceptive, and backed up by evidence. …
Tell the truth about how you use data. In our guidance on AI last year, we advised businesses to be careful about how they get the data that powers their model. …
Do more good than harm. To put it in the simplest terms, under the FTC Act, a practice is unfair if it causes more harm than good. …
Hold yourself accountable—or be ready for the FTC to do it for you.
In recognition of a trained AI model’s ability to retain training data indefinitely, the FTC has also pioneered the remedy of algorithmic destruction, which is just what it sounds like—violators have to eradicate their infringing AI models and training data.
In addition to the guidelines above, I would also favor an opt-out or “patdown” rule for AI. By analogy—when you go to the airport, you can opt out of the standard metal detector and ask the TSA to perform a manual patdown. Likewise, I think companies that rely on AI for key functions should be required to offer customers the option to have their request—whether a résumé, or loan application, or medical record—reviewed by a human instead.
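Mechanically, honoring such a rule would be trivial, which is part of my point. A sketch, with hypothetical names throughout (the rule itself, score_with_model, and human_review_queue are all inventions):

```python
# How little code an AI "patdown" rule would require. Everything here
# is hypothetical: no such rule exists, and score_with_model and
# human_review_queue are invented stand-ins.
from dataclasses import dataclass

@dataclass
class LoanApplication:
    applicant_id: str
    wants_human_review: bool  # the opt-out, captured at intake

human_review_queue = []

def score_with_model(app):
    # Stand-in for the vendor's AI underwriting system.
    return "approved by model"

def route(app):
    if app.wants_human_review:
        # Honor the opt-out: no model output is generated or used.
        human_review_queue.append(app)
        return "queued for human underwriter"
    return score_with_model(app)

print(route(LoanApplication("A-100", wants_human_review=True)))
print(route(LoanApplication("A-101", wants_human_review=False)))
```

The hard part isn’t the engineering; it’s creating the legal obligation.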
Still, much of the FTC’s call to action is built on an appeal to ethics. “Hold yourself accountable” has been a principle with mixed success in the presence of billions and trillions of dollars. For instance, the fact that internal AI ethics and safety teams are already being disbanded is not an encouraging sign. Private litigation will fill some of the gap. But if we want the rule of law to survive the advent of AI, we will all have a role to play in holding these companies accountable.
Rebooting AI by Gary Marcus and Ernest Davis; The Road to AI We Can Trust by Gary Marcus. Marcus is a cognitive scientist who is optimistic about the possibilities of AI, but a fair-minded critic of current AI, especially the corners being cut in the rush to commercialize these systems.
Considerations for IRB review of research involving AI by the US Dept. of Health and Human Services. A regulatory angle separate from that of the FTC: under what conditions should informed consent (and other ethical guardrails) be necessary when exposing humans to AI systems? More broadly, what will be the distinction in AI between a “free public beta” and “unregulated human-subjects research”?
Superintelligence: Paths, Dangers, Strategies by Nick Bostrom. Bostrom is a philosopher and one of the earliest voices advocating for caution with AI. Bostrom also invented the now-famous metaphor of the paperclip maximizer.
Better without AI by David Chapman. Chapman has a PhD in artificial intelligence. A sharp critique of the current state of AI research and policy. If you’re wondering what an “AI apocalypse” would entail in practical terms, Chapman argues that it’s a comparatively small step from where we are now.
The Center for AI and Digital Policy has filed an FTC complaint against OpenAI.
The White House has proposed an AI Bill of Rights that includes AI opt-out. That’s good. But it does not recommend mandatory dataset transparency. That’s bad.
From 2016, some similar thoughts from Microsoft CEO Satya Nadella.
National treasure Al Jaffee dies at age 102, after officially retiring from cartooning at age 99. That’s pretty much my dream exit too. Please enjoy my all-time favorite Jaffee cartoon, keeping in mind that part 1 appeared on the right side, so you wouldn’t see part 2 till you turned the page.
The AI Now Institute has released policy proposals for regulating AI. (Also, nice job on the typography and logo.)
The EU is proposing mandatory AI dataset transparency. Of course, this would benefit US residents even without a parallel US rule, because any US company that operates in the EU would have to comply.
A short webcomic by Tom Humberstone on the ethics of the actual Luddites. I am flattered to have been depicted in cartoon form alongside William Morris and Huey Newton.