Will AI obliterate the rule of law?

As a writer and typographic designer—generically, a professional artist—I have objections to generative AI. So do other artists.

These objections are frequently mischaracterized as anti-technological. For instance, a longtime tech pundit has caricatured artists as a herd of gabbling nitwits: “Wait, it works too well. We need to hobble it. Do something!” Straw man constructed, he then derides this view as “reactive”, “misguided”, and a “panic”.

This critique is also encapsulated in the evergreen sick burn “Luddite”, a term that has long been used epithetically to connote someone who resists technological progress out of fear, nostalgia, or intransigence. (Leaving aside the inaccuracy of this use compared to the actual Luddites.)

[Though I am currently co-counsel in two lawsuits challenging generative AI, this piece is about the broader interactions of AI and the law.]

It’s not all unicorns and laser beams

For those who haven’t been self-employed their entire adult life as a professional artist—including decades on the internet—I’m happy to share what it’s like. Your years are full of confronting:

  1. Ripoffs of your work that are deliberate but calculated to be just barely legal, so you tolerate those;

  2. Ripoffs that are illegal but too minor to pursue, so you tolerate those too;

  3. Ripoffs that are illegal and large enough to pursue, but still not worth the opportunity cost, so you tolerate those too; and finally,

  4. Ripoffs that are sufficiently large and illegal to be existential for your business, so you have to interrupt your work and engage at length with an annoying stranger because otherwise the tiny, nerdy, arty territory you occupy is going to be incinerated. The US Supreme Court protects the habitat of the snail darter; it doesn’t protect mine.

No sympathy sought. This is the career I chose. The idea, however, that working artists are—what, riding unicorns across the internet and shooting lasers of intellectual-property law from our fingertips?—is bewildering. Maybe there are a handful of Hollywood studios or European fashion houses for whom that kind of enforcement is worthwhile. The rest of us have to accept the rough & tumble as a cost of doing business. I’m even a motherflipping lawyer, and it’s still true for me. My online font business runs on the honor system because it has to. Enforcement would be financially irrational. The key competitive differentiator of artists is that we can make new things. In most cases, making new things is a better option than the blunt force of legal action.

Urban legends punctured

Let’s also note that deriding professional artists as Luddites is, on the merits, bonkers. Unless you’re Al Jaffee, the tools of your trade are heavily digital, and have been for years.

For instance, to run a typography business, I have to stay conversant in at least four programming languages—Python, JavaScript, Racket, and Swift—and be able to troubleshoot technical problems on macOS, Windows, and Ubuntu Linux. I wrote a publishing system for my websites and a graphical editor for my fonts. Gosh, I’m sure I engineer more software than many software engineers.

Why? For one thing, that’s the gig. For another, working artists who aren’t already well established usually have no choice but to embrace any technological advantage that comes our way, because that’s one of the primary ways we can stay profitable. I’ve been finding ways to automate my work for decades. I’m sure many artists would love to have AI tools—the ethical and legal kind—that let us produce more in less time. I certainly would. The idea that artists are the obstacles to progress is nonsensical.

As is the idea that artists are somehow too soft to compete. Most of us are independent contractors. So there is no minimum wage. There is no healthcare. If you distribute your work on the internet, people often take it for free. Independent artists are economically adaptable and resilient because we have to be. The soft ones wash out after the first year.

Thus, to be clear: the core objection to generative AI is not that it’s fancy technology that artists find intimidating. It isn’t. Or that it will displace certain jobs. That happens during every wave of technological change. Or that market competition is intolerable. Far from it. Rather, the core objection is that so far, many generative AI products are based on massive violations of law. If generative AI companies want to compete against human artists by legal means—they’re welcome to do so. But in many cases, that’s not what they’ve chosen. As a professional artist, I’m not opposed to advancements in technology; I’m opposed to violations of the law.

Unpacking the epithet

So what is the “Luddite” smear really about? Why are so many people so eager to protect wealthy AI companies from, I guess, something less than complete public submission? It seems to be a vehicle for a deeper claim:

  1. An argument that copyright law in the technological age needs at least reform, at most replacement. For example, this view has been advanced by copyright lawyer William Patry in his book How to Fix Copyright. These points have merit.

    But advocates of copyright reform have never addressed where the votes will come from. In the last 50 years, US copyright legislation has moved only toward stronger, longer rights for authors. (There have been some major fair-use rulings during that time, but those have come from the courts, not legislation.) This has not been due to the rising economic and political clout of people like me, but rather that of big corporate IP owners: from the Copyright Act of 1976 through the Digital Millennium Copyright Act and the Copyright Term Extension Act in 1998, aka the Mickey Mouse Protection Act.

  2. But the copyright-reform argument is a special case of a more potent claim: that AI should not be subject to the rule of law at all. Roughly—we like its results so much that no one should be allowed to scrutinize whether it reaches them legally. The ends justify the means. No votes necessary.

On this view, if AI is capable of essentially unlimited upside, then nothing can be tolerated that may obstruct it. All dissent must be silenced. All resistance must be extinguished. All laws must be suspended. Let the golem emerge.

What do you call someone who advocates for the end of the rule of law? An anarchist? Anarchists have made appearances in US history, especially in times of socioeconomic upheaval. But an anarchist seeks to topple those in power. With AI, it is the wealthy & powerful who are seeking to arrogate to themselves further wealth & power by suspending the law. Those who erode the rule of law to sustain their own power are often known by a different name: authoritarians.

But that’s impossible

Suspending the law, some may be surprised to hear, will probably be the easiest part.

In Copyright for Literate Robots (2016), law professor James Grimmelmann observes that copyright law largely “ignores robots”, which has led to unintended consequences:

[T]here is something unsettling about a rule of law that regulates humans and gives robots free rein. Most immediately, it encourages people and businesses to outsource their reading. … This pressure to use robots is indifferent to whether people use robots for good or for ill. …

The paradox goes deeper. By valorizing robotic reading, copyright doctrine denigrates human reading. A transformative fair use test that categorically exempts robots means that a digital humanist can skim a million books with abandon while a humanist who reads a few books closely must pay full freight for hers. Romantic readership therefore discourages the personal engagement with a work that it claims to value. Copyright’s expressive message here—robots good, humans bad—is the exact opposite of the one it means to convey.

The rule of law is the social agreement that we will conduct ourselves according to the law, and that consequences will be imposed on those who do not. It is premised on humans as legally culpable agents. The law interrogates and evaluates human action.

But as Grimmelmann points out, machines often have “free rein” legally. This axiom made sense when a machine was primarily understood as a tool wielded by a human. That understanding has gotten murkier, however, as machines have moved into roles traditionally reserved to human judgment.

Grimmelmann argues that by delegating reading to a legally impervious machine—the “literate robot”—human actors avoid the usual legal scrutiny that would apply to their actions, essentially neutralizing copyright law. He foresees that this will remain a tremendous incentive for humans to “outsource” reading to machines that are not treated as legally culpable agents—even to the point of annihilating human reading altogether.

The inscrutable black box

But the uses of AI already extend well beyond reading. Thus, we could likewise extend Grimmelmann’s argument beyond copyright law. How will human behavior change once every human activity can be delegated to machines—instead of “literate robots”, let’s call them “AI systems”—that are not legally culpable agents?

If AI companies are allowed to market AI systems that are essentially black boxes, they could become the ultimate ends-justify-the-means devices. Before too long, we will not delegate decisions to AI systems because they perform better. Rather, we will delegate decisions to AI systems because they can get away with everything that we can’t. You’ve heard of money laundering? This is human-behavior laundering. At last—plausible deniability for everything.

Moreover, let’s be clear: AI is already in widespread use in recruitment and hiring, health insurance, money lending, advertising, and ramming emergency vehicles. The generative AI tools released in the past year are maybe the first time many consumers have gotten to operate a sophisticated AI device directly. But our digital lives have long been ground up in their gears.

Of course, because the profit margins from breaking the law are pretty good, businesses in competition with each other will face market pressure to adopt similar AI tools. Suppose you’re a health insurer that finds a way to kill off your least profitable customers with AI. Your competitors will tend to do the same. Now imagine this transition happening in every industry in parallel over the next three years.

If we’re confident in the legality of these systems, what is the basis for our confidence? How do we know that, say, an AI that determines eligibility for a loan isn’t blithely violating laws that are designed to ensure fairness in lending? Unlike a human, who might have to create some kind of written record, the AI doesn’t produce a memo explaining each decision with citations. Even AI researchers concede they can’t fully explain how and why their systems work or what they’re capable of, an abyss euphemistically called “emergent behavior”.
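
To make the “no memo” point concrete, here is a minimal sketch of what a black-box decision looks like from the inside. Everything below (the weights, the feature names, the threshold) is invented for illustration; the point is that the output is bare arithmetic over learned numbers, with no rationale attached anywhere:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a "trained" model: two layers of opaque weights.
W1, W2 = rng.normal(size=(4, 16)), rng.normal(size=(16, 1))

def loan_decision(applicant: np.ndarray) -> str:
    """Returns approve/deny; note that nothing here resembles a memo."""
    hidden = np.tanh(applicant @ W1)   # learned features, uninterpretable
    score = (hidden @ W2).item()       # a single number, no reasons attached
    return "approve" if score > 0 else "deny"

# Hypothetical normalized features: income, debt, years employed, age.
print(loan_decision(np.array([0.8, -0.3, 0.5, 0.1])))
```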

When combined with automation bias—that is, the tendency of humans to impute too much credibility to machine outputs—we could end up with something truly novel: technology systems that deserve much higher levels of legal scrutiny (because of the consequentiality of their outputs) but simultaneously resist such scrutiny (because of the opacity of their inputs and reasoning).

Another big problem—again, at least for those of us opposed to violations of the law—is that the deep-learning techniques currently in vogue are fundamentally descriptive (= focused on what humans actually do) whereas law wants to be prescriptive (= focused on what humans should do). Meaning—when a system is trained on mass quantities of existing human data, it necessarily absorbs all the bias and other noxious crud of human existence.
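
A toy demonstration of that point, with fabricated data: fit a standard classifier to biased historical decisions and it dutifully reproduces the bias, because describing the data is all it does. The scenario, feature names, and numbers below are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
group = rng.integers(0, 2, n)        # hypothetical protected attribute
skill = rng.normal(size=n)           # the legitimate signal
# Past human decisions penalized group 1 regardless of skill:
hired = (skill - 1.5 * group + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)
print("learned weights [skill, group]:", model.coef_[0])
# The weight on `group` comes out strongly negative: the model has
# absorbed the historical bias, because describing the data is its job.
```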

The project of steering AI systems toward something other than the worst version of humanity is known as “alignment”. So far the best-known technique is euphemistically called “reinforcement learning from human feedback”, which has entailed—sweet cheese & crackers, I wish I were making this up—hiring low-wage foreign workers to spend hours with the AI, nudging it away from toxic results. The Mechanical Turk, reinvented—what an accomplishment.
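
For the curious, here is a drastically simplified sketch of the idea behind that technique: human raters supply pairwise preferences, a reward model is fit to agree with those preferences, and outputs are then steered toward higher reward. Real systems use large neural networks and policy-gradient training; everything below is a toy stand-in with fabricated data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "outputs" as feature vectors; raters preferred the first of each pair.
preferred = rng.normal(loc=0.5, size=(200, 8))    # outputs raters liked
rejected = rng.normal(loc=-0.5, size=(200, 8))    # outputs raters flagged

# Fit a linear reward model so reward(preferred) > reward(rejected),
# by gradient ascent on the Bradley-Terry log-likelihood.
w = np.zeros(8)
for _ in range(500):
    diff = preferred - rejected
    margin = diff @ w
    grad = ((1 - 1 / (1 + np.exp(-margin)))[:, None] * diff).mean(axis=0)
    w += 0.1 * grad

# At generation time, candidate outputs are re-ranked by learned reward.
candidates = rng.normal(size=(5, 8))
print("reward scores:", np.round(candidates @ w, 2))
print("chosen output:", np.argmax(candidates @ w))
```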

“But if AI is a tool for subverting the rule of law, why bother spending money on alignment?” AI vendors have been asking themselves the same question. In the past nine months, Meta, Twitter, Microsoft, and Amazon have all disbanded their teams devoted to AI ethics and safety, even as they increase their investments in AI elsewhere. Recently, Google launched its Bard chatbot despite internal safety and ethics objections. Draw your own conclusions.

There ought to be a law

As commercial AI arrives, will the US Congress start regulating tech companies? Well, um, has it done so in the last 50 years? No jinxies. But tech stocks represent an outsize portion of the US equity market because they’re a huge driver of earnings growth (and hence, price appreciation). Many government and private pension funds, already underfunded, are counting on the tech sector to help them meet rate-of-return expectations. There is no plan B. The money, we could fairly say, has already been spent.

That’s even before we consider tech’s direct investments in lobbying and other political action. Against that backdrop, will there be sufficient political motivation to crimp AI? I doubt it—unless AI causes some unspinnably major catastrophe, thereby creating a countervailing political imperative for Congress to act.

On the other hand, though the political appetite for tech regulation is maybe higher than it’s ever been, new legislation can often be a case of “be careful what you wish for”. Horses get traded. There’s something to be said for exploring the enforcement possibilities of current law.

A middle path

AI systems made available to the public, or used to make decisions affecting the public, should have some basic level of transparency and accountability to the public. Much like nutrition labels on food: you can’t make informed decisions unless you have the information.
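
As a thought experiment, an AI “nutrition label” could be as simple as a machine-readable disclosure. The fields below are hypothetical, merely a sketch of what such a label might plausibly contain, not a reference to any existing standard:

```python
from dataclasses import dataclass

@dataclass
class ModelDisclosure:
    name: str
    vendor: str
    intended_uses: list[str]
    training_data_sources: list[str]   # provenance: what went into the model
    known_limitations: list[str]       # where the model should not be trusted
    human_review_available: bool       # can a person override the output?

# All values below are invented for illustration.
label = ModelDisclosure(
    name="ExampleLender-1",
    vendor="Example Corp",
    intended_uses=["preliminary loan screening"],
    training_data_sources=["2015-2022 application records (licensed)"],
    known_limitations=["not validated for applicants under 21"],
    human_review_available=True,
)
print(label)
```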

Though AI researchers have historically published papers describing their methods, as market competition heats up, we will likely see the for-profit players retreat to the familiar fortress of solitude. From there, the black boxes will go Vantablack.

Though we do have a National AI Initiative—with a groovy logo—I wouldn’t be surprised if the Federal Trade Commission plays an important early role as the commercial AI market emerges. Looking over its call for “truth, fairness, and equity” in AI, I find a lot to agree with:

  • Start with the right foundation. … If a data set is missing information from particular populations, using that data to build an AI model may yield results that are unfair or inequitable to legally protected groups.

  • Watch out for discriminatory outcomes. … How can you reduce the risk of your company becoming the example of a business whose well-intentioned algorithm perpetuates racial inequity?

  • Embrace transparency and independence. … As your company develops and uses AI, think about ways to embrace transparency and independence—for example, by using transparency frameworks and independent standards, by conducting and publishing the results of independent audits, and by opening your data or source code to outside inspection.

  • Don’t exaggerate what your algorithm can do or whether it can deliver fair or unbiased results. Under the FTC Act, your statements to business customers and consumers alike must be truthful, non-deceptive, and backed up by evidence.

  • Tell the truth about how you use data. In our guidance on AI last year, we advised businesses to be careful about how they get the data that powers their model.

  • Do more good than harm. To put it in the simplest terms, under the FTC Act, a practice is unfair if it causes more harm than good.

  • Hold yourself accountable—or be ready for the FTC to do it for you.

In recognition of a trained AI model’s ability to retain training data indefinitely, the FTC has also pioneered the remedy of algorithmic destruction, which is just what it sounds like—violators have to eradicate their infringing AI models and training data.

In addition to the guidelines above, I would also favor an opt-out or “patdown” rule for AI. By analogy—when you go to the airport, you can opt out of the standard metal detector and ask the TSA to perform a manual patdown. Likewise, I think companies that rely on AI for key functions should be required to offer customers the option to have their request—whether a résumé, a loan application, or a medical record—reviewed by a human instead.
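
In code, the patdown rule could amount to a one-line routing decision. A minimal sketch, with hypothetical names throughout:

```python
from dataclasses import dataclass

@dataclass
class Request:
    payload: str                 # e.g. a resume, loan application, or medical record
    human_review: bool = False   # the customer's opt-out, like asking for a patdown

def ai_score(payload: str) -> str:
    return "approved"            # stand-in for the automated decision

def human_queue(payload: str) -> str:
    return "queued for human review"

def route(req: Request) -> str:
    # The automated lane is the metal detector; opting out guarantees
    # that a human handles the case instead.
    return human_queue(req.payload) if req.human_review else ai_score(req.payload)

print(route(Request("loan application", human_review=True)))
```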

Still, much of the FTC’s call to action is built on an appeal to ethics. “Hold yourself accountable” has been a principle with mixed success in the presence of billions and trillions of dollars. For instance, the fact that internal AI ethics and safety teams are already being disbanded is not an encouraging sign. Private litigation will fill some of the gap. But if we want the rule of law to survive the advent of AI, we will all have a role to play in holding these companies accountable.

Further reading

update, 10 days later

The White House has proposed an AI Bill of Rights that includes AI opt-out. That’s good. But it does not recommend mandatory dataset transparency. That’s bad.

From 2016, some similar thoughts from Microsoft CEO Satya Nadella.

update, 15 days later

National treasure Al Jaffee dies at age 102, after officially retiring from cartooning at age 99. That’s pretty much my dream exit too. Please enjoy my all-time favorite Jaffee cartoon.

update, 19 days later

The AI Now Institute has released policy proposals for regulating AI. (Also, nice job on the typography and logo.)

update, 32 days later

The EU is proposing mandatory AI dataset transparency. Of course, this would benefit US residents even without a parallel US rule, because any US company that operates in the EU would have to comply.

update, 113 days later

A short webcomic by Tom Humberstone on the ethics of the actual Luddites. I am flattered to have been depicted in cartoon form alongside William Morris and Huey Newton.