A viral paper sounds the alarm by abandoning nuance
A paper titled “How AI Destroys Institutions” by two Boston University Law professors, posted to SSRN and forthcoming in the UC Law Journal, has gone viral, with around 17,000 downloads. (The other day it was 9.) Papers in law do well if they get over a few hundred hits. This one was aided in part by Gary Marcus, a well-known gen AI skeptic, boosting it on social media, which is how I came across it.
After reading it, I don’t think it’s a cynical attempt to take an extreme position in the hope of going viral. But the paper is so sweeping, so feverish in its claims, that despite my reluctance to draw further attention to it, I feel compelled to comment.
Sometimes articulating a fear — facing it head on — helps us come to terms with it. The authors, Woodrow Hartzog and Jessica Silbey, point to real dangers here, across many fronts. But their certainty about the doom that lies ahead, about how serious a threat AI poses, leads them to abandon all nuance, to assume that because something is possible in theory, it is likely to occur in practice.
The core (overheated) claim
Democratic life, they argue, is grounded in civic institutions: the rule of law, universities, a free press. They rely on “transparency, cooperation, and accountability.” These, in turn, rest on interpersonal relationships among people with shared “civic goals.”
But:
Unfortunately, the affordances of AI systems extinguish these institutional features at every turn. In this essay, we make one simple point: AI systems are built to function in ways that degrade and are likely to destroy our crucial civic institutions. The affordances of AI systems have the effect of eroding expertise, short-circuiting decision-making, and isolating people from each other. […] In short, current AI systems are a death sentence for civic institutions, and we should treat them as such.
Expanding on the core claim at the outset, the authors write:
To clarify, we are not arguing that AI is a neutral or general purpose tool that can be used to destroy these institutions. Rather, we are arguing that AI’s current core functionality—that is, if it is used according to its design—will progressively exact a toll upon the institutions that support modern democratic life. The more AI is deployed in our existing economic and social systems, the more the institutions will become ossified and delegitimized.
The first part of the paper offers a well-sourced overview of the sociology of civic institutions and their role in democracy. It reads as scholarship.
The second part quickly devolves into a lengthy op-ed masquerading as scholarship.
Analysis or advocacy?
The pattern throughout is to hold up outlying events as evidence of the coming apocalypse. DOGE “will be a textbook example of how the affordances of AI lead to institutional rot.” Human judgment was set aside there in reliance on AI; power was “centralized in an opaque way that encouraged abuse, self-dealing, and oppression.” Yes, but how representative is DOGE?
A few courts are using AI for bail and sentencing, ergo AI is taking over the law. Some hospitals are using AI for triage or insurance decisions, ergo all of medicine is in the process of being co-opted. Many university teachers are using AI to create material, and many students are using it to complete assignments, ergo AI has conquered education and there is nothing we can do about it.
“At stake in the AI takeover of institutions critical to human flourishing are the values of: the rule of law, the pursuit of knowledge, free expression, and democratic, civic life.” The key word here is “takeover.” We’re not witnessing a transformation, an evolution, a period of adjustment, or a negotiation. It’s a takeover. Full stop.
This is the tenor of the entire second half.
The segment on the rule of law notes the importance of “juries and an independent judiciary with appellate review—to assure conformity with democratic rules and equal justice.” It then cites a few examples of AI at the margins (experiments with sentencing, bail, and benefit calculations) only to conclude: “AI’s proliferation in our legal system bodes badly for the future of the rule of law and its practice on which we rely for a peaceful and just society.”
“AI’s proliferation”? In law? Is it really coming for juries and appellate review? Is it really beyond our control?
AI poses analogous threats to higher education. But again, note how sweeping and categorical the framing is:
AI is anathema to the institutional structure of higher education because [of] its affordances: [it] undermines expertise by encouraging cognitive offloading, knowledge ossification, and skill atrophy; short circuits decisionmaking by flattening beneficial hierarchies of authority, sowing distrust, and removing humans from important points of contestation; and isolates humans, depriving institutions of the interpersonal bonds it needs to foster common purpose and adapt to changed circumstances.
Okay, yes, it can do all of the above. And it certainly does do this some of the time. Much of the time? Maybe. But is AI fundamentally “anathema to the institutional structure of higher education”?
There’s more. “The destructive affordances of AI augur havoc for the press. First, the AI slop phenomenon has already devalued and undermined the expertise and legitimacy of trusted outlets and has polluted the public sphere….”
Okay, let’s slow down. AI has already devalued the legitimacy of trusted outlets? Which ones? How badly? (I’m still reading The New York Times, The Atlantic, etc.) The segment ends:
But AI systems rob journalism of authority the less relevant and responsive are its outputs; and AI outputs acculturate readers to expect compliant and copacetic reading. Human-produced journalism will be disregarded, and a bedrock of our First Amendment—the purpose of which is to enable self-government and resist tyranny—will be gutted.
We’re now into full-blown alarmism. Not “could be gutted”, but “will be”. Unless we destroy all the machines, this, we’re told, is what will happen.
The final segment on democracy is fully in op-ed terrain: “If we continue to embrace AI unabated, social capital and norms of reciprocity will abate, and our center—democracy and civil life—will not hold.” …
The more governments and other civic institutions become intertwined with AI systems, the more these systems’ pathologies around expertise, decision-making, and human connection will stunt and decay the institution. Hierarchies of authority within institutions will flatten, lessening opportunities for knowledge development and transmission and ossifying or degrading collective expertise. Humans will be taken out of the loop, depriving the institution of opportunities for contestation.
In a short, perfunctory conclusion, just over a page long, the authors write that “without rules to mitigate AI’s cancerous spread, the only remaining roads lead to institutional dissolution.”
Governing AI through “ethics principles” (consent, risk-management guardrails) won’t work, the authors argue. We need bright-line rules: certain things, like facial recognition surveillance or the bulk sale of personal data, should be outlawed. Beyond this, we should “focus on corporate governance, infrastructure, and systemic and foundational reforms”, but we’re given no specifics.
Turn back the clock?
Hartzog and Silbey concede, then, that we have some agency, but the thrust of their position is clear. We would be better off if the cancer were removed before it spreads.
The service they’ve done for us here is to spell out an opposing view to a common assumption about AI — one that many of us (me included) harbour to maintain our optimism about it. It’s not inherently bad, we tell ourselves. You’re just using it wrong.
On this view, AI is harmful if used without care, understanding, and intention. But if used deliberately and with restraint, it can be a powerful tool for good. For example, just this month the Carnegie Endowment released a report on AI and Democracy finding that “AI poses substantial threats and opportunities for democracy in an important year ahead for global democracy. Despite the threats, AI technologies can also improve representative politics, citizen participation, and governance.”
Pretty much all the other scholarship I’ve come across on AI and democracy attempts to strike a similar balanced view of the prospects, including these three recent books on point and articles like this one.
But for Hartzog and Silbey, AI is inherently and only bad. There is no upside. They rely here on the notion of affordances.
Affordance theory holds that a tool or object has properties that encourage or foster some uses over others. Reading on paper is more conducive to detachment, patience, and reflection. Reading on screens is more conducive to interaction, impatience, and emotion.
As the authors write: “AI systems have essential features that demand specific responses and foreclose other kinds of engagements.” AI, we’re told, “facilitates the displacement of mental and physical labor.” It “acclimates people to their… diminished power,” it “amplif[ies] biases, pollut[es] our information ecosystem”, “ravages the environment,” and hides “normative judgments behind a Wizard-of-Oz-esque curtain that masks engineered calculations.” And so forth.
It is not neutral. It’s not that you’re using it wrong. There is no right way to use it. Any use will, over time, inevitably move us in a certain direction.
The best thing we can do is curtail its use altogether. Cabin it. Shun it. Treat it as nothing more than a cancer or poison, a deeply anti-human technology.
I don’t share this view, and clearly I’m not alone. You don’t have to be a naive AI booster to believe there are good and bad ways to use it, as there are with smartphones or the web generally. And any evidence one might point to of AI (or smartphones) fostering certain harmful effects through inadvertent use wouldn’t disprove the neutrality theory.
No one is calling for the end of smartphones. We’re not going back. What people call for is etiquette, education, and better habits. We’re encountering the same challenge with AI.
It poses not so much a social threat as a challenge.
Yes, there is a lot of AI slop out there, but not nearly as much as people feared after the advent of ChatGPT. The much-feared AI zombie apocalypse, in which most online content would become fake, hasn’t come to pass. More than half of all content on the web may now be AI-generated, but not on trusted venues. If anything, that makes those venues more valuable.
In each of the spheres the authors point to — law, education, journalism, democracy — the advent of generative AI poses challenges, but not a mortal threat. The damage has been limited in each case, despite at least two years of living with very advanced capabilities to generate text, images, and video.
There are still so many things jurists, educators, journalists, and lawmakers can do to confront the challenge of AI and contain the threat it poses. Our vital social institutions will no doubt be transformed in the wake of AI, just as they have been with smartphones and network technology more broadly.
But we’re not helpless here. We can shape the way change unfolds. We can make rules.
It’s just hard to see this at the moment in Canada and the US, where making rules about AI is out of fashion.
