Literary Theory for Robots by Dennis Yi Tenen
Literary Theory for Robots: How Computers Learned to Write
By Dennis Yi Tenen
W. W. Norton, 2024
Literary Theory for Robots, the new book by Columbia English professor and former Microsoft engineer Dennis Yi Tenen, starts with a lie and just goes downhill from there.
The first line of the book is, “Computers love to read.” Both Tenen’s current job and his former one equip him well to know this is wrong. Computers don’t “read” in any sense that word will mean to literally anybody who sees it. Computers don’t “love” to do anything in any sense that word will mean to literally anybody who sees it. Computers don’t love. Computers don’t read. Computers don’t love to read.
Which raises an obvious question right there on Page 1: why would Tenen, who certainly knows better, open his book in such a way?
There are two possible answers: Tenen could be an idiot, or he could be a giggling tech-grifter intentionally ginning up rhetoric and twisting definitions in order to get the dopamine rush of alarming and outraging the marks in the audience (cf. people like Tristan Harris, Daniel Schmachtenberger, etc.) – the real-life tech-bro equivalent of Revenge of the Nerds.
Literary Theory for Robots is about artificial intelligence (AI), “smart” devices, and the deep history of human collaborative learning, which Tenen extends back through the ages of print encyclopedias and codices to Plato and Aristotle. The book is part of the “Norton Shorts” series, which in this case means cramming an entire barnyard of horse poop into about 130 pages. Tenen brings up Ada Lovelace, Charles Babbage, Alan Turing, and a whole cast of other “lovely weirdos” in order to draw a picture of collaborative cognition. No one in this cast is drawn well or particularly accurately, nor is the whole concept of collaborative cognition, since Tenen routinely describes things like print-and-paper books as “smart” technology even though they aren’t, routinely says things like airplanes think and dream even though they don’t, and routinely makes claims that are flatly wrong. Take just one passage among many:
Artificial intelligence thrives in the gap between the average and the exceptional. There’d be no need for calculators if we were all mathematical geniuses. AI was created specifically to make us smarter. Spell-checkers and sentence autocompletion tools make better (at least, more literate) writers.
How much of this is even in the ballpark of being right, much less actually right? What the Hell is the gap between the average and the exceptional? There’d be no calculators if we were all math geniuses? Does Tenen actually think calculators are performing math that only geniuses can do, rather than doing bulk calculations of ordinary math in order to save people time, as they clearly do? Does Tenen actually think spell-checkers and autocompletion make writers more literate, even though such a notion is utterly ridiculous? Or is all of this junk a reflection of the fact that perhaps Tenen is a fast-talking tech-grifter who never reads and doesn’t personally know or want to know anybody outside of Silicon Valley?
Lodged here and there in this book’s skinny length are observations that are thought-provoking. When Tenen describes the very different emergent intelligence of AI, for instance, he urges his readers to see that intelligence as a collection of things: “a talking library, a metaphor, a personification of a chorus.” If we call this “intelligent,” he writes, “it is intelligent in the way of a collective. It remembers as a family does. It thinks like a state. It understands like a corporation.”
But little nuggets like this don’t turn this mess into a goulash. The underlying problem at the heart of Literary Theory for Robots is the irreconcilable conflict between the goal of the book and the goal of its author. The goal of a book like this is to educate the non-specialist reader about the nuances of AI’s interaction with the literary and artistic worlds of human creativity. But the goal of its author is pretty clearly to thrill-frighten a TED talk audience already tipsy on afternoon white wine.
As Tenen regularly implies, his subject is a serious one, more serious than most people realize. But instead of writing a serious book or even a mostly honest one, he’s written (generated?) a “Norton Short” about Boeing aircraft dreaming about Russian folktales while they’re sleeping. At least an AI might have balked at quite this level of snickering condescension.
Steve Donoghue is a founding editor of Open Letters Monthly. His book criticism has appeared in The Washington Post, The American Conservative, The Spectator, The Wall Street Journal, The National, and the Daily Star. He has written regularly for The Boston Globe, the Vineyard Gazette, and the Christian Science Monitor and is the Books editor of Georgia’s Big Canoe News.