Humanities After the Algorithm

In 1959, the novelist and scientist C.P. Snow, in his Rede Lecture “The Two Cultures,” bemoaned that the twin pinnacles of human intellectual activity, the sciences and the arts, were drifting into spheres so separate that never the twain shall meet. To put an ice cube in Snow’s steaming hot take, his argument is at least as old as the public university. When tasked with revitalizing state-funded education after Prussia’s humiliating loss to Napoleon, the brothers von Humboldt, following Immanuel Kant’s critique of academia, shifted pedagogy’s aims from an emphasis on elocution, the schöne Wissenschaften (the beautiful sciences), to more technical skillsets, a move emblematized by unifying academic study under the word Wissenschaft. At various points in the university’s history, different fields have been seen as the ideal mixture of science and art: philology in the mid-19th century, linguistics in the early 20th, and more recently cognitive science.

Robert Rauschenberg, “Open Score”

Rather than seeing these two human activities as diametrically opposed, many have attempted to explore how their field’s modes of inquiry could enter into a more symbiotic relationship with modern technologies. Since the 1960s, staged collaborations between scientists and artists have been a fixture of both corporate and university laboratories, the most famous being Bell Labs’ Experiments in Art and Technology (E.A.T.), founded by Billy Klüver, which featured artists such as John Cage, Yvonne Rainer, and Robert Rauschenberg. In her book How We Became Posthuman, the literary critic N. Katherine Hayles traces how the prehistory of algorithmic culture and its three foundational fields, systems theory (Ludwig von Bertalanffy), cybernetics (Norbert Wiener), and information theory (Claude Shannon), has caused us to constantly redraw the bounds of the humanist subject to the point where its Enlightenment-era basis can no longer support it. The cognitive scientist and comparative literature scholar Douglas R. Hofstadter, a modern renaissance man, has written several books exploring how computers model and embody cognitive processes, most notably his tentacular 1979 study of the interweaving braids in the work of a mathematician, an artist, and a composer, Gödel, Escher, Bach.

More recent stagings of collaboration between the arts and sciences include curators Hans Ulrich Obrist and Simon Castets’s 89plus collaborations with the Google Cultural Institute in Paris and the Transformations of the Human program at the Berggruen Institute, directed by the philosopher Tobias Rees, which places artists and philosophers in both artificial intelligence and biotech research sites. The computer scientist and artist Lev Manovich, as head of the Cultural Analytics Lab, has also developed ways in which art-historical and aesthetic trends, or an artist’s entire output, can be modeled using data visualization.

Lev Manovich, “Mondrian vs. Rothko,” Software Studies Initiative

I would argue that one exemplary digital humanist is the Shakespeare scholar Michael Witmore, who, as director of the Folger Shakespeare Library in Washington, D.C., has seemingly singlehandedly transformed it into one of our country’s most forward-thinking cultural institutions. His scholarly work has been among the most interesting, and to my mind successful, in rearticulating the aims of literary study in light of machine learning and artificial intelligence, particularly his 2008 hands-on work with primitive forms of word embeddings to explore genre. In his 2018 plenary lecture for the Institute of the Humanities and Global Culture, “What Should Humanists Think About in the Age of the Algorithm?,” Witmore provides several demonstrations of his own collaborations with computer scientists and machine learning specialists. In one visualization, Witmore charted the occurrences of the words “if,” “and,” and “but” across the corpus of Shakespeare’s plays and found that doing so produced a statistically significant distribution of the plays along genre lines. Using this visualization one could classify the plays accordingly: plays with fewer occurrences of “if” and “but” and more occurrences of “and” were typically histories, while plays with more occurrences of “if” and “but” were typically comedies. Witmore then extends this data insight into an argument that would be more familiar to a humanities department: if you are telling a story of frustrated courtship, you are staging a drama of conditions (“I would do this, but x”), whereas Elizabethan dramaturgy requires that vast stretches of time, and events like battles and complex court intrigue, be reported rather than staged, so the conjunctive “and” must be employed more.

One example of Michael Witmore’s word usage modeling
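The counting behind this kind of visualization is simple to sketch. The following is a minimal illustration, not Witmore’s actual pipeline: the tokenizer, the rate-per-1,000-words normalization, and the sample text are my own simplifications.

```python
import re
from collections import Counter

def function_word_rates(text, words=("if", "and", "but")):
    """Per-1,000-word rates of selected function words in a play's text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(t for t in tokens if t in words)
    total = len(tokens)
    return {w: 1000 * counts[w] / total for w in words}

# Tiny invented sample standing in for a full play text.
sample = "And if he come, then all is well; but if not, and we wait."
print(function_word_rates(sample))
```

Run over each play in the corpus, these per-play rates become the coordinates of the scatter plots that separate histories from comedies.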

Of course, his lecture’s initial argument noticeably omits Shakespeare’s tragedies. He continues, however, by pinpointing one play in his data so embedded among the comedies that a k-nearest-neighbors model would classify it as a comedy: Othello. Much has been made of the heterogeneous nature of Shakespeare’s tragedies, how part of their cathartic power lies in subverting expectations of genre, but why would Othello have so many occurrences of “if” and “but”? Witmore provides a twofold interpretation. The first is that the initial acts are staged as a comedy in which Iago seems to help Roderigo cuckold Othello. When Roderigo is killed, the play takes on its more tragic dimension, in which Iago hypnotizes Othello into entertaining delusions of betrayal through fantasies articulated as conditional statements. Witmore’s further use of word embeddings to analyze the play helps concretize its racist underpinnings by placing words pertaining to race or nationality along axes of vice/virtue and masculine/feminine, a gesture that moves the study of literature from one wedded to the specific instance to one capable of broader generalizations.
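The k-nearest-neighbors classification of Othello can be sketched in a few lines. The per-play rates below are illustrative placeholders, not Witmore’s measured values; the point is only the mechanics of majority voting among nearest neighbors.

```python
import math
from collections import Counter

# Toy per-play rates (occurrences per 1,000 words): (if+but, and, genre).
# Invented numbers for illustration, NOT Witmore's actual data.
training = [
    (14.0, 22.0, "comedy"),
    (13.5, 23.0, "comedy"),
    (15.2, 21.5, "comedy"),
    (8.0, 31.0, "history"),
    (7.5, 30.0, "history"),
    (8.8, 29.5, "history"),
]

def knn_classify(point, training, k=3):
    """Majority vote among the k training plays nearest to `point`."""
    dists = sorted(
        (math.dist(point, (ib, a)), genre) for ib, a, genre in training
    )
    votes = Counter(genre for _, genre in dists[:k])
    return votes.most_common(1)[0][0]

# A hypothetical Othello-like point: high "if"/"but", low "and" --
# it lands among the comedies despite being a tragedy.
othello_like = (14.5, 22.5)
print(knn_classify(othello_like, training))  # comedy
```

The same mechanism, run over real counts, is what makes Othello’s anomalous position in the comedy cluster a data-visible fact rather than a critic’s hunch.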

Witmore ends his lecture with a set of recommendations for collaborations between humanities scholars and machine learning specialists, first labeling three areas of debate in the humanities that survive the algorithmic turn: labels and descriptions; more nuanced areas of word-mapping, such as context-specificity and cultural connotation in translation; and how examples are employed to unearth bias. For Witmore, humanities scholars can become problem factories for machine learning communities. For example, a data scientist could work on modeling a relationship assumed in the humanities to have developed over the course of the early modern period, whereby private and public life became divided in a way that put more emphasis on one’s inner life than on one’s corporeal belonging and possessions.
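The axis-style word-mapping mentioned earlier, placing words along poles like vice/virtue, can be sketched by subtracting the mean vector of one pole from the other and projecting words onto the result. The three-dimensional vectors below are arbitrary toy values I made up to show the mechanics; real work would use embeddings trained on an early modern corpus.

```python
import numpy as np

# Toy "embeddings" -- arbitrary placeholder vectors, not trained values.
vectors = {
    "honest":   np.array([0.9, 0.1, 0.2]),
    "noble":    np.array([0.8, 0.2, 0.1]),
    "devil":    np.array([-0.9, 0.1, 0.3]),
    "villain":  np.array([-0.8, -0.1, 0.2]),
    "stranger": np.array([0.0, 0.3, 0.4]),  # a word to locate on the axis
}

def axis(positive, negative):
    """Semantic axis: mean of one pole's vectors minus the other's."""
    pos = np.mean([vectors[w] for w in positive], axis=0)
    neg = np.mean([vectors[w] for w in negative], axis=0)
    return pos - neg

def project(word, ax):
    """Signed position of a word along a semantic axis."""
    return float(np.dot(vectors[word], ax) / np.linalg.norm(ax))

virtue_axis = axis(["honest", "noble"], ["devil", "villain"])
for w in ("honest", "villain", "stranger"):
    print(w, round(project(w, virtue_axis), 2))
```

With vectors trained on period texts, where a race- or nationality-marked word falls on such an axis is exactly the kind of generalizable measurement Witmore uses to make a play’s biases legible.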

So much of the evolution of literary studies and its scholarship has concerned how texts can be re-described in ways that critique dominant biases and ideologies that Witmore stages the algorithmic turn as one in which humanities scholars collaborate with machine learning specialists on precisely this problem of re-description. Data visualization becomes one more tool in the humanities scholar’s toolkit, while the computer scientist must model language sets that are forced to account for more nuance.
