Midway through The Political Unconscious, Fredric Jameson writes that the novel cannot be seen as a “finished object whose ‘structure’ one might model and contemplate.” Rather, it is “something that happens to its primary materials,” an “interminable set of operations and programming procedures” performed on the genres, forms, and other materials the novel has historically incorporated into itself (138).

Proponents of surface reading and post-critique tend to treat The Political Unconscious as the poster child for a model of literature as mystifying surface cloaking hidden ideological depths. And yet, as I read the book in its entirety for the first time several months ago, it struck me that another set of figures seemed far more prevalent than the exoskeletons and excavations cited in post-critique. Jameson repeatedly describes the novel in terms of experimentation, programming, coding/decoding/transcoding, and the processing of raw materials. This is not the language of surface and depth, but the language of computation, of the algorithmic transformation of texts.

I’ve been interested for some time in potential intersections between computational and Marxist methods of interpreting literature. Often presented as diametrically opposed, these approaches seem to me to share an aspiration to systematicity and an interest in historicizing the development of literary forms and tastes. This summer, I hope to explore some of these potential overlaps in more depth.

How might work in (e.g.) topic modeling or natural language processing help to reverse-engineer or defamiliarize the “operations” and “programming procedures” that Jameson describes as constitutive of the novel? What would it mean to use computational methods not to reject or supersede the often deliberately unfalsifiable claims of Marxist literary theory, but rather to deepen and thicken them?
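Even something far simpler than topic modeling can hint at what “defamiliarizing” a text computationally might look like. As a toy sketch (emphatically not Jameson’s method, and using two invented snippets rather than real passages), here is a minimal frequency comparison that surfaces the lexical “raw materials” distinguishing two registers:

```python
from collections import Counter
import re

def top_terms(text, stopwords, n=5):
    """Return the n most frequent non-stopword tokens in a text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(t for t in tokens if t not in stopwords)
    return [term for term, _ in counts.most_common(n)]

# Invented snippets standing in for passages from different generic registers.
romance = "the knight rode forth and the knight sought the grail in the forest"
realism = "the clerk counted the money and the clerk filed the ledger at the office"

STOP = {"the", "and", "in", "at", "a", "of"}
print(top_terms(romance, STOP))  # 'knight' ranks first (it occurs twice)
print(top_terms(realism, STOP))  # 'clerk' ranks first
```

A real project would of course swap in an actual corpus and a proper model (LDA, word embeddings, etc.), but even this crude counting makes visible the kind of material, repeatable “operations” on language that Jameson’s metaphors evoke.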

Recently, I’ve been slowly working my way through a reading list that includes canonical works of Marxist criticism and novel theory; pre-digital works of literary criticism that employ quantitative methods of one sort or another; and more contemporary scholarship on distant reading, cultural analytics, machine learning, and the like.

I also hope to spend some time with tutorials and exercises on machine learning and (possibly) natural language processing. Finally, I’ll be experimenting with constructing a corpus of texts that can be used to test out metanarratives of the novel and capitalism by Lukács, Watt, McKeon, and others.
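One small, concrete first step in corpus construction is a metadata manifest that lets texts be sorted and filtered by date, so that claims about the novel’s development can be checked against a chronology. A minimal sketch (the three titles and dates are real, drawn from the eighteenth-century canon Watt discusses, but the manifest format itself is just an illustrative assumption):

```python
import csv
import io

# Sample corpus records: (author, title, year). A full corpus would be
# read from files on disk; these three are stand-ins for illustration.
records = [
    ("Defoe", "Robinson Crusoe", 1719),
    ("Richardson", "Pamela", 1740),
    ("Fielding", "Tom Jones", 1749),
]

def write_manifest(records):
    """Serialize corpus metadata to CSV, sorted chronologically."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["author", "title", "year"])
    writer.writerows(sorted(records, key=lambda r: r[2]))
    return buf.getvalue()

print(write_manifest(records))
```

Keeping metadata in a plain CSV like this makes the corpus easy to inspect, version, and hand off to whatever modeling tools come later.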

I’ll be using this space to document this process. To anyone reading this: I welcome your thoughts, questions, and critiques!