
Work together: New Reading Scenes: On Machine Reading and Reading Machine Learning Research (SFB “Virtuelle Lebenswelten”)
February 28 @ 8:00 – 20:00
Reading has undergone dramatic transformations over the past few decades. Media and literary theorist N. Katherine Hayles has discussed how forms of reading, modes of attention, and even neurological architecture are heavily influenced by the medium of reading—on screen vs. in print—and its media-specific features such as layout, typography, and the presence of hyperlinks (Hayles 2012; 2021). By “machine reading,” Hayles refers to machines’ ability to process vast amounts of text and uncover patterns that would be imperceptible to a human reader. Additionally, the ability to search for keywords in digital texts facilitates a form of “distant reading,” enabling readers to engage with texts in new ways by adopting abstract, visual, quantifying approaches (Moretti 2013; Jänicke et al. 2015).
Recently, literary scholar Julika Griem has proposed analyzing what she calls “reading scenes”: moments in which the practice of reading is explicitly thematized in literary texts and visual media. This media reflexivity allows us to trace the changing forms, valuations, and norms assigned to reading as a cultural practice (Griem 2021). Griem’s approach asks us to attend to the technical, social, and cultural contexts of the practice of reading in addition to its cognitive dimensions.
The emergence of large language models has transformed modes of reading and introduced new forms of attention and valuation. Traditional methods such as “close reading,” which relies on an inquisitive and careful analysis of a short passage—a reading that attends to the formal and rhetorical dimensions of a text—now compete with automated tools that establish the relevance of a text’s components through the statistical weighting of its constitutive elements. These shifts in reading raise a series of critical questions:
- What forms of reading are automatized through machine processing? What cultural, technical, ethical, and economic valuations are encoded into these machine reading scenes? What are the normative implications behind machines’ “interpretation” of what counts in a text and the reduction of texts to containers for information?
- Do close reading and the reading of longer texts, both of which require sustained attention, lose their status as foundational skills to be learned in educational settings? Do reading competencies become superfluous as machines automate the reading process?
- Since computer science literature is a scene in which AI gives an account of its “paradigmatic worldview” (Amoore et al. 2023), what forms of reading might researchers in the humanities and social sciences develop in order to engage with computer science research, which often lies outside their traditional fields of expertise?
- How does machine translation, as the foundational problem of large language models, relate to human translation as a reading-writing practice that accounts for specific temporal, geographical, and affective contexts? What are the implications of reducing these contexts to the “highest likelihood” norm of machine learning?
The workshop is a public event. If you want to participate, please register here.