A blind spot for large language models: Supradiegetic linguistic information


Title: A blind spot for large language models: Supradiegetic linguistic information
Publication Type: Journal Article
Year of Publication: Under Review
Authors: Zimmerman, JW, Hudon, D, Cramer, K, St. Onge, J, Fudolig, MI, Trujillo, MZ, Danforth, CM, Dodds, PS
Abstract:

Large Language Models (LLMs) like ChatGPT reflect profound changes in the field of Artificial
Intelligence, achieving a linguistic fluency that is impressively, even shockingly, human-like. The
extent of their current and potential capabilities is an active area of investigation by no means
limited to scientific researchers. It is common for people to frame the training data for LLMs as
“text” or even “language”. We examine the details of this framing using ideas from several areas,
including linguistics, embodied cognition, cognitive science, mathematics, and history. We propose
that considering what it is like to be an LLM like ChatGPT, as Nagel might have put it, can help us gain insight into its capabilities in general. In particular, its exposure to linguistic training data can be productively reframed as exposure to the diegetic information encoded in language, and its deficits can be reframed as ignorance of extradiegetic information, including supradiegetic linguistic information. Supradiegetic linguistic information consists of those arbitrary aspects of the physical form of language that are not derivable from the one-dimensional relations of context (frequency, adjacency, proximity, co-occurrence) that LLMs like ChatGPT have access to. Roughly speaking, the diegetic portion of a word can be thought of as its function or meaning, the information captured by a theoretical vector in a word embedding, while the supradiegetic portion can be thought of as its form, such as the shapes of its letters or the sounds of its syllables. We use these concepts to investigate why LLMs like ChatGPT have trouble with handling palindromes, describing the visual characteristics of symbols, translating Sumerian cuneiform, and continuing integer sequences.
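The contrast between diegetic and supradiegetic information can be made concrete with a small sketch. The Python below is illustrative only and is not from the paper: the toy corpus, the window size, and the token segmentation and IDs are all invented for this example. It builds context vectors from co-occurrence counts, the kind of one-dimensional relational information the abstract describes, and then shows how a character-level property like being a palindrome disappears once a string is replaced by opaque token IDs.

```python
from collections import Counter, defaultdict

# Toy corpus: on the paper's framing, the "diegetic" information available to
# a language model is exhausted by relations like these: which tokens appear
# near which other tokens, and how often.
corpus = "the cat sat on the mat the dog sat on the rug".split()

def cooccurrence_vectors(tokens, window=2):
    """Count how often each word occurs within `window` positions of each other word."""
    vectors = defaultdict(Counter)
    for i, word in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                vectors[word][tokens[j]] += 1
    return vectors

vectors = cooccurrence_vectors(corpus)
# "cat" and "dog" end up with similar context vectors (both occur near "the",
# "sat", "on"): a crude stand-in for diegetic, embedding-like meaning.
# Note that nothing in these counts records letter shapes or sounds.
print(vectors["cat"])
print(vectors["dog"])

# Supradiegetic information: being a palindrome is a property of the
# character-level form of a string, visible only if you can see characters.
def is_palindrome(word):
    return word == word[::-1]

print(is_palindrome("racecar"))  # True

# A model that consumes opaque token IDs (the segmentation and IDs below are
# invented for illustration) has no direct access to that internal symmetry.
token_ids = {"race": 401, "car": 7}
racecar_as_tokens = [token_ids["race"], token_ids["car"]]
print(racecar_as_tokens, racecar_as_tokens == racecar_as_tokens[::-1])  # [401, 7] False
```

On this picture, the co-occurrence counts play the role of the diegetic relations an LLM is trained on, while the palindrome check depends on exactly the physical, character-level form the paper labels supradiegetic.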

URL: https://arxiv.org/pdf/2306.06794.pdf
Refereed Designation: Refereed
Status: Under Review
Attributable Grant: SOCKS
Grant Year: Year 1
Acknowledged VT EPSCoR: Yes