A Thousand Lines of Nonsense

One thousand sentences, each starting with the word “ai,” were generated by a mathematical predictive model built on teen participants' chat histories. These waves of algorithmically generated text rush toward us like a flood. Do we still care about the meaning of the text? Why do we need so much content in the first place?

Installation, 2024

Digital print and folding chair, 11 inches by 18 feet.

The text in the installation was generated by a Markov chain model built on chat histories collected from the Whose AI? workshops.

As part of the Whose AI? workshop series, young participants, aged 14-20, engaged in an experiment exploring the relationship between data and text-based AI. The experiment began with a casual chat discussing AI and other topics. The chat histories were later collected and compiled into a database of approximately 9,000 words. Using a Markov chain, the artist generated 1,000 “new” lines of text from the database, each beginning with the word “ai.” The resulting text is largely nonsensical.

The Markov chain, an early mathematical model used for prediction, generates new text by identifying probabilities within the original text (in this case, the chat histories). While not nearly as sophisticated as today’s Large Language Models (LLMs), the Markov chain is thought to have inspired the development of contemporary predictive AI models.
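For readers curious about the mechanics, the idea can be sketched in a few lines of Python. This is not the artist's actual code, and the tiny stand-in corpus below is invented for illustration (the workshop chat histories are not reproduced here); it simply shows how a word-level Markov chain learns which words follow which, then random-walks those probabilities to produce "new" lines beginning with "ai."

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that followed it in the corpus.
    Duplicates in the list act as frequency weights."""
    words = text.lower().split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate_line(chain, start="ai", max_words=12, seed=None):
    """Walk the chain from `start`, choosing each next word in
    proportion to how often it followed the current word."""
    rng = random.Random(seed)
    line = [start]
    while len(line) < max_words and chain[line[-1]]:
        line.append(rng.choice(chain[line[-1]]))
    return " ".join(line)

# Invented stand-in corpus, for illustration only.
corpus = "ai is data and ai is text and data is everything we chat about ai"
chain = build_chain(corpus)
print(generate_line(chain, seed=1))
```

Because the model only knows local word-to-word probabilities, every pair of adjacent words in the output did occur somewhere in the source chats, yet the whole line need not mean anything, which is exactly the quality the installation foregrounds.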

In the pursuit of bigger, more powerful AI models that can generate content faster and in greater quantities, there lies a danger of losing sight of the very reason why we produce and consume literature, visual arts, and music—to connect with and appreciate the thoughts, feelings, and creativity of other human beings.