AI and the Priority of Writing: The Rise of Large Language Models (LLMs)
Introduction
The rise of artificial intelligence, particularly large language models (LLMs), challenges long-standing assumptions about language, meaning, and human cognition. Unlike humans, who acquire language through embodied interaction, AI systems develop linguistic competence purely through textual data. This phenomenon invites a reexamination of the role of writing in meaning-making, aligning with Derrida’s concept of arche-writing—the idea that inscription precedes and exceeds speech. The efficiency of AI’s text-based training suggests that writing, rather than spoken language, is the foundation of signification. In this article, we explore how AI’s reliance on written language provides evidence for the primacy of inscription in the structure of meaning.
AI and the Written Word
Large language models like ChatGPT are trained exclusively on vast corpora of written text. They do not acquire language through sensory perception, auditory experience, or direct social engagement. Unlike human infants, who learn through speech and interaction, these models develop linguistic competence by identifying statistical patterns within pre-existing textual data. This process suggests that language can function independently of vocalization and embodied exchange. The fact that such models achieve remarkable linguistic fluency without any access to speech underscores the autonomy of written language, challenging traditional views that privilege orality in language acquisition.
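To make this mode of learning concrete, here is a minimal sketch in Python of the statistical principle involved: the model's only evidence about language is which written tokens follow which. The corpus, whitespace tokenization, and bigram counting are deliberately toy-scale assumptions; real LLMs learn neural next-token predictors over billions of tokens, but the training signal is likewise purely textual.

```python
from collections import Counter, defaultdict
import random

# Toy corpus: the model's entire "world" is written text.
corpus = "the trace precedes the voice . the sign repeats the sign ."

# Tokenize by whitespace; no sound, gesture, or referent is involved.
tokens = corpus.split()

# Count bigram transitions: which written token tends to follow which.
transitions = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    transitions[prev][nxt] += 1

def generate(seed: str, length: int = 8) -> str:
    """Produce new text purely from inscription statistics."""
    out = [seed]
    for _ in range(length):
        followers = transitions.get(out[-1])
        if not followers:
            break
        words, counts = zip(*followers.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("the"))
```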
Derrida and Arche-Writing
Derrida’s theory of arche-writing dismantles the hierarchy that places speech above writing. He argues that writing is not a secondary representation of speech but a fundamental condition of signification. The mode of learning in artificial systems exemplifies this notion, as it bypasses oral communication altogether, operating within a system of differential traces rather than phonetic exchange. The efficiency of computational systems in processing and generating meaning solely through inscription reinforces Derrida’s claim that writing is not merely a supplement to spoken language but the very infrastructure of meaning. This perspective shifts the focus from communication as a human-centered phenomenon to signification as an autonomous process that extends beyond human cognition.
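There is a computational echo of these "differential traces" in the distributional representations that underlie LLMs: a token has no intrinsic content, and its "meaning" is nothing but its position in a space of differences from other tokens. A minimal sketch follows; the three-word vocabulary and the numeric vectors are invented for illustration, not real model weights.

```python
import math

# Toy embedding vectors (invented values): in distributional models a
# token's representation is defined only by its differences from, and
# proximities to, other tokens, not by any link to a referent.
embeddings = {
    "speech":  [0.9, 0.1, 0.3],
    "voice":   [0.8, 0.2, 0.4],
    "writing": [0.1, 0.9, 0.5],
}

def cosine(u, v):
    """Similarity as a relation between traces within the system."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# "Meaning" emerges from the play of differences alone.
print(cosine(embeddings["speech"], embeddings["voice"]))    # high: near traces
print(cosine(embeddings["speech"], embeddings["writing"]))  # lower: distant traces
```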
Implications for Semiotics and Meaning
Traditional semiotics has often assumed that meaning emerges within human communities through social and psychological interpretation. However, artificial intelligence challenges this anthropocentric framework by demonstrating that sign systems can function without human intention or conscious understanding. The ability of these systems to engage in textual production without awareness or comprehension highlights the non-human dimensions of signification. This suggests a post-human semiotics in which meaning is not confined to human cognition but is instead a dynamic interplay of inscription, iteration, and transformation. The success of such systems in generating coherent text further reinforces the idea that signification is processual rather than tied to a conscious interpreter.
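Notably, what counts as "coherent text" inside such a system is itself a statistical score over inscriptions rather than an act of understanding. A minimal sketch, with invented transition probabilities, of how one word sequence can be ranked as more well-formed than another with no interpreter anywhere in the loop:

```python
import math

# Toy next-token probabilities (invented values for illustration).
# "Coherence" here is a number computed over inscriptions; no act of
# comprehension is involved at any point.
prob = {
    ("the", "sign"): 0.6,
    ("sign", "repeats"): 0.5,
    ("repeats", "the"): 0.7,
}

def log_likelihood(tokens):
    """Score a sequence by its transition probabilities alone."""
    total = 0.0
    for prev, nxt in zip(tokens, tokens[1:]):
        total += math.log(prob.get((prev, nxt), 1e-6))  # unseen pairs get a small floor
    return total

coherent = ["the", "sign", "repeats", "the", "sign"]
scrambled = ["sign", "the", "the", "repeats", "sign"]
print(log_likelihood(coherent) > log_likelihood(scrambled))  # True
```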
Conclusion
AI’s reliance on writing as its sole means of linguistic acquisition affirms the primacy of inscription in the structure of meaning. Unlike humans, who learn language through embodied interaction and vocal exchange, AI systems achieve linguistic fluency exclusively through textual analysis. This phenomenon aligns with Derrida’s theory of arche-writing, suggesting that meaning does not originate in speech but in differential traces that precede and exceed human communication. The emergence of AI as a non-human site of signification forces us to reconsider the foundations of semiotics and embrace a broader understanding of meaning—one that is no longer tethered to human consciousness but is instead a dynamic, autonomous process.
Bibliography
Saussure, Ferdinand de. Course in General Linguistics. Edited by Charles Bally and Albert Sechehaye. Translated by Wade Baskin. New York: Philosophical Library, 1959.
Derrida, Jacques. Of Grammatology. Translated by Gayatri Chakravorty Spivak. Baltimore: Johns Hopkins University Press, 1976.
Derrida, Jacques. Writing and Difference. Translated by Alan Bass. Chicago: University of Chicago Press, 1978.
Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. "Attention Is All You Need." Advances in Neural Information Processing Systems 30 (2017).
Kaushal, Ayush, and Kyle Mahowald. "What Do Tokens Know About Their Characters and How Do They Know It?" arXiv preprint arXiv:2206.03406 (2022).
Hochreiter, Sepp, and Jürgen Schmidhuber. "Long Short-Term Memory." Neural Computation 9, no. 8 (1997): 1735–1780.
Bengio, Yoshua, Aaron Courville, and Pascal Vincent. "Representation Learning: A Review and New Perspectives." IEEE Transactions on Pattern Analysis and Machine Intelligence 35, no. 8 (2013): 1798–1828.
Katz, Jonathan, and Yehuda Lindell. Introduction to Modern Cryptography. 2nd ed. Boca Raton: CRC Press, 2014.