Introduction
Large Language Models (LLMs), such as those based on the GPT architecture, represent a significant advancement in artificial intelligence, particularly in the field of natural language processing. These models, trained on vast datasets of human-generated text, can generate coherent and contextually relevant responses, mimicking aspects of human communication. The “art of speaking,” in a linguistic context, refers to the nuanced skill of using language effectively, encompassing elements like syntax, semantics, pragmatics, and rhetoric to convey meaning, persuade, or engage an audience (Crystal, 2008). This essay explores how LLMs illustrate this art by demonstrating capabilities in generating human-like speech, while also highlighting their limitations, which underscore the complexities of genuine human linguistic artistry. From the perspective of a linguistics student, this analysis draws on key concepts in computational linguistics and pragmatics to argue that LLMs serve as both a mirror and a foil to human speaking skills. The discussion will be structured around the models’ handling of linguistic structures, their pragmatic competencies, and the broader implications for understanding language as an art form. Ultimately, this examination reveals that while LLMs excel in simulating speech, they fall short in embodying the creative and intentional essence of human expression.
Large Language Models and Linguistic Structures
At the core of LLMs’ ability to illustrate the art of speaking lies their proficiency in handling linguistic structures, particularly syntax and semantics. Syntax refers to the rules governing sentence structure, while semantics deals with meaning (Fromkin et al., 2018). LLMs, powered by transformer architectures, process and generate text by predicting the most probable next token based on patterns learned from training data (Vaswani et al., 2017). For instance, when prompted with an incomplete sentence, an LLM can complete it in a grammatically correct and semantically coherent manner, much like a skilled speaker improvising during conversation.
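To make the mechanism concrete, the following minimal sketch inspects the probability distribution over candidate next tokens for an incomplete sentence. It uses the small open-source GPT-2 model via the Hugging Face transformers library purely as an illustrative stand-in for the larger models discussed in this essay:

```python
# A minimal sketch of next-token prediction, using the small open-source
# GPT-2 model as an illustrative stand-in for larger LLMs.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# An incomplete sentence, as in the example above.
inputs = tokenizer("The art of speaking lies in", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token

# Softmax turns the scores into a probability distribution over the whole
# vocabulary; generation picks (or samples) from it, one token at a time.
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item()):>12}  p = {p.item():.3f}")
```

Each of the printed candidates is typically a grammatically and semantically plausible continuation, which is precisely the structural fluency described above, achieved without any explicit representation of syntactic rules.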
This capability illustrates the art of speaking by replicating the fluidity and precision that humans employ in constructing utterances. Consider how a model like GPT-3 can generate persuasive arguments or narrative prose, drawing on rhetorical devices such as metaphor or parallelism. In linguistic terms, this mirrors Aristotle’s concept of rhetoric as the art of persuasive speaking, where structure enhances impact (Aristotle, trans. 1991). However, LLMs do not truly “understand” these structures; instead, they rely on statistical correlations. Bender et al. (2021) argue that such models are essentially “stochastic parrots,” repeating patterns without genuine comprehension, which highlights a limitation in their illustration of speaking as an art. Indeed, this parroting can produce eloquent outputs, but it lacks the intentional creativity of a human speaker adapting to novel contexts.
Furthermore, LLMs demonstrate semantic versatility, generating responses that align with implied meanings. For example, in handling polysemy—words with multiple meanings—they select appropriate interpretations based on context, akin to how speakers navigate ambiguity in everyday discourse (Cruse, 2004). A linguistics student might observe that this process illustrates Grice’s cooperative principle, where effective speaking assumes mutual understanding (Grice, 1975). Yet, errors in disambiguation reveal the models’ brittleness; a subtle shift in phrasing can lead to nonsensical outputs, underscoring that the art of speaking involves not just pattern-matching but intuitive judgment. Thus, while LLMs provide a practical demonstration of structural linguistics in action, they also expose the gaps between mechanical generation and artistic human expression.
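This contextual disambiguation can be observed directly in a model’s internal representations. The sketch below compares contextual embeddings of the polysemous word “bank” in different sentences; BERT is an illustrative choice of encoder here, not a model analysed in this essay:

```python
# A minimal sketch of contextual disambiguation, using BERT embeddings
# (an illustrative model choice, not one discussed in the essay).
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embedding_of(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual embedding of `word` within `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return hidden[tokens.index(word)]

river = embedding_of("she sat on the bank of the river.", "bank")
money = embedding_of("he deposited cash at the bank.", "bank")
loan = embedding_of("the bank approved her loan.", "bank")

# The same word receives different vectors in different contexts;
# cosine similarity quantifies how close two uses are.
cos = torch.nn.functional.cosine_similarity
print("river vs money:", cos(river, money, dim=0).item())
print("money vs loan: ", cos(money, loan, dim=0).item())
```

Typically the two financial uses score markedly more similar to each other than either does to the riverbank use, though the exact values depend on the model: the word’s representation is pulled toward one sense or the other by its context alone.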
Pragmatics and Contextual Adaptation in LLMs
Beyond syntax and semantics, the art of speaking is profoundly illustrated through pragmatics, the study of how context influences meaning (Levinson, 1983). LLMs excel in this area by incorporating contextual cues from prompts to generate responses that appear conversationally apt. For instance, when engaged in dialogue, models like ChatGPT can maintain topic coherence, respond to implicatures, and even simulate politeness strategies, reflecting Brown and Levinson’s (1987) theory of politeness in face-to-face interaction. This capability allows LLMs to illustrate the pragmatic artistry of speaking, where speakers adjust language to suit social norms, audience expectations, and situational demands.
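It is worth noting how this contextual aptness arises mechanically: a chat model’s “memory” of a conversation is simply the accumulated text of earlier turns, which conditions every subsequent next-token prediction. The toy sketch below illustrates the idea with GPT-2 and a hypothetical question-answer format; production chat systems use dedicated turn templates and far more capable models:

```python
# A toy sketch of dialogue context conditioning a reply. GPT-2 is an
# open-source stand-in; the Q/A format is an illustrative convention.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Each earlier turn is simply prepended to the prompt, so the model's
# next-token predictions are shaped by the whole dialogue so far.
history = [
    "Q: What is pragmatics?",
    "A: Pragmatics studies how context shapes meaning.",
    "Q: Can you give an everyday example?",
]
prompt = "\n".join(history) + "\nA:"

reply = generator(prompt, max_new_tokens=40, do_sample=False)
print(reply[0]["generated_text"])
```

Nothing in the model persists between turns; coherence emerges solely because the prior turns remain in the conditioning text, a mechanical analogue of the shared conversational record that Grice’s cooperative principle presupposes.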
In educational contexts, such as linguistics tutorials, LLMs can generate examples of speech acts—declaratives, interrogatives, or imperatives—that demonstrate performative language use, as theorised by Austin (1962). A student might use an LLM to simulate a debate, observing how it constructs arguments with hedging phrases like “arguably” or “typically” to soften assertions, mirroring human rhetorical finesse. However, this illustration is limited; LLMs often fail in nuanced pragmatics, such as detecting sarcasm or cultural idioms without explicit cues (Shanahan, 2022). For example, a prompt involving irony might yield a literal response, revealing a lack of true inferential ability. This shortfall emphasises that the art of speaking involves not merely generating text but interpreting unspoken intentions, a skill rooted in human cognition.
Moreover, LLMs’ handling of discourse markers—words like “however,” “therefore,” or “indeed”—further illustrates transitional artistry in speech, facilitating smooth flow in arguments (Schiffrin, 1987). By integrating these elements, models produce essays or speeches that feel natural, providing linguistics students with tools to analyse discourse structure. Yet, as Bender et al. (2021) caution, over-reliance on such models risks perpetuating biases from training data, which can distort pragmatic illustrations. Overall, while LLMs offer a window into pragmatic mechanisms, their sensitivity to context remains superficial, highlighting the deeper, embodied nature of human speaking art.
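A linguistics student can also turn this analysis around and profile the discourse markers in a generated text programmatically, as noted above. The sketch below is plain Python; the marker list is deliberately small and illustrative, not a complete inventory:

```python
# A toy sketch for profiling discourse markers in generated text.
# The marker list is illustrative and far from exhaustive.
import re
from collections import Counter

DISCOURSE_MARKERS = {"however", "therefore", "indeed", "moreover",
                     "furthermore", "thus", "nevertheless"}

def marker_profile(text: str) -> Counter:
    """Count occurrences of common discourse markers in `text`."""
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(w for w in words if w in DISCOURSE_MARKERS)

sample = ("The model performed well. However, it failed on irony. "
          "Therefore, further evaluation is needed. Indeed, context matters.")
print(marker_profile(sample))
# Counter({'however': 1, 'therefore': 1, 'indeed': 1})
```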
Limitations and Implications for the Art of Speaking
Despite their strengths, the limitations of LLMs profoundly illustrate the art of speaking by contrasting mechanical output with human creativity. One key issue is the absence of intentionality; humans speak with purpose, emotion, and originality, whereas LLMs generate based on probability (Searle, 1980). This distinction is evident in creative tasks, where models can produce poetry or stories, but these often lack the innovative spark of human artistry, relying instead on recombining existing patterns.
From a linguistics viewpoint, this limitation underscores Chomsky’s (1957) generative grammar theory, which posits that humans possess an innate capacity for novel sentence creation, something LLMs approximate but do not replicate authentically. For example, while an LLM might generate a sonnet, it does so without personal experience or emotional depth, illustrating that speaking as an art involves subjective expression. Additionally, ethical concerns arise, as models can propagate misinformation or harmful stereotypes, challenging the responsible artistry of speech (Weidinger et al., 2021).
More problematically, LLMs also struggle with long-term coherence in extended discourse, often drifting from their initial themes, which contrasts with a skilled speaker’s ability to maintain narrative control. This reveals the art of speaking as a dynamic, adaptive process, not a static computation. In addressing these complexities, linguistics students can draw on LLMs as case studies to evaluate theories of language acquisition and use, fostering a critical understanding of both artificial and human communication.
Conclusion
In summary, Large Language Models illustrate the art of speaking through their adept handling of linguistic structures, pragmatic adaptation, and discourse generation, providing tangible examples of syntax, semantics, and rhetoric in action. However, their reliance on statistical patterns exposes limitations in intentionality, creativity, and contextual depth, which highlight the uniquely human aspects of linguistic artistry. For linguistics students, this duality offers valuable insights into computational models’ role in studying language, while emphasising the irreplaceable qualities of human speech. Looking forward, as LLMs evolve, they may further bridge the gap between artificial and natural communication, but they will likely continue to serve as illustrations rather than embodiments of the art. This analysis not only underscores the relevance of linguistics in AI but also invites ongoing debate on the boundaries of machine-mediated expression. Ultimately, LLMs remind us that speaking is an art form deeply intertwined with human cognition and culture.
References
- Aristotle. (1991) On Rhetoric: A Theory of Civic Discourse. Translated by G.A. Kennedy. Oxford University Press.
- Austin, J.L. (1962) How to Do Things with Words. Oxford University Press.
- Bender, E.M., Gebru, T., McMillan-Major, A. and Shmitchell, S. (2021) ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’, Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency.
- Brown, P. and Levinson, S.C. (1987) Politeness: Some Universals in Language Usage. Cambridge University Press.
- Chomsky, N. (1957) Syntactic Structures. Mouton.
- Cruse, A. (2004) Meaning in Language: An Introduction to Semantics and Pragmatics. Oxford University Press.
- Crystal, D. (2008) A Dictionary of Linguistics and Phonetics. 6th edn. Blackwell Publishing.
- Fromkin, V., Rodman, R. and Hyams, N. (2018) An Introduction to Language. 11th edn. Cengage Learning.
- Grice, H.P. (1975) ‘Logic and Conversation’, in P. Cole and J.L. Morgan (eds) Syntax and Semantics, Vol. 3: Speech Acts. Academic Press, pp. 41-58.
- Levinson, S.C. (1983) Pragmatics. Cambridge University Press.
- Searle, J.R. (1980) ‘Minds, Brains, and Programs’, Behavioral and Brain Sciences, 3(3), pp. 417-457.
- Schiffrin, D. (1987) Discourse Markers. Cambridge University Press.
- Shanahan, M. (2022) ‘Talking About Large Language Models’, arXiv preprint arXiv:2212.03551. Available at: https://arxiv.org/abs/2212.03551.
- Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł. and Polosukhin, I. (2017) ‘Attention Is All You Need’, Advances in Neural Information Processing Systems, 30.
- Weidinger, L., Mellor, J., Rauh, M., Griffin, C., Uesato, J., Huang, P.-S., Cheng, M., Glaese, M., Balle, B., Kasirzadeh, A., Kenton, Z., Brown, S., Hawkins, W., Stepleton, T., Biles, A., Birhane, A., Haas, J., Rimell, L., Hendricks, L.A., Isaac, W., Legassick, S., Irving, G. and Gabriel, I. (2021) ‘Ethical and social risks of harm from Language Models’, arXiv preprint arXiv:2112.04359. Available at: https://arxiv.org/abs/2112.04359.

