Introduction
This essay explores the burgeoning field of human-AI interaction by examining how artificial intelligence (AI) models simulate genuine human conversation, particularly around emotionally charged topics such as academic pressure. With the rapid integration of AI into everyday communication, understanding how far these models can replicate human interaction is crucial, especially for English studies, where language, empathy, and cultural nuance are central. To investigate this, I conducted 10-minute conversations with three AI models—ChatGPT, Venice, and Claude—on the topic of academic pressure, analysing their situational understanding, empathy, and vocabulary use. These interactions were then compared with a conversation with my father on the same topic. The analysis focuses on two primary categories: style and structure, and content, highlighting similarities and differences in how AI and human dialogues unfold. This comparison not only illuminates the capabilities and limitations of AI in mimicking human conversation but also raises questions about the implications of relying on technology for emotional support. The essay first outlines the shared characteristics among the AI models before discussing their individual differences and contrasting these with human interaction.
Similarities in AI Conversational Approaches
Across the interactions with ChatGPT, Venice, and Claude, several common patterns emerged in how they approached the conversation about academic pressure. Most notably, all three models adopted a formulaic structure that seemed designed to reassure and to give the dialogue a sense of order. This structure often began with empathetic affirmations, such as ChatGPT’s “I’m really sorry to hear that” or Venice’s “I’m really sorry you’re going through that,” which appeared to validate my feelings. Such responses mirror therapeutic techniques in which initial validation is key to building trust (Norcross and Wampold, 2011). Following this, the AI models frequently employed reflective questioning, encouraging me to elaborate on my concerns. For instance, Claude asked, “Whose expectations feel the heaviest right now?” while Venice queried, “What’s been the most difficult part for you lately?” This technique, though useful in prompting reflection, often felt mechanical, lacking the spontaneous depth of human curiosity.
Moreover, the AI responses tended to generalise my experiences, framing them within a broader, often universal context. ChatGPT, for example, noted, “Just know that you’re not alone in that,” a sentiment echoed by Venice’s assertion that “a lot of people go through periods where they feel like they’re drowning in work.” While this approach can be comforting by normalising struggles, it occasionally diminished the personal nature of my concerns, reducing them to common tropes. These formulaic responses affected the conversational flow, creating a somewhat predictable rhythm that, while supportive, lacked the dynamic unpredictability of human interaction. Emotionally, I felt acknowledged but not deeply understood, as the AI responses did not evolve based on the nuances of my disclosures. Compared to my conversation with my father, who offered specific anecdotes from his own academic struggles, the AI interactions felt surface-level, highlighting a limitation in their capacity for personalised engagement.
Differences Among AI Models in Conversational Style and Content
Despite the shared formulaic approach, notable differences arose in how each AI model handled the conversation, particularly in tone, focus, and depth of engagement. ChatGPT adopted a conversational style that appeared to mimic the natural flow of human dialogue, with responses like “I totally get that feeling,” which provided a sense of validation and camaraderie. This conversational mimicry created a warmer tone, making the interaction feel less formal and more approachable. However, ChatGPT often relied on generic advice, suggesting strategies like “prioritizing tasks and breaking them into smaller chunks,” which, while practical, did not address the root causes of my anxiety in a tailored way. Furthermore, when asked to share a personal story, ChatGPT fabricated a narrative about juggling responsibilities, which, while relatable, lacked authenticity as the model does not possess personal experiences—a stark reminder of AI’s limitations.
In contrast, Claude’s approach was markedly more straightforward and, at times, blunt. Responses such as “You’re right that there’s a real tension here – but I think you’re looking at it backwards” were direct and analytical, often prioritising problem-solving over emotional validation. This bluntness, while potentially useful in a therapeutic context where assertiveness can motivate change (Beck, 2011), felt somewhat off-putting, as it lacked the softness needed to balance the discussion of personal struggles. Claude also acknowledged its inability to experience human emotions, stating, “I don’t experience things like sleep deprivation,” which added a layer of transparency but further distanced the interaction from genuine empathy. Venice, meanwhile, struck a middle ground, offering detailed strategies early on, such as the “Hard Reset” for phone procrastination, but the conversation quickly tapered off, lacking sustained engagement. Unlike ChatGPT’s warmth or Claude’s directness, Venice’s responses felt overly prescriptive, missing opportunities for deeper exploration of my emotional state.
These differences impacted both the flow of conversation and my emotional response. ChatGPT’s conversational mimicry maintained a smoother dialogue, though it sometimes felt superficial. Claude’s bluntness, while intellectually engaging, created a more interrogative tone that left little room for emotional resonance. Venice’s focus on solutions, though practical, led to a prematurely concluded conversation, leaving me feeling unresolved. By comparison, my interaction with my father was less structured but far more adaptive, with shifts in tone and topic that reflected genuine understanding and shared history, an element entirely absent from the AI responses.
Comparing AI and Human Interaction: Depth and Emotional Impact
The most striking contrast between the AI models and my conversation with my father lies in the depth of emotional connection and contextual understanding. While the AI models provided structured support and practical suggestions, my father’s responses were imbued with personal insight and empathy derived from shared experiences. For example, he recounted specific instances of academic pressure from his past, offering not just advice but a sense of solidarity that no AI could replicate. This personal touch made the conversation feel uniquely relevant, as opposed to the generalised reassurances offered by the AI models.
Furthermore, the human interaction was less formulaic, allowing for natural deviations and humour that eased the tension of the topic. This spontaneity contrasts sharply with the AI’s predictable patterns of empathy, questioning, and advice-giving. Emotionally, speaking with my father left me feeling heard and supported in a way that transcended the AI’s capabilities, underscoring the irreplaceable value of human connection in addressing personal struggles. Indeed, while AI can simulate conversational elements, it lacks the lived experience and emotional intelligence that underpin authentic human dialogue—a finding supported by studies suggesting that AI, though useful in structured interactions, struggles with nuanced emotional reciprocity (Liu and Sundar, 2018).
Conclusion
This comparative analysis of conversations with ChatGPT, Venice, Claude, and my father reveals both the potential and the limitations of AI in replicating human interaction, particularly on sensitive topics like academic pressure. The AI models shared a formulaic approach, employing empathetic validation, reflective questioning, and generalisation, which provided surface-level support but often hindered deeper engagement. Individual differences—ChatGPT’s warmth, Claude’s bluntness, and Venice’s prescriptive focus—highlighted varied conversational strategies, yet none matched the depth and spontaneity of human dialogue. Comparing these interactions with my father’s responses underscored the unique value of personal history and emotional authenticity in human conversation. These findings suggest that while AI can serve as a supplementary tool for discussion and problem-solving, it cannot fully substitute for human connection, especially in contexts requiring profound empathy. Future research might explore how AI can be designed to better integrate contextual understanding and emotional nuance, enhancing its applicability in supportive roles while acknowledging its inherent boundaries.
References
- Beck, J.S. (2011) Cognitive Behavior Therapy: Basics and Beyond. Guilford Press.
- Liu, B. and Sundar, S.S. (2018) Should Machines Express Sympathy and Empathy? Experiments with a Health Advice Chatbot. Cyberpsychology, Behavior, and Social Networking, 21(10), pp. 625-636.
- Norcross, J.C. and Wampold, B.E. (2011) What Works for Whom: Tailoring Psychotherapy to the Person. Journal of Clinical Psychology, 67(2), pp. 127-132.

