Introduction
This essay examines the generative AI (GenAI) chatbot as an ekphrastic object in educational settings, focusing on scenarios where the learning process breaks down. Drawing on digital sociologist Paula Bialski’s concept of sites “where stuff goes wrong” (Bialski, 2020), the analysis centres on how these tools disrupt traditional pedagogical relationships. GenAI chatbots promise enhanced learning but often produce errors such as superficial understanding and reduced critical thinking. This paper argues that such breakdowns reveal a lack of care and interdependence in modern education, where algorithmic efficiency replaces human interaction. Yet these failures also open opportunities for repair through innovative adaptations, exposing exclusions in design and pointing towards interventions that restore meaningful learning. The discussion is grounded in scholarly discourse on repair and care, including work by Bialski and others, and proceeds in four parts: analysing cognitive bugs, tracing social interdependencies, mapping exclusions via incompatibility, and identifying innovative repairs, ultimately emphasising the value of human-centred interventions.
The “Bugs” in the Machine: Cognitive Offloading and the Illusion of Competence
The generative AI chatbot, as an ekphrastic object, exhibits bugs that surface in students’ interactions, particularly cognitive offloading and the illusion of competence. These errors arise when students rely on the AI to generate responses, warping the learning process so that output is prioritised over genuine understanding. For instance, research on tutoring indicates that learners often develop an inflated sense of mastery yet struggle to explain concepts independently (VanLehn, 2011). This illusion stems from the chatbot’s ability to provide quick, seemingly accurate answers, which discourages deeper cognitive engagement. Bialski (2020) describes such sites of breakdown as moments where technology fails to align with human needs, revealing a fundamental error in the assumption that machines can replicate thoughtful learning.
Furthermore, a key breakdown occurs in error detection: AI systems may present plausible but incorrect information as if it were valid, leading students to accept flawed outputs without verification. Studies show that this erodes metacognitive skills, as learners “offload” critical thinking to the tool, resulting in shallower knowledge retention (Chi et al., 2008). In educational scenarios, this bug highlights a lack of care in the system’s design, which prioritises speed and convenience over accuracy and growth. These warps expose how GenAI disrupts the interdependent relationship between effort and learning, as students come to depend on algorithmic shortcuts rather than building resilience through trial and error. Such errors therefore underscore the need for repair strategies that address these cognitive gaps, transforming potential hindrances into opportunities for enhanced pedagogical care.
Care and Interdependence in the Social World
Breakdowns in GenAI chatbots reveal significant insights into care and interdependence within educational social worlds. When AI substitutes for human mentors, it erodes the relational fabric of learning, which relies on mutual care between teachers and students. Scholarly discourse on care, as articulated by Puig de la Bellacasa (2017), positions it as an ethical practice involving attention and responsibility, often neglected in technology-driven environments. In the case of GenAI, the breakdown manifests as “artificial intimacy,” where chatbots simulate conversation but fail to provide the empathetic support of human interaction, leading to isolation and diminished social learning outcomes (Selwyn, 2019). This substitution highlights a systemic lack of care, as educational processes become commodified, prioritising efficiency over the interdependent labour of knowledge co-creation.
Moreover, these errors expose how interdependence is undermined when students turn to AI amid overburdened educational systems. In high-pressure academic settings, for example, learners may use chatbots to cope with workload, yet this weakens collaborative skills and peer support networks (Russell et al., 2015). Bialski (2020) argues that such sites of failure reveal the social costs of technological over-reliance, where care is displaced by automation. The absence of human oversight in AI interactions can foster a culture of detachment, reducing opportunities for the mentorship that builds resilience and community. However, this revelation points to the potential for repair through fostering interdependence, such as integrating AI as a supplement rather than a replacement, thereby restoring care in social educational dynamics. These breakdowns thus serve as a call to rewire educational practices towards more relational models, emphasising the human elements that technology alone cannot provide.
Incompatibility and Exclusion: Who Does the Tool Include and Exclude?
The incompatibilities inherent in GenAI chatbots demonstrate whom these tools were initially built to include and exclude, often reinforcing existing inequalities in education. Designed primarily for users with access to technology and digital literacy, these systems exclude marginalised groups, such as low-income students or those in under-resourced areas, who may lack reliable internet or devices (Selwyn, 2019). This breakdown in compatibility reveals a design bias towards privileged users: the tool assumes a certain level of technological proficiency and cultural context, thereby excluding diverse learners. For instance, AI models trained on biased datasets may produce culturally insensitive or inaccurate responses, further alienating non-Western or minority students (Noble, 2018). Bialski’s (2020) framework illuminates these sites as points where exclusion becomes visible, showing how repair is tied to addressing systemic inequities.
In terms of academic integrity, incompatibilities arise in honour code cultures, where AI use clashes with expectations of original work, leading to breakdowns such as false accusations of misconduct. Research on educational policy shows how such systems disproportionately affect students from backgrounds unfamiliar with institutional norms, exacerbating exclusion (Williamson, 2017). This reveals that GenAI was initially built for efficiency in standardised settings, including those who conform to tech-savvy, individualistic learning models while excluding collaborative or resource-limited communities. However, these incompatibilities open avenues for intervention, such as adapting tools for inclusivity through localised customisations or policy reforms. Recognising these exclusions endows GenAI with significance as a case study in equitable repair, prompting redesigns that incorporate diverse voices and foster broader interdependence.
Opportunities for Innovation, Adaptation, and Repair
Emerging from these breakdowns are opportunities to innovate, adapt, and repair GenAI in ways that enhance care and interdependence. Scholarly conversations suggest repurposing AI through “hacking” assessment models, such as shifting to oral exams or collaborative projects that demand human explanation, thereby countering cognitive offloading (Molenaar, 2022). This adaptation addresses bugs by encouraging visible accountability, where students must demonstrate understanding beyond AI-generated content. Puig de la Bellacasa (2017) links such repairs to care practices, arguing that interventions like redesigning chatbots to prompt questions rather than answers can rewire the tool for supportive learning.
Additionally, stop-gap measures, such as integrating AI detection tools with ethical guidelines, offer mutations that evolve the educational landscape. For example, institutions could repurpose GenAI for personalised feedback loops that promote critical thinking, turning exclusions into inclusive accommodations (Russell et al., 2015). These innovations reveal the potential for GenAI to become a site of positive intervention, endowed with significance by the care made visible through repair. Bialski (2020) emphasises that embracing breakdowns leads to resilient systems, where adaptations such as community-driven hacks foster interdependence. These opportunities therefore not only mitigate errors but also transform GenAI into a tool worth examining for its role in equitable education.
Conclusion
In summary, the bugs and breakdowns of the generative AI chatbot as an ekphrastic object expose a profound lack of care and interdependence in educational social worlds, while revealing exclusions rooted in design incompatibilities. These issues, grounded in scholarly discussions by Bialski (2020) and others, underscore how algorithmic tools prioritise efficiency over human connection. Yet they also present opportunities for innovation through repairs such as assessment redesigns and inclusive adaptations, endowing GenAI with significance as a catalyst for meaningful change. Ultimately, attending to these sites “where stuff goes wrong” can lead to a more caring, interdependent educational future, with implications for policy and practice that emphasise human-centred interventions. This analysis, written from the perspective of a student exploring repair and care in a writing class, highlights the value of critical engagement with technology’s failures.
References
- Bialski, P. (2020) Unsettling. Palgrave Macmillan.
- Chi, M. T. H., Roy, M., and Hausmann, R. G. M. (2008) Observing tutorial dialogues collaboratively: Insights about human tutoring effectiveness from vicarious learning. Cognitive Science, 32(2), pp. 301-341.
- Molenaar, I. (2022) Towards hybrid human-AI learning technologies. European Journal of Education, 57(4), pp. 632-645.
- Noble, S. U. (2018) Algorithms of oppression: How search engines reinforce racism. New York University Press.
- Puig de la Bellacasa, M. (2017) Matters of care: Speculative ethics in more than human worlds. University of Minnesota Press.
- Russell, S., Dewey, D., and Tegmark, M. (2015) Research priorities for robust and beneficial artificial intelligence. AI Magazine, 36(4), pp. 105-114.
- Selwyn, N. (2019) Should robots replace teachers? AI and the future of education. Polity Press.
- VanLehn, K. (2011) The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems. Educational Psychologist, 46(4), pp. 197-221.
- Williamson, B. (2017) Big data in education: The digital future of learning, policy and practice. SAGE Publications.

