Introduction
Artificial Intelligence (AI) has permeated numerous facets of modern society, from healthcare diagnostics to autonomous vehicles, transforming how we interact with technology. As an English Honours student exploring the intersections of technology, ethics, and language in contemporary discourse, I am particularly drawn to how AI’s integration raises profound ethical questions. This essay examines the ethical concerns and hazardous consequences of AI use, acknowledging its impacts on student learning and the environment, but argues that unreliability stands as the most critical issue. Indeed, AI’s failure in high-stakes scenarios undermines trust and can lead to catastrophic outcomes. Drawing on academic sources, the discussion will outline AI’s prevalence, evaluate secondary concerns, and critically analyse unreliability as the paramount factor, ultimately highlighting implications for societal reliance on such systems.
The Prevalence and Benefits of AI in Society
AI’s integration into society is undeniable, offering efficiencies in fields like medicine and transportation. For instance, machine learning algorithms assist in predicting disease outbreaks, arguably enhancing public health responses (Holmes et al., 2018). However, this prevalence is not without ethical dilemmas. As Crawford (2021) notes in her examination of AI’s societal embedding, these systems often amplify existing biases, raising questions about fairness. From an English studies perspective, the language surrounding AI—terms like “intelligent” or “autonomous”—shapes public perception, sometimes masking underlying flaws. While benefits such as improved data analysis are evident, they must be weighed against potential harms. In short, AI’s broad application invites scrutiny, particularly given its limitations in reliability when deployed in critical applications.
Secondary Concerns: Impact on Learning and the Environment
Among ethical issues, AI’s effect on education is notable, as tools like chatbots can hinder students’ critical thinking by providing ready-made answers. Research indicates that over-reliance on AI for assignments may reduce cognitive engagement, limiting skill development (Luckin et al., 2016). Furthermore, environmental harm arises from AI’s energy demands; training large models consumes vast amounts of electricity, contributing to carbon emissions. Strubell et al. (2019) estimate that training a single large language-processing model can emit as much CO2 as five cars over their lifetimes. These concerns are valid and warrant attention; however, they are arguably manageable through regulation and sustainable practices. In contrast, unreliability poses a more insidious threat, as it directly questions AI’s foundational trustworthiness.
The Critical Issue of AI Unreliability
Unreliability emerges as the most pressing concern because AI systems, despite rigorous training, often fail in unpredictable ways during crucial moments. This is evident in cases like autonomous vehicle accidents, where AI misinterprets environmental cues, in one instance leading to a pedestrian fatality (National Transportation Safety Board, 2018). Amodei et al. (2016) identify such failures as concrete safety problems, noting that systems optimised for their trained objectives can falter under real-world distributional shift, exposing inherent flaws. From a critical viewpoint, this unreliability stems from opaque “black box” algorithms, making errors hard to predict or rectify. Compared to educational or environmental impacts, which can be mitigated through policy, unreliability in high-stakes scenarios like medical diagnostics could result in irreversible harm. For example, AI misdiagnoses have been documented in healthcare, eroding confidence (Topol, 2019). Therefore, questioning AI’s dependability is essential, as it challenges the very premise of deploying such technology in life-critical roles.
Conclusion
In summary, while AI’s prevalence brings benefits, its ethical concerns, particularly diminished learning and environmental damage, are significant yet secondary to unreliability. This core issue, whereby AI cannot be trusted even in the situations it was trained for, demands urgent scrutiny to prevent hazardous consequences. As society advances, the implications include a need for transparent AI development and interdisciplinary dialogue that blends technical and humanistic perspectives. Ultimately, fostering reliable AI requires ethical frameworks that prioritise human oversight, ensuring technology serves rather than endangers us.
References
- Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J. and Mané, D. (2016) Concrete problems in AI safety. arXiv preprint arXiv:1606.06565. Available at: https://arxiv.org/abs/1606.06565.
- Crawford, K. (2021) Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
- Holmes, E. C., Rambaut, A. and Andersen, K. G. (2018) ‘Pandemics: spend on surveillance, not prediction’, Nature, 558(7709), pp. 180-182.
- Luckin, R., Holmes, W., Griffiths, M. and Forcier, L. B. (2016) Intelligence unleashed: An argument for AI in education. Pearson. Available at: https://www.pearson.com/content/dam/one-dot-com/one-dot-com/global/Files/about-pearson/innovation/open-ideas/IntelligenceUnleashed_v15_Web.pdf.
- National Transportation Safety Board (2018) Preliminary report: Highway HWY18MH010. NTSB.
- Strubell, E., Ganesh, A. and McCallum, A. (2019) ‘Energy and policy considerations for deep learning in NLP’, in Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 3645-3650. Available at: https://www.aclweb.org/anthology/P19-1355.pdf.
- Topol, E. J. (2019) ‘High-performance medicine: the convergence of human and artificial intelligence’, Nature Medicine, 25(1), pp. 44-56.

