Introduction
The rapid advancement of artificial intelligence (AI) tools, such as ChatGPT, has sparked intense debate in higher education, particularly regarding their impact on academic integrity. The issue is especially relevant from an engineering student's perspective, since AI is increasingly integral to fields like software development, simulation modelling, and data analysis. This essay argues that universities should not ban AI tools in academic writing but instead integrate them with robust guidelines that enhance learning while preserving integrity. It explores arguments for and against a ban, drawing on evidence from educational research, and concludes with implications for engineering education. By examining these perspectives, the essay highlights the need for balanced adaptation rather than outright prohibition, acknowledging AI's potential benefits and risks in a discipline that relies on technological innovation.
Arguments in Favour of Banning AI Tools
Proponents of banning AI tools in academic writing often cite threats to academic integrity, arguing that such technologies enable plagiarism and undermine genuine learning. In engineering, where assignments frequently involve technical reports, design proposals, or coding tasks, AI can generate content that mimics student work, potentially allowing undetected cheating. For instance, tools built on GPT models can produce coherent essays or even simulate engineering calculations, raising concerns about authenticity (Cotton et al., 2023). This is particularly problematic in assessed work, as it erodes the foundational skills engineers need, such as critical thinking and problem-solving, which are essential for real-world applications like structural analysis or circuit design.
Furthermore, evidence suggests that unregulated AI use can exacerbate inequalities. Students with access to advanced AI might gain unfair advantages while others lag behind, challenging the equity of educational assessment. Rudolph et al. (2023) examine how AI disrupts traditional assessment, noting that it can produce plausible but sometimes inaccurate outputs, which could mislead learners in technical subjects. In engineering contexts, this might manifest as flawed simulation results or erroneous code, potentially fostering a reliance on AI over independent understanding. Supporters of a ban argue that prohibition is necessary to maintain a level playing field and to ensure assessments truly reflect student capabilities, thereby upholding the credibility of qualifications in a profession where errors can have serious consequences, such as in civil or aerospace engineering projects.
Arguments Against Banning AI Tools
However, banning AI tools overlooks their educational benefits and fails to prepare students for an AI-driven future, especially in engineering. Rather than prohibiting these tools, universities could regulate their use to promote ethical integration. AI can assist with brainstorming, editing, or generating initial drafts, allowing students to focus on higher-order skills like analysis and innovation (Perkins, 2023). In engineering education, for example, AI-powered tools like grammar checkers or code assistants (e.g., GitHub Copilot) can enhance efficiency in tasks such as report writing or algorithm development, mirroring professional practices where engineers collaborate with AI systems for complex designs.
Moreover, evidence indicates that with proper guidelines, AI can support learning without compromising integrity. Perkins (2023) discusses how large language models can be leveraged for personalised tutoring, helping students grasp difficult concepts like fluid dynamics or thermodynamics. Banning them might stifle innovation in a field where AI is already transforming practice, as in predictive maintenance or autonomous systems. Instead, institutions could deploy detection tools and educate students on ethical use, fostering responsibility. This approach aligns with engineering ethics codes, which emphasise integrity alongside technological adoption (Royal Academy of Engineering, 2017). Arguably, outright bans could drive usage underground and worsen integrity problems, whereas integration encourages transparency and skill development.
Conclusion
In summary, while concerns about plagiarism and equity lend weight to arguments for banning AI tools in academic writing, their potential benefits for enhancing learning and preparing students for professional realities, particularly in engineering, outweigh these risks. Universities should therefore avoid bans and focus on regulatory frameworks, such as mandatory disclosure of AI use and integration into curricula, to safeguard integrity. This balanced strategy not only addresses the tools' limitations but also equips engineering students with essential skills for an AI-augmented workforce. Ultimately, embracing AI thoughtfully can strengthen academic practice, ensuring graduates are both innovative and ethically grounded.
References
- Cotton, D.R.E., Cotton, P.A. and Shipway, J.R. (2023) Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International.
- Perkins, M. (2023) Academic Integrity considerations of AI Large Language Models in the post-pandemic era: ChatGPT and beyond. Journal of University Teaching & Learning Practice, 20(2), Article 7. Available at: https://ro.uow.edu.au/jutlp/vol20/iss2/07.
- Royal Academy of Engineering (2017) Engineering ethics in practice: A guide for engineers. Royal Academy of Engineering.
- Rudolph, J., Tan, S. and Tan, S. (2023) ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? Journal of Applied Learning and Teaching, 6(1).

