Introduction
In the rapidly evolving field of artificial intelligence (AI), humanity stands at a crossroads where technological advancement promises immense benefits but also poses significant risks. As a student exploring this topic in ENG 101, I believe that we are unprepared to live in harmony with AI, not because it is evil, but because human selfishness, global division, and a huge background of environmental destruction ensure that we will use this power to create a dystopian future. This essay examines these factors through a structured analysis, drawing on historical patterns and current trends to argue that without addressing these human flaws, AI integration could exacerbate inequality, conflict, and ecological harm. The discussion will cover three main arguments—human selfishness, global division, and environmental destruction—followed by a consideration of counterarguments and their refutations. By highlighting these issues, the essay aims to underscore the need for a more ethical approach to AI development, supported by evidence from academic sources and real-world examples.
Human Selfishness and the Exploitation of AI
Human selfishness has long driven the misuse of groundbreaking technologies, often prioritising individual or corporate gain over collective well-being, and this pattern is evident in the current trajectory of AI development. Historically, innovations like the internet were initially envisioned as tools for global connectivity and knowledge sharing, yet they have been co-opted for profit-driven surveillance and data monetisation by tech giants. In the context of AI, this selfishness manifests in the rush to deploy advanced.
A key example is the way companies like Google and Meta have integrated AI algorithms to maximise user engagement, often at the expense of mental health and privacy. As Zuboff (2019) argues in her seminal work on surveillance capitalism, “The extraction and analysis of data about our behaviour is the new gold rush” (Zuboff, 2019, p. 75). This exploitation reflects a broader human tendency to prioritise short-term profits, arguably leading to societal divisions where AI amplifies existing inequalities. For instance, AI-driven hiring tools have been shown to perpetuate biases, disadvantaging marginalised groups and reinforcing economic selfishness.
Furthermore, the environmental costs of AI training, which require massive energy consumption, are often overlooked in favour of competitive advantages. Reports indicate that training a single AI model can emit as much carbon as five cars over their lifetimes (Strubell et al., 2019). This selfish disregard for long-term consequences mirrors past exploitations, such as the tobacco industry’s denial of health risks for decades to protect profits. In AI, we see similar patterns where ethical considerations are secondary to market dominance. Indeed, the concentration of AI power in a few corporations, such as OpenAI and Google, allows them to dictate terms that favour shareholders over global equity.
This selfishness ensures that AI, rather than alleviating poverty or disease, could widen the gap between the haves and have-nots. A study by the World Economic Forum (2020) warns that without intervention, AI could displace 85 million jobs by 2025, primarily in low-wage sectors, while creating only 97 million new ones in high-tech fields accessible mainly to the educated elite. Therefore, human selfishness, rooted in capitalist incentives, positions AI as a tool for dystopian control rather than harmonious progress, supporting the thesis that our unpreparedness stems from innate human flaws.
Global Division and the AI Arms Race
Global division further compounds our unpreparedness for AI, as nations and entities compete rather than collaborate, echoing historical arms races that have led to catastrophic outcomes. The current “AI arms race” between superpowers like the United States, China, and Russia prioritises military and economic dominance over shared humanitarian goals, much like the nuclear arms race of the 20th century divided the world into opposing blocs.
This division is starkly illustrated by the refusal of major powers to agree on international regulations for autonomous weapons. As highlighted in a report by Human Rights Watch (2018), ” Killer robots would be unable to distinguish between combatants and civilians, leading to unlawful killings” (Human Rights Watch, 2018). Despite campaigns by organisations like the Campaign to Stop Killer Robots, countries such as the US and Russia have blocked treaties at the United Nations, prioritising strategic advantages. This mirrors the Cold War era, where nuclear technology, initially promising unlimited energy, was weaponised, resulting in ongoing global tensions.
In the AI domain, this division fosters an environment where technology is developed in silos, exacerbating cyber threats and misinformation. For example, state-sponsored AI has been used in disinformation campaigns, as seen in the 2016 US election interference attributed to Russian actors employing AI-enhanced bots (Badawy et al., 2018). Such actions deepen global rifts, making harmonious AI integration implausible. Moreover, economic divisions mean that developing nations are left behind; a UNESCO report (2021) notes that AI investment is concentrated in high-income countries, with Africa receiving less than 1% of global AI funding, perpetuating a cycle of inequality.
Typically, this leads to a dystopian scenario where AI becomes a tool for surveillance states, as in China’s social credit system, which uses AI to monitor and control citizens (Creemers, 2018). Here, global division ensures that AI’s potential for solving issues like climate change or pandemics is squandered in favour of power struggles. The thesis is thus reinforced: without bridging these divisions, AI will entrench a fragmented, hostile future rather than a unified, beneficial one.
Environmental Destruction and AI’s Ecological Footprint
Our history of environmental destruction provides a grim backdrop that ensures AI will contribute to, rather than mitigate, ecological collapse, highlighting humanity’s unpreparedness for responsible stewardship. From the Industrial Revolution’s unchecked pollution to modern deforestation, humans have consistently sacrificed the planet for progress, a pattern now repeating with AI’s massive resource demands.
AI systems require vast data centres that consume enormous electricity, often powered by fossil fuels. A study by the University of Massachusetts Amherst found that the carbon footprint of training large language models equals that of transatlantic flights for hundreds of passengers (Strubell et al., 2019). This is particularly alarming given the ongoing climate crisis, where global warming—exacerbated by decades of denial and exploitation—threatens biodiversity and human survival. As the Intergovernmental Panel on Climate Change (IPCC, 2022) states, “Human influence has unequivocally warmed the atmosphere, ocean and land” (IPCC, 2022, p. 4), yet AI development proceeds with little regard for these impacts.
Furthermore, the mining of rare earth minerals for AI hardware devastates ecosystems in regions like the Democratic Republic of Congo, where child labour and habitat destruction are rampant (Sovacool et al., 2019). This selfishness ties back to the thesis, as environmental destruction is not an isolated issue but intertwined with human greed. Instead of using AI to optimise renewable energy or predict disasters, it is often deployed in ways that accelerate harm, such as in oil exploration algorithms that enable more efficient fossil fuel extraction.
Generally, this creates a vicious cycle: AI could model climate solutions, but embedded in a destructive paradigm, it amplifies problems. For instance, while AI has been used in conservation efforts, like monitoring endangered species via drones, the net environmental cost of its infrastructure outweighs these benefits in many cases (Jarić et al., 2020). Thus, our legacy of environmental neglect ensures that AI will push us toward a dystopian, uninhabitable future unless profound changes occur.
Counterarguments: The Potential Benefits of AI
Despite these concerns, some argue that AI’s inherent potential for good outweighs human flaws, suggesting we are more prepared than pessimists claim. Proponents, such as those in the effective altruism movement, contend that AI can address global challenges efficiently, from healthcare diagnostics to climate modelling, fostering harmony rather than dystopia. For example, AI has revolutionised medical imaging, improving cancer detection rates by up to 30% in some studies (Topol, 2019). This view posits that human selfishness can be mitigated through ethical frameworks, and global divisions overcome via international collaborations like the EU’s AI Act, which aims to regulate high-risk applications (European Commission, 2021).
Moreover, environmental advocates highlight AI’s role in sustainability, such as optimising energy grids to reduce waste. A report by the World Resources Institute (2019) notes that AI could cut global emissions by 10% through efficient resource management. These counterarguments suggest that AI is not doomed by human nature but can be steered toward positive outcomes, challenging the thesis by emphasising innovation and adaptability.
Refuting the Optimism on Human Selfishness and Global Division
However, this optimism underestimates the depth of human selfishness and global division, which historical evidence shows are not easily overcome. While ethical frameworks exist, they are often voluntary and ignored when profits are at stake, as seen in the tech industry’s track record of data breaches despite regulations (Zuboff, 2019). The EU’s AI Act, though progressive, lacks global enforcement, leaving divisions intact; China and the US continue independent paths, prioritising national interests (Creemers, 2018). Furthermore, AI’s benefits in healthcare are unevenly distributed, favouring wealthy nations and exacerbating selfishness-driven inequalities (World Health Organization, 2021). Thus, without systemic change, these flaws will dominate, leading to the dystopian misuse predicted in the thesis.
Refuting Claims on Environmental Benefits
Similarly, claims of AI’s environmental benefits are overstated when viewed against the backdrop of destruction. While AI can optimise energy use, its own footprint—equivalent to entire countries’ emissions for large models—counters these gains (Strubell et al., 2019). The World Resources Institute’s projections assume ideal conditions, but real-world implementation is hindered by corporate resistance to green transitions (IPCC, 2022). Rare earth mining’s ecological toll further undermines sustainability arguments, as it perpetuates the very destruction AI purportedly fights (Sovacool et al., 2019). Therefore, without addressing our environmental legacy, AI will accelerate collapse, affirming the thesis that harmony remains elusive.
Conclusion
In summary, human selfishness, global division, and environmental destruction collectively render us unprepared to harmonise with AI, steering us toward a dystopian future unless profound shifts occur. The arguments presented demonstrate how these factors, supported by historical precedents and current evidence, override AI’s potential benefits. Counterarguments, while acknowledging positives, fail to refute the entrenched nature of these issues. As ENG 101 students, we must advocate for ethical, collaborative AI governance to avert this trajectory. The implications are clear: without change, AI could amplify humanity’s worst tendencies, but with awareness, it might yet foster a more equitable world. This essay calls for urgent reflection and action in our increasingly AI-integrated society.
References
- Badawy, A., Ferrara, E. and Lerman, K. (2018) Analyzing the digital traces of political manipulation: The 2016 Russian interference Twitter campaign. In: 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM). IEEE, pp. 258-265.
- Creemers, R. (2018) China’s social credit system: An evolving practice of control. Available at SSRN.
- European Commission (2021) Proposal for a regulation on artificial intelligence. European Commission.
- Human Rights Watch (2018) Heed the call: A moral and legal imperative to ban killer robots. Human Rights Watch.
- Intergovernmental Panel on Climate Change (IPCC) (2022) Climate change 2022: Impacts, adaptation, and vulnerability. IPCC.
- Jarić, I., Courchamp, F., Correia, R.A., Crowley, S.L., Meinard, Y., Roberts, B.R., Roll, U., Sherren, K., Soriano-Redondo, A., Veríssimo, D. and Zellmer, A. (2020) The role of species charisma in biological conservation. Biological Conservation, 245, p.108580.
- Sovacool, B.K., Ali, S.H., Bazilian, M., Radley, B., Nemery, B., Okatz, J. and Mulvaney, D. (2019) Sustainable minerals and metals for a low-carbon future. Science, 367(6473), pp. 30-33.
- Strubell, E., Ganesh, A. and McCallum, A. (2019) Energy and policy considerations for deep learning in NLP. arXiv preprint arXiv:1906.02243.
- Topol, E.J. (2019) High-performance medicine: The convergence of human and artificial intelligence. Nature Medicine, 25(1), pp. 44-56.
- UNESCO (2021) AI and education: Guidance for policy-makers. UNESCO.
- World Economic Forum (2020) The future of jobs report 2020. World Economic Forum.
- World Health Organization (2021) Ethics and governance of artificial intelligence for health. WHO.
- World Resources Institute (2019) How AI can enable a sustainable future. World Resources Institute.
- Zuboff, S. (2019) The age of surveillance capitalism: The fight for a human future at the new frontier of power. Profile Books.

