Introduction
In the rapidly evolving landscape of social media, the rise of artificial intelligence (AI) has introduced new challenges to content creation and monetisation. This essay argues that channels relying on AI-generated content should be prohibited from earning revenue on social media platforms, as this practice undermines human creativity, promotes misinformation, and exacerbates economic inequalities among creators. Drawing on academic sources, the discussion will explore the origins and implications of AI in content generation, evaluate counterarguments, and propose regulatory measures. By examining these aspects, the essay highlights the need for ethical guidelines to preserve the integrity of online ecosystems.
The Emergence of AI-Generated Content on Social Media
The integration of AI into social media content creation has transformed how information is produced and disseminated, often at the expense of authenticity. AI tools, such as large language models, can generate text, images, and videos with minimal human input, leading to a surge in automated channels that monetise this content. This phenomenon took shape in the mid-2010s with advances in machine learning, particularly following the development of generative adversarial networks (GANs) by Goodfellow et al. (2014), which enabled realistic synthetic media. As social media platforms like YouTube and TikTok introduced monetisation features, AI-generated content began flooding these spaces, raising concerns about its impact on genuine creators.
The topic centres on the ethical and economic ramifications of allowing such channels to profit. For instance, AI can produce vast quantities of content cheaply, outpacing human efforts and potentially displacing jobs in creative industries (Brynjolfsson and McAfee, 2014). Furthermore, the trend has been accelerated by the democratisation of AI technologies, notably OpenAI's GPT series, the first of which was publicly described in 2018 (Radford et al., 2018). This accessibility has led to an explosion of AI-driven accounts that earn through ads and sponsorships, often without disclosure, thereby deceiving audiences and devaluing original work.
Arguments Against Monetising AI-Generated Content
A primary argument is that permitting monetisation of AI content erodes the value of human labour and creativity. Human creators invest time, skill, and personal insight into their work, fostering authentic connections with audiences. In contrast, AI outputs are derivative, often trained on vast datasets of human-generated material without compensation to original authors, which raises intellectual property issues (Crawford, 2021). This not only discourages innovation but also creates an uneven playing field, where AI channels can scale production effortlessly, flooding markets and reducing visibility for human creators.
Moreover, AI-generated content poses significant risks of misinformation. Algorithms can perpetuate biases embedded in training data, leading to the spread of false narratives on social media. For example, studies show that AI models can amplify societal prejudices if not carefully managed (Bender et al., 2021). Allowing monetisation incentivises the creation of sensational, inaccurate content to maximise engagement, thereby harming public discourse and trust in online information.
Counterarguments suggest that AI enhances efficiency and accessibility, enabling more diverse voices. However, this view overlooks the ethical dilemmas involved: while AI can assist human creators, wholesale replacement for profit undermines fairness. On balance, platforms should prioritise human-centric policies to mitigate these risks.
Potential Solutions and Regulatory Frameworks
Addressing these challenges requires robust regulations. Platforms could implement mandatory disclosure for AI content and restrict monetisation to human-verified channels. Governments, particularly in the UK, might draw on existing frameworks such as the Online Safety Act 2023 to enforce transparency (UK Government, 2023). Such measures would protect creators and audiences, arguably fostering a more equitable digital environment. Problem-solving in this context involves identifying key risks, such as economic displacement, and applying evidence-based policies.
Conclusion
In summary, channels using AI-generated content should not be allowed to monetise on social media, given the threats to creativity, the risks of misinformation, and the economic inequities involved. The rapid advancement of generative AI underscores the urgency of intervention. By enforcing restrictions, platforms can safeguard human ingenuity and promote ethical content ecosystems. This approach not only addresses immediate concerns but also sets precedents for future technological integrations, ensuring social media remains a space for genuine expression. Ultimately, prioritising human over automated content will enhance the overall quality and trustworthiness of online interactions.
References
- Bender, E. M., Gebru, T., McMillan-Major, A. and Shmitchell, S. (2021) On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency.
- Brynjolfsson, E. and McAfee, A. (2014) The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. W.W. Norton & Company.
- Crawford, K. (2021) Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.
- Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A. and Bengio, Y. (2014) Generative Adversarial Nets. In Advances in Neural Information Processing Systems.
- Radford, A., Narasimhan, K., Salimans, T. and Sutskever, I. (2018) Improving Language Understanding by Generative Pre-Training. OpenAI.
- UK Government (2023) Online Safety Act 2023. UK Parliament.

