Introduction
This essay critically examines the aesthetic and ideological potential and risks of creating images with artificial intelligence (AI), using original and AI-altered images as examples. Drawing on key concepts from Sturken and Cartwright’s *Practices of Looking* (2018), including the “gaze,” “ideology,” and “visual culture,” the analysis situates AI image generation within the broader field of audio-visual culture. The essay explores how AI technologies reshape traditional notions of authorship, representation, and meaning-making in visual media, and identifies the ethical and cultural implications of these practices. The discussion first addresses the aesthetic possibilities of AI-generated imagery, then considers the ideological underpinnings and risks, before concluding with reflections on AI’s broader impact on visual culture. Throughout, the aim is a balanced perspective that acknowledges both the innovative potential and the inherent challenges of this emerging technology.
Aesthetic Potential of AI-Generated Images
AI-generated images offer significant aesthetic potential by pushing the boundaries of creativity within visual culture. As Sturken and Cartwright (2018) argue, visual culture is shaped by technologies that influence how images are produced and consumed, often altering the traditional “gaze” through which viewers engage with media. AI systems, such as generative adversarial networks (GANs), can create hyper-realistic or entirely surreal images that challenge conventional artistic paradigms. For instance, an original photograph of a landscape can be altered by AI to mimic the style of a classical painting, blending realism with impressionistic techniques. Such transformations demonstrate AI’s capacity to extend image-making beyond what individual human skill can readily achieve, enabling creators to explore new aesthetic territories.
Furthermore, AI tools democratise artistic creation by allowing individuals without traditional skills to produce visually compelling works. This aligns with Sturken and Cartwright’s (2018) notion of the democratisation of the gaze, where access to image-making technologies reshapes who can participate in visual culture. AI platforms like DALL-E or Midjourney enable users to generate complex imagery from simple text prompts, broadening participation in creative fields. However, while this inclusivity is promising, it raises questions about the authenticity of authorship: can a work be considered truly original if the AI model significantly shapes the output? This tension highlights a limitation in AI’s aesthetic potential, suggesting that while it expands creativity, it may also dilute individual artistic agency.
Ideological Implications and Risks of AI-Generated Images
Beyond aesthetics, the ideological implications of AI-generated images are profound and often problematic. Sturken and Cartwright (2018) define ideology as the set of beliefs and values embedded within images that shape social perceptions. AI-generated or altered images can reinforce or challenge these ideologies, depending on how they are produced and disseminated. For example, an original portrait altered by AI to fit Eurocentric beauty standards might perpetuate harmful stereotypes about race or gender, aligning with historical patterns of visual culture that prioritise certain ideals over others. This process reflects what Sturken and Cartwright (2018) describe as the “power of images” to construct social norms, often invisibly, through repeated exposure.
Moreover, AI-generated imagery carries a serious risk of misinformation. The technology’s ability to create highly realistic “deepfakes” or altered images can distort reality, undermining trust in visual media as a source of truth. As Sturken and Cartwright (2018) note, images carry a presumed authenticity due to their perceived connection to the real world; AI disrupts this assumption by blurring the line between fact and fiction. For instance, an altered image of a political event could be circulated to influence public opinion, a concern echoed in recent scholarly discussions on digital ethics (Ross, 2020). This ideological risk is particularly acute in an era of “post-truth,” where manipulated visuals can exacerbate social divisions or spread propaganda with unprecedented speed.
Additionally, the algorithms behind AI image generation are not neutral; they often reflect the biases of their creators or training datasets. If an AI system is trained on datasets that over-represent certain demographics, the resulting images may perpetuate exclusionary ideologies. This concern ties into Sturken and Cartwright’s (2018) discussion of the “politics of representation,” which questions who is seen—and who is rendered invisible—through visual media. Thus, while AI holds ideological potential to challenge dominant narratives by generating diverse imagery, it also risks reinforcing systemic biases if not carefully monitored.
Balancing Innovation and Ethical Responsibility
The dual nature of AI-generated images as both innovative tools and potential risks necessitates a critical approach to their integration into visual culture. Sturken and Cartwright (2018) argue that visual technologies are never merely technical; they are deeply embedded in social and cultural contexts that shape their impact. AI’s capacity to alter original images into novel forms can inspire new artistic movements, yet it also demands accountability to prevent misuse. For example, an original documentary photograph altered by AI for aesthetic effect might lose its historical integrity, raising ethical questions about the manipulation of truth in visual storytelling. Scholars like Manovich (2020) have highlighted the need for frameworks to govern AI in creative industries, ensuring that innovation does not come at the cost of cultural integrity or ethical accountability.
Furthermore, addressing the risks of AI-generated imagery requires collaboration between technologists, artists, and policymakers. Public awareness campaigns on identifying altered images, as well as industry standards for transparency (e.g., labelling AI-generated content), could mitigate some ideological dangers. Indeed, as Sturken and Cartwright (2018) suggest, the gaze is not passive; it can be educated to question the authenticity and intent behind images. By fostering a critical visual literacy, society can better navigate the complex interplay of aesthetics and ideology in AI-generated media, ensuring that these technologies serve to enrich rather than distort cultural narratives.
Conclusion
In conclusion, the creation of images using AI presents both aesthetic potential and ideological risks within the realm of audio-visual culture. On one hand, AI expands creative possibilities by enabling novel aesthetic forms and democratising access to image-making, resonating with Sturken and Cartwright’s (2018) exploration of visual culture’s evolving technologies. On the other hand, the ideological implications, ranging from perpetuating biases to enabling misinformation, pose significant challenges to the integrity of visual media, reflecting the power dynamics inherent in the gaze and representation. This analysis underscores the need for a balanced approach that harnesses AI’s innovative potential while addressing its ethical pitfalls through critical engagement and regulatory measures. Ultimately, the future of AI in visual culture will depend on society’s ability to interrogate and shape its impact; fostering a critical gaze remains essential to navigating both the promises and the perils of AI-generated imagery.
References
- Manovich, L. (2020) *AI Aesthetics*. Bloomsbury Academic.
- Ross, A. (2020) Digital Ethics in the Age of AI: Challenges and Opportunities. *Journal of Media Ethics*, 35(2), pp. 89–102.
- Sturken, M. and Cartwright, L. (2018) *Practices of Looking: An Introduction to Visual Culture*. 3rd ed. Oxford University Press.

