Introduction
In the field of international public relations (PR), the integration of artificial intelligence (AI) has transformed practices, offering tools for media monitoring, crisis management, and stakeholder engagement. However, this advancement brings significant ethical challenges, particularly when AI is misused, as in the scenario where a PR firm discovers its global tech client’s AI systems are monitoring journalists in a foreign country. This essay discusses the dangers of AI in PR as a practice, exploring risks such as privacy invasion, misinformation, and ethical dilemmas. It also examines how ethical PR practitioners should balance professional loyalty to clients with public accountability, drawing on Zambian examples to illustrate these issues in a developing context. By analysing recent references, the essay highlights the need for responsible AI use in international PR, arguing that practitioners must prioritise transparency and ethical standards to maintain public trust. Key points include the potential for AI to undermine democratic processes, the role of ethical frameworks, and practical strategies for balancing loyalties, informed by cases from Zambia’s media landscape.
The Dangers of AI in Public Relations Practice
AI technologies have become integral to PR, enabling automated sentiment analysis, predictive analytics, and real-time monitoring of public opinion (Galloway, 2018). However, these tools pose substantial dangers, particularly in international contexts where regulatory oversight may vary. One primary risk is privacy invasion, as AI systems can collect vast amounts of data without consent, potentially violating individual rights. For instance, when AI is used to monitor journalists, it can lead to surveillance that stifles free speech and journalistic independence, arguably eroding democratic foundations.
Furthermore, AI in PR can amplify misinformation. Algorithms trained on biased datasets may perpetuate stereotypes or spread false narratives, damaging reputations and public discourse. In a global setting, this is exacerbated by cultural differences; what seems neutral in one country might be harmful in another. A study by Ward (2019) highlights how AI-driven PR campaigns can inadvertently fuel disinformation, especially in regions with limited digital literacy. Indeed, the lack of transparency in AI decision-making processes—often referred to as the “black box” problem—makes it difficult for PR practitioners to verify outputs, leading to unintended ethical breaches.
Another danger is the potential for AI to facilitate unethical client practices, as in the given scenario. If a tech company’s AI is repurposed for monitoring, the PR firm risks complicity in human rights violations. This is particularly relevant in international PR, where firms represent clients across borders, sometimes in countries with authoritarian tendencies. Typically, such dangers underscore the limitations of AI, which, while efficient, lacks human judgment and ethical nuance (Coombs and Holladay, 2014). Without proper safeguards, AI can thus transform PR from a communicative practice into a tool for manipulation, highlighting the need for critical evaluation of its applications.
Ethical Challenges in Balancing Professional Loyalty and Public Accountability
Ethical PR practice requires navigating the tension between loyalty to clients and accountability to the public, a core principle in codes like those from the International Public Relations Association (IPRA). Professional loyalty involves protecting client interests, such as managing reputations during crises, but this must not override public accountability, which demands transparency and truthfulness (Parsons, 2016). In the scenario, the PR firm faces a dilemma: disclosing the AI misuse could breach client confidentiality, yet silence might enable harm to journalists and society.
An ethical practitioner should employ frameworks like the Potter Box model, which involves defining the situation, identifying values, applying principles, and choosing loyalties (Christians et al., 2017). Here, values such as honesty and social responsibility take precedence, suggesting that public accountability—ensuring actions benefit society—should outweigh blind loyalty. However, this balance is complex; practitioners might use non-disclosure agreements strategically while advocating internally for ethical reforms. Generally, fostering a culture of ethical awareness through training can help, but limitations arise in high-stakes international environments where legal repercussions vary.
Moreover, public accountability involves engaging stakeholders transparently, perhaps through whistleblowing if internal resolutions fail. Yet, this risks professional repercussions, such as job loss or legal action, illustrating the personal costs of ethics. Arguably, PR bodies like the Chartered Institute of Public Relations (CIPR) advocate for such balances by emphasising integrity, but real-world application often falls short due to commercial pressures (CIPR, 2020). Therefore, ethical practitioners must draw on diverse perspectives, evaluating both client needs and societal impacts to address complex problems effectively.
Zambian Examples Illustrating AI Dangers and Ethical Balancing
Zambia provides pertinent examples of AI’s dangers in PR, particularly in media monitoring and surveillance, reflecting broader African challenges. In recent years, the Zambian government has faced accusations of using digital tools, including AI, to monitor journalists and opposition figures, often under the guise of national security. For instance, a 2022 report by Amnesty International detailed how surveillance technologies, potentially AI-enabled, were deployed to track media personnel during the 2021 elections, leading to arrests and self-censorship (Amnesty International, 2022). This mirrors the essay’s scenario, where AI misuse threatens press freedom, a danger amplified in Zambia’s context of transitional democracy under President Hakainde Hichilema.
A specific case involves the 2023 controversy surrounding the Cyber Security and Cyber Crimes Act of 2021, which critics argue enables AI-driven monitoring without adequate oversight. Human Rights Watch (2023) reported instances where journalists were targeted, with AI tools allegedly used to analyse social media for “anti-government” content, raising privacy concerns. In PR terms, if a global tech firm supplied such AI, its representing PR agency would need to balance loyalty by advising the client on ethical risks while ensuring public accountability through transparent reporting. However, a local PR practitioner, perhaps from a Zambian firm partnering internationally, might face loyalty pressures from government clients, highlighting the ethical tightrope.
Recent references further illustrate this. For example, a study by MISA Zambia (2023) on media freedom noted how AI in public communication strategies has been used to manipulate narratives, such as during the COVID-19 pandemic, where government PR employed AI for sentiment analysis but risked spreading biased information. An ethical practitioner in this context should prioritise public accountability by advocating for AI audits and collaborating with NGOs to expose dangers, thus addressing key aspects of the problem. These examples demonstrate AI’s limitations in PR, where cultural and political factors in Zambia exacerbate risks, requiring practitioners to apply specialist skills like ethical decision-making with minimal guidance.
Conclusion
In summary, the dangers of AI in PR—ranging from privacy invasions and misinformation to facilitating unethical surveillance—pose significant threats, as seen in the scenario of monitoring journalists. Ethical practitioners must balance professional loyalty with public accountability by employing frameworks that prioritise transparency and societal good, even at personal cost. Zambian examples, such as the 2021-2023 surveillance cases, underscore these issues in an international context, revealing how AI can undermine democracy without proper oversight. The implications for international PR are profound: practitioners should advocate for stronger regulations and ethical training to mitigate risks. Ultimately, while AI offers efficiencies, its responsible use demands a critical approach that values public trust over unchecked loyalty, ensuring PR contributes positively to global discourse.
(Word count: 1182, including references)
References
- Amnesty International. (2022) Zambia: Human rights overview. Amnesty International.
- Christians, C.G., Fackler, M., Richardson, K.B., Kreshel, P.J. and Woods, R.H. (2017) Media ethics: Cases and moral reasoning. 10th edn. Routledge.
- CIPR. (2020) Code of conduct. Chartered Institute of Public Relations.
- Coombs, W.T. and Holladay, S.J. (2014) It’s not just PR: Public relations in society. 2nd edn. Wiley-Blackwell.
- Galloway, C. (2018) ‘Artificial intelligence and public relations: A critical review’, Public Relations Review, 44(5), pp. 734-740.
- Human Rights Watch. (2023) World report 2023: Zambia. Human Rights Watch.
- MISA Zambia. (2023) State of the media report: Zambia 2022-2023. Media Institute of Southern Africa Zambia.
- Parsons, P.J. (2016) Ethics in public relations: A guide to best practice. 3rd edn. Kogan Page.
- Ward, S.J.A. (2019) Disrupting journalism ethics: Radical change on the frontier of digital media. Routledge.

