Introduction
This essay provides an ethical analysis of an AI-assisted resume screening system implemented by a mid-sized technology company for entry-level software engineering positions. Drawing on principles from computing ethics, the analysis explores the scenario where the system reduces hiring time but disproportionately affects certain demographic groups, raising concerns about fairness and transparency. As a student in Ethics in Computer Science, I adopt the role of a software engineer tasked with refining the system while grappling with ethical dilemmas. The essay is structured around key sections: defining the problem, identifying stakeholders and impacts, examining ethical issues, applying professional ethics codes, evaluating options and trade-offs, recommending a course of action, and reflecting on the decision. This approach demonstrates ethical reasoning and professional responsibility, aligning with ABET outcomes in computing systems (ABET, 2023). Ultimately, the analysis justifies a balanced recommendation to modify the system, prioritising fairness alongside efficiency.
Problem Definition
The primary decision in this scenario involves whether to continue refining and justifying the AI system’s use, modify it to address biases, or discontinue it entirely. As a software engineer on the maintenance team, I am responsible for these technical refinements, but the ultimate decision rests with company leadership, informed by engineering input and ethical considerations. However, ethical responsibility extends to all team members, as engineers must advocate for responsible practices (ACM, 2018).
This is fundamentally an ethical problem, not merely a technical one, because it involves potential harm to individuals and society through biased outcomes. Technically, the system functions as intended by scoring resumes based on historical data, but ethically, it perpetuates discrimination without explicit intent. For instance, if training data reflects past hiring biases—such as favouring candidates from privileged backgrounds—the AI may inadvertently filter out qualified applicants from underrepresented groups (Raghavan et al., 2020). This raises questions of justice and accountability, transcending code optimisation to encompass moral implications like equity in employment opportunities.
Stakeholders and Impacts
Several stakeholders are affected by the AI resume screening system. The primary end users are job applicants to entry-level software engineering positions, who rely on fair evaluation for career advancement. Company leadership benefits from cost savings and faster hiring but faces long-term risks such as reputational damage from bias allegations. The engineering team, including myself, experiences professional conflict between efficiency demands and ethical duties. The AI vendor gains financially but risks liability if the system is deemed unfair. Broader society is also a stakeholder: underrepresented demographic groups (e.g., women and ethnic minorities) are harmed when the system reinforces systemic inequality.
Short-term impacts include reduced hiring times for the company, enabling quicker talent acquisition, which is advantageous amid competitive pressures. However, applicants from marginalised groups may face immediate rejection, leading to lost opportunities and discouragement. Long-term, this could exacerbate workforce diversity gaps, hindering innovation in tech (Bogen and Rieke, 2018). For the engineering team, ongoing involvement might cause moral distress, while society could see widened economic disparities if AI perpetuates historical biases. Indeed, studies show such systems can entrench inequality over time, affecting social mobility (Kleinberg et al., 2018).
Ethical Issues
Key ethical concerns include fairness, transparency, and accountability. Fairness is compromised because the system, trained on historical data, indirectly discriminates by scoring based on patterns that correlate with demographics, even without explicit attributes. For example, resumes from candidates with non-traditional education paths might score lower if past hires favoured elite institutions, disproportionately affecting minority groups (Raghavan et al., 2020). Transparency is lacking as the model’s opacity makes it hard to explain rejections, leaving applicants without recourse and fostering distrust.
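To illustrate the mechanism, consider the following Python sketch (entirely invented data; the "elite_school" flag is a hypothetical proxy). A model that is never shown the protected attribute still scores the two groups differently because the proxy carries the historical bias:

```python
# Illustrative sketch with entirely invented data: a screener that never
# sees the protected attribute still reproduces a group disparity through
# a correlated proxy feature (a hypothetical "elite_school" flag).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)  # protected attribute (never given to model)
# The proxy correlates with group membership...
elite = (rng.random(n) < np.where(group == 1, 0.6, 0.2)).astype(int)
# ...and past hiring labels reward the proxy.
hired = (rng.random(n) < 0.3 + 0.4 * elite).astype(int)

model = LogisticRegression().fit(elite.reshape(-1, 1), hired)
scores = model.predict_proba(elite.reshape(-1, 1))[:, 1]

for g in (0, 1):
    print(f"group {g}: mean score {scores[group == g].mean():.2f}")
```

The disparity survives because removing the protected attribute does not remove its correlates.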
These issues arise from reliance on biased training data and black-box algorithms, which prioritise predictive accuracy over interpretability. A central ethical trade-off is between fairness and efficiency: the system cuts costs and time (efficiency), but at the expense of equitable access (fairness). This trade-off is evident in the vendor's claim of statistical fairness, which may mask disparate impacts on protected groups, since different fairness metrics, such as equal opportunity and demographic parity, can be mutually incompatible (Kleinberg et al., 2018). Furthermore, the company's reluctance to change due to competitive pressures highlights how economic incentives can override ethical considerations, potentially leading to harms such as reduced diversity in tech roles.
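The metric conflict can be made concrete with a toy example. The sketch below, using invented outcomes for two hypothetical groups, shows a screener that satisfies equal opportunity while violating demographic parity:

```python
# Toy illustration of conflicting fairness metrics; all outcomes invented.
def selection_rate(decisions):
    """Fraction of applicants the screener advances."""
    return sum(decisions) / len(decisions)

def true_positive_rate(decisions, qualified):
    """Among genuinely qualified applicants, the fraction advanced."""
    advanced = [d for d, q in zip(decisions, qualified) if q]
    return sum(advanced) / len(advanced)

# Hypothetical outcomes for two demographic groups (1 = advanced).
a_decisions = [1, 1, 1, 0, 1, 0, 1, 1]
a_qualified = [1, 1, 1, 0, 1, 0, 1, 0]
b_decisions = [1, 0, 0, 1, 0, 0, 1, 0]
b_qualified = [1, 0, 0, 1, 0, 0, 1, 0]

# Demographic parity compares raw selection rates: 0.75 vs 0.375.
print(selection_rate(a_decisions), selection_rate(b_decisions))
# Equal opportunity compares true positive rates: 1.0 vs 1.0.
print(true_positive_rate(a_decisions, a_qualified),
      true_positive_rate(b_decisions, b_qualified))
```

Every qualified applicant in both groups is advanced, yet the raw selection rates diverge sharply; a blanket claim of "statistical fairness" can conceal exactly this ambiguity.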
Professional Ethics
The ACM Code of Ethics and IEEE Code of Ethics provide relevant principles. Principle 1.2 of the ACM Code (2018), "Avoid harm", directs computing professionals to minimise negative consequences such as discrimination. This applies here because the AI system's biases could harm applicants by denying them fair opportunities, obligating engineers to address those biases rather than merely refine the tool.
Similarly, IEEE Code of Ethics Principle 1 requires professionals to “hold paramount the safety, health, and welfare of the public” (IEEE, 2020). In this case, public welfare includes equitable employment, so continuing an unfair system violates this by prioritising corporate efficiency over societal well-being. Both codes underscore professional responsibility: engineers must not only implement but also evaluate systems’ broader impacts, justifying intervention in biased AI (ACM, 2018; IEEE, 2020).
Options and Trade-offs
Three realistic courses of action exist: continue the system as is, modify it, or remove it entirely.
Continuing with only minor refinements, such as updating scoring thresholds, preserves the status quo. Benefits include sustained efficiency and cost savings, aligning with leadership's priorities. However, risks include ongoing bias, potential legal challenges under equality law (e.g., the UK's Equality Act 2010), and violations of the ACM and IEEE codes. The trade-off is short-term gains versus long-term harm to fairness.
Modifying the system could involve debiasing techniques, such as auditing training data for diversity or incorporating explainable AI models. This offers benefits such as improved fairness and transparency, potentially increasing diversity in hires (Raghavan et al., 2020). Risks include higher implementation cost and time, and no guarantee that bias can be fully eliminated. The trade-off balances efficiency losses against ethical gains, fostering accountability.
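As a minimal sketch of one such audit step (with invented group labels and counts), the following computes the disparate impact ratio and flags results below the "four-fifths" heuristic familiar from US employment-selection guidance:

```python
# Minimal sketch of one audit step (hypothetical group labels and counts):
# the disparate impact ratio, flagged against the "four-fifths" heuristic
# used in US employment-selection guidance.
from collections import Counter

def disparate_impact(outcomes):
    """outcomes: iterable of (group, advanced) pairs.
    Returns (min rate / max rate, per-group selection rates)."""
    advanced, totals = Counter(), Counter()
    for group, passed in outcomes:
        totals[group] += 1
        advanced[group] += int(passed)
    rates = {g: advanced[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

# Invented historical screening decisions.
history = ([("group_a", True)] * 60 + [("group_a", False)] * 40
           + [("group_b", True)] * 30 + [("group_b", False)] * 70)

ratio, rates = disparate_impact(history)
print(rates)  # {'group_a': 0.6, 'group_b': 0.3}
if ratio < 0.8:  # four-fifths rule of thumb
    print(f"Flag for review: disparate impact ratio {ratio:.2f}")
```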
Removing the system reverts to manual screening, ensuring human oversight but increasing costs and time. Benefits include eliminating AI-driven bias and upholding ethical principles. However, removal carries competitive disadvantages and would likely meet resistance from leadership, and it overlooks the possibility that a hybrid human-AI approach could mitigate some of these issues.
Justified Recommendation
I recommend modifying the system rather than continuing or removing it. This involves conducting a bias audit, diversifying training data, and integrating interpretable models to enhance transparency. Justification stems from ethical reasoning: it addresses harm avoidance (ACM Principle 1.2) and public welfare (IEEE Principle 1) by mitigating biases without discarding efficiency benefits (ACM, 2018; IEEE, 2020). Compared to continuation, modification prevents perpetuating inequality, as evidence shows debiased algorithms can reduce disparate impacts (Raghavan et al., 2020). Unlike removal, it is realistic given cost pressures, allowing gradual improvements. This choice is preferable under uncertainty, as it promotes accountability while acknowledging that perfect fairness is challenging, thus aligning with professional standards in computing ethics.
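As a hedged illustration of the recommendation's "interpretable models" element (feature names and data are invented), a transparent scorer such as logistic regression exposes weights that an audit or appeal process can actually examine:

```python
# Hedged sketch of the "interpretable models" element: a transparent
# logistic regression whose per-feature weights can be inspected when
# explaining a decision. Feature names and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["years_experience", "relevant_projects", "referral_flag"]

# Tiny hypothetical training set: one row per applicant.
X = np.array([[0, 3, 0], [1, 1, 1], [2, 4, 0], [0, 0, 1],
              [3, 2, 0], [1, 5, 1], [2, 0, 0], [0, 2, 1]])
y = np.array([1, 0, 1, 0, 1, 1, 0, 0])  # 1 = advanced by human review

model = LogisticRegression().fit(X, y)

# Each coefficient states how a feature shifts the log-odds of advancing,
# giving audits and appeals something concrete to examine.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```

Such a model does not by itself remove bias, but it makes the basis of each decision inspectable, supporting the transparency obligation identified above.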
Reflection
In my decision, I prioritised values of fairness and accountability, viewing them as foundational to ethical computing over pure efficiency. This reflects a commitment to societal impact, informed by studies on AI biases (Bogen and Rieke, 2018). Uncertainties remain, such as the effectiveness of debiasing in fully eliminating indirect discrimination, and risks include incomplete mitigation leading to ongoing harm. Therefore, ongoing monitoring and stakeholder input are essential to navigate these complexities.
Conclusion
This ethical analysis highlights the tensions in AI-assisted resume screening, from biased outcomes to professional duties. By defining the problem, assessing stakeholders, and applying codes like ACM and IEEE, the essay justifies modifying the system as a balanced approach. Ultimately, this fosters responsible AI use, with implications for broader computing ethics: engineers must proactively address biases to ensure technology serves society equitably. Such reasoning underscores the need for ethical vigilance in an era of pervasive AI.
References
- ABET. (2023) Criteria for Accrediting Computing Programs, 2023-2024. ABET.
- ACM. (2018) ACM Code of Ethics and Professional Conduct. Association for Computing Machinery.
- Bogen, M. and Rieke, A. (2018) Help Wanted: An Examination of Hiring Algorithms, Equity, and Bias. Upturn.
- IEEE. (2020) IEEE Code of Ethics. Institute of Electrical and Electronics Engineers.
- Kleinberg, J., Ludwig, J., Mullainathan, S. and Sunstein, C.R. (2018) Discrimination in the Age of Algorithms. Journal of Legal Analysis, 10, pp. 113-174.
- Raghavan, M., Barocas, S., Kleinberg, J. and Levy, K. (2020) Mitigating bias in algorithmic hiring: evaluating claims and practices. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 469-481.

