Introduction
This essay critically examines the use of AI chatbots in healthcare, drawing on the perspectives presented in a New York Times article by Rosenbluth and Astor (2025). Informed by Paul and Elder’s critical thinking framework, which emphasizes identifying stakeholder perspectives and evaluating arguments through reason and evidence, the analysis focuses on whose viewpoints are emphasized and whose are excluded in the narrative surrounding AI-driven health advice. Writing as a computer science student, I approach the topic from a technical and ethical standpoint, considering the implications of AI systems in a domain traditionally dominated by human expertise. The discussion explores the dominant perspectives of patients and select medical professionals, identifies excluded voices such as technologists and regulatory bodies, and evaluates how their inclusion might alter understanding of this complex issue. Ultimately, the essay aims to provide a balanced reflection on the opportunities and challenges posed by AI chatbots in healthcare, grounded in logical argumentation and evidence-based analysis.
Dominant Perspectives in the Narrative
The New York Times article primarily emphasizes the perspectives of patients who have turned to AI chatbots like ChatGPT for health advice, often out of frustration with the traditional medical system. Individuals such as Wendy Goldberg and Jennifer Tucker express disillusionment with the lack of personalized care, long waiting times, and perceived dismissive attitudes from healthcare providers (Rosenbluth and Astor, 2025). Their stories highlight the appeal of chatbots, which offer immediate, empathetic, and seemingly authoritative responses at little to no cost. This patient-centric narrative underscores a growing reliance on AI tools, with survey data indicating that one in six American adults uses chatbots for health information at least monthly, a proportion that rises to a quarter among adults under 30 (Rosenbluth and Astor, 2025).
Additionally, the article includes insights from select medical professionals, such as Dr. Robert Wachter and Dr. Adam Rodman, who acknowledge systemic healthcare failings—such as limited access to specialists and insufficient consultation time—as drivers of chatbot use. Dr. Wachter’s comment that “if the system worked, the need for these tools would be far less” encapsulates a critical view of healthcare inefficiencies (Rosenbluth and Astor, 2025). However, these professionals also express concern about the accuracy and reliability of AI advice, pointing to risks such as misdiagnosis and patient over-reliance on unverified information.
From a computer science perspective, it is noteworthy that the article frames AI chatbots as both a solution and a problem. The empathetic tone and accessibility of chatbots are presented as technological advantages, yet their propensity for errors raises questions about the underlying algorithms, the quality of their training data, and the ethics of their design. While these patient and medical perspectives dominate, they provide only a partial view of the issue, leaving significant gaps in the discourse.
Excluded Perspectives and Their Importance
A critical application of Paul and Elder’s framework reveals that several key stakeholder perspectives are absent from the narrative, including those of AI developers, computer scientists, and regulatory bodies. Firstly, the voices of technologists and AI developers—those responsible for creating and maintaining chatbots like ChatGPT—are largely missing. The article includes statements from OpenAI and Microsoft representatives, who stress that their tools are not substitutes for medical advice, but it lacks deeper insight into the technical challenges of ensuring accuracy in health-related responses (Rosenbluth and Astor, 2025). As a computer science student, I recognize that chatbot outputs are shaped by machine learning models trained on vast datasets, which may include outdated or biased medical information. The exclusion of developer perspectives obscures critical discussions about algorithmic transparency, error mitigation strategies, and the ethical design of AI systems for sensitive applications like healthcare (Topol, 2019).
Secondly, regulatory bodies and policymakers are conspicuously absent from the narrative. In the UK, for instance, organizations like the National Health Service (NHS) and the Medicines and Healthcare products Regulatory Agency (MHRA) play a crucial role in overseeing digital health technologies. Their perspectives on data privacy, patient safety, and legal liability are vital, especially given high-profile cases of AI-generated harmful advice, such as the incident involving sodium bromide cited in the article (Rosenbluth and Astor, 2025). Without regulatory input, the narrative risks overemphasizing individual experiences while neglecting systemic safeguards that could address the unchecked proliferation of AI tools in healthcare (Cohen, 2020).
Finally, the perspectives of marginalized patient groups—those with limited digital literacy or access to technology—are also excluded. The article focuses on tech-savvy individuals who can navigate AI platforms, but it does not consider how digital divides might exacerbate health inequalities. From a computer science viewpoint, this raises questions about the inclusive design of AI systems and their potential to widen gaps in healthcare access if not carefully managed (Obermeyer et al., 2019).
Impact of Including Excluded Perspectives
Incorporating the perspectives of AI developers and computer scientists would likely shift the narrative from a user-centric focus to a more technical and ethical examination of chatbot capabilities. For instance, highlighting challenges in natural language processing (NLP) and the limitations of training data could provide a more balanced view of why chatbots sometimes produce inaccurate or harmful advice. Research by Bender et al. (2021) notes that large language models can perpetuate biases present in their training corpora, a concern that could explain some of the overconfidence and errors in chatbot responses. Including this perspective might challenge the dominant narrative of chatbots as empathetic saviors by introducing accountability for their design flaws and prompting discussions on iterative improvement and user education.
Similarly, the inclusion of regulatory perspectives would broaden the understanding of AI in healthcare by addressing systemic risks and governance needs. In the UK context, the NHS has outlined frameworks for evaluating digital health tools, emphasizing evidence-based validation before public use (NHS Digital, 2022). Introducing such perspectives could counterbalance patient frustrations by advocating for structured integration of AI tools within healthcare systems, rather than as standalone alternatives. This might reframe the issue as a policy challenge, rather than solely a technological or medical one, and encourage collaborative solutions.
Lastly, considering marginalized patient groups would challenge the narrative’s implicit assumption that AI chatbots are a universal solution. Research indicates that digital health interventions often fail to reach underserved populations due to barriers in access and literacy (Sieck et al., 2021). Acknowledging these perspectives could highlight the need for equitable AI deployment, prompting a more nuanced discussion on how technology might reinforce, rather than alleviate, disparities in healthcare.
Conclusion
In summary, while the New York Times article by Rosenbluth and Astor (2025) effectively captures the perspectives of frustrated patients and concerned medical professionals regarding AI chatbots in healthcare, it overlooks critical voices such as those of AI developers, regulatory bodies, and marginalized groups. Applying Paul and Elder’s critical thinking framework reveals that these exclusions limit the depth of the discourse, focusing predominantly on individual experiences rather than systemic and technical dimensions. As a computer science student, I argue that integrating these missing perspectives would provide a more comprehensive understanding of the issue, challenging the narrative of chatbots as mere substitutes for doctors and emphasizing the need for ethical design, regulation, and inclusivity. The implications of this analysis are significant: as AI continues to reshape healthcare, a multidisciplinary approach that balances user needs with technical and policy considerations is essential to mitigate risks and maximize benefits. Indeed, the future of AI in this domain depends on addressing these complexities to ensure that technology serves as a supportive tool rather than a divisive force.
References
- Bender, E.M., Gebru, T., McMillan-Major, A. and Shmitchell, S. (2021) On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp. 610-623.
- Cohen, I.G. (2020) Informed consent and medical artificial intelligence: What to tell the patient? Georgetown Law Journal, 108(5), pp. 1425-1469.
- NHS Digital (2022) Digital Technology Assessment Criteria (DTAC). NHS England.
- Obermeyer, Z., Powers, B., Vogeli, C. and Mullainathan, S. (2019) Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), pp. 447-453.
- Rosenbluth, T. and Astor, M. (2025) Empathetic, Available, Cheap: When A.I. Offers What Doctors Don’t. The New York Times, 16 November.
- Sieck, C.J., Sheon, A., Ancker, J.S., Castek, J., Callahan, B. and Siefer, A. (2021) Digital inclusion as a social determinant of health. NPJ Digital Medicine, 4(1), p. 52.
- Topol, E.J. (2019) High-performance medicine: The convergence of human and artificial intelligence. Nature Medicine, 25(1), pp. 44-56.

