Introduction
This essay examines two causal factors from Nancy Leveson’s analysis of the Therac-25 medical device accidents, as outlined in her article “Medical Devices: The Therac-25,” and explores their relevance to the Boeing 737 MAX accidents. Specifically, it focuses on ‘Overconfidence in Software’ and ‘Confusing Reliability with Safety,’ assessing how these factors contributed to the tragic outcomes in both cases. By comparing the Therac-25, a radiation therapy machine linked to patient overdoses in the 1980s, and the Boeing 737 MAX, involved in fatal crashes in 2018 and 2019, this discussion highlights shared ethical concerns in technology development. The essay argues that overreliance on software and misconceptions about reliability pose persistent risks in safety-critical systems, while also identifying contextual differences in their application.
Overconfidence in Software
Leveson identifies ‘Overconfidence in Software’ as a critical factor in the Therac-25 accidents: engineers treated the software as inherently dependable, so much so that hardware interlocks present in earlier models were removed and software safety checks were left inadequate (Leveson, 1995). This overreliance also meant software faults were overlooked, with initial investigations focusing almost exclusively on hardware failures. Similarly, in the Boeing 737 MAX case, overconfidence in software, particularly the Maneuvering Characteristics Augmentation System (MCAS), played a pivotal role. Designed to automatically push the aircraft’s nose down to prevent stalls, MCAS was trusted to operate flawlessly without adequate pilot training or transparency about its existence and behaviour. This trust proved disastrous when erroneous data from a single angle-of-attack sensor repeatedly activated MCAS, contributing to the crashes of Lion Air Flight 610 and Ethiopian Airlines Flight 302, which together claimed 346 lives (Gates, 2020). Both cases share the assumption that software cannot fail, reflecting an ethical failure to insist on rigorous oversight. However, the situations differ in scope: the Therac-25’s problems stemmed from coding flaws, including race conditions, that went undetected because of inadequate testing and review, whereas the 737 MAX failures were compounded by systemic pressure to rush the aircraft to market and minimise costs, arguably deepening the ethical breach by sidelining safety for profit. Indeed, Boeing’s decisions to omit MCAS from the flight manuals and to tie the angle-of-attack disagree alert to an optional display suggest a more deliberate neglect than the Therac-25’s unintended oversights.
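The failure mode described above can be made concrete with a short sketch. The code below is purely illustrative and is not Boeing’s flight-control software: the function names, thresholds, trim values, and the cross-check rule are hypothetical assumptions, included only to show why commanding trim from a single angle-of-attack input, with no redundancy check, is fragile.

```python
# Illustrative sketch only -- not Boeing's MCAS implementation. Thresholds,
# trim values, and the disagree rule are hypothetical, chosen to contrast
# trusting one sensor with cross-checking a redundant pair.

DISAGREE_LIMIT_DEG = 5.5    # hypothetical tolerance between the two vanes
STALL_THRESHOLD_DEG = 14.0  # hypothetical angle of attack that triggers trim


def single_sensor_command(aoa_deg: float) -> float:
    """Trusts one sensor: a single faulty reading can command nose-down trim."""
    return -2.5 if aoa_deg > STALL_THRESHOLD_DEG else 0.0


def cross_checked_command(left_aoa_deg: float, right_aoa_deg: float) -> float:
    """Compares redundant sensors and stands down when they disagree."""
    if abs(left_aoa_deg - right_aoa_deg) > DISAGREE_LIMIT_DEG:
        return 0.0  # sensors disagree: inhibit automatic trim, alert the crew
    return single_sensor_command((left_aoa_deg + right_aoa_deg) / 2)


# A failed vane reporting 22 degrees while the other reads 3 degrees:
print(single_sensor_command(22.0))       # -2.5 -> erroneous nose-down trim
print(cross_checked_command(22.0, 3.0))  # 0.0  -> activation inhibited
```

The point is not the specific numbers but the design choice: the original MCAS architecture drew on one angle-of-attack sensor at a time, so a single faulty vane could repeatedly drive the nose down.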
Confusing Reliability with Safety
The second factor, ‘Confusing Reliability with Safety,’ is equally pertinent. Leveson notes that the Therac-25 software ran correctly thousands of times before failing, creating a false sense of safety grounded in reliability rather than in hazard analysis (Leveson, 1995). This complacency delayed recognition of the hazards. Likewise, the 737 MAX was perceived as safe because of the Boeing 737 family’s long-standing service record, leading regulators and engineers to underestimate the risks introduced by the new MCAS system. MCAS behaved as designed in testing and in most flights, yet when fed erroneous sensor data it commanded dangerous nose-down trim, demonstrating that component reliability does not equate to system safety (Johnston & Harris, 2019). A striking similarity lies in the ethical implication: in both cases, stakeholders relied on historical success rather than proactively analysing worst-case scenarios. Nevertheless, a key difference emerges in accountability. The Therac-25’s manufacturer, AECL, faced limited immediate scrutiny because the accidents received comparatively little publicity, whereas Boeing encountered intense global criticism and legal repercussions, reflecting heightened societal expectations for aviation safety in the twenty-first century. Furthermore, the 737 MAX failures involved complex interactions between software, sensors, and flight crews, whereas the Therac-25 defects were concentrated within the machine’s own software.
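To illustrate why a system can be reliable yet unsafe, the sketch below mimics, in heavily simplified form, the kind of flaw Leveson documents in the Therac-25: a one-byte flag that was incremented rather than set to a constant, so it rolled over to zero on every 256th pass and silently bypassed a safety check. The class and variable names are hypothetical; this is not the actual Therac-25 code.

```python
# Simplified illustration inspired by the documented Therac-25 counter flaw.
# Names and structure are hypothetical, not the machine's actual software.

class SetupTask:
    def __init__(self) -> None:
        self.check_flag = 0  # emulates a one-byte counter: wraps at 256

    def setup_pass(self) -> bool:
        """Returns True if the safety check was actually performed."""
        # Bug pattern: incrementing instead of setting a nonzero constant.
        self.check_flag = (self.check_flag + 1) & 0xFF  # wraps to 0 every 256th call
        return self.check_flag != 0  # flag rolled over -> check silently skipped


task = SetupTask()
skipped = [n for n in range(1, 1001) if not task.setup_pass()]
print(skipped)  # [256, 512, 768] -> the check is skipped on every 256th pass
```

Under routine use the flag is almost always nonzero, so the routine appears dependable; the hazard surfaces only on the rare pass where the counter wraps, which is precisely the gap between reliability and safety that Leveson highlights.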
Conclusion
In summary, the causal factors of ‘Overconfidence in Software’ and ‘Confusing Reliability with Safety’ from the Therac-25 accidents resonate strongly with the Boeing 737 MAX crisis. Both cases expose the ethical lapse of assuming software infallibility and equating past reliability with ongoing safety, underscoring the need for rigorous testing and transparency in safety-critical systems. While the similarities highlight enduring challenges in technology ethics, the differences, such as the scale of public accountability and the commercial pressures in Boeing’s case, reveal how contexts and expectations have evolved. These lessons suggest that engineers and policymakers must prioritise proactive risk assessment over complacency, ensuring that safety, rather than reliability or expediency, remains paramount. Ultimately, addressing these issues requires a cultural shift in how technology is developed and regulated, with ethical considerations at the forefront.
References
- Gates, D. (2020) What led to Boeing’s 737 MAX crisis: A look at the history of decisions and errors. The Seattle Times.
- Johnston, P. and Harris, R. (2019) The Boeing 737 MAX saga: Lessons for software organizations. Software Quality Professional, 21(3), 4-12.
- Leveson, N. (1995) Safeware: System Safety and Computers. Addison-Wesley.

