31 March 2026
Rethinking Technological Idealism in Healthcare
An examination of AI in medical diagnostics through the lens of technological idealism and its counternarrative.
In recent years, artificial intelligence has been rapidly integrated into healthcare, particularly in medical diagnostics. Large datasets and machine learning models applied to imaging and disease detection help healthcare professionals identify diseases earlier and more accurately. In Narratives of Technology, van der Laan presents the dominant narrative of technological idealism alongside a counternarrative that highlights the risks it poses. This essay introduces both narratives, explores their relation to AI-based diagnostics, and argues that the counternarrative is more convincing for three reasons: algorithmic bias, the fragmentation of human values through overuse, and the socioeconomic divide.
Technological Idealism
Technological idealism is the belief that technological advancement is inevitable and in humanity's best interests. With the help of Thomas Hobbes' comparison of the human body to a clock — describing its joints as "wheels" and nerves as "strings" — van der Laan introduces the mechanistic view of the human body. In contemporary discourse, writers like Jeremy Rifkin imagine a world where interconnected systems simplify human life. Recent advances reinforce the belief that technology reduces error, improves human health, and is a superior alternative to traditional medicine.
The Counternarrative
The counternarrative argues that technology erodes human values like memory, freedom, and intelligence, and provides humanity with an "illusion" of progress. Technical systems are heavily reliant on their training data and, as a result, reproduce existing patterns — meaning embedded logic becomes harder to correct once deployed, requiring strict guardrails. Furthermore, instead of bridging inequalities, new technology creates socioeconomic divides by favouring wealthier institutions in the highest-income countries first. Collectively, the counternarrative argues that technological development is not a universally beneficial force.
Algorithmic Bias
Medical diagnostic tools depend on algorithms and on the data used to build them, which can produce algorithmic bias: output that discriminates on the basis of race, gender, or socioeconomic status (Aquino, 2023). For example, pulse oximeters are less accurate in people with darker skin, a consequence of devices being developed and validated predominantly on light-skinned participants (Sjoding et al., 2020). Van der Laan also draws on Langdon Winner's argument that bias is embedded in a system's operating design, meaning it is not incidental but structural.
The dominant narrative holds that technology will make "the deaf hear, the blind see, and the lame walk," leading to a more equal society. That promise holds only when particular care is taken in collecting data and training the model. Without human-centred design, transparency, and explainability, diagnostic tools risk reinforcing stereotypes and undermining the dominant narrative's claims of neutrality, equality, and fairness.
Fragmentation of Human Values
According to the counternarrative, overuse of technology fragments the human condition. In medical diagnostics, this fragmentation erodes deep technical expertise, weakens doctor-patient relationships, and reduces patients to datapoints. A growing concern is the deployment of AI in hospitals to compensate for the shortage of clinicians trained to interpret EEG scans (Khan, 2025). Khan argues that this substitution does not address the underlying gap in technical education and risks misdiagnoses born of overreliance.
Moreover, rather than enabling meaningful doctor-patient conversations, patient details are increasingly processed by automated systems that generate diagnoses for doctors. While this reflects the claim that humans can be mechanistically optimised, van der Laan argues that forgoing decision-making and thinking abilities renders humans obsolete. However, framing this as a drive towards self-destruction — as van der Laan suggests — treats technology in quasi-religious terms rather than portraying it as a tool.
The Socioeconomic Divide
New technologies such as artificial intelligence create opportunities and accelerate growth, but at the risk of deepening inequalities. Thinkers like Byron Reese envision a future in which everyone has access to "knowledge, healthcare and wealth." However, the researcher De Miranda has found no evidence that technology helps close the socioeconomic divide, a gap evident in the disparity in healthcare technology quality between low-income and high-income countries.
Data from the WHO shows that rising inflation and the decrease in COVID-19 patients have reduced healthcare investment globally, with the largest impact felt in low-income regions (Human Rights Watch, 2025). Van der Laan's argument that technology assumes its own trajectory also signals that it has the potential to grow more useful for wealthy people and institutions, while leaving the rest of the world behind — exposing the dominant narrative's promise of universal access as precisely the "illusion of progress" the counternarrative describes.
Conclusion
While AI has advanced healthcare diagnostics, the counternarrative is more compelling. It exposes the algorithmic bias that disenfranchises the marginalised, reveals how human values are fragmented through the overuse of technology, and shows how new technology widens rather than closes the socioeconomic divide. The dominant narrative is not wrong to desire a "God-like" force that relieves humans of suffering, but it fails to answer the deeper question of who it truly serves.
References
- van der Laan, J.M. (2016). The Dominant Narrative. In: Narratives of Technology (pp. 41–73). Palgrave Macmillan, New York. https://doi.org/10.1057/978-1-137-43706-8_3
- van der Laan, J.M. (2016). A Counter-Narrative. In: Narratives of Technology (pp. 74–110). Palgrave Macmillan, New York. https://doi.org/10.1057/978-1-137-43706-8_4
- Aquino, Y. S. J. (2023). Making decisions: Bias in artificial intelligence and data-driven diagnostic tools. Australian Journal of General Practice, 52(7). https://doi.org/10.31128/AJGP-12-22-6630
- Sjoding, M. W., Dickson, R. P., Iwashyna, T. J., Gay, S. E., & Valley, T. S. (2020). Racial bias in pulse oximetry measurement. New England Journal of Medicine, 383(25), 2477–2478. https://doi.org/10.1056/NEJMc2029240
- Khan, F. A. (2025). AI in clinical diagnostics: Is overreliance eroding clinical expertise? PLOS Digital Health, 4(8), e0000959. https://doi.org/10.1371/journal.pdig.0000959
- Human Rights Watch. (2025, April 10). New data exposes global healthcare funding inequalities. https://www.hrw.org/news/2025/04/10/new-data-exposes-global-healthcare-funding-inequalities