Artificial Intelligence Risks in the Health Care Sector

Luna-Anastasia Riedel

Artificial Intelligence (AI) refers to systems or machines that exhibit human-like intelligence. To date it has been used for a variety of tasks in sectors such as health care, finance, education and commerce, and it is developed further every day so that it can assist humankind even more in the future. As AI is used ever more frequently, especially in health care, it is important to discuss the ethical issues that could arise with its further development and use.

Imagine a scenario in which an AI helps doctors with the diagnosis of patients to lighten their workload, as offices are often overcrowded and waiting times are long. A patient would tell the robot their symptoms, and the robot, connected to the database of the doctor's office, would have access to all the data it needs: it thus acts as an autonomous agent. The robot then diagnoses the patient and either sends them home with medication or refers them to the doctor if needed or wanted. Now connect this scenario to a situation like the current pandemic: a new virus emerges, but the robot wrongly diagnoses it as the flu or a cold because the symptoms are identical, and the new virus is not yet registered in the medical database.

In this assignment, I will analyse this imagined issue through the philosophical debate about the responsibility gap.

The problem of a wrong diagnosis by Artificial Intelligence

Matthias (2004) describes the responsibility gap as the situation that arises when an AI shows faulty behaviour, but nobody can be held fully responsible for it. In the described scenario the AI acted wrongly because a new virus emerged that was not yet registered in any database, meaning there was no way the robot could have diagnosed the new virus as what it really is. Now, imagine the virus is deadly and contagious: who will be held responsible if the patient infects others or even dies because they trusted the AI and did not go to the doctor?

To be held accountable for a matter, one must be at least causally responsible for it. That is, one must be part of the chain of events that led to the outcome. However, causal responsibility alone does not make one accountable. For instance, if an individual has a disease in which their arm moves independently of their will, they can hardly be held accountable when that arm hits someone. To be held accountable for the act, they must also be considered to have control over it. Only then can they be said to be morally responsible for an outcome, and thus accountable for it. In other words, to be held accountable for an outcome, an individual must be both causally and morally responsible for it.
The engineers who worked on the AI can hence only be held causally responsible: they did program the AI, but they can easily argue that the emergence of a new virus, and the AI's misdiagnosis of it, has nothing to do with the programming and is outside their control. Furthermore, the AI acts as an autonomous agent, meaning it operates without human supervision and learns from its environment, over which the engineers likewise have no control (Matthias, 2004).

The doctor can argue along the same lines: the virus is new, and they could not have known about it, especially if the symptoms are identical to something as common as a cold or the flu.

The company selling the AI can, in this case, be held neither morally nor causally responsible, as all it does is sell the product; it is not responsible for the misdiagnosis of a newly emerging virus and its consequences. In general, though, companies can often be held responsible, because they actively choose what they sell and know the consequences implied by it, as when selling weapons to people during a war: the company does not kill anyone in the war, but it most likely knows people will die through the products it sold.

The patient technically could still have gone to talk to the doctor, but if the AI is the norm in the described scenario, an everyday occurrence, can they be blamed for trusting it? And would the doctor have diagnosed them differently, given that the virus is new and not yet within their knowledge?

Lastly, the AI itself can be held neither morally nor causally responsible, simply because it is an artificial intelligence and hence not an actual being, which excludes it from being a party that could be held responsible in the first place.

Who is responsible for AI mistakes?

The given scenario describes a typical responsibility gap, in which it is hard to determine who is responsible for a mistake an AI makes. It also shows that in situations like this it is sometimes impossible to identify the responsible party at all. Partial responsibility can sometimes be detected, but this is not enough to hold someone fully accountable.


Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6(3), 175–183.