Why AI shouldn’t make life-or-death decisions


Allow me to introduce Philip Nitschke, also known as “Dr. Death” or “the Elon Musk of assisted suicide”.

Nitschke has a curious goal: He wants to “demedicalize” death and make assisted suicide as unassisted as possible through technology. As my colleague Will Heaven reports, Nitschke has developed a coffin-sized machine called Sarco. People seeking to end their lives can enter the machine after undergoing an algorithm-based psychiatric self-assessment. If they pass, the Sarco releases nitrogen gas, which asphyxiates them within minutes. A person who has chosen to die must answer three questions: Who are you? Where are you? And do you know what will happen when you press this button?

In Switzerland, where assisted suicide is legal, candidates for euthanasia must demonstrate mental capacity, which is usually assessed by a psychiatrist. But Nitschke wants to take humans out of the equation entirely.

Nitschke is an extreme example. But as Will writes, AI is already being used to triage and treat patients in a growing number of healthcare fields. Algorithms are becoming an increasingly important part of care, and we must try to ensure that their role is limited to medical, not moral, decisions.

Will explores the messy morality of efforts to develop an AI that can help make life-or-death decisions here.

I’m probably not the only one who feels extremely uncomfortable letting algorithms decide whether people live or die. Nitschke’s work is a classic case of misplaced trust in the capabilities of algorithms. He tries to sidestep complicated human judgment by introducing technology that can supposedly make “unbiased” and “objective” decisions.

It’s a dangerous path, and we know where it leads. AI systems mirror the humans who build them, and they’re riddled with biases. We have seen facial recognition systems that fail to recognize Black people or that label them as criminals or gorillas. In the Netherlands, the tax authorities used an algorithm to try to root out benefits fraud, only to penalize innocent people, mainly low-income families and members of ethnic minorities. The consequences were devastating for thousands of people: bankruptcy, divorce, suicide, and children being placed in foster care.

As AI is deployed in healthcare to help make some of the most consequential decisions there are, it is more crucial than ever to critically examine how these systems are built. Even if we managed to create a perfectly unbiased algorithm, algorithms would still lack the nuance and context needed to make decisions about humans and society on their own. We have to ask ourselves carefully how much decision-making we really want to hand over to AI. There is nothing inevitable about letting it reach deeper and deeper into our lives and our societies. That is a choice made by humans.
