What do doctors think about the ethics of machine learning in healthcare?

“AI will not replace doctors, [but] doctors who use AI will replace doctors who do not”

Jeffrey Boschman
One Minute Machine Learning


If you were to ask a doctor what they thought of the ethics of AI in healthcare, what is the first thing that would come to their mind?

A recent 2021 study (“A healthy debate: Exploring the views of medical doctors on the ethics of artificial intelligence”) by researchers at TU Delft surveyed medical doctors about their opinions on AI in healthcare, and in particular its ethics. Most previous studies of AI in healthcare focussed on image-based diagnostics, so relatively little is known about doctors’ views on the ethical questions. From 77 doctors (across 13 different specialities) in the Netherlands, Portugal, and the U.S., the authors identified four distinct viewpoints. Although the viewpoints overlap, each group emphasizes a different idea:

1. AI is a helpful tool: Let physicians do what they were trained for

The main focus of doctors in this group is that AI will be helpful for repetitive, easy tasks, freeing up time for physicians to work on more complex situations. However, they think the doctor should always make the final decision; AI is simply a tool. A fitting quote from the paper: “[these doctors think that] AI will not replace doctors, [but] doctors who use AI will replace doctors who do not.”

They do not have strong opinions on private corporations, other than the thought that medical doctors should be involved in the technology development process.

2. Rules & Regulations are crucial: Private companies only think about money

When asked about the ethics of AI in healthcare, the doctors in this group focus on their distrust of the private corporations that develop the AI technology, believing these companies need to be strictly regulated. These doctors worry about patient privacy and about companies becoming monopolistic, and they generally think that the values of healthcare do not align with those of the tech industry (i.e. money-focussed, “fail fast and fix later”). They are of the opinion that the technology companies themselves should hold the liability.

3. Ethics is enough: Private companies can be trusted

This group emphasized that current health systems already rely heavily on technology and automation, and that we therefore already have systems in place for dealing with the ethics of AI in healthcare. In the current system, private companies that develop technology for healthcare are generally trusted and do not need extensive rules and regulations, so the same should apply to companies that produce AI technologies for healthcare.

4. Explainable AI tools: Learning is necessary and inevitable

The doctors in this group focussed heavily on the necessity for clinicians to understand how an AI system reaches its decisions (i.e. AI cannot be a “black box”). They also believe doctors should be involved in the AI design process.


