Can we create AI ethics before we finish creating AI?

5 questions with: Dr. James A. Brink, MD, Radiologist-in-Chief, Massachusetts General Hospital

By Kieran Murphy

Sometimes in today’s tech-driven world, in the race to be first with the next big breakthrough, it can seem like we end up retrofitting the rules and regulations for how these innovations (platforms, media, devices) should operate only once we are too dependent on them and on the positive roles they play – once they are an integral part of our society, or have changed it altogether.

What would happen if we could instead create the principles and guidelines for an entire innovation or industry before it becomes standard, before it’s even fully invented? In healthcare we can. Actually, in healthcare it’s required. Doctors have long taken the Hippocratic Oath. Those who work in the field do so in service of the patient. This type of selfless ethical conduct is built into the fabric of the system. It is its purpose. So we who create the technology for this industry need to meet our partners at their level.

Today at the World Medical Innovation Forum, my colleagues and I are sharing GE Healthcare’s AI Principles – the guidelines for our AI work and the tangible actions we will and must take to ensure the safe and effective use of AI. They were created by the same engineers, data scientists and team members who built our Edison platform, which we’re opening up to radiologists and developers across the industry to build AI solutions, and who work on our algorithms in partnership with clinicians each day. They are: AI systems exist to augment human intelligence and must:

Be designed for the benefit, safety and privacy of the patient

Be a trusted steward of the data and insights

Be transparent and deliver robust and reproducible results

Guard against creating or reinforcing bias

I expect these will continue to evolve as we seek to incorporate more perspectives and lessons, and as the AI landscape progresses. But their root will remain the same: deliver technology that benefits and protects the patient and empowers the clinician to improve cost, quality and access across the healthcare system.

I welcome your thoughts as we continue to build and learn with our industry partners. To start, I asked Dr. James Brink, Radiologist-in-Chief at Massachusetts General Hospital, to weigh in on ethics in AI and offer another expert perspective for us to discuss. Here’s what he had to say:
Doctors have long taken the Hippocratic Oath. Ethics in healthcare are nothing new. Why is it so newly relevant when it comes to AI?

Dr. Brink: A widely held ethical tenet among physicians is ‘Primum non nocere’ – ‘First, do no harm.’ Artificial intelligence makes possible an entirely new suite of tools that have the potential to touch every aspect of healthcare. With these tools come remarkable new capabilities, leveraging the power of big data as applied to human health and disease. However, they are relatively opaque to the end user, in part owing to limited data science knowledge among healthcare workers, and in part owing to the ‘black box’ nature of the algorithms themselves. Applying these tools without a full understanding of their derivations and vulnerabilities risks unintended consequences and potential patient harm.

What do you think is most misunderstood about AI ethics in healthcare?

Dr. Brink: The rights of the patients whose clinical data are used to train AI algorithms are the most controversial aspect of AI in healthcare. A recent symposium at the European Congress of Radiology highlighted remarkable differences on this issue from around the world.

When you look at GE Healthcare’s AI principles, where would you like to see the most progress or discussion?

Dr. Brink: First, I’d like to congratulate GE Healthcare for planting its flag firmly in the ethical principles that it will uphold as it pursues AI solutions for healthcare. All of these principles are important, but I think the one that is most often forgotten or ignored is ‘guarding against creating or reinforcing bias.’ All data sets come with some bias, as every patient sample inevitably reflects the demographics of the patients from whom it was collected. Understanding and controlling for these inherent biases is critical for the equitable development and use of AI tools in healthcare.
As the radiologist-in-chief at your organization, how are you developing or adapting radiologists’ skills for a new AI world?

Dr. Brink: With the MGH/BWH Center for Clinical Data Science (CCDS), we have created an AI ecosystem where data engineers and scientists may collaborate with clinical fellows and attending radiologists. AI development requires all stakeholders to contribute, and our radiology residents, fellows and faculty are encouraged to pursue AI development projects in partnership with data engineers and scientists through CCDS or other radiology labs with AI capabilities. Through these organic interactions, as well as a host of conferences and other learning opportunities, I am confident that our radiologists are well prepared to guide the development of AI tools for the clinical practice of radiology.

Describe the future of healthcare in 3 words.

Dr. Brink: Consumer-driven, rapidly-evolving, positively-disruptive.
Kieran Murphy is President and CEO of GE Healthcare. This article originally appeared on LinkedIn. Follow Kieran @KieranMurphyCEO