
Artificial Intelligence in Healthcare

Introduction 

Artificial Intelligence (AI) has already begun to transform industries around the world and is continuously evolving in both the public and private sectors. AI has been shown to increase business productivity by up to 40% and is predicted to add as much as $15.7 trillion to global GDP by 2030. In health care, AI applications are projected to improve outcomes by 30-40% and reduce treatment costs by nearly 50%. Despite these promising projections, the ethical and legal challenges of bias, safety, and liability surrounding the use of AI in health care will need to be addressed if AI is to continue its success.

Background: What is AI? 

Artificial Intelligence, otherwise known as “AI,” is a widely used term, yet there is often disagreement as to its exact meaning. When many people hear “AI,” they might immediately think of the robots portrayed in science fiction movies and novels that have human-like characteristics and wreak havoc on Earth. Although this conception is common in American pop culture, it is far from reality. A more appropriate definition comes from Accenture: “Artificial intelligence is a constellation of many different technologies working together to enable machines to sense, comprehend, and learn with human-like levels of intelligence.” This definition describes AI as an all-encompassing mix of technologies designed for problem-solving and decision-making similar to what the human mind is capable of. AI thus represents a complex intersection of technologies, and its precise definition depends on context.

Two types of AI have been identified: 

Weak AI, otherwise known as Narrow AI, is designed to focus on a specific task and be very good at it. This AI is most common today in the form of applications like Apple’s Siri, Amazon’s Alexa, and autonomous vehicles.  

Strong AI is more complex and encompasses Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI), which are designed to allow machines to match or surpass the intellectual capabilities of the human brain. No actual examples of strong AI exist today, but future research may change that.

AI Uses in Health Care

The use of AI in health care is not a new idea, though it has improved drastically over the past five decades, particularly with the development of deep learning. Deep learning relies on algorithms that build layered artificial neural networks, allowing machines to learn from data and make decisions on their own, in a fashion loosely similar to how humans learn (a brief illustrative sketch follows the examples below). Below are a few examples of AI currently being used in a health care context:

Buoy Health is an AI-based symptom checker that uses algorithms to diagnose and treat patients and is currently being used by many hospitals and health care providers as a second opinion diagnostic tool.  

Enlitic has developed deep learning medical tools that can be used as second opinions to help read and analyze studies alongside radiologists and help determine the best plan of treatment for patients. 

Qventus is an AI-based platform designed to improve real-time patient flow by prioritizing patient illnesses and tracking waiting times. 
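
As a rough illustration of the deep-learning idea described above, the following sketch trains a small feed-forward neural network on entirely synthetic, hypothetical symptom data to flag patients for follow-up. The feature names, the labeling rule, and the use of scikit-learn’s MLPClassifier are assumptions made purely for illustration; none of the products above publish their models this way.

```python
# Illustrative sketch only: a tiny feed-forward neural network trained on
# synthetic, hypothetical symptom data. Not clinical software.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(seed=0)
n = 1000

# Hypothetical features: body temperature (C), heart rate (bpm), cough score (0-3)
X = np.column_stack([
    rng.normal(37.0, 1.0, n),
    rng.normal(80, 15, n),
    rng.integers(0, 4, n).astype(float),
])

# Hypothetical labeling rule: flag for follow-up when both temperature and
# heart rate are elevated (purely illustrative, not medical guidance).
y = ((X[:, 0] > 38.0) & (X[:, 1] > 90)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two small hidden layers learn a non-linear mapping from symptoms to the label.
model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

The point of the sketch is only that the network learns the mapping from data rather than from hand-written rules, which is also why the quality of that data matters so much in the sections that follow.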

Ethical and Legal Implications 

One overarching concern about the use of AI in health care is that certain algorithms have been found to have discriminatory effects against women and minorities. An article featured in Nature in 2019 revealed that an algorithm used in many US hospitals to allocate health care resources had been systematically discriminating against Black patients. Specifically, Black patients were less likely than equally sick White patients to be referred to care management programs aimed at improving health. This discrimination stems from bias in the training data and in the choices of the algorithm’s designers, and it is an issue that needs to be recognized: AI will only be as fair as the data it is trained with and as unbiased as the people who build it.
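
To make the idea of checking for this kind of disparity concrete, here is a minimal, hypothetical sketch: it groups simulated patients by illness burden and compares referral rates across two groups within each band. The data, column names, severity bands, and referral rates are all invented for illustration and do not reproduce the 2019 study’s methodology.

```python
# Illustrative sketch only: a simple disparity check on hypothetical data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=1)
n = 5000

group = rng.choice(["A", "B"], size=n)
chronic_conditions = rng.poisson(2.5, size=n)          # proxy for illness burden
# Hypothetical outcome: group B is referred less often at the same severity,
# mimicking the kind of disparity described above.
referred = rng.random(n) < np.where(group == "A", 0.30, 0.18)

patients = pd.DataFrame({
    "group": group,
    "chronic_conditions": chronic_conditions,
    "referred": referred,
})

# Bucket patients by illness burden, then compare referral rates per group.
patients["severity_band"] = pd.cut(
    patients["chronic_conditions"],
    bins=[-1, 1, 3, 5, np.inf],
    labels=["low", "moderate", "high", "very high"],
)
rates = (
    patients.groupby(["severity_band", "group"], observed=True)["referred"]
    .mean()
    .unstack("group")
)
print(rates)  # large within-band gaps between groups suggest possible bias
```

Simple audits like this do not prove or rule out bias on their own, but they illustrate how disparities among equally sick patients can be surfaced before an algorithm is deployed.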

Health care professionals have also highlighted safety issues related to the use of AI in the health care industry. These concerns came to a head in 2017, when IBM’s Watson for Oncology software gave multiple unsafe and incorrect treatment recommendations for cancer patients. Though these errors were blamed on the use of “synthetic,” or hypothetical, rather than real cancer cases to train the software, the episode underscores the consideration stated above: AI will only be as fair and accurate as the data used to train it. To ensure that AI used in health care is safe, fair, and effective, the datasets used to develop algorithms need to be valid, that is, accurate at measuring what they are intended to measure. This will enable AI to perform better and provide treatment recommendations that are in the best interest of the patient. Additionally, policies requiring AI software developers and health care providers to be more transparent about what data is used to develop the AI and how decisions are made with it would make deficiencies and bias in the data easier to identify and address.
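
One way to operationalize the validity requirement described above is an external-validation gate: before deployment, a model’s recommendations are scored against held-out, real-world cases and must clear minimum performance thresholds. The sketch below is a hypothetical example of such a check; the threshold values, metric choices, and placeholder data are assumptions, not a regulatory standard.

```python
# Illustrative sketch only: an external-validation gate with hypothetical
# thresholds. Real deployment criteria would be set by clinicians and regulators.
import numpy as np
from sklearn.metrics import precision_score, recall_score

def passes_validation(y_true, y_pred, min_sensitivity=0.95, min_precision=0.80):
    """Return True only if the model clears the (hypothetical) safety bar
    on a held-out set of real, clinician-labeled cases."""
    sensitivity = recall_score(y_true, y_pred)    # share of true cases caught
    precision = precision_score(y_true, y_pred)   # share of flags that are correct
    print(f"sensitivity={sensitivity:.2f}, precision={precision:.2f}")
    return sensitivity >= min_sensitivity and precision >= min_precision

# Placeholder arrays standing in for real validation labels and predictions.
y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 0, 0, 1, 0, 1, 0])
print("deploy?", passes_validation(y_true, y_pred))
```

The key design choice is that the gate is evaluated on real cases rather than the synthetic ones blamed for the Watson errors, so weaknesses show up before, not after, patients are affected.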

Liability for AI technology is also a growing topic of conversation in the industry. Who is at fault when AI-derived treatment recommendations result in harm to the patient? Currently, the clinician would likely be held liable for medical malpractice, because AI technologies are considered tools that health professionals can use to help make treatment decisions. For now, the health professional is responsible for the final decision and bears the brunt of liability. As AI systems become more advanced, however, AI makers may face growing calls for product liability to shift toward them. Thus far, courts have refused to recognize health care AI as more than a supporting tool.

Conclusion 

AI is an astonishing collection of technologies that is predicted to save the United States money in the health care sector and improve patient outcomes. As AI technology continues to develop, concerns about bias, safety, and liability will only grow in importance and will need to be addressed by stakeholders and policy makers.