Regulatory Challenges to AI in Medicine

Imagine you’re a cancer patient in 2050, sitting in a white leather recliner alongside a few other patients in a window-lit chemo room. The scene seems familiar; you may have seen it in a movie or been there yourself. The sterile smell, the awkward small talk, the outlook at once bleak and hopeful: all of that will remain. The medical landscape and the design of your treatment, though, will likely be unrecognizable, transformed by emerging artificial intelligence technologies.

As we move further into the twenty-first century, artificial intelligence (AI) technologies promise to transform industries, offering insights and recommendations that could make companies more efficient than ever. In every industry, however, major concerns shadow the road to AI adoption and the realization of those promises. Issues such as privacy and accuracy loom large in any discussion of these systems’ viability.

This is especially true when the product an industry sells is abstract and ethically fraught, which makes healthcare a particularly difficult case. When the product you sell is a longer, healthier life, changes to your business model become changes to how people live, and the triumphs and pitfalls those changes bring can be grave.

AI’s use in medicine has the potential to bring about massive change, but it is not entirely new. Early examples can be seen in clinical decision support systems (CDSSs), which have been around for nearly fifty years. The Centers for Disease Control and Prevention defines CDSSs as “computer-based programs that analyze data within EHRs (electronic health records) to provide prompts and reminders to assist health care providers in implementing evidence-based clinical guidelines at the point of care.”

These systems rely on patient data to generate care recommendations and reminders of best-practice guidelines, often issued by third-party organizations. Traditionally, this worked through simple programs of if-then statements. For instance, if a CDSS detected that a set amount of time had elapsed since a patient last received a medication, it might remind a provider to administer the next dose.
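To make the contrast with modern AI concrete, a minimal sketch of such an if-then rule might look like the following. The function name and the four-hour dosing interval are hypothetical, chosen purely for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical dosing interval; a real CDSS would pull this from
# clinical guidelines or a physician's order.
DOSE_INTERVAL = timedelta(hours=4)

def medication_reminder(last_dose: datetime, now: datetime):
    """Return a reminder string if the dosing interval has elapsed, else None."""
    if now - last_dose >= DOSE_INTERVAL:
        return "Reminder: patient is due for the next dose."
    return None

# Five hours since the last dose, so the rule fires.
print(medication_reminder(datetime(2050, 1, 1, 8, 0),
                          datetime(2050, 1, 1, 13, 0)))
```

The logic is fully transparent: anyone can read the rule and see exactly why the reminder fired, which is precisely what the black-box systems discussed below give up.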

A growing number of American hospitals harness these capabilities for purposes ranging from streamlining management to dosing to aiding diagnosis. Yet CDSSs have existed in a regulatory gray area since their inception. In 1987, the Food and Drug Administration (FDA) made clear that “software products… not used with existing medical devices” were exempt from “registration, listing, premarket notification, and premarket approval requirements.”

Since then, the FDA has taken no real action to regulate these kinds of systems, even though they are far more sophisticated thirty-five years later. As time goes on, advanced clinical decision support systems, powered by ever-larger datasets and ever more capable models, may play a growing role in medicine. Yet unless such software is connected to an already regulated medical device, it remains unregulated.

Such systems fall under the umbrella of what legal scholar Nicholson Price has called “black-box medicine.” Going beyond personalized medicine, black-box medicine uses “opaque computational models to make decisions related to healthcare.”

This could look something like an AI algorithm analyzing millions of electronic medical records (EMRs) and other pieces of healthcare-related data, such as metrics from platforms like Apple’s HealthKit, to recommend a treatment plan for our hypothetical cancer patient. These recommendations are based not on an understanding of chemistry or biology but on correlations among data points in the set. While this can yield new and interesting observations, it cannot explain biological phenomena.
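As a toy illustration of what “correlation, not biology” means in practice, consider the sketch below. It uses synthetic data and a simple logistic regression standing in for the far larger, genuinely opaque models black-box medicine would rely on; every variable here is invented for the example:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for EMR-style features (e.g., age, lab values,
# activity metrics); real systems would ingest millions of records.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
# Simulated outcome: did the patient respond to a treatment?
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# "Recommend" for a new patient by predicting their response probability.
new_patient = rng.normal(size=(1, 5))
print(f"Predicted response probability: {model.predict_proba(new_patient)[0, 1]:.2f}")
```

The model’s coefficients summarize statistical associations in the training data. They can rank treatments by predicted outcome, but nothing in them represents a biological mechanism, which is exactly why such recommendations can work without anyone knowing why.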

Experts have speculated that as AI makes recommendations and observations on these datasets, scientists will be spurred to new research aimed at discovering why black-box medicine’s recommendations do, or worse, don’t work.

One can already foresee potential problems with some of these systems. Scholars point to questions about the systems’ accuracy and warn that a lack of trust in them could hinder adoption. In a separate vein, the sheer scale of the datasets enabling these systems raises significant privacy concerns.

So, how do we regulate these systems in a way that best balances people’s privacy against their health? The question received some attention about five years ago, when a group of legal scholars collaborated on Big Data, Health Law, and Bioethics, a collection of articles on the titular topic. Since then, though, the technology has received little attention from regulators or scholars.

This lack of attention is concerning. As Tal Z. Zarsky, an Israeli law professor who writes on data issues, argues in “Correlation versus Causation in Health-Related Big Data Analysis,” “policymakers and legislators must clearly establish when and whether law and policy should ignore, encourage, accept, or reject mere correlations while distinguishing between the interests and rights of data subjects, affected individuals, investors, and society in general.”

Currently, that is not happening. Meanwhile, software and data companies like Hitachi march ahead toward more sophisticated, and potentially more dangerous, AI algorithms. As they do, the current regulatory schemes do little to keep the patient data those algorithms are built on private.

Data privacy is regulated on a sector-by-sector basis. Data brokers in the business sector can obtain healthcare data from the healthcare sector so long as the healthcare entity first de-identifies it. After that, because data brokers are not healthcare entities, the data is no longer subject to HIPAA regulations; at that point, “the products they sell are essentially unregulated.”
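In simplified terms, de-identification amounts to stripping direct identifiers from each record before the data changes hands. The record layout and identifier list in this sketch are hypothetical, not the actual HIPAA field set:

```python
# Hypothetical set of direct identifiers removed before data leaves
# a covered entity; HIPAA's Safe Harbor rule enumerates many more.
IDENTIFIER_FIELDS = {"name", "address", "phone", "email", "ssn", "mrn"}

def deidentify(record: dict) -> dict:
    """Drop direct identifiers; what remains is what brokers may obtain."""
    return {k: v for k, v in record.items() if k not in IDENTIFIER_FIELDS}

record = {"name": "Jane Doe", "ssn": "000-00-0000",
          "zip3": "021", "diagnosis": "C50.9", "age": 54}
print(deidentify(record))  # {'zip3': '021', 'diagnosis': 'C50.9', 'age': 54}
```

Notice what survives: a ZIP fragment, an age, a diagnosis. Those residual quasi-identifiers are precisely what makes the re-identification problem described next possible.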

Without regulation, it is entirely possible that someone could re-identify the patients or patient groups the data was drawn from, something scholars have been aware of since at least 2009. If we are to compile the vast datasets necessary to fuel this next step in healthcare, we should at least get the privacy issues right. It is worth noting that some scholars, hailing the potential benefits of black-box medicine, argue there may even be a moral duty to share healthcare data. This argument holds that black-box medicine’s immense potential can only be realized if enough data is collected, and it casts those who prefer not to share their information as essentially holding back medical progress.

These systems are still largely on the horizon, and their benefits remain in the realm of speculation. But they are coming, and the present offers us a real chance to work out the issues they raise through regulation so that they can be implemented successfully. If we don’t, our future medical devices won’t only be unrecognizable. They’ll be unacceptable.
