Hannah Comiskey

Doctor AI: The Legal Risks and Challenges

The use of artificial intelligence (AI) in healthcare is forecast to enjoy explosive global growth in the coming years. Robot-assisted surgery, early diagnosis of disease, and recommendations for specific cancer treatments already show promise for going beyond the scope of human doctors. As a result, the standardisation of, and increased reliance on, technology within our healthcare systems is as inevitable as it will be revolutionary to how they operate. With these new benefits, however, comes the need to address the ethical and legal questions posed by such newfound reliance on technology.

With self-learning mechanisms built into its systems, AI offers a sophisticated, efficient, and reliable solution to increasingly complex modern medical issues. If widely adopted within the United Kingdom's National Health Service, it could free up as much as GBP 12.5 billion a year in staff time. AI also promises to reduce the burden on medical professionals within increasingly strained healthcare systems. Accordingly, medical AI has exciting potential that will continue to propel its accelerating development and adoption.

However, issues surrounding algorithmic bias, data privacy, safety, and liability illustrate how the growing use of AI in healthcare demands shifts in current legal systems on a scale to match such drastic changes in practice.


Although the law is still lagging far behind these emerging issues, AI is already creeping into medical practice across both the United States and Europe. Under Trump's presidency, the US adopted a free-market-orientated approach to AI, accelerating Food and Drug Administration (FDA) market authorisation for AI software. Despite this, the White House only published draft guidance on the regulation of AI applications in January 2020, highlighting the concerning lack of preparation in the country's regulatory framework for such changes.


Similarly, a 2017 resolution by the European Parliament questioned how civil law will adapt to the increasing use of robotics in healthcare. Specifically, it pointed to the inadequacy of the current Council Directive on liability for defective products (85/374/EEC, the Product Liability Directive) for covering robotic or software malfunction within healthcare. Key questions also remain around the regulatory classification of medical software solutions as medical devices.


Two key areas of regulation that need to be developed, liability and data privacy, are explored further below, given the threats they pose to medical practitioners and patients.


Liability


As long as the development of healthcare AI continues to outpace legal reform, clinicians and patients are left in an increasingly vulnerable position. From the perspective of medical professionals, there are growing concerns about liability in a world where healthcare decisions could routinely be reached by an algorithm. Under current US and European law, clinicians would be liable for following through on an incorrect treatment recommendation from AI-based software. Even in the case of a software malfunction, any problems associated with the use of AI in healthcare remain firmly under the umbrella of medical malpractice.


However, as AI grows in sophistication, there is increasing concern about the liability implications of "black-box medicine" becoming commonplace: machine learning produces increasingly opaque clinical decisions because of the sheer mass of data it uses to reach otherwise inaccessible medical conclusions. When advanced AI software inevitably reaches this point of complexity, it will become increasingly difficult for medical professionals to independently review the reasoning behind particular treatment or diagnosis recommendations. While advanced AI of this sort is currently viewed as assistive to medical professionals, in the not-so-distant future it may become the go-to for medical decision-making. Such reliance, in turn, raises concerns about potential liability for not following software recommendations.

Several aspects of clinical and product liability must be addressed to keep the rapidly accelerating train of healthcare machine learning from crashing. An essential place to start is the considerable uncertainty over whether AI software satisfies the legal definition of a product. If machine learning software embedded in medical tools can be classified as a product, this opens an additional avenue for injured patients to sue the manufacturers or sellers of defective AI products, a high-priced threat with huge potential influence on the cost and accessibility of new healthcare technology within public health sectors.

Courts have so far been reluctant to apply product liability to software developers within healthcare, partly because such software remains an assistive tool, providing information rather than acting as the final decision-maker in medical outcomes. However, as the balance between reliance on human medical practitioners and on AI begins to shift, the law must adapt and update accordingly. Moreover, it is necessary to look beyond manufacturers and consider hospitals' liability in the purchase, implementation, and performance monitoring of AI.


Data Privacy


The use of AI in healthcare dramatically alters the doctor-patient relationship as we know it and raises patient data usage concerns from both a legal and an ethical perspective. AI computational systems learn from analysing and comparing huge datasets in order to independently improve their performance. Such analysis is used for the early identification of medical conditions, the detection of risk factors, and the suggestion of treatments for specific cases, creating huge potential for far more precise, personalised, and preventative medicine than we see today. This is already happening in AI analysis of individual habits such as finger taps on mobile phones, used to detect early signs of Parkinson's disease from subtle changes in behaviour such as decreased texting speed.

Such examples highlight the extensive and revealing nature of the data collection required for AI software to function. With public discomfort already mounting at the idea of such intimate patient health data being sold by governments or private companies for profit, updates to the law surrounding health data protection are imperative for building and maintaining public trust in these systems. Adaptations to current law are crucial to prevent innovation in this field from coming at the cost of patient privacy.

In 2017, the UK Information Commissioner's Office (ICO) ruled that the Royal Free NHS Foundation Trust had breached the UK Data Protection Act 1998 when it provided 1.6 million patients' personal data to Google DeepMind during clinical safety testing of an app designed for the early detection and diagnosis of acute kidney injury. The UK has thus already fallen into this data protection pitfall, a reminder of the need for pre-emptive legal and ethical safeguards in response to fast-developing healthcare software.

Strengthened requirements for the consent to and collection of personal data under the European General Data Protection Regulation (GDPR), in force since 2018, are a step in the right direction for addressing data protection issues. However, these advances are far broader in scope than current US data protection laws, which allow technology giants such as Amazon, Facebook, Google, and Apple to collect health information and invest it in healthcare AI. This gap in US federal health data privacy law persists because such companies fall outside the "covered entities", such as insurance companies or healthcare providers, that are subject to the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule. Once again, this highlights the urgent need to revamp healthcare-specific legal armour, in relation to both data usage and anti-discrimination laws, in order to protect patients from health data leaks that could be detrimental to their insurance premiums or job opportunities.


Conclusion


On multiple levels, the law is lagging behind the pace of AI developments in healthcare on a global scale. While such technology has huge potential for improving health outcomes for the masses, it also brings newfound risks and questions which the law must address but is currently failing to. Creative solutions and innovative legal thinking are desperately required to answer the burning questions surrounding the use of algorithms in healthcare and its implications for both patients and practitioners. Crucially, such developments will ultimately shape the incentives, disincentives, and progress of the future Doctor AI.
