Artificial intelligence has played a critical role in many industries for decades, but has only recently begun to take a leading role in healthcare. According to Frost & Sullivan, AI systems are projected to be a $6 billion industry by 2021¹. A recent McKinsey review identified healthcare as one of the top 5 industries for AI, with more than 50 use cases, and over $1bn USD already raised in start-up equity². With such exponential growth, what does this mean for your organisation? How can you benefit the most from this game-changing technology?
What is AI?
Artificial Intelligence was initially conceptualised in the 1950s with the goal of enabling a machine or computer to think and learn like humans. AI is widely used by companies like Facebook (e.g. recognising who is in a photo), and Google (e.g. providing search suggestions, or identifying the fastest route to drive). However, in the healthcare industry, AI has only made small steps towards a vast and multidimensional opportunity.
How is AI used today in healthcare?
AI is emerging as a game-changer for the healthcare industry in several areas. Below are a few examples in use today:
Radiology - AI solutions are being developed to automate image analysis and diagnosis. This can help highlight areas of interest on a scan to a radiologist, to drive efficiency and reduce human error. There is also opportunity for fully automated solutions – to automatically read and interpret a scan without human oversight – which could help enable instant interpretation in under-served geographies or after hours. Recent demonstrations of improved tumour detection on MRIs and CTs illustrate progress towards new opportunities in cancer prevention. Meanwhile, a company in the USA has already received FDA clearance for an AI-powered platform to analyse and interpret cardiac MRI images.
Drug Discovery - AI solutions are being developed to identify new potential therapies from vast databases of information on existing medicines, which could be redesigned to target critical threats such as the Ebola virus. This could improve the efficiency and success rate of drug development, accelerating the process to bring new drugs to market in response to deadly disease threats.
Patient Risk Identification - By analysing vast amounts of historic patient data, AI solutions can provide real-time support to clinicians to help identify at-risk patients. A current focal point is re-admission risk: highlighting patients with an increased chance of returning to hospital within 30 days of discharge. Multiple companies and health systems are currently developing solutions based on data in the patient’s electronic health record, driven in part by increasing push-back from payers on covering hospitalisation costs associated with re-admission. Other recent work has demonstrated the ability to predict risk of cardiovascular disease based purely on a still image of a patient’s retina.
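To make the idea concrete, a re-admission risk model of this kind is, at its simplest, a classifier trained on historic patient records. The sketch below is a minimal illustration using logistic regression on synthetic data; the features (age, prior admissions, length of stay), coefficients, and patient values are all invented for illustration and do not reflect any real product or clinical model.

```python
# Illustrative sketch of 30-day re-admission risk scoring.
# All features, coefficients, and data are synthetic/hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: [age, prior admissions in past year, length of stay (days)]
n = 500
X = np.column_stack([
    rng.integers(20, 90, n),   # age
    rng.integers(0, 6, n),     # prior admissions
    rng.integers(1, 15, n),    # length of stay
])
# Synthetic labels: older patients with more prior admissions and longer
# stays are made more likely to be re-admitted within 30 days.
logit = 0.03 * X[:, 0] + 0.8 * X[:, 1] + 0.1 * X[:, 2] - 4.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score a hypothetical new patient: 78 years old, 4 prior admissions, 10-day stay
risk = model.predict_proba([[78, 4, 10]])[0, 1]
print(f"Estimated 30-day re-admission risk: {risk:.0%}")
```

In a real deployment the features would come from the electronic health record, and the score would surface in the clinician's workflow as decision support rather than a standalone output.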
Primary Care/Triage - Multiple organisations are working on direct-to-patient solutions that triage and give advice via a voice- or chat-based interaction. This provides quick, scalable access for basic questions and medical issues. It could help avoid unnecessary trips to the GP, reducing rising demand on primary healthcare providers – and, for a subset of conditions, provide basic guidance that otherwise wouldn’t be available to populations in remote or under-served areas. While the concept is clear, these solutions still need substantial independent validation to prove patient safety and efficacy.
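At their core, these triage tools map a free-text symptom description to a coarse disposition (emergency, GP visit, self-care). The toy sketch below shows the shape of that mapping with simple keyword rules; real products use far more sophisticated NLP and clinically validated decision logic, and every keyword and piece of advice here is invented for illustration only.

```python
# Toy rule-based triage sketch. Keywords and advice are hypothetical
# placeholders, not clinical guidance.

URGENT = {"chest pain", "shortness of breath", "severe bleeding"}
GP_VISIT = {"persistent cough", "rash", "earache"}

def triage(message: str) -> str:
    """Return a coarse triage category for a free-text symptom description."""
    text = message.lower()
    if any(k in text for k in URGENT):
        return "Seek emergency care immediately."
    if any(k in text for k in GP_VISIT):
        return "Book an appointment with your GP."
    return "Self-care advice; contact a clinician if symptoms worsen."

print(triage("I've had a persistent cough for two weeks"))
# Book an appointment with your GP.
```

The hard part, and the reason independent validation matters, is the gap between this sketch and safely handling the ambiguity of real patient language.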
What are the challenges of AI in healthcare?
To be successful, an AI solution requires a vast amount of patient data to train and optimise the performance of its algorithms. In healthcare, getting access to these datasets poses a wide range of issues:
Patient privacy and the ethics of data ownership – accessing personal medical records is strictly protected. In recent years data sharing between hospitals and AI companies has generated controversy, highlighting several ethical questions:
Who owns and controls the patient data needed to develop a new AI solution?
Should hospitals be allowed to continue to provide (or sell) vast quantities of their patient data – even if de-identified – to third-party AI companies?
How can patients’ rights to privacy be protected?
What are the consequences (if any) should there be a security breach?
What will be the impact of new regulations, like the General Data Protection Regulation (GDPR) in Europe – which includes a person’s right to have their personal data deleted in certain circumstances, with non-compliance potentially attracting multi-million dollar penalties?
Quality and usability of data – in other industries, vast amounts of data are generally reliable and accurately measured, e.g. aircraft engine sensors, or car location and velocity data used to predict highway traffic. In healthcare, data can be subjective and often inaccurate, with issues including:
Clinicians’ notes in electronic medical records are unstructured and can be difficult to interpret and process;
Data inaccuracy - a patient may be listed as a non-smoker, but were they just reluctant to admit they had not been able to quit?
Data sources are siloed across many service providers – making it difficult to capture a full profile and range of determinants for a patient’s health.
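The unstructured-notes and data-inaccuracy problems above are easy to demonstrate. In the sketch below, a naive keyword search for smoking status misclassifies negated and historical mentions; the example notes and rules are invented for illustration, and real clinical NLP systems handle negation, temporality, and context far more carefully.

```python
# Sketch of why unstructured clinical notes are hard to use:
# naive keyword matching misreads negation and history.
# Example notes are invented for illustration.

notes = [
    "Patient denies smoking.",
    "Smoker, 20 pack-years.",
    "Quit smoking in 2015.",
]

def naive_is_smoker(note: str) -> bool:
    # Flags all three notes above, two of them wrongly.
    return "smok" in note.lower()

def slightly_better(note: str) -> bool:
    # Still fragile: handles only two hand-picked negation phrases.
    text = note.lower()
    if "denies smoking" in text or "quit smoking" in text:
        return False
    return "smok" in text

for note in notes:
    print(f"{note!r}: naive={naive_is_smoker(note)}, better={slightly_better(note)}")
```

Even the "better" version breaks on phrasing it has never seen, which is exactly why subjective, free-text data makes healthcare AI harder than sensor-driven domains.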
Developing regulations for a technology that is cloud-based and constantly evolving poses obvious challenges. How can patients be protected? How do you provide adequate regulatory oversight of a solution that is constantly learning and evolving, rather than a distinct, version-controlled medical device? For AI solutions that interact directly with patients without clinician oversight (such as chat-based primary care tools), this raises the question of whether the technology is a 'practitioner of medicine' rather than just a device. If so, would it need some form of medical licence to operate – and would a national medical board actually agree to grant one?
This also leads to the question of who is liable should anything go wrong. If diagnosis or treatment is controlled by this technology, does the AI company assume liability for the patient’s wellbeing? In parallel, will insurance companies ever underwrite an AI tool?
User adoption is another barrier to utilisation. The human touch of interacting with a doctor can be lost with these types of tools. Are patients willing to trust a diagnosis from a software algorithm rather than a human? Meanwhile, are clinicians willing to embrace these new solutions? In an industry that still widely uses the fax machine, it may be unrealistic to expect rapid adoption rates beyond proof-of-concept studies.
The future outlook for AI
The best opportunities for AI in healthcare over the next few years are hybrid models, in which clinicians are supported in diagnosis, treatment planning, and identifying risk factors, but retain ultimate responsibility for the patient’s care. By mitigating perceived risk, this approach will encourage faster adoption by healthcare providers and begin to deliver measurable improvements in patient outcomes and operational efficiency at scale.
Given the plethora of issues healthcare must overcome, driven by well-documented factors such as an aging population and growing rates of chronic disease, the need for innovative new solutions is clear.
AI-powered solutions have made small steps towards addressing key issues, but still have yet to achieve a meaningful overall impact on the global healthcare industry, despite the substantial media attention surrounding it. If several key challenges can be addressed in the coming years, it could play a leading role in how healthcare systems of the future operate, augmenting clinical resources and ensuring optimal patient outcomes.