Is Health ready for AI?
Posted: Jun 21, 2018 by Instinctif Partners
Few doubt the potential of AI to revolutionize healthcare, but is the industry ready for its uptake?
Artificial Intelligence has far-reaching applications across many industries, in particular healthcare. While the sector has been slow to adopt some technologies, such as the Internet of Things, AI can be transformative for healthcare services, in terms of both efficiency and health outcomes.
As an example, in the US alone, key clinical health AI applications could enable annual savings of $150 billion by 2026 according to Accenture analysis. Although AI growth opportunities are dependent upon considerable investment in new technology and infrastructure, the potential savings are huge. Take robot-assisted surgery for example, which could save the US healthcare industry $40 billion each year.
But health is not only about financial return. The patient experience, quality of care and overall health outcomes must be at the heart of any goals.
So to what extent can AI help meet these goals?
AI offers applications across a diverse set of areas, including administrative support, lifestyle management, wearables, diagnostics and virtual assistants. From virtual nursing to administrative workflow assistance, these multiple applications can improve patient experiences, boost effectiveness of therapies and meet increasing clinical demand.
For instance, AI has numerous applications within radiology, from improving the reading of scans and X-rays to identifying suspicious lesions and nodules in cancer patients. These not only improve efficiency, but also increase accuracy.
AI can also help healthcare providers design better treatments and find the best strategy for each patient. It can take on repetitive, monotonous tasks for physicians and nurses, making their contributions more valuable and their time better spent.
Nonetheless, there are doubts over the uptake of these emerging applications and whether patients and practitioners can fully realise the potential of AI.
What is slowing down the uptake of AI in healthcare?
Three key factors are behind the scepticism surrounding AI adoption in healthcare: the missing human element, mistrust around data collection and transfer, and liability issues.
Artificial Intelligence must be designed for people’s good. As stated by Roberto Viola, ‘people have the right to own their future’ rather than machines. No patient wants computers alone to decide their fate, without any human input or control. Algorithms have no emotions or moral values; they perform what is programmed or learned. We need to make sure that people (i.e. patients and practitioners) remain in control and understand where and when algorithms cannot replace human intelligence and empathy.
A human element is essential for people to trust AI solutions and the data-driven transformation of the healthcare system.
It goes without saying that data have a big role to play in the uptake of AI solutions. Collecting, storing, normalizing, and tracing data and patient records is the first step in starting the AI technology revolution in health systems. Effective data protection is vital for building trust: without it, cross-border deployment, exchange of health data and cooperation between healthcare settings are simply not feasible. Parties across the ecosystem will need to work together to ensure that critical patient information is managed securely.
Thirdly, the liability challenge currently poses a significant barrier to AI. Most healthcare organisations lack the capabilities needed to ensure that AI systems act accurately, responsibly and transparently. The vast majority of health providers believe that their settings are not prepared to face the societal and liability issues that will soon emerge from AI-based decisions.
Settings using AI technology must think carefully about the responsibility and liability for the actions AI systems take on their behalf. When an AI tool incorrectly diagnoses a medical condition, who is to blame? Is the designer of the AI responsible, or the clinician? Are doctors able to assess the reliability or usefulness of information derived from AI, and the consequences of AI actions?
AI raises profound questions, both legal and ethical, that policymakers need to address quickly. The EU is leading the way. In April the Commission released its European approach to Artificial Intelligence, which examines the socio-economic changes brought about by AI and how best to ensure an appropriate ethical and legal framework. AI ethics guidelines are expected to be released by the end of the year. Based on the Union's values and in line with the Charter of Fundamental Rights of the EU, the guidelines should provide a detailed analysis of emerging challenges and address issues such as the future of work, fairness, safety, security, social inclusion and algorithmic transparency.
By mid-2019, the Commission will also issue:
- A report on the broader implications for, potential gaps in and orientations for, the liability and safety frameworks for AI, Internet of Things and robotics;
- A guidance document on the interpretation of the Product Liability Directive to seek to ensure legal clarity for consumers and producers in case of defective products.
It is positive to see that the Commission is closely monitoring the AI revolution and is willing to produce moral guidance that maximises safety and safeguards the common good, provided that the human element is not diminished and that patients remain at the heart of any AI applications and actions.
There is a great opportunity here to get AI right for healthcare, and in doing so, create a better environment for patients.