After releasing its AI Ethics Guidelines draft paper last year, the European Commission’s AI high-level expert group (AI HLEG) ran an online stakeholder consultation to generate discussion and feedback on how to maximise the benefits of AI while minimising its risks.
The European offices of HIMSS and the Personal Connected Health Alliance (PCHA), a HIMSS innovation company, submitted a joint response last week, looking at the role of AI in health promotion, care provision and the objective of driving sustainable and inclusive health systems for all.
Open up the focus to include a wider concept of health embracing well-being and disease prevention
HIMSS and PCHA welcomed the opportunity to contribute to the European initiative, recognising the value of developing clear, understandable and user-friendly ethical guidelines for the use of AI, a technology with the power to change the lives of European citizens across a wide range of sectors and industries.
“The intention of the AI HLEG to use Healthcare Diagnose and Treatment as one of the four use-cases and tailored assessment lists is very well received as it clearly shows the importance of the healthcare sector, however, we would argue that the description should be expanded to include wider concepts of health, notably disease prevention, well-being and mental health,” said Petra Wilson, PCHA European programme director.
New models of dynamic consent should be a core tool to develop more patient centric and research friendly models of informed consent
A particular area of focus of AI in healthcare should include public health, which comprises both population health measures as well as personalised interventions. The power of AI to drive new approaches in public health has been noted, but this in turn raises questions about the ethics of secondary use of data, which require further exploration both in legal guidelines as well as in practical solutions.
“At HIMSS, we would argue that new models of dynamic consent should be a core tool to develop more patient centric and research friendly models of informed consent, as foreseen in framework legislation such as the GDPR,” said Charles Alessi, HIMSS chief clinical officer and Public Health England senior advisor.
Concept of Human Oversight to include the rights of an individual to exclude unwanted actions
One particular element that could be expanded within the guidance on realisation of Trustworthy AI is the concept of Human Oversight to include the rights of an individual to exclude unwanted actions.
It should be noted that this concept is reflected in the case law of the European Convention on Human Rights, which forms part of the ethical baseline of European policy. The use of AI in healthcare, or indeed any other sector, should therefore retain as far as possible the right for an individual to refuse a particular treatment or action.
A key step towards achieving this could be ensuring that the patient voice has a place in the stakeholder dialogue, and including patients in diverse design teams.
Underlying challenges in technical aspects and education
The concept of interoperability is particularly important in healthcare, where data is obtained from a wide range of sources and incorporates many different types of data sets, containing sensitive medical data as well as non-sensitive data. Interoperability between those data sets is a key tool in driving trust between the players in healthcare and overcoming the shortcomings of siloed data.
The joint opinion covers technical aspects of AI technologies in relation to healthcare, pointing out that without due attention to interoperability of AI solutions and approaches, the full potential will be hard to achieve.
It also stresses the importance of reflecting the AI ethics guidelines in best practices for the procurement of AI-enabled IT systems, and of not overlooking training and support for healthcare professionals, patients and citizens in understanding how AI is used, and may be used, in healthcare.
The current consultation on AI ethics is an important piece of the puzzle in the “AI made in Europe” strategy – set out in the European Commission’s Coordinated Plan on Artificial Intelligence, published in December last year – which details actions starting in 2019 and 2020, and paves the way for activities that will take place in the following years.
We are only at the beginning of the journey to realise the European Union’s ambitious plan to become the world-leading region for developing and deploying cutting-edge, ethical and secure AI and promoting a human-centric approach in the global context.