Using Artificial Intelligence in Medical Diagnosis: WHO Warns of No Basic Legal Safety Nets to Protect Patients

Geneva: Europe and the Arabs

Who is responsible when an artificial intelligence (AI) system used in medical diagnosis makes a mistake or causes harm? This question was the focus of a report issued by the WHO Regional Office for Europe, which warned that the rapid expansion of AI in healthcare is occurring without the legal safety nets needed to protect patients and healthcare workers. According to the UN Daily News, Dr. Hans Henri Kluge, WHO Regional Director for Europe, cautioned that while AI has become a reality for millions of healthcare workers and patients across the European region, "without clear strategies, data privacy, legal guardrails, and investment in AI literacy, we risk deepening inequalities rather than reducing them."

The report revealed that preparedness remains uneven and fragmented: of the 50 countries it covers, only four in the European region have a dedicated national strategy for AI in health, and seven others are developing one. Even so, AI tools are playing a growing role in the region's healthcare systems, with 32 countries already using AI-assisted diagnostics.

"The choice is ours," said Dr. Natasha Azzopardi-Muscat, Director of Health Systems at the WHO Regional Office for Europe. She cautioned that AI can either improve people's health and well-being, ease the burden on overworked healthcare workers, and reduce healthcare costs, or it can undermine patient safety, infringe on privacy, and exacerbate inequalities in care.

According to the report, across the region, regulations are struggling to keep pace with the technology, with approximately nine out of ten countries citing legal uncertainty as the main obstacle to AI adoption.

Eight out of ten countries pointed to financial constraints as a major hurdle. Meanwhile, fewer than one in ten countries have liability standards for AI in health that define who is responsible if an AI system makes a mistake or causes harm. Dr. David Novillo Ortiz, the WHO's regional adviser for data, artificial intelligence, and digital health, said that "in the absence of clear legal standards, doctors may be hesitant to rely on AI tools, and patients may not have a clear avenue for redress should any problem arise."
