An international call for caution regarding the use of artificial intelligence in the field of health

New York - Brussels: Europe and the Arabs
The World Health Organization has called for caution in the use of artificial intelligence in health care, in order to protect and promote human well-being, safety, and autonomy, and to preserve public health.

According to the United Nations news bulletin, a copy of which we received on Wednesday morning, the organization stressed in a statement yesterday the need to carefully examine the risks of using large language models (LLMs) to improve access to health information, to support decision-making, or even to enhance diagnostic capacity in under-resourced settings, with the aim of protecting people's health and reducing inequalities.
Large language models are artificial intelligence tools that can read, summarize, and translate text; by predicting words, they can formulate sentences and texts much as humans do.
They include some of the most rapidly expanding platforms, such as ChatGPT, Bard, BERT, and many others that imitate the way human communication is understood, processed, and produced.
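To make the idea of word prediction concrete, the following is a minimal sketch of generating text with an openly available language model. It assumes the open-source Hugging Face transformers library and the small gpt2 model, neither of which is mentioned in the WHO statement; they stand in here for the far larger commercial systems the organization discusses.

```python
# Minimal sketch: next-word prediction with an open-source language model.
# Assumes the Hugging Face "transformers" library and the small "gpt2" model,
# used here only as stand-ins for the larger commercial systems named above.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Common symptoms of dehydration include"
result = generator(prompt, max_new_tokens=20, num_return_sequences=1)

# The model simply continues the prompt with statistically likely words;
# nothing guarantees that the continuation is medically accurate.
print(result[0]["generated_text"])
```

The output reads fluently, which is exactly why plausible-sounding text is not the same as reliable health information.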

Enthusiasm and anxiety
The rapid and growing spread of these technologies, and their experimental use for health-related purposes, are generating great excitement about their potential to support people's health needs, according to the World Health Organization.
Although the organization is enthusiastic about the appropriate use of these technologies to support healthcare professionals, patients, researchers, and scientists, it is concerned that the caution normally exercised with any new technology is not being applied consistently to large language models.

Major concerns
The World Health Organization has warned that the rapid adoption of untested systems could lead to errors by health-care workers, cause harm to patients, and erode trust in AI, thereby undermining or delaying the potential long-term benefits and uses of these technologies around the world.
The concerns that call for rigorous oversight, so that these technologies are used in safe, effective, and ethical ways, include the following:

The data used to train AI may be biased, leading the models to generate misleading or inaccurate information that could pose risks to health, equity, and inclusiveness;
LLMs generate responses that can appear reliable and plausible to the end user; however, these responses may be completely incorrect or contain serious errors, especially on health-related topics;
LLMs may be trained on data for which consent was not given for such use, and these models may not protect the sensitive data that users provide;
Large language models can be misused to generate and spread highly persuasive misinformation in the form of text, audio, or video content that is difficult for the public to distinguish from authentic health content.
While the WHO is committed to harnessing new technologies, including artificial intelligence and digital health, to improve human health, it recommends that policymakers ensure patient safety and protection at a time when technology companies are working to commercialize large language models.
Six basic principles
The World Health Organization proposes that these concerns be addressed, and that clear evidence of benefit be measured, before large language models are used widely in routine health care and medicine, whether by individuals, care providers, or health system administrators and policymakers.
WHO reiterated the importance of applying ethical principles and appropriate governance, as outlined in its guidance on the ethics and governance of AI for health, when designing, developing and deploying AI for health.
The six basic principles identified by the World Health Organization are:
protecting human autonomy;
promoting human well-being, safety, and the public interest;
ensuring transparency, explainability, and intelligibility;
fostering responsibility and accountability;
ensuring inclusiveness and equity;
promoting AI that is responsive and sustainable.
