New reporting guidelines, jointly published in Nature Medicine and the BMJ by Oxford researchers, aim to ensure that early studies of Artificial Intelligence (AI) used to treat real patients give researchers the information needed to develop AI systems safely and effectively.
Artificial Intelligence in medicine has shown promising results in numerous simulation studies, but very few AI systems have yet been used in patient care. As a growing number of AI-driven clinical decision-support systems progress from development to implementation, better guidance is needed on the reporting of human factors and early-stage clinical evaluation.
Researchers from Oxford have led the development of a new guideline, DECIDE-AI, which aims to improve the reporting of research on AI systems when they are used for the first time in real clinical settings by doctors treating actual patients. It was developed based on opinions and feedback from over 150 experts across 18 countries, including computer scientists, clinicians, ethicists, patient representatives, and entrepreneurs.
Baptiste Vasey of Oxford University’s Nuffield Department of Surgical Sciences said: “While AI has shown promising results for clinical application in simulations, very few AI systems have had any significant effect on patient care so far. A major problem has been a lack of robust scientific evidence to back widespread use in clinical practice.
“This study is the first to clearly define the minimum reporting standards for the early-stage evaluation of AI-based decision support systems in clinical settings.”
Professor Peter McCulloch, the senior author, also based at the Nuffield Department, said: “These guidelines take the principles developed for evaluating new surgical operations by the Oxford-based IDEAL Collaboration, and apply them to an important new field where guidance is lacking. The idea that innovation and evaluation should proceed in tandem at every stage in the development of new treatments applies just as much to AI as it does to surgery.”
Ben Hornsby, Project Coordinator for the IDEAL Collaboration at Oxford’s Nuffield Department of Medicine, said: “DECIDE-AI is only the second reporting guideline dedicated to AI systems research, and the first ever to focus on the development stage of this technology.
“We chose to focus on early-stage evaluation results to emphasize the need for researchers to report transparently about four key aspects of AI research, to build confidence in the technology and highlight the practical benefits and improved efficiencies the technology can bring to health care.”
The four key aspects DECIDE-AI asks researchers to report on are:
- proof of clinical value in a live care environment
- the AI system’s safety profile and potential risks for patients
- human factors around the use of the system, such as how the AI system is integrated into the existing care pathway and how it influences clinicians’ behavior
- the steps needed to develop the larger-scale trials that will provide the conclusive evidence necessary for an AI system to be accepted into routine patient care.