
Why healthcare needs an evidence-based AI development and deployment movement

Photo: Joachim Roski

Currently there is a great deal of unwarranted variability in risk-mitigating AI development and deployment practices. It was the documentation of just such variability in clinical practices that led to the evidence-based medicine movement.

Evidence-based medicine is the conscientious, explicit and judicious use of current best evidence in making decisions about clinical care. Experts say a similar paradigm and implementation initiative is needed to launch an evidence-based AI development and deployment movement.

Joachim Roski, a principal in Booz Allen Hamilton’s health business, is one of these experts. 

Next month in his HIMSS22 educational session, “Making a Case for Evidence-based AI,” he will present case studies showing some prominent failures of highly touted AI initiatives, and how evidence-based AI development and deployment practices could have averted them.

Additionally, he’ll describe some key design principles and features of evidence-based AI development, and explain how healthcare organizations can rely on them to mitigate potential AI risks.

Healthcare IT News spoke with Roski – who has more than 20 years of experience delivering digital/analytic technologies to enhance care transformation, clinical quality and safety, operations, and population health improvement – to get an advance look at his session.

Q. Please describe some of the features of an evidence-based AI development and deployment approach.

A. In a 2020 report by the National Academy of Medicine, we summarized evidence for potentially promising AI solutions for use by patients, clinicians, administrators, public health officials and researchers.

However, there are warning signs of a “techlash” developing if the often-hyped expectations for AI solutions are not matched by those solutions’ actual performance. Examples of these problems include AI solutions systematically underestimating disease burden in non-Caucasian populations, poor performance in cancer diagnostic support, and challenges in delivering at scale.

Many of these areas of concern can be traced to rushed AI development or deployment practices.

Lack of implementation of evidence-based AI development or deployment practices is analogous to erratic reliance on evidence-based clinical practices in healthcare. In the late 1990s/early 2000s, unwarranted variability in clinical practices and associated suboptimal outcomes were consistently and extensively documented.

In turn, available clinical research evidence began to be carefully evaluated. As a next step, professional societies, federal agencies and others translated this evidence into clinical practice guidelines.

Once the guidelines were available, clinicians and patients alike could be educated about them to ensure they guided clinical practice. This movement came to be known as evidence-based medicine.

At the same time, current and continuously emerging research evidence and experience with AI development and deployment are available to mitigate many AI risks. A paradigm and implementation initiative similar to evidence-based medicine is needed today to launch an evidence-based AI development and deployment movement for health and healthcare.

Greater focus on evidence-based AI development and deployment requires effective collaboration between the public and private sectors. That collaboration, in turn, can create greater accountability for AI developers, implementers, healthcare organizations and others to consistently rely on evidence-based AI development and deployment practices.

Q. What are examples of evidence-based risk-mitigating AI development and deployment practices?

A. Last April, we published an article in the Journal of the American Medical Informatics Association that mapped known AI risks to evidence-based best-practice mitigation strategies that could alleviate them. These risks include, among others, inadequate data security and privacy, and a lack of transparency, workflow integration and user feedback.

Evidence-based AI risk mitigation practices are available in three general areas: data selection and management, algorithm development and performance, and trust-enhancing organizational business practices and policies.

Specific risk mitigation practices in these areas include data encryption, securing mobile hardware, keeping detailed provenance records, performance surveillance, AI models accounting for causal pathways, adherence to accepted data governance policies, and human-in-the-loop practices.
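
To make a couple of these practices concrete, here is a minimal Python sketch – not drawn from Roski’s session – of what provenance record-keeping and performance surveillance might look like in code. The file name, metric and drift threshold are illustrative assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone


def provenance_record(dataset_path: str, model_version: str, notes: str) -> dict:
    """Record which dataset (identified by content hash) produced which model version."""
    with open(dataset_path, "rb") as f:
        dataset_sha256 = hashlib.sha256(f.read()).hexdigest()
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "dataset_path": dataset_path,
        "dataset_sha256": dataset_sha256,
        "model_version": model_version,
        "notes": notes,
    }


def needs_human_review(baseline_auc: float, live_auc: float, tolerance: float = 0.05) -> bool:
    """Performance surveillance: flag the model when its live AUC drifts below baseline."""
    return (baseline_auc - live_auc) > tolerance


if __name__ == "__main__":
    # Write a tiny placeholder dataset so the sketch runs end to end.
    with open("train.csv", "w") as f:
        f.write("age,outcome\n63,1\n47,0\n")

    # Provenance: a tamper-evident link between training data and model version.
    print(json.dumps(provenance_record("train.csv", "risk-model-v2", "quarterly retrain"), indent=2))

    # Illustrative numbers only: a drop this large should route predictions
    # to a human-in-the-loop review queue rather than being acted on blindly.
    if needs_human_review(baseline_auc=0.84, live_auc=0.76):
        print("Performance drift detected: escalate to human review.")
```

Hashing the training data makes the provenance record tamper-evident, and the surveillance check turns “ongoing monitoring” into an explicit, auditable trigger for human-in-the-loop review.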

Q. How could the federal government promote evidence-based AI development and deployment in healthcare?

A. Both the private and public sectors are critical in driving the field of evidence-based AI development and deployment forward. The private sector, specifically professional organizations and associations, can lead the field in reviewing available evidence, translating that evidence into practice guidelines, and educating AI developers and implementers about the need to adhere to such guidelines.

These efforts could be further amplified if the public sector set market conditions that will make the “right thing to do” the “easy thing to do.”

I see some big areas where the public sector can help shape market conditions.

First, it’s critical that the federal government invests in necessary research that critically evaluates potential AI risks and the means to minimize those risks.

Billions of dollars are expected to be invested in AI by the National Institutes of Health, the Department of Health and Human Services, the Department of Defense, and other agencies. A research agenda needs to be formulated and pursued that establishes evidence-based AI development and deployment practices that mitigate known AI risks.

Second, the government can establish purchasing rules that detail how AI solutions that rely on evidence-based AI development and deployment will be favored for public sector acquisitions. As we have seen time and time again, such a signal to the market can have a significant impact on industries that sell to the public sector.

The government in turn would have to rely on a system that verifies solutions adhere to evidence-based AI development and deployment standards. Often, the government prefers to place the burden of development of these standards on industry itself. Thus, industry has the opportunity to step forward and formulate standards for evidence-based AI development and deployment and the means to verify them.

Third, the government also could regulate AI solutions. The U.S. Food and Drug Administration, for example, already regulates some AI-driven software as a medical device (SaMD) and has piloted a voluntary precertification program in which SaMD developers who rely on AI in their software are assessed and certified by demonstrating an organizational culture of quality and excellence and a commitment to ongoing monitoring of software performance in practice.

However, the FDA’s current authority does not extend to most types of AI solutions supporting healthcare needs, such as population health management, patient/consumer self-management, research and development, healthcare operations, etc.

At the same time, some of the most prominent failures of AI solutions have occurred in areas not covered by the FDA. For those areas, federal acquisition practices that favor AI solutions built on evidence-based development and deployment practices are needed.

To date, AI-specific legislation, regulation, and established legal standards or case law largely do not exist worldwide – or they apply only to a narrow subset of AI health solutions. Future legislation and regulation across the globe will likely differ in how specific AI risks are managed.

However, encouraging the use of evidence-based risk mitigation practices, promulgated through industry self-governance and public sector acquisition practices, could be effective and efficient across national jurisdictions in promoting and sustaining user trust in AI.

Roski’s session, “Making a Case for Evidence-based AI,” will be co-presented with Dr. Michael E. Matheny, associate professor at Vanderbilt University Medical Center. It’s scheduled for Tuesday, March 15, from 4:15-5:15 p.m. in room W204A of the Orange County Convention Center.

Twitter: @SiwickiHealthIT
Email the writer: [email protected]
Healthcare IT News is a HIMSS Media publication.
