For an organization that has no shortage of strongly held opinions on an array of topics affecting providers and the healthcare industry, the American Medical Association currently does not have an artificial intelligence policy.
That will be changing soon, according to a report to the group's board of trustees, as AMA heads into its annual meeting next week.
The association has compiled what it says is a "baseline policy to guide AMA’s engagement with a broad cross-section of stakeholders and policymakers to ensure that the perspective of physicians in various practice settings informs and influences the dialogue as this technology develops."
AI uptake is gaining momentum industry-wide, of course, and transforming fundamental ways of doing things in healthcare. Physicians are starting to take notice – some with interest, some with skepticism, some with alarm. There are big questions – practical and even ethical – that need to be answered.
AI's success in healthcare depends on enthusiastic buy-in from physicians, AMA President James Madara, MD, said at HIMSS18 this past March.
"New technologies that allow physicians to take better care of patients, in an unambiguous way, are rapidly adopted," Madara said. "What’s not rapidly adopted are things that don’t work very well, are highly ambiguous, or create more effort."
According to its new report, "Ensuring the appropriate implementation of AI in healthcare will require that stakeholders forthrightly address challenges in the design, evaluation, implementation, and oversight of AI systems."

"Through its strategic partnerships and collaborations, the AMA has the capacity to help set priorities for health care AI; integrate the perspective of practicing physicians into the design, validation, and implementation of high-quality, clinically valuable health care AI; and promote greater understanding of the promise and limitations of AI across the healthcare community," the report authors continued.
Interestingly, AMA starts its position paper by asserting a slightly different definition of AI than the one most commonly used:
"AI constitutes a host of computational methods that produce systems that perform tasks normally requiring human intelligence. These computational methods include, but are not limited to, machine image recognition, natural language processing and machine learning. However, in healthcare a more appropriate term is 'augmented intelligence,' reflecting the enhanced capabilities of human clinical decision making when coupled with these computational methods and systems."
The policy push comes as "commercial entities, including IBM, Google, and others, are driving rapid evolution in AI across the board," the report notes, and as "AI has surfaced as a public policy issue at the federal level in a relatively short period of time."
As AMA officials took stock of the current healthcare landscape – fast-evolving wearable monitoring devices that can transmit patient data; machine learning algorithms to enhance clinical decision support; AI being applied to health system data to improve care outcomes – it was apparent the group needed to weigh in on the technology from a policy perspective.
AMA considered user-centered design, patient privacy, clinical workflow, safety and more. Given that AMA "has a unique opportunity to ensure that the evolution of augmented intelligence in medicine benefits patients, physicians and the healthcare community," its board of trustees is recommending that the association:
1. Leverage ongoing engagement in digital health and other priority areas for improving patient outcomes and physicians’ professional satisfaction to help set priorities for healthcare AI.
2. Identify opportunities to integrate the perspective of practicing physicians into the development, design, validation and implementation of healthcare AI.
3. Promote development of thoughtfully designed, high-quality, clinically validated healthcare AI that:
a. is designed and evaluated in keeping with best practices in user-centered design, particularly for physicians and other members of the healthcare team;
b. is transparent;
c. conforms to leading standards for reproducibility;
d. identifies and takes steps to address bias and avoids introducing or exacerbating healthcare disparities, including when testing or deploying new AI tools on vulnerable populations; and
e. safeguards individuals’ privacy interests and preserves the security and integrity of personal information.
4. Encourage education for patients, physicians, medical students, other healthcare professionals, and health administrators to promote greater understanding of the promise and limitations of healthcare AI.
5. Explore the legal implications of healthcare AI, such as issues of liability or intellectual property, and advocate for appropriate professional and governmental oversight for safe, effective and equitable use of and access to healthcare AI.
"Patients, physicians, and the healthcare system in the U.S. face enormous challenges in the combined impact of a rapidly aging population, a relative decline in the working population that reduces revenue essential for safety net programs and persistent high costs of care that will strain the nation’s ability to support affordable, accessible, high quality care," said AMA in the report.
"With the engagement of physicians to identify needs and set priorities for design, development, and implementation, healthcare AI can offer a transformative set of tools to help patients, physicians and the nation face these looming challenges," AMA added.
Twitter: @MikeMiliardHITN
Email the writer: [email protected]