Health execs not ready for societal, liability issues from AI

The vast majority of healthcare organizations lack the capabilities needed to ensure that their artificial intelligence systems act accurately, responsibly and transparently, finds a new survey by consulting and professional services firm Accenture.

AI has the potential to be a transformative technology in healthcare. In the Accenture survey, 80 percent of health executives agree that within the next two years, AI will work next to humans in their organization as a coworker, collaborator and trusted advisor.

However, 81 percent of health executives say their organizations are not prepared to face the societal and liability questions that will require them to explain their AI systems' actions and decisions should problems arise, according to Accenture’s Digital Health Technology Vision 2018 report.

With the increasing role that AI will play in healthcare decision-making, organizations need to carefully consider the responsibility and liability of the actions their systems take on their behalf, contends Accenture. In addition, the firm warns that healthcare leaders must ensure that the data used to inform AI solutions are created without any embedded bias.

“If the users don’t understand what was behind the AI (decision), we think that’s going to be a real limitation on its adoption,” says Kaveh Safavi, MD, head of Accenture’s global health practice. “Think about a healthcare use case where there’s a recommendation about using a service and you don’t know whether or not the person making that recommendation is economically motivated. That’s really about responsibility and transparency.”


Also See: A reality check on AI in healthcare

In the Accenture survey, 85 percent of health executives agree that every human will be directly affected on a daily basis by an AI-based decision within the next three years. Yet 86 percent indicated that they have not yet invested in capabilities to verify the data feeding their systems, leaving the door open to inaccurate, manipulated and biased data, and therefore to flawed results.

“The artificial intelligence is only as good as the training data,” adds Safavi. “If that data is limited or biased because of the way it was obtained, both of those scenarios could result in inaccurate or incorrect training that potentially could lead to people choosing not to trust AI technology.”

He observes that some AI is designed for clinician use while other technology is meant for consumer and patient use. With either application, Safavi believes that the issues of explainability, transparency and veracity of data are critical—especially as AI increasingly touches the end-to-end care experience.

In response, Accenture’s survey found that 73 percent of health executives plan to develop internal ethical standards for the use of AI to ensure their systems are designed to act responsibly.

“In healthcare, being able to explain the process used to arrive at a decision can be critical to trust, safety and compliance,” concludes the report. “Given that an AI system is fundamentally designed to collaborate with people, healthcare organizations must build and train their AIs to provide clear explanations for the actions the AI systems decide to take, in a format that people understand.”

This article originally appeared in Health Data Management.