
Patient, physician trust gap with AI persists

For AI to work in clinical care, doctors must be involved in decision-making with their patients, Philips report says.
By Nathan Eddy
Photo: Doctor engaging with patient (Marco VDM/Getty Images)

While most patients welcome the use of artificial intelligence for administrative tasks, such as making appointments or checking in, their comfort with AI drops – and the gap with healthcare professionals grows – when its use shifts into clinical areas and health risks rise.

Many patients worry about the broader impact of digital technologies, fearing they could make healthcare feel less personal, according to Philips' Future Health Index.

The report found nearly three-quarters (73%) of patients welcome the use of more technology in healthcare if it improves their care, but 52% worry that relying on it will mean less face time with their doctors.

The FHI highlights the importance of healthcare professionals reassuring patients that AI use is overseen by clinicians and has proven safety and effectiveness.

For example, 86% of patients reported being more comfortable with AI in their care when their doctors informed them about it, underscoring physicians' role in building trust.

The report also showed that patients' trust improves when they feel more knowledgeable about how AI is used in their healthcare and understand their information will be kept safe.

Shez Partovi, chief innovation officer at Philips, told Healthcare Finance News he believes there will be a significant acceleration in human-AI collaboration, where AI augments the capabilities of healthcare professionals, allowing them to focus on the more human aspects of care.

"We'll also see a growing emphasis on personalized and predictive healthcare, driven by AI's ability to analyze vast datasets for early intervention and improved patient outcomes," he explained.

The report also indicated healthcare professionals see AI as a key to streamlining processes and improving data accessibility, ultimately freeing up more time for patient care.

"They believe AI can positively impact their departments by reducing administrative burdens and improving patient access," Partovi said.

From his perspective, however, the greatest impact may be preventing the need for some types of care altogether.

He pointed out that 82% of healthcare professionals said AI and predictive analytics could save lives by enabling early interventions.

"Seventy-five percent of healthcare professionals said digital health technologies, including AI and predictive analytics, will reduce hospital admissions in the future," Partovi said.

Overall, the report underscores the importance of putting people first in AI design, enhancing human-AI collaboration through training, demonstrating efficacy and fairness with high-quality data, enabling innovation with clear regulatory guardrails, and building strong cross-sector partnerships to drive responsible and trusted AI adoption.
 
To accelerate the delivery of potentially life-saving AI to patients, regulations should evolve to balance speed of innovation with safeguards, Partovi said.

"Worldwide uniformity of an agreed upon regulatory framework can reduce complexity, enable faster access to innovation without compromising on patient safety, and enable the responsible development and monitoring of AI regulations," he explained. 

The 10th annual Future Health Index global healthcare report, published May 15, compiles interview responses from nearly 2,000 healthcare professionals and more than 16,000 patients from around the world. Of these, 200 healthcare professionals and more than 1,000 patients are based in the United States.