Skip to main content

Health leaders need AI know-how to avoid cyberattacks

More malicious actors than ever can target healthcare organizations due to AI lowering the bar for hackers.
By Jeff Lagasse , Editor
Etay Maor, chief security strategist at Cato Networks, speaks at HIMSS25 on Wednesday, March 5, 2025
Photo: Jeff Lagasse/Healthcare Finance News

LAS VEGAS – Artificial Intelligence in healthcare comes with a lot of hype, but there's also a lot of concern – about how it can be used to attack, and how it can be abused. 

Etay Maor, chief security strategist at Cato Networks, said at HIMSS25 in Las Vegas on Wednesday that hospital leaders and clinicians should be aware of the potential risks and pitfalls of artificial intelligence, particularly as it pertains to potential hacking and fraud.

"I believe AI is not very close to replacing us completely," said Maor. "However, those who know how to use AI are going to replace those who don't know how to use AI."

One of the main problems, in healthcare or any other industry, is that the bar has been lowered for potential threat actors. At one time, someone had to have deep knowledge of coding and hacking to attack computer systems. Then, it became possible to purchase malicious software from threat actors on the dark web. And then criminal services became popular on the dark web, and companies began offering to conduct attacks for hire.

With each of these stages, the bar got lowered. But now, the bar is at its lowest level yet, because malicious actors can now have artificial intelligence do their dirty work for them.

The key for hospital leaders and clinicians is to employ personnel with a deep knowledge of how current AI models can be manipulated. Hackers, said Maor, look for vulnerabilities to attack.

One common method they employ is called feedback poisoning, when they purposefully misdirect generative AI models like ChatGPT. At its core, it's a simple tactic: When a model generates a response to a question or request, the threat actor simply tells the model it's wrong, or makes suggestions or corrections to the response that "confuses" the AI. Since users of these models basically "train" them, a malicious actor can mistrain it.

This can come in the form of text or images. Maor shared a story of the time he uploaded a picture of London to ChatGPT and asked it to describe the image. It gave a nonsensical response because extremely small text embedded in the image, naked to the human eye but readable by AI, told it to.

Many healthcare leaders have been keen to adopt AI because of the perceived benefits – according to Medical Economics, one of the most significant of those benefits is improved diagnostic speed and accuracy, which can make it easier for providers to diagnose and treat diseases. AI can be used to analyze X-rays, MRI scans and other medical images, for example, to identify patterns and anomalies that a human might miss.

However, Medical Economics pointed out that there can be potential risks, particularly when it comes to security and privacy. One of the biggest risks is the potential for data breaches, since large quantities of patient data are often targets for cybercriminals. Other types of unique AI attacks include feedback poisoning and model extraction, in which an adversary might extract enough information about the algorithm to create a substitute model.

Meor advised hospital leaders and AI teams to be vigilant and to stay one step ahead of the technology. 

"If you don't know how to use AI, the ones who do are going to take advantage," he said.

Jeff Lagasse is editor of Healthcare Finance News.
Email: jlagasse@himss.org
Healthcare Finance News is a HIMSS Media publication.