As cybersecurity threats to healthcare grow in number and severity, artificial intelligence is helping providers detect vulnerabilities and respond to data breaches faster and with greater precision.
Given that 63 percent of organizations of all types don’t have enough staff to monitor threats 24/7, according to a 2019 Ponemon report, the added defense is crucial. It’s arguably even more important for the healthcare industry, whose data is often considered more valuable than Social Security and credit card numbers.
As a clinical tool, AI can help predict falls in seniors and identify early signs of sepsis, and it's poised to shape many other facets of the industry, from disease detection to administrative tasks. As an IT defense mechanism, however, AI may be employed to recognize network behavior unlikely to represent human action, keep watch for fraud threats and predict malware infections based on previously identified characteristics.
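One simple signal behind "behavior unlikely to represent human action" is timing regularity: scripted malware often phones home at near-constant intervals, while human activity is bursty. The sketch below is illustrative only; the data, threshold and function name are hypothetical, and a production system would combine many such features in a trained model.

```python
# Hypothetical sketch: flag network activity whose timing is too regular
# to be human. Timestamps and the jitter threshold are made-up examples.
from statistics import pstdev

def looks_automated(timestamps, max_jitter=0.05):
    """Return True if the gaps between requests are suspiciously uniform."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2:
        return False
    # Low standard deviation of inter-request gaps suggests a script, not a person
    return pstdev(gaps) < max_jitter

# A human browsing session (irregular gaps) vs. beaconing malware (steady gaps)
human = [0.0, 1.7, 2.1, 6.4, 7.0, 12.8]
beacon = [0.0, 5.0, 10.01, 15.0, 19.99, 25.0]

print(looks_automated(human))   # irregular timing
print(looks_automated(beacon))  # near-constant timing
```

In practice this heuristic would be one feature among hundreds, but it shows why machine-generated traffic can be separable from human traffic at all.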
Such intuitive IT capacities offer “preventative medicine, helping prevent the infection in the first place,” writes Rob Bathurst, an adviser for anti-virus software firm Cylance, in a recent white paper about AI and healthcare infrastructure.
Although patient- and provider-facing uses may be the more familiar AI applications in healthcare, protection is gaining steam: AI-enabled security is among Gartner’s Top 10 Strategic Technology Trends for 2020, and an Accenture report forecasts that AI’s value in healthcare security will reach $2 billion annually by 2026.
Moreover, 69 percent of organizations believe AI will be necessary to respond to cybersecurity threats, a July 2019 report from Capgemini found.
Using AI to Protect Healthcare Data
At Florida-based Halifax Health, a firewall employs AI to detect attacks based on the wrapper that cybercriminals place around their malware payloads. This function, as CDW cybersecurity expert Alyssa Miller notes, enables Halifax to protect against even zero-day threats that target undiscovered weaknesses.
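One well-known static signal for spotting the "wrapper" around a malware payload is byte entropy: packed or encrypted content looks nearly random, while ordinary files do not. The sketch below is a hedged illustration of that single signal, not Halifax Health's actual firewall logic; the sample data and thresholds are invented.

```python
# Illustrative sketch: packed/encrypted malware wrappers tend to have
# near-random bytes, so high Shannon entropy is one classic red flag.
# Sample inputs are stand-ins, not real traffic.
import math
import os
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte, from 0.0 up to 8.0."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

plain = b"GET /index.html HTTP/1.1\r\nHost: example.org\r\n" * 20
packed = os.urandom(1024)  # stands in for an encrypted/packed payload

print(round(byte_entropy(plain), 2))   # low: repetitive text
print(round(byte_entropy(packed), 2))  # close to 8.0: near-random bytes
```

Entropy alone would misfire on legitimate compressed files, which is exactly why vendors layer it with many other features in an ML model rather than using it as a standalone rule.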
The AI strategy isn’t taken lightly. “At the end of the day, cybersecurity is a war,” Halifax CIO Tom Stafford said earlier this year at HIMSS 2019 in Orlando, Fla. “There are people trying to attack you and your data.”
And consequences can be deadly: Ransomware and data breaches are linked to an increase in fatal heart attacks, an October 2019 study by Vanderbilt University found. The reason: Post-breach security measures add steps for care teams, taking time away from time-critical treatment.
As a result, vendors are implementing AI in numerous security tools, Miller notes. This includes Cisco Systems, which employs the technology in its next-generation firewalls, its Cloudlock cloud access security broker solution, cognitive threat analytics and Cisco Advanced Malware Protection, among other solutions and services.
IBM’s Watson, which uses AI, is helping expedite routine security assessments, reduce response times and false positives, and provide recommendations based on deep analysis, Healthcare Weekly notes. That’s a plus for stretched healthcare IT staffs.
Predicting Unusual Behavior with AI
AI has been a powerful tool for Boston Children’s Hospital, whose patient records in 2014 were targeted by the hacking group Anonymous. The technology has since helped the hospital strengthen existing security structures and protocols.
“By using AI, we can do a better job at being more prospective and staying one step ahead and starting to be able to detect that anomalous behavior or activity as it’s happening,” Dr. Daniel Nigrin, the hospital’s senior vice president and CIO, said in a podcast interview with Emerj, an AI market research firm. “Attacks change constantly.”
Such behaviors, he noted, might be a user trying to access logs from the West Coast, or 500 doctors who attempt to view a patient record simultaneously.
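The two signals Nigrin mentions can be expressed as simple baseline checks: logins from unexpected regions, and an implausible number of simultaneous viewers of one record. The sketch below is hypothetical; the region list, viewer cap, and function names are illustrative, not Boston Children's actual system.

```python
# Hypothetical rule-based sketch of the anomaly signals described above.
# ALLOWED_REGIONS and MAX_CONCURRENT_VIEWERS are assumed, illustrative values.
from collections import Counter

ALLOWED_REGIONS = {"us-east"}   # assume users normally log in from the East Coast
MAX_CONCURRENT_VIEWERS = 50     # assumed cap on simultaneous views of one record

def flag_anomalies(events):
    """events: list of (user, region, record_id) tuples.
    Returns (user, reasons) pairs for every flagged event."""
    viewers = Counter(record_id for _, _, record_id in events)
    flags = []
    for user, region, record_id in events:
        reasons = []
        if region not in ALLOWED_REGIONS:
            reasons.append(f"unexpected region: {region}")
        if viewers[record_id] > MAX_CONCURRENT_VIEWERS:
            reasons.append(f"{viewers[record_id]} simultaneous viewers of {record_id}")
        if reasons:
            flags.append((user, reasons))
    return flags

# Example: one West Coast login, plus 500 doctors opening the same record at once
events = [("dr_west", "us-west", "rec_1")] + [
    (f"dr_{i}", "us-east", "rec_2") for i in range(500)
]
flags = flag_anomalies(events)
```

A real deployment would learn these baselines from historical access logs rather than hard-coding them, which is where the AI comes in, but the flagged conditions are the same.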
As Boston Children’s AI strategy evolves, Nigrin advises his peers to follow his lead and cast a wide net when implementing their own defense.
“We are looking at other industries to see what they’ve done” using AI, he said. “I am eager to go outside my healthcare world to third parties and other verticals to see how they’ve addressed the problem.”
AI Is Being Used to Target and Attack Healthcare Organizations
For the many positives that can result from implementing AI as part of a healthcare security strategy, the effort isn’t foolproof: Cybercriminals are recognizing the growth of these defense mechanisms and turning the same technology to their own advantage.
Ron Mehring, CISO of Texas Health Resources, and Axel Wirth, former distinguished technical architect for Symantec, spoke about the threat at HIMSS 2019. AI can help hackers engage in sophisticated social engineering attacks tailored to specific targets, as well as realistic disinformation campaigns, Miller reports in her blog for CDW.
AI also can be used by hackers to find new vulnerabilities or to thwart an organization’s AI-fueled defenses. It’s what Richard Staynings, chief security strategist for biomedical Internet of Things startup Cylera, calls “offensive AI” — intelligence that mutates to learn about a targeted environment and make detection harder.
That can trigger a host of unease: “Did a physician really update a patient’s medical record or did ‘Offensive AI’ do it? Can a doctor or nurse trust the validity of the electronic medical information presented to them?” Staynings asked in an interview with Healthcare IT News. “This is the new threat, and it is best executed by AI.”
Organizations, then, must realize that AI-enabled security can’t be left on autopilot after implementation, according to Reg Harnish, executive vice president at the Center for Internet Security. More important, a thorough risk evaluation should come first to best determine how AI can solve specific problems facing a hospital or clinic.
Otherwise, as Harnish told Healthcare IT News, “if your job is cutting the board in half, no amount of hammers is going to help you do that effectively.”