Medical device technologies can have security and privacy vulnerabilities just like any other product or application – the difference is that the stakes are much higher.
Paul Kostek spent part of his life in the consulting world working with medical device companies, and he remembers the catalyst that put security and privacy concerns into the spotlight in the industry.
“Things like cyber security were not an issue until about five years ago, when somebody successfully hacked an infusion pump,” he says.
“Not only did they hack the pump, but they were also able to backdoor their way into the hospital system.”
In fact, the hack was a demonstration hosted at the BlackBerry Security Summit in 2015, in which ethical hacker Graham Murphy infiltrated a LifeCare PCA pump from medical device firm Hospira (acquired by Pfizer in the same year) and administered a lethal dose of morphine to an empty glass.
Hospira was quick to point out some behind-the-scenes preparation that brought the potential for a hack on hospital premises into question – but the awareness it raised was enough to grab the attention of the FDA.
“The reality was that many medical device companies hadn’t thought about security, so the FDA made a big push that said they now had to consider it,” explains Kostek.
This push started with the Information Security and Privacy Advisory Board, a US advisory committee on cyber security issues, and resulted in FDA guidance to manufacturers on maintaining the security of their medical devices after they reach the market.
The regulator has updated that guidance to reflect the changing cyber security landscape, but according to Kostek – who now serves as a senior member of the Institute of Electrical and Electronics Engineers (IEEE), as well as an advisory systems engineer with Base 2 Solutions – companies still face significant challenges in securing their technologies.
What are the risks associated with medical device cyber security?
As the BlackBerry “hack” showcased, one of the most important cyber security risks associated with medical devices is the potential to cause patient harm through an altered dosage of pharmaceuticals, or even electricity.
“In the US many years ago when Dick Cheney was still vice president and he had a pacemaker, he was paranoid about it being hacked and somebody either turning up or down his defibrillation, putting him into cardiac arrest,” says Kostek.
Hacking an individual medical device in this way could in theory be used to kill someone, but Kostek believes the motivation behind this kind of security breach is unlikely to stretch beyond a personal vendetta or somebody simply looking to prove they can do it.
Hackers are typically motivated by money, which is why infiltrating a device as a prelude to capturing sensitive medical information on the hospital’s IT system is a more likely ambition.
“If you can get access to the hospital system and access patient information, you have names, dates of birth, and all sorts of other information that has value.
“If a hacker does that with hundreds or even thousands of patients, they might go back to the hospital and say ‘we’ll release all this information on the dark web unless you pay us’.”
Other routes of extortion include blackmailing the medical device manufacturer under the threat of exposing the breach, thus harming their market share, threatening the hospital with the same exposure on the basis that its credibility will drop, or using ransomware to lock hospitals out of their own systems until they pay up.
Although there’s no evidence that a medical device has ever given a hacker backdoor access to a hospital system outside of penetration testing, Kostek says there have been breaches of hospital systems in the US in the past.
What makes the issue of cyber security all the more pressing, though, is a report from The International Criminal Police Organization (Interpol) back in April that warned there was a higher risk of breaches due to the critical importance of hospitals working to treat Covid-19 patients.
What are the challenges associated with securing medical devices against hackers?
Despite the obvious need to secure their equipment against a breach, Kostek says medical device manufacturers face a key challenge.
“Most of them have been around for a while, so unless you’re building a new device from the ground up, you’re trying to now hack into your own system and then add in security to existing code, which is always a challenge,” he says.
“Companies who are reusing code or hardware have to go and modify it to meet cyber security standards.
“This is versus if they’re starting from scratch and they know they have to incorporate that into the hardware and software during the design process, which is much easier.”
A second challenge is mostly related to the seemingly rapid development of wearable technologies, some of which are considered medical devices due to their clinical applications.
Much like in the case of hackers infiltrating hospitals, the key risk here is the exfiltration of private data.
But with many wearables being personalised healthcare tools in the consumer space, rather than commercial equipment sold to hospitals, manufacturers could also fall foul of data protection laws simply through being unaware of how different jurisdictions mandate the management of personal information.
How can privacy laws trip medical device companies up?
A range of paid tools exist to help companies firm up their data management strategies and thus avoid the repercussions of breaking the myriad of privacy laws that exist throughout the world – but high-profile examples continue to prove it’s possible to slip up.
Recent proof of this came from European telemedicine provider Babylon Health, whose app connects patients to physicians through video chat.
The company experienced a software error in June that allowed three patients to access the consultation recordings of others – sensitive data protected under Europe’s General Data Protection Regulation (GDPR).
An investigation carried out by the Information Commissioner’s Office (ICO) concluded that Babylon had acted swiftly enough in fixing the error – which itself impacted few customers – to avoid a fine.
But the potential for financial and reputational damage was great enough to reinforce the already steely resolve of at least one software vendor, and to encourage companies to build their technologies with strong data privacy protection in mind.
Swiss firm Pryv chose to open source its data management software in June and has been creating awareness about it in the medical devices space – particularly in telemedicine – ever since.
“In digital health, what’s very important, besides keeping the relationship between doctor and patient remotely, is to collect data,” says Pryv co-founder and chief business development officer Evelina Georgieva.
The sources of this data can be numerous and involve a range of different devices that measure different aspects of health, ultimately informing the personalised care of a patient.
But the risk of collecting it, Georgieva says, is that companies don’t always integrate the most GDPR-friendly user experience into their apps, or build the back end of their systems in a way that makes the retrieval and deletion of user data an easy task – two key aspects of compliance under GDPR, as well as the California Consumer Privacy Act (CCPA).
“What we’re doing is enabling businesses to capture this consent and structure the data in a way that preserves privacy, so at any time a customer using a mobile health app or a device can call the organisation and ask for proof that they consented to the processing of data, ask for a copy of it, or ask for proof that it’s been deleted.”
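The data-subject rights Georgieva describes – proof of consent, a copy of the data, and proof of deletion – can be sketched in code. The snippet below is a purely hypothetical illustration of that pattern, not Pryv’s actual API; all class and method names are invented, and a real system would need durable storage, authentication and tamper-evident audit logs.

```python
from datetime import datetime, timezone

class ConsentLedger:
    """Minimal in-memory sketch of consent capture and data-subject rights
    (hypothetical illustration only -- not Pryv's software)."""

    def __init__(self):
        self._consents = {}   # user_id -> list of timestamped consent records
        self._data = {}       # user_id -> collected health data points

    def record_consent(self, user_id, purpose):
        # Keep a timestamped record so the user can later ask for proof
        record = {"purpose": purpose,
                  "granted_at": datetime.now(timezone.utc).isoformat()}
        self._consents.setdefault(user_id, []).append(record)
        return record

    def store_data(self, user_id, point):
        # Refuse to collect anything without consent on record
        if not self._consents.get(user_id):
            raise PermissionError("no consent on record for this user")
        self._data.setdefault(user_id, []).append(point)

    def export_data(self, user_id):
        # Right of access: hand the user a copy of everything held on them
        return {"consents": list(self._consents.get(user_id, [])),
                "data": list(self._data.get(user_id, []))}

    def erase(self, user_id):
        # Right to erasure: delete the data, return a deletion receipt
        self._data.pop(user_id, None)
        return {"user": user_id,
                "deleted_at": datetime.now(timezone.utc).isoformat()}
```

The design point Georgieva raises is visible here: because consent and data are structured per user from the start, answering “prove I consented”, “give me my data” or “prove it’s gone” is a single lookup rather than a retrofitted search across the back end.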
With maximum fines for breaching GDPR as high as £18m ($23.8m) or 4% of annual global turnover for a company (whichever is greater), Georgieva says the result of noncompliance can be “that you’re out of business”.
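The “whichever is greater” rule means the cap grows with company size. A quick sketch, using the £18m figure quoted above (turnover values are hypothetical):

```python
def max_gdpr_fine(annual_global_turnover_gbp):
    """Upper bound on a GDPR fine: GBP 18m or 4% of annual
    global turnover, whichever is greater."""
    return max(18_000_000, 0.04 * annual_global_turnover_gbp)

# For a firm turning over GBP 100m, 4% is only GBP 4m, so the
# GBP 18m floor applies; at GBP 1bn turnover, the 4% arm
# dominates and the cap rises to GBP 40m.
```

In other words, the fixed figure is a floor for smaller firms, while for large multinationals the percentage arm sets the ceiling.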
What’s the best way to ensure medical devices protect user privacy?
Pryv is a business, so while medical device and software developers may choose to take advantage of its open-source option to shortcut the time it takes to develop a product and reach their market, they should be aware that the free version only covers compliance with the privacy rules of a single jurisdiction.
If they want to expand to other regions with different privacy laws, they’ll need to become a paid customer.
As GDPR is widely considered the strictest data-privacy legislation in the world right now, this shouldn’t be a problem for companies operating in European markets, where abiding by it is a necessity.
But for companies in less regulated areas of the US, for example, where data privacy laws can differ from state to state, Kostek says there’s been a trend of manufacturers designing their technology with the strictest legislation in the country – currently California’s CCPA – in mind.
“Medical device companies a lot of the time are going by the mantra ‘what’s the toughest regulation? Let’s meet those and then we satisfy everybody’,” he says.
“Companies are saying ‘we’re not going to make a California device, it’s just going to be the same everywhere, so we don’t really care if Montana hasn’t adopted California’s rules, they’re getting them anyway.
“It’s a lot easier to develop devices to meet that one set of regulations, rather than worry that they might have shipped US-compliant devices to the EU.”