Private answer
Very timely, Gal, thank you!
In fact, Medical Devices Group Advisory Board member Rebecca Herold invited a security expert to join her on her Security, Privacy, HIPAA – and Hacking Medical Devices workshop in May. Shelby will demonstrate how easy it is to hack a pacemaker! http://ow.ly/tLksn
Private answer
Rebecca Herold
Yes, very timely! Thanks Gal and Joe!
This is a sleeper of a growing, significant risk, and one that more organizations need to address. The hospitals and clinics I work with are starting to realize the significant health, information security, and privacy risks that accompany most of the medical devices they are using. Most are surprised to learn that the devices are running on very old operating systems (OSs), some as old as Windows 95, that are no longer even supported. More are using the security and privacy checklists I provide to ask their medical device providers how these risks are being addressed. There is a trifecta of significant risk areas that must be addressed: 1) patient safety/health; 2) information security; 3) privacy (there is A LOT of data associated with medical devices specific to the patients using them). As Joe indicates, I will be covering these risks, and ways to mitigate them, during a half-day workshop in May in St. Paul, MN, taking place the day before the 10X Medical Device Conference. Hope to see a lot of folks there!
Private answer
Stephen Glassic
From what I have been reading, this is also a hot topic among hospital Clinical Technology Management (CTM) departments. With so many devices connected to the hospital network (many of them wireless), it has become a huge concern with regard to safety and HIPAA compliance. Hospitals have had to redefine the relationship between the CTM and IT departments, as well as upgrade their IT networks and security management. This is also a major consideration when evaluating new devices. The winners will be the device manufacturers who can help hospitals manage security efficiently and effectively, as well as use the technology to streamline operations, improve safety, and reduce clinical errors.
Private answer
Marco CATANOSSI
Furthermore, pure software products that run on computers may also be classified as active medical devices in the EU (e.g., visualization software that aids diagnosis), not to mention some kinds of apps, so the topic is very timely.
Private answer
Michael Kremliovsky
Disclaimer: speaking for myself, not my employer.
OK, I got it. The topic is important and it is a growing concern. Expressing concern is very typical for compliance-burdened organizations; many people are running around being very concerned. What we have a deficit of is people (and a culture) who can address the cyber-security concerns.

Cyber-security crime is not really different from any other crime. Recent reports show that hacking is becoming targeted and professional. Please understand: there is NO good defense against professional hacking. To make this point clear, compare hacking with targeted assassination. There is no defense from the latter either, as far as the general public is concerned, so we keep the NSA and other specialized forces to counter the threat. Compliance with regulations keeps away script kiddies only, because any sophisticated attack factors in all the elementary defenses.

Our best bet in defending medical devices (as well as financial systems) is making the information useless/pointless for the attackers. This, of course, assumes that all the regular best practices are taken care of: encryption of data at rest and in transit, proper management of private keys, authentication and authorization management from the infrastructure layer up to the application layer, and monitoring with early threat identification and prevention. Yet, as I insist, this would not stop targeted attacks by highly qualified (and sponsored) groups. So, we have three major fronts: diminish or eliminate the value of hacking, counter-intelligence (including numerous decoys), and improve criminal prosecution to the degree of extremely high risk for the attackers.
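To make one of those "regular best practices" concrete, here is a minimal sketch of encrypting device data at rest with authenticated encryption (AES-GCM, via the Python cryptography library). It is an illustration under assumed names, not any vendor's implementation; in particular, real key management would live in an HSM or key-management service, never next to the data.

```python
# Minimal sketch: encrypting device records at rest with AES-GCM.
# Illustrative only -- a real device would fetch the key from an HSM or
# key-management service, not generate and hold it in application code.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_record(key: bytes, plaintext: bytes, device_id: bytes) -> bytes:
    """Encrypt one record; the device ID is bound in as authenticated data."""
    nonce = os.urandom(12)                      # 96-bit nonce, unique per record
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, device_id)
    return nonce + ciphertext                   # store nonce alongside ciphertext

def decrypt_record(key: bytes, blob: bytes, device_id: bytes) -> bytes:
    """Raises InvalidTag if the record or its device binding was tampered with."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, device_id)

key = AESGCM.generate_key(bit_length=256)       # in practice: from an HSM/KMS
blob = encrypt_record(key, b'{"hr": 72, "rhythm": "NSR"}', b"pacemaker-0042")
assert decrypt_record(key, blob, b"pacemaker-0042") == b'{"hr": 72, "rhythm": "NSR"}'
```

Binding the device ID in as associated data means a record lifted from one device cannot be silently replayed as another device's data, one small example of making stolen information less useful to an attacker.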
Private answer
Michael Kremliovsky
Gal, there is another significant factor that should not be overlooked. In many cases, strict security practices severely limit the interoperability and ease of use of medical devices. They may also affect the efficacy of the therapies and diagnostics. Security does NOT come for free, and consumers of medical systems are often not ready to pay a premium and/or deal with elaborate procedures. Another factor that affects our ability to deal with cyber-security is the zoo of many generations of systems that we have deployed. It would take a significant capital investment to update all these systems and make them more hacking-proof.

As you can see, proclamations are easy to come by, but the reality is challenging. Consider the "public outrage" at the NSA's activities. The public knew about it, people were given hints many times over, and yet, once it was "discovered," it caused a media storm. Seriously? Don't we understand that our personal computers are easy to hack? That means hackers can get access to our corporate VPNs and, if they are good enough, step by step take over corporate networks. We are pretty wide open to those who really go for it. So, our best bet is to layer the cake, remove incentives, and clearly state that cyber crime is the same as any other crime.
Private answer
Stephen Glassic
Michael, I was wondering what the likelihood is of an attack through the laptops Field Service Engineers use to service and update medical devices in the field, and whether enough is being done to reduce that possibility. I would also be interested to know how seriously hospitals are taking proactive security measures. Are they developing their own systems, or are they partnering with specialists?
Private answer
Rebecca Herold
There are three categories within which security incidents and privacy breaches occur:

1. Malicious intent. This includes activities such as hacking, as well as disgruntled insiders using their authorized capabilities to do bad things.

2. Mistakes. *EVERY* human being, no matter how educated and cautious, makes mistakes sooner or later. Some of these mistakes can have devastating consequences. These are not only mistakes in using technology, but also mistakes in building it. (Case in point: the latest Apple iOS 7 security flaw was the result of a *HORRIBLY BAD* programming mistake, or a simple lack of programming education.)

3. Lack of knowledge. I've helped hundreds of organizations (the majority in the healthcare sector) with their information security and privacy initiatives, and audited hundreds of other companies' information security and privacy programs. I've *NEVER* found an organization that gave too much training, or too much regular awareness communication. Organizations in general simply do not provide enough information security and privacy training, and do not send out enough reminders on how to mitigate security and privacy risks during normal work activities. When people don't know how to work while preserving security and privacy, they will do things that result in security incidents and privacy breaches.

When I first started my career, one of my very experienced co-workers was a fraud investigator for the large multinational insurance and financial organization where I worked. He indicated, not only from his experience but also from various studies done by the fraud institute he belonged to, that people in general fall into three categories with regard to committing fraud, which also includes doing bad things such as hacking, stealing, etc.:

* 10%: There will always be this portion of the population who will strive to follow the rules and never break them, regardless of potential sanctions/punishment. The "goody two-shoes" population.

* 10%: There will always be this portion of the population who will strive to steal, commit fraud, hack, or engage in any other illegal activity, regardless of potential sanctions/punishment. The "career criminals" and "bad seeds."

* 80%: This large majority of the population will usually follow the rules, but *will* break them (to steal, hack, commit fraud, etc.) if they 1) think they can do so and get away with it without being caught (e.g., no oversight, lack of separation of duties, etc.); 2) justify in their mind why they should break the rules (e.g., they are underpaid, they should have gotten a promotion instead of Sue Doe, they were treated badly by a business, etc.); or 3) become desperate (e.g., lose their job, their home, a loved one, etc.). Depending upon potential sanctions/punishment, their reasoning and opportunity will often lead them to take action that is illegal.

Continued in next comment...
Private answer
Rebecca Herold
Continuation from earlier comment...
No matter how the incentives change for those who hack (those with malicious intent), we will still need strong security controls to mitigate the risks from those who make mistakes and those who have not had enough training to know better. No matter how the punishments change for those who hack, there will always be the 10% who are going to do it anyway regardless of potential punishment, and the 80% who, under the right circumstances, will do things they shouldn't. In addition to incentives and sanctions, we need multiple layers of information security and privacy controls built into technologies, including medical devices, for all these reasons. Incentives and punishments alone are not enough to ensure appropriate levels of security.

When we add the health risks to the patient on top of the information security and privacy risks, it becomes even more compelling and clear that strong security and privacy need to be implemented within medical devices, right from the time they are engineered. The need for more security and privacy controls is greater than ever, and not simply because of the hacking threat alone. And it is only going to increase as we increase our utilization of, and dependence upon, technologies. Just because you cannot have 100% security (which has never existed, by the way, except for technology that was decommissioned, locked up, and not used) doesn't mean you should implement fewer controls. As technology, and our uses of it, become more complex, the need becomes much greater for more layers of security and privacy controls. These must be built in, so that they are transparent and have as little impact as possible on usability.
Private answer
Michael Kremliovsky
Rebecca, we love consultants and academicians; they always know what to do and what is best for the customer :)

Stephen, in my experience, Field Service Engineers are by far not the weakest point. They are usually pretty good at what they do and, most importantly, they have their eyes on the ball. Of course, we still try to wrap them in a good process, and it works reasonably well. However, weaknesses exist in product designs, because systems integration and interoperability lag significantly behind the requirements of having end-to-end secure systems. Hospitals come in all shapes and forms, and their cyber-security readiness ranges from 0 to 9+ on a 10-point scale. Those who rank 0 usually don't have IT budgets and sometimes don't even have EMR systems in place, so even if they lose patients to errors, there is no good way of investigating. You can imagine that cyber-security is deeply secondary in their minds, because they cannot even achieve good practices of clinical integration. So, reality is neither academic nor pretty.

Now, back to Rebecca's points. Normally I would be preaching the same lines to customers and general audiences, but here I want to turn the tables and raise the points that we, as the public, need to be seriously thinking about. We need a very serious mechanism for investigating and prosecuting cyber crimes. We also need the implementation of better technologies, including strong authentication mechanisms that would help identify attackers and the weaknesses that have been exploited. We also need compliance enforcement, so that security becomes truly significant among customers' priorities (read: financially significant), not just preaching. And we should all prepare for the cost of compliance too; it will be more than a penny.
Private answer
Stephen Glassic
We can only hope that all (or most) of the security professionals are in the goody two-shoes category and are being closely watched by other goody two-shoes.

I remember a story a while back about a hacker who was hired by a government agency to develop security against hackers. I can understand the reasoning, but how could they be sure he wouldn't go back to his old ways? They would never hire a hit man to guard the President. Or would they?
Private answer
Michael Kremliovsky
Stephen, this is a separate and pretty fascinating topic, in my opinion. Thinking that only "bad guys" can have skills is pretty ridiculous, but I would also question whether "good guys" lack the aggressiveness necessary for inventing evil ways of attacking. For example, it would be difficult for many people to attack medical facilities, because we have respect for those who are ill. Yet there are some people who would not blink. It is a challenge. Good guys don't like to kill, and their trigger time might be longer than for those who don't believe that human life and dignity are important. This is why we need to develop mechanisms of prevention and very aggressive investigation. This is where the good brains should be excelling.
Private answer
Rebecca Herold
No one said that only bad guys have skills. In fact, the good guys need comprehensive, wide-ranging skills to build effective security controls. They need to be able to think like a hacker without being a hacker (plus so much more). Bad guys often have just one or two tricks they use to exploit a well-known vulnerability to do their dirty work. The fact is, security is necessary to defend not only against bad guys, but also against good guys who make mistakes, and who do things that result in incidents simply because they were not provided with enough security or privacy education or guidance. Security is not simply protection against the aggressive; they are only one of the groups from which threats originate. We need to address all threat vectors, not just threats from hackers.

Stephen, regarding hackers, there are the "white hats" and the "black hats". Many of the proclaimed white hats were previously black hats, as you indicate. Or they may be former disgruntled employees. I was responsible for building and managing the information security and privacy program at the large multinational corporation I referenced earlier, where I worked for 12 years. I was vetting outside security vendors for a penetration test I wanted an objective entity to perform against a new online bank we were implementing before it went live. One of the vendors was very impressive: great references, longevity in the business, etc. However, when I asked about their team, I learned that one of their team leaders used to work at my organization; he had been a senior systems admin in the IT area. When he worked at my org, I had discovered him committing a wide range of policy violations, and the fraud investigator friend I mentioned earlier also found he had committed fraud with his admin access. He had tried to cover it up, but my info sec area noticed the anomalies. He was fired. He was disgruntled. He vowed that we would be sorry and that he would get back at us. What a coincidence that he then got work at a company where he might get that opportunity! To make a long story short, I did not hire that security vendor for the job. :)
Private answer
Gal Landesman
Thanks for all your comments.
Marco, you are right. I've read part of the excellent thread here: http://www.linkedin.com/groupAnswers?viewQuestionAndAnswers=&discussionID=5840131183534432257&gid=78665&goback=.gde_78665_member_5841672057288417281#commentID_null The recent developments in medical mobile apps are exciting, but they are also a reason for concern as mobile malware attacks increase.

Michael, I would add to your list increasing the awareness of healthcare practitioners and medical device developers. Compliance does not ensure security, and I think we could do much better. A recent Norse-SANS report found a large amount of malicious traffic being sent from medical organizations' systems. Many of the threats could have been prevented. Also, it seems the latest regulatory developments place a lot of responsibility on organizations to protect themselves or face penalties. I guess it's more difficult to prosecute the cyber criminals themselves, many of whom operate covertly from Russia and Eastern Europe.
Private answer
Jeff Walker
Part of the OIG 2014 Work Plan - will it help?

Controls over networked medical devices at hospitals (new) - Protected Health Information. We will determine whether hospitals' security controls over networked medical devices are sufficient to effectively protect associated electronic protected health information (ePHI) and ensure beneficiary safety. Context: Computerized medical devices, such as dialysis machines, radiology systems, and medication dispensing systems that are integrated with EMRs and the larger health network, pose a growing threat to the security and privacy of personal health information. Such medical devices use hardware, software, and networks to monitor a patient's medical status and transmit and receive related data using wired or wireless communications...
Private answer
Michael Kremliovsky
As I mentioned above, these types of concerns (justified, no question) have so far led to disintegration. We cannot share and protect at the same time: you are either sharing or not sharing. The cost of compliance currently outweighs the benefits, so many companies decide not to develop features that trigger privacy concerns. Note that the concerns are mostly theoretical, because somebody showed somewhere that things can be hacked. The truth is: most things can be hacked given enough time, access, and skill. The security of patient data is being breached through stolen laptops rather than dialysis machines, yet we raise concerns where we have light and not where we have a problem (in the dark). A similar irrational risk/benefit analysis takes place on other healthcare fronts, an interesting phenomenon that I personally attribute to regulatory activism and business development by various consultants.
Private answer
Stephen Glassic
Thank you, Michael and Rebecca, for your input on this very timely and important subject. You both have a way of explaining things so they can be understood by those outside your field of expertise.

I was wondering: with many device manufacturers and software developers concerned about proprietary software, and with technology advancing quickly, are there, or will there emerge, standardized platforms that can be seamlessly updated and integrated with all medical devices as technology advances, allowing for cross-country and worldwide integration in a safe environment?
Private answer
Michael Kremliovsky
Stephen, it is a very large topic. Firstly, on the security side, almost nothing is proprietary, because hiding is considered the worst practice in communication and data security, unless you just cut the physical link altogether. Therefore, devices and other IT systems have to rely on public standards and protocols. As far as interoperability is concerned, we did not have a business model for a long time, and to a large degree we still don't. "Working together" is good in theory, but it does not always deliver a reward to the participants. This is slowly changing, but not without losses on the part of the manufacturers. Interoperability is a good trend and it should eventually win, but it will happen in an evolutionary (read: "slow") way. Manufacturers will have to find ways of doing things more efficiently to compensate for the revenue lost from the monopolization/bundling of their offerings.
Private answer
While software barriers are important, I don't think enough attention gets paid to the hardware and "midware" where the initial intrusion occurs. We hear all the time how hackers are able to gain access by co-opting the circuitry of ATMs, point-of-sale card-swipe terminals, and RFID ports. Johnson Electric / Parlex has a technology, Secure-Flex™, which destroys the circuit if an intruder tries to gain access. Johnson has not yet recognized the potential value of this in the medical device field. I would encourage your design engineers to look into it. The more barriers to critical information, mechanical and software-related, the better.
Private answer
Anthony Antonuccio
In my experience, this is very solvable with existing security measures, using client authentication certificates and end-to-end encryption, when the origins, pathways, and destination points are in a closed, secure system. How often have you heard of stock exchanges being hacked and the attacks going unprosecuted, while billions of transactions occur? Movies take pride in showing our nuclear arsenal being hacked, yet it hasn't happened since its inception. A bit tongue in cheek, but this isn't rocket science. It's all about trusting the holders of the keys and ensuring there are fail-safe systems in place where no one person holds the master key, because the master key doesn't exist.
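Anthony's closed-system model maps naturally onto mutual TLS, where both sides present certificates issued by a private authority. A minimal sketch using Python's standard ssl module follows; the file names, port, and CA arrangement are assumptions for illustration, not a prescribed architecture.

```python
# Minimal mutual-TLS sketch for a closed device network: the server only
# accepts clients presenting a certificate signed by the hospital's own CA,
# and clients verify the server the same way. Paths/hosts are hypothetical.
import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.verify_mode = ssl.CERT_REQUIRED                 # reject unauthenticated clients
ctx.load_cert_chain("server.crt", "server.key")     # this endpoint's identity
ctx.load_verify_locations("hospital_ca.crt")        # private CA, not public roots

with socket.create_server(("0.0.0.0", 8443)) as sock:
    with ctx.wrap_socket(sock, server_side=True) as tls:
        conn, addr = tls.accept()                   # handshake enforces the cert
        print("device connected:", conn.getpeercert()["subject"])
```

Because only the hospital's private CA is trusted, a client without a hospital-issued certificate never completes the handshake, regardless of how much it knows about the protocol.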
Private answer
Michael Kremliovsky
Anthony, it is not rocket science, but it costs money to manage things, and almost nobody wants the additional work. Would you replace the pipes in your house preventively, or just wait until something breaks? 90% (if not more) react rather than act proactively.
Private answer
Anthony Antonuccio
Michael, if my pipes were leaking, I certainly would. If my apartment door in NYC had no locks, I certainly would add one. I don't believe this is an "if it isn't broke, don't fix it" issue. It's fundamental security, especially when dealing with sensitive information.
Private answer
Michael Kremliovsky
Anthony, your pipes are not leaking yet, and you live in suburban California... you would not care, statistically speaking, I guarantee :)
Private answer
Jeff Witt
I would fear the government the most, because of the NSA and their PRISM program.
Private answer
Gal Landesman
I have to agree with Anthony: a lot of these problems are solvable with existing means, like those the financial industry applies (not always successfully, alas). At least the Health Information Technology for Economic and Clinical Health (HITECH) Act is now investing a lot of money in healthcare IT-related initiatives, including protecting EHRs.

See also: http://www.theregister.co.uk/2014/02/28/microsoft_botnet_takedown_team_say_healthcare_is_herders_next_target
Private answer
This is a great thread!
As a network security reseller, we see the gaps in current technology and in the lengthy process of developing and implementing policy. The "bad hackers" are often quicker than our device upgrades and security processes are, especially in healthcare, with its budget restrictions and lengthy approval processes. Because of this, we partner with cutting-edge emerging technology vendors to keep devices safe in several ways. One solution recognizes that some devices may be too old to be considered "secure" by HIPAA standards, due to the operating systems they run on, an inability to be upgraded, etc., yet are still functional and effective. We have solutions to keep such devices safe, HIPAA compliant, and protected from hackers without changing the user experience; the device is simply secured. The true key in IT security is staying current with technology and making sure you are filling the gaps that hackers are constantly trying to exploit.
Private answer
Michael Kremliovsky
Gal, I think the point is being missed. We know how to secure devices, but we don't know how to keep them usable after that. It is a trade-off: security vs. usability. The key to understanding the issue is that devices are usable every minute, but hackable only once in a while, when criminal activity takes place. This trade-off is not much different from any other security-related trade-off, such as airport security.
Private answer
Rebecca Herold
Folks, you need to implement information security *and* privacy controls:

* To protect from hackers, but *NOT ONLY* hackers; you ALSO need them
* To protect against mistakes (all humans make them),
* To protect against people doing things simply because they didn't have the knowledge (lack of training, lack of knowledge of associated risks, new threats and vulnerabilities, etc.),
* To meet compliance requirements (or face huge fines and sanctions), and
* *To protect the patient*!

Implementing security *and* privacy controls will be most successful if done in a way that is most effective, most efficient, and supports usability. Instead of prolonging the ineffectual and non-productive claim that you can't implement security and privacy controls because they will impact usability too much, you need instead to be thinking about, implementing, and creating as necessary:

* The technology tools we need to use, some of which may be radically different from anything we've used before, to minimize the usability issues. We need to engineer the devices with controls built in, and then supplement as necessary with add-on technologies.
* The administrative and operational activities that can be implemented to strengthen the security and privacy of the devices and the data stored upon them while supporting usability, and
* The physical controls that can be implemented to strengthen the security and privacy of the devices and the data.

All three of these types of controls (technical, administrative, and physical) are necessary to support effective security and privacy while minimizing the associated usability issues. These are all topics covered at my workshop in May (http://medicaldeviceevents.com/2014-conference/medical-device-workshops/#security-privacy-HIPAA). With all the interest shown in this thread, I would hope that all who are responsible for, and concerned with, these issues would attend and engage in thoughtful dialogue during the workshop, so that we can address the risks and make devices that are more secure, with better privacy controls, and that as a result support usability.

Simply saying security is irreconcilable with usability (it should not be security versus usability) is not a productive viewpoint. We need to think differently about the unique risks inherent in medical devices, and then create new security and privacy solutions, as necessary, to address them while supporting usability, in addition to using any existing controls that are appropriate. This should be seen as a great opportunity for medical device makers: to build in security and privacy controls in ways that have not existed to date. It can be done with open minds. This security-versus-usability debate has been going on ever since we've been using computers for healthcare/business/etc. We finally got past this logjam of a non-productive opinion with other technologies by being creative and innovative. We need to start addressing both for medical devices instead of just continuing the same old debate.
Private answer
Michael Kremliovsky
Rebecca, you can preach this all day long (I do too), but preaching does not eliminate the root cause prevailing in the industry. Simply saying that something is not a "productive debate" does not replace IT infrastructures and does not upgrade devices. In fact, I have not seen a single engineer who says "security is not important" (nor even marketing people), but we have numerous logistical challenges and inconsistencies, so when you convert preaching to execution, things fall below your expectations. So, from a practical point of view, it makes sense to adopt a risk-based approach (as we do with all other safety-related issues): we identify the areas where we have maximal risk and address those risks in such a way that the solution can be adopted (and benefited from).
Private answer
Rebecca Herold
Michael, I'm not just preaching; I'm taking action, and have been for over two decades throughout my work. I am encouraging others to take action also, not just say it can't be done.

And yes, of course, all security and privacy controls chosen must be based upon the risks within the environments in which they are implemented. This is a given. You must consider the risks across all five areas I've already enumerated. (Also covered in the workshop.)
Private answer
Michael Kremliovsky
I am trying to draw a line between advertising and professional discussion, if you will. I did not mean to imply that you are not doing these things, or that what you are saying is wrong. Quite the opposite: I support your suggestions, and I typically express them in a very similar way when I am talking to stakeholders. However, Gal's question is posed for discussion and, in all my history, I am not aware of adverse events caused by security problems. I am aware of several adverse events due to poor interoperability and usability of devices, though. I always insist on higher security based on the "New York Times headline" argument, but so far the headlines have come mostly from hacking experts who spent time and managed to break into devices. Yes, today we still need to preach and do, the way you suggest, but I want to see the problem mitigated differently in the (ultimate) future. Security needs to be reasonable, manageable, and "automated" as much as possible. The weakest link in the chain will always be a human, and this is where we need to make sure we have the best defense. Ideally, we protect people from themselves and give them no excuse, by making security easy to follow.
Private answer
Rebecca Herold
Michael, if you agree with me, then we are basically both supporting new and more effective security...and privacy...controls and expressing ourselves in different ways; thank you for agreeing with me. :)
My workshop co-presenter, Shelby Kobes, is doing his PhD research on this topic, and we'll be presenting what he has found with regard to adverse events that have occurred from insufficient security controls. I will also provide some examples from my hospital and clinic clients.
Private answer
Andrei Kamandzenka
Rebecca, your intentions are good but lack a practical strategy.

Firstly, modern medical devices are complex programmable systems composed of complex programmable subsystems, exchanging huge arrays of data among themselves and with the outside environment. Every data exchange link is a potential security and privacy breach point. Eliminating the breach points will require revising many things, from systems down to their hardware components, based on the appropriate industry standards, which have yet to be developed too. In the end it might, and probably will, turn out that the majority of existing (already installed and produced) systems are not upgradable and will have to be either removed or replaced over some (lengthy) period of years. On one hand this seems a good opportunity for manufacturers to sell more systems; on the other hand it will increase manufacturing prices (especially due to further IT development and the discovery of new breaches in previously secure technologies). Now think about how sufficient medical care coverage in the world is today, and how it would be affected by implementing security and privacy measures at the equipment level. What fraction of customers would financially vote for the changes?

Secondly, about Michael's weakest link: people. OK, I believe many things can be done at the equipment level - software, authentication, sensors, monitors, etc. However, I doubt these will provide sufficient security and privacy, since in the end the systems are still operated and serviced by humans, and humans bring their personal attitudes to the processes. Removing that personal attitude would require that the processes be performed by robots. In the end it would require that our future physicians be robots instead of humans. Do you believe that's practical?
Private answer
Michael Kremliovsky
Andrei detailed my comment very well. Rebecca, if you want additional use cases, you can always ask device specialists and they will tell you what is involved in clinical integration. We do NOT consider HIPAA to be anywhere close to being challenging. Again, Andrei covered the perimeter pretty much.
Private answer
Jeff Walker
Med device software developers are now starting to write secure code and this is where it all should start. Good news for pre-market devices, but we still face the daunting challenge of trying to secure systems that have been in patient use for years...
Private answer
Shelby Kobes
Thank you everyone; the information in this conversation is very insightful. From what I have seen, there is a huge issue developing in how to secure medical devices. The FDA regulations are weak, but the FDA was never designed for advanced technical oversight. I have seen devices with very little security designed into them, on the assumption that they will only be used in a hospital setting. I have talked to large hospitals in many states that are dealing with tens of thousands of devices on their networks. This does not count BYOD, patient computers, and mobile devices. I have seen denial-of-service conditions caused by too many wireless systems placed too close together. The problem is a complicated issue involving the relationships among hospitals, manufacturers, and government. Designing a technical fix will help in some cases, but it still does not stop the doctor from walking out the door with 50,000 files on a USB drive, or backup drives being lost in the mail. Most HIPAA violations have been physical in nature, not due to hacking, but that doesn't mean we should be weak on security. With how fast this market is moving, weak security could be truly horrifying. I believe the answer falls to a hospital's ability to prioritize the devices within its walls in a manner aligned with the stakeholders' objectives. This allows security teams to focus on the systems that are vital to the hospital's mission, and not waste time securing systems that might have little effect on patient care and hospital goals. It also allows hospitals to set their own standards for what is acceptable to place on their networks. I have worked for a medical device company, and I know that if customers will not buy a product unless it has the required list of security features, then those features will be available soon. For example, the pacemaker programmer that I have is a newer model, and it doesn't even have a user logon. All the records, all the data, are open to anyone who walks up to the device. Why should a hospital allow such a system to be integrated into its environment when the hospital is responsible for the HIPAA violation when it loses data?
Private answer
Michael Kremliovsky
Good point (on the pacemaker), and it is a very common problem. Having authentication on the programmer raises several logistical problems. In fact, the only really feasible ways of doing this are either connecting the programmer to the hospital IAM system or using scannable ID cards. In both cases, the manufacturer of the programmer has to accommodate the whole zoo of possibilities and variations in different hospital IT systems. Moreover, it is almost certain that the setup will require qualified intervention by IT (and/or biomed) people, and if the hospital is small, that is a recipe for annoyance.
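As a sketch of the second option, here is one hypothetical shape the scannable-ID-card route could take: the programmer keeps a local allow-list of salted badge-ID hashes, synced from the hospital IAM system when connectivity exists, so authentication still works offline in small facilities. Every name and design choice here is an assumption for illustration, not any manufacturer's actual scheme.

```python
# Hypothetical badge-based operator check for a device like a pacemaker
# programmer. The allow-list holds salted hashes (never raw badge IDs),
# refreshed from the hospital IAM system when the device is online.
import hashlib
import hmac
from typing import Optional

SITE_SALT = b"per-site-salt"          # provisioned at install time
ALLOW_LIST = {                        # synced periodically from hospital IAM
    hashlib.sha256(SITE_SALT + b"BADGE-1001").hexdigest(): "Dr. Example",
}

def authenticate_badge(scanned_id: bytes) -> Optional[str]:
    """Return the operator name for a scanned badge, or None to deny."""
    digest = hashlib.sha256(SITE_SALT + scanned_id).hexdigest()
    for stored, operator in ALLOW_LIST.items():
        if hmac.compare_digest(digest, stored):   # constant-time comparison
            return operator
    return None

print(authenticate_badge(b"BADGE-1001"))  # -> Dr. Example
print(authenticate_badge(b"BADGE-9999"))  # -> None
```

The point of the local cache is exactly the small-hospital problem Michael raises: the device does not need a live link to the IAM system, or on-site IT staff, at every use.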
Private answer
Shelby Kobes
I agree that there are going to be roadblocks. I was just very surprised at my ability to get top-of-the-line equipment from eBay, loaded with client data. Then I was surprised that I was able to turn on the system without any authentication. Next, I was able to interrogate the pacemaker without any authentication. But what really scared me was when I got the network card to remotely download updates from some server, somewhere, without any verification that these were legitimate updates. It seems that somewhere in that loop I should have been questioned by something about who I am and why. All the tools are there; why spend hours trying to decode the systems when you can buy the software and devices and modify them to do your work for you?
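The unverified-update path Shelby describes is exactly the gap code signing is meant to close. Below is a minimal, hedged sketch of a device-side check: the manufacturer's public key is baked into the firmware, and any downloaded image whose signature fails to verify is rejected. The Ed25519 usage follows the Python cryptography library; the key value and file names are placeholders.

```python
# Sketch of verifying a downloaded update before applying it: the device
# ships with the manufacturer's public key baked in, and refuses any image
# whose signature does not check out. Names/paths are illustrative.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

MANUFACTURER_PUBKEY = bytes.fromhex(
    "0" * 64  # placeholder: the real 32-byte key would be burned into firmware
)

def verify_update(image: bytes, signature: bytes) -> bool:
    """Return True only if the image was signed by the manufacturer's key."""
    key = Ed25519PublicKey.from_public_bytes(MANUFACTURER_PUBKEY)
    try:
        key.verify(signature, image)
        return True
    except InvalidSignature:
        return False

with open("update.bin", "rb") as f, open("update.sig", "rb") as g:
    if not verify_update(f.read(), g.read()):
        raise SystemExit("rejected: unsigned or tampered update")
```

Verification on the device itself matters because the check still holds even when the update server, or the network path to it, is compromised.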
Private answer
Michael Kremliovsky
Shelby, your example is really amazing. The most amazing thing is how this equipment got to eBay in the first place. Was it marked "not for clinical use" when you received it?
Private answer
Shelby Kobes
I was able to social-engineer my way to a newer-model pacemaker programmer with client data on it (which I destroyed); I also have four pacemakers that were given to me. The pacemakers were not for use, but the programmer has plenty of use on it. Since the programmers hold the key to interrogating the medical devices, I do not believe these items should be on the open market. Not only was there client data on it, but there were also internal hospital customer numbers that a person could use to social-engineer their way into the hospital system. And this is just what I found on the surface; who knows what I could find if I pulled the hard drive and exposed it to EnCase or FTK. Also, the Windows version and product key are on a sticker on the outside of the device for everyone to read.
Private answer
Rebecca Herold
Shelby, your research, and just the few findings you mention here within those five major risk areas I listed in an earlier comment, should be eye-opening. It should provide some impetus not only for hospitals and clinics to pay more attention to the security of these devices, but also provide yet more reasons and examples for why medical device engineers need to start building security and privacy controls into devices, and to think of new controls to replace the long-standing security controls that do not work well on these devices.

Information security and privacy controls (technical, administrative, and physical) must evolve to counter the threats that emerge from new technology. If this is not appropriately addressed, not only will security incidents and privacy breaches occur, but medical device manufacturers, and the providers using their devices, are putting the health of their patients at risk. (We'll cover all this in our workshop.) Yet more reasons why action is necessary to address medical device security and privacy proactively.

An excerpt from a recent study on the (sad) state of information security and privacy protections within the healthcare industry: "Connected medical devices, applications and software used by health care organizations providing everything from online health monitoring to radiology devices to video-oriented services are fast becoming targets of choice for nefarious hackers taking advantage of the IoT to carry out all manner of illicit transactions, data theft and attacks. This is especially true because securing common devices, such as network-attached printers, faxes and surveillance cameras, is often overlooked. The devices themselves are not thought of as being available attack surfaces by health care organizations that are focused on their more prominent information systems." SANS-Norse Report, http://www.forbes.com/sites/danmunro/2014/02/20/new-cyberthreat-report-by-sans-institute-delivers-chilling-warning-to-healthcare-industry/
Private answer
Michael Kremliovsky
Rebecca, what Shelby was talking about is largely related to the "human factor" (or its exploitation, "social engineering"). The technology is available, but it is not being used, in spite of the processes in place. Just a side comment.
Private answer
Rebecca Herold
Actually, he pointed out both poor technology capabilities and human factors. As I outlined in an earlier comment, it is necessary to address all types of controls within the three primary domains (administrative, technical, physical) if you want effective security and privacy.

Additionally, the technology that exists was not created for medical devices; it was made primarily for humans sitting at keyboards. We need to look at the ways devices can be engineered differently, to build in security and privacy controls that have not yet been used. Such new, inherent-to-the-device controls will ultimately not only better mitigate security and privacy risks, but also address the usability problems of existing technologies that you pointed out in other comments.
Private answer
Michael Kremliovsky
Yep, if you want to solve this technologically, then my previous comments apply - long shot, although ultimately a good shot.
Private answer
Shelby Kobes
I didn't cause that crime! The computer in my brain was infected by a virus and it made me angry!

After doing my research into the stakeholders of medical device security, I have found that the driver of security is the hospital. It is the central trust agent for all the stakeholders connected to a device. It is the hospital's risk when a device is hacked, and it is the hospital that has to take care of the cleanup. It is the hospital's reputation, and its ability to effectively care for patients, that is on the line. We have grown up in a society with high trust in the hospital system, and we know from past terrorist attacks that a major goal of terrorism is to break trust and incite fear. By not providing the needed security at hospitals and on manufacturers' medical devices, we are opening the door to wide-scale, government-sponsored attacks on hospital infrastructure. Not only would this be a major trust-breaker for the citizens of the United States; the medical system is also a major component of our government's financial security. We are going to have to start looking at this problem in a different light. It is going to require the FDA to pass tougher standards for approval, manufacturers to make ethical long-term investments in security (not just ROI), and hospitals to develop architectures that will not allow insecure devices into their environments. Technical fixes are going to play a role, but if we focus only on the technical side, hackers will walk around them with very little resistance. This isn't about the current technology on the market; it's about building the foundation for the connected technology of the next 10 years, which includes implantable computer chips, smart prosthetics, and so on.
Private answer
Michael Kremliovsky
Shelby got it right. I am just validating, since this is something I am involved with on a daily basis.
Private answer
My personal opinion.
Security needs to be addressed from the bottom up. It took the FDA a couple of decades to set the right rules for the medical device SDLC, but it finally got it right; see IEC 62304. The same approach should be taken for security in software. The software industry has come up with Security Threat Analysis, where every module/function is assessed from a security risk perspective and should have an acceptable failure mode. The FDA and customers should audit medical devices and verify that Security Threat Analysis is part of the SDLC and works correctly. I guess the FDA doesn't have enough skilled resources to do this audit (customers don't have them either). Everyone hopes they are not on the front line :). That is understandable, as this is an expensive and hot field, and as mentioned above, we are dealing with professionals. Again, the risk approach should be appropriate: pacemakers should take priority; some other medical devices can probably wait. That is also the answer to usability vs. security, though I personally think they go together and there is no contradiction: simple, robust usability protects against human mistakes, and that means fewer adverse events. The FDA recognized that and has a guideline on UI. There is also another aspect of the picture: the software vendor can create a very robust and secure product, but if the local sysadmin doesn't set permissions correctly, it won't work. Unfortunately, we are all in a very passive mode on this, as it is a very expensive field. What happens in the end is that CIOs are fired, and the list is growing: Target, Sears, Neiman Marcus. In my opinion this is not effective, but it teaches the lesson. I imagine it will be hard to find a job with such a record.
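As a sketch of what a per-module Security Threat Analysis record could look like in an SDLC artifact, here is a minimal risk register with severity-times-probability scoring and an explicit failure mode per entry. The fields, scales, and acceptability threshold are assumptions for illustration, not a standard's prescribed format.

```python
# Hypothetical per-module security threat analysis record, mirroring the
# risk-based SDLC approach described above. Fields, scales, and the
# acceptability threshold are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class ThreatEntry:
    module: str
    threat: str
    severity: int        # 1 (negligible) .. 5 (patient harm)
    probability: int     # 1 (remote) .. 5 (frequent)
    failure_mode: str    # what the module does when the control fails
    mitigation: str

    @property
    def risk(self) -> int:
        return self.severity * self.probability

    def acceptable(self, threshold: int = 8) -> bool:
        return self.risk <= threshold

register = [
    ThreatEntry("telemetry_rx", "spoofed programming command", 5, 2,
                "reject and alarm", "authenticated sessions"),
    ThreatEntry("audit_log", "log tampering", 2, 3,
                "read-only fallback", "append-only signed log"),
]
for e in sorted(register, key=lambda e: e.risk, reverse=True):
    print(f"{e.module}: risk={e.risk} acceptable={e.acceptable()}")
```

Sorting by risk score is what operationalizes the "pacemakers first, others can wait" prioritization the comment argues for.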
Private answer
Bogdan Baudis
This is going to be tough... I remember everybody being concerned about whether ANY wireless communication should be allowed for medical devices, and that was mostly because of EMC/EMI (RF radiation, interference, etc.), without even thinking about security!

Then one day we all woke up and found Wi-Fi in almost every hospital, since IT did not like wires! IMHO, nothing is going to happen unless some disaster occurs; then the public will bay, politicians and lawyers will smell blood in the water, and the big ugly foot of government regulation will start descending...
Private answer
Andrei Kamandzenka
Bogdan, when you say 'almost every hospital', do you mean Wi-Fi in the hospital, or in the medical devices? I believe wireless IT infrastructure was harmless to medical devices exactly because of the strict EMI requirements for the devices. Concerning your forecast of how the situation will evolve: unfortunately, you're probably right.
Private answer
Bogdan Baudis
@Andrei
Wi-Fi mostly did not harm existing devices, but it effectively prevented, or made more difficult, several possible design routes for new devices. I guess it would be very hard to quantify the effect now; that would be what-if work, and I doubt there is money in such an endeavor :-) What was worse: it created a de facto standard for hospital communication infrastructure while being woefully inadequate from a security point of view. Wi-Fi is still not very secure, but now lots of people expect that eventually all or most medical devices will be networked! How about a medical device that can kill a patient, with Wi-Fi, Bluetooth, ZigBee, an optional cellular modem, and the possibility of remote operation?! Picture a nurse walking around some other place with a tablet or smartphone... I cannot say what, when, or who, but I have seen a concept prototype laid before me on a conference table...
Private answer
Michael Kremliovsky
This hypothetical hacking is truly a double-edged sword. I do believe it is an honest attempt to argue for better security design, but it is also used for self-promotion and business development, and it strikes an ethical border there. Consider a manufacturer of guns and body armor who organizes a shooting exercise to demonstrate how to use an "old" weapon to kill efficiently: aim at the head, approach at such-and-such an angle and distance, use special high-damage ammunition, and so forth. This is to sell the consulting services, ammunition, and protection. So, the first activity is cool, and the second is pretty special, right?

If somebody wants to do harm and commit crime using Wi-Fi (or any other means), it is NOT different from any other crime. It is just more time-consuming and trickier than stealing a laptop loaded with customer data. Of course, the NSA has its own reasons to disable the remote functionality of some of the devices installed in high-value targets, but we pay the NSA to think about that. Defending against a hypothetical risk is difficult to justify economically, and for that reason nobody typically does it. It is not because people are stupid; it is simply because they are well-intentioned. School shootings are probably the most publicly appealing demonstration of this paradigm. Do we really want to surround schools with armed security and barbed wire? How far should we go? Contrary to medical devices, there we have a real and present danger, with children dying every year. Our measures should be asymmetrical in most of these cases, because a symmetric response just leads to a race.
Private answer
Bogdan Baudis
@Michael The one reason for hacking that is not that hypothetical is the same one that seems to be prevalent on our desktops: stealing patient/provider usage data for resale.

While in our space (desktops, phones) that is mostly annoying, and not even everywhere illegal, it still has the potential for harm by enabling all kinds of fraud. In the medical space it would be much more damaging: let's just say HIPPA and ponder! It does not have to harm the patient physically (or even harm the patient at all) to cause serious problems across the whole medical device design/use chain...
Private answer
Michael Kremliovsky
Bogdan, we were talking about medical devices and cyber-physical systems in general; "office" computing is somewhat different. I did mention stealing laptops, which illustrates your point. Regular network and computing security has to be in place. Period.

And it is "HIPAA", not "HIPPA": Health Insurance Portability [and] Accountability Act.
Private answer
Bogdan Baudis
@Michael Sorry for my sloppy typing... it is HIPAA...

But I am not sure what "office" has to do with the subject at hand. Surely the computing is different, but not all the reasons for hacking. Once a medical device is on the net with a full TCP/IP stack, it is not only vulnerable to hacking; it also becomes a target for information theft, and for some of the same reasons as regular desktops and mobiles: usage profile data, for example. HIPAA comes into play because it legally exposes the providers, but that is eventually transferred, at least as increased development cost, to the design process. This means that even if the target customers of the stolen data were only interested in aggregated data (for marketing rather than outright fraud), it could still have a more substantial impact than in generic "office" or "home" environments. In fact, that could be another argument for why "medical" differs from "office" or "home"... :-)
Private answer
Michael Kremliovsky
The difference between general-purpose computing and medical devices is pretty substantial. Medical devices are not designed (at least, not normally) with the idea of providing an open communication environment with user-controlled security. This is a big difference, because in medical devices we do not commit to serving the computing and networking needs of users; we are focused on a very narrow communication scope and domain. To break into a medical device often requires knowing additional information, reverse engineering, and/or physical access to the device.
Private answer
Bogdan Baudis
@Michael "To break into a medical devices often requires knowing additional information, reverse engineering, and/or physical access to the device. "
I am not disputing that at all. In fact some time ago there was a lot of buzz about a "hack" to a defibrillator device but in fact to pull it off really would require inside (and quite substantial) knowledge. Having myself some experience with MICS radio I can say that it is not that simple even to get yourself a set of necessary tools. Yes, so far we try to limit connectivity to what is absolutely necessary. Still, there seems to be a pressure to put everything on the net and sooner or later this may lower the difficulty bar and subsequently: make the hacks economical ... While the insulin pump may use a proprietary and possible encrypted protocol it can be only hardened to a point because of necessarily limited platform. As for the "prioprietary" stuff, it is not like we did not have plenty of "insider" jobs already ... Just recently: I took a part in persuading the client away from storing the full usage data on the device with a connectivity option to a smartphone ... If something can be done then it will be done evntually... Marked as spam
|
|
Private answer
Michael Kremliovsky
I should not be misunderstood either: medical devices need to be secure and safe. The modern regulatory framework is risk-based, and we need to add business risks like HIPAA too, so that we have a consistent and up-to-date approach to the issues. We face significant (and fundamental) challenges on two fronts: 1) customers' discipline in following procedures; 2) interoperability between devices and systems. Both challenges make us think twice before we tighten the screws on security, and often force us to loosen up - the reality of the market.
Private answer
Jeff Walker
We're starting to see hospital systems disfavoring insecure medical device systems during the procurement process. They're doing this because the HDO assumes the lion's share of risk when deploying these in their facilities.
Private answer
Bogdan Baudis
@Jeff That would be very good news... at least I would be in a better position when arguing for my effort estimates :-)
Private answer
Shelby Kobes
My experience talking with many hospitals is that we need to build a structure that can support the implementation of secure devices during the procurement process. Hospitals are the ones with the most risk in the game, and their reputation for providing quality service is on the line. The problem is very systemic and difficult to fix. What I have seen is hospitals working in silos, trying to solve the same issues by using layers of technology that require hours of time, with little evidence of effectiveness. I do believe this worked at one time, but we are in a different environment now, and it requires a different mindset.
Private answer
Geoff Hummel
Taking a different perspective, business-decision based: the FDA requires that a risk assessment (ISO 14971) be done. During the RA, hacking would be one of many risks, assessed according to the regulations and internal SOPs, the main criterion being potential harm to patients. Only if the result, following ALARP guidelines, is that the risk needs to be mitigated do we need to be concerned (according to the law - but not from a business-continuity or ethical view).

There may be a low financial incentive for hacking into medical devices, but what about the panic among the general public if terrorists can hack in and cause harm to patients remotely? We owe it to everybody, including ourselves, to address this regardless of any financial or legal issues. How would you feel if a device you helped develop was hacked and resulted in death or serious harm, however low the probability (which in today's world is likely to be higher than you'd think)?
Private answer
Michael Kremliovsky
Here you go, Geoff: a risk-based approach, as we do all other things nowadays in the medical device space. The dilemma I am trying to point out is that security risks can be mitigated by limiting functionality. The simplest way to mitigate network attacks is to take the device off the network. The simplest way to address privacy is to remove PHI from the device. There are, of course, risks in these kinds of mitigations as well, due to the limits they place on usability and functionality, but those are usually considered "second order" by regulators (simply because we typically move from an established practice of not having network and PHI functions to having them).
Private answer
In my 20+ years of experience working with software-based medical devices, I can tell you that I have rarely, if ever, seen "security" requirements in an SRS for a medical device. What you typically see are requirements related to HIPAA compliance (privacy) and requirements related to access controls (e.g., lab supervisor vs. lab tech).

As we introduce new technology into medical devices, there are bound to be more security issues. For example, wireless traffic in most medical devices today is not encrypted. If it were, devices would be far less susceptible to hacking (as in the pacemaker and insulin pump examples). The FDA's recently published guidance on cybersecurity lays out a reasonable approach for addressing true security issues. I recently gave a talk on this topic at an FDA Update Conference sponsored by MassMedic in Boston. Anyone who is interested in getting a copy can contact me and I'll be happy to share... Steve Rakitin, steve@swqual.com, Software Quality Consulting
Private answer
Akshay Mishra
Sales Reps required for USA and Europe Market for disposable medical devices. Send your resume on a.mishra@sofoli.biz.
Private answer
Akshay Mishra
Sales Reps required for Australia and New Zealand Market for disposable medical devices. Send your resume on a.mishra@sofoli.biz.
Private answer
Bogdan Baudis
@Akshay
... It's not for me, but I think you are posting to the wrong thread... You are at risk of being flagged, and from my personal experience that is not fun at all... (it will have an effect on ALL your groups...).
Private answer
Jouni Erkkila
Hi,
Responding 10 months after this started: the FDA has now issued its "non-binding" guidance. Take a look: http://www.fda.gov/MedicalDevices/ProductsandMedicalProcedures/ConnectedHealth/ucm373213.htm To make it simple: 1) Risk Management, 2) Specifications, 3) Verification, 4) Report. Thanks, JoE