Windows 10 Certificate Validation Bug Exposes a Fundamental Weakness

The announcement Tuesday from the NSA about the new cryptographic vulnerability in the Microsoft Windows operating system sent ripples of shock through our entire community. In case you missed it, this devastating vulnerability (CVE-2020-0601) sits in the way the Windows CryptoAPI validates Elliptic Curve Cryptography (ECC) certificates. It allows attackers to bypass trust mechanisms and falsify certificates, making them appear to come from a trusted source. It also allows attackers to falsely authenticate themselves on vulnerable HTTPS connections and remotely execute code. Let’s hope everyone is on top of their Microsoft security patches or there could be some serious damage done.
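
To see why this class of bug is so dangerous, here is a minimal, purely conceptual Python sketch (using the open-source cryptography package; this is not Microsoft’s actual CryptoAPI logic, and the function name is my own) of the kind of check that went wrong: a certificate gets treated as matching a cached trusted root based on its ECC public key alone, without confirming that the underlying curve parameters match too.

    # Conceptual sketch only -- NOT Microsoft's actual CryptoAPI code.
    # The CurveBall-class mistake: trusting a certificate because its ECC
    # public key *point* matches a cached trusted root, without also
    # confirming that the curve (the domain parameters) matches.
    from cryptography import x509
    from cryptography.hazmat.primitives.asymmetric import ec

    def matches_trusted_root(candidate: x509.Certificate,
                             trusted_root: x509.Certificate) -> bool:
        """Compare the full ECC key material, not just the public point."""
        cand_key, root_key = candidate.public_key(), trusted_root.public_key()
        if not (isinstance(cand_key, ec.EllipticCurvePublicKey) and
                isinstance(root_key, ec.EllipticCurvePublicKey)):
            return False
        cand, root = cand_key.public_numbers(), root_key.public_numbers()
        point_matches = (cand.x == root.x and cand.y == root.y)     # the weak check
        curve_matches = cand_key.curve.name == root_key.curve.name  # the missing check
        return point_matches and curve_matches

That missing parameter comparison is, in essence, what let an attacker mint a certificate whose public key "matched" a trusted root even though the private key was entirely their own.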

This week’s warning isn’t the usual story of forged certificates or somebody using stolen keys. We all remember Stuxnet (read more on that here), but that exploit required the attackers to penetrate and then steal the code signing keys from two trusted software manufacturers. The theft was non-trivial and the stolen keys were only dangerous while the theft remained undiscovered. Once the world learned about the theft, any certificate created from the stolen keys could be revoked and rendered useless. In other words, the Stuxnet code signing problem was serious but the fix was simple.

But what happens to trust when you can’t trust the trust system? With this latest vulnerability, we’re talking about the very underpinnings of digital signing and software validation for any software running on any current Windows-based platform. And while the vulnerability doesn’t impact the actual controllers on the plant floor, I’m willing to bet that 99.9% of today’s industrial systems are running the Windows operating system for all the operator HMIs, engineering stations, data historians, and management servers. In other words, while this vulnerability doesn’t impact the actual PLCs, it will allow counterfeit and malicious software to sneak onto all the computers that communicate with, manage, or report on industrial processes.

This isn’t the first time that the limitations of code signing have been laid bare. In 2017, researchers at the University of Maryland showed that there were, at the time, over one million signed malware files in the wild. Bad guys sign these files to fool poorly-written antivirus software into treating the malware as legitimate software and skipping over it.

So, as I point out frequently at conferences, code signing and digital certificates are necessary but not sufficient to ensure software is tamper-free and legitimate. This is especially true in critical infrastructures, where the use of code-signing is limited* and multiple validation mechanisms are necessary to keep our industrial processes reliable and our people safe.
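
To make the "necessary but not sufficient" point concrete, here is a deliberately simplified Python sketch in which a passing signature check is just one input among several. The hash sets and helper names are hypothetical placeholders for this example, not part of any real product.

    # Illustrative only: layer independent checks instead of trusting a
    # signature check by itself. The hash sets are placeholders for data
    # obtained out-of-band (vendor advisories, threat-intel feeds).
    import hashlib

    KNOWN_GOOD_SHA256 = set()   # hashes the vendor published out-of-band
    KNOWN_BAD_SHA256 = set()    # hashes reported in threat-intel feeds

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def layered_verdict(path: str, signature_ok: bool) -> str:
        digest = sha256_of(path)
        if digest in KNOWN_BAD_SHA256:
            return "reject"        # signed malware is still malware
        if signature_ok and digest in KNOWN_GOOD_SHA256:
            return "trust"         # two independent checks agree
        return "investigate"       # one passing check is not enough

The specific checks matter less than the principle: no single one of them should get veto power over the others.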

This all ties back to why, over a half-decade ago, I became interested in alternative methods of validating software. My current project, the Framework for Analysis and Coordinated Trust (FACT), provides a collection of validation checks for vulnerabilities, malware, and subcomponent analysis, and does a deep dive into a file’s full certificate chain. Then, after thorough scrutiny, the platform provides a “FACT trust score” that technicians and managers can use to be confident in the decision to install a package (or the decision not to).

Certainly, any single test that FACT performs could be misled by a vulnerability like this latest one. However, by combining multiple tests and enabling the community to share intelligence, we stand a much better chance of outing rogue packages, counterfeits, and deprecated versions.
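
As a toy illustration of what combining tests can look like, here is a short Python sketch of a weighted composite score. The check names and weights are invented for this example and are not FACT’s actual scoring model.

    # Toy example: the check names and weights are invented, not FACT's model.
    CHECK_WEIGHTS = {
        "certificate_chain": 0.30,
        "known_vulnerabilities": 0.30,
        "malware_scan": 0.25,
        "subcomponent_analysis": 0.15,
    }

    def composite_trust_score(results):
        """results maps each check name to a pass confidence from 0.0 to 1.0."""
        return sum(weight * results.get(name, 0.0)
                   for name, weight in CHECK_WEIGHTS.items())

    # A forged certificate might fool the chain check completely, but it only
    # buys the attacker a fraction of the score, not blanket trust.
    print(composite_trust_score({"certificate_chain": 1.0,
                                 "known_vulnerabilities": 0.9,
                                 "malware_scan": 0.2,
                                 "subcomponent_analysis": 0.8}))

The point isn’t the particular weights; it’s that no single check, including certificate validation, can grant blanket trust on its own.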

The ICS world needs ways to trust software and firmware that cannot be signed (e.g., controller binaries), and to confirm the validity of files that are signed but carry invalid certificates. I hope you’ll join the FACT community and help make ICS safer and more secure.

If you want to learn more, check out a quick video on how FACT handles Code Signing Validation.

If you want to kick the tires for yourself, try the FACT platform for free.

__

* For most embedded devices in the industrial world, code signing isn’t even an option. The operating systems found in most industrial devices simply can’t validate certificates. ICS vendors are making progress, and the newest controllers are starting to offer validation features, but it will be many years before we can expect code signing to be broadly deployed in ICS.

When the Security Researchers Come Knocking, Don’t Shoot the Messenger

Our own Jonathan Butts and Billy Rios were interviewed this month on the CBS Morning News about their research showing that medical devices like pacemakers and insulin pumps can be hacked by… basically anybody.  These devices all contain embedded controllers, but unlike most modern computer technologies, they haven’t been designed with security in mind.

“We’ve yet to find a device that we’ve looked at that we haven’t been able to hack,” said Jonathan.

Billy also points out the irreversible nature of medical equipment exploits: it’s not just a matter of issuing a new credit card or changing a password when the bad guys take advantage of a flaw. Victims of these kinds of attacks can end up dead.

You can see the full interview here:

http://www.cbsnews.com/video/how-medical-devices-like-pacemakers-insulin-pumps-can-be-hacked/

The Washington Post did a story on the same subject, featuring Billy and Jonathan back in October.

Poor security design is clearly widespread throughout the medical device industry.  As readers of our blog know, devices with embedded controllers are found in the electrical power industry, oil & gas, manufacturing, aerospace, defense, and a host of other critical infrastructure sectors. And many of those devices have had serious security vulnerabilities exposed in the past decade. But what makes this story concerning is that the medical industry seems especially behind in its approach to vulnerability management.

Billy and Jonathan uncovered the vulnerabilities in a Medtronic pacemaker way back in January of last year. They then disclosed their findings in a detailed report to the vendor. Unfortunately, Medtronic denied that any action was necessary and did nothing to address the problem or warn users. It took a live, very public demonstration at Black Hat USA 2018 to capture the attention of the FDA and the vendor.

That isn’t the way responsible vulnerability disclosure is supposed to work. When researchers discover a vulnerability and privately share it with the vendor (and/or appropriate government agencies), the vendor needs to take that vulnerability seriously. That way the users of its products get a chance to patch before the dark side of the cyber world starts to exploit the weakness. Requiring researchers to broadcast the news to the world to get action is simply terrible security practice.

As a former CTO of a large industrial device manufacturer, I have faced my share of researchers bringing news of vulnerabilities in my company’s products. Some of the vulnerabilities proved to be very serious, while others stemmed from a simple misunderstanding of how the product would be deployed in the field. Regardless, we took every vulnerability report seriously, immediately engaging the researchers so we could learn as much as possible about their testing techniques and findings. Sometimes, when we thought the researcher was onto a particularly serious or complex problem, we flew them into our development center so we could start addressing the issues as quickly and completely as possible.

The bottom line is that device manufacturers need to start seeing security researchers as partners, not annoyances. When a researcher finds a vulnerability, they are basically doing free QA testing that the quality and security teams should have done before the product ever shipped. It’s time that companies like Medtronic started working with security researchers instead of fighting them, so we can all fight the bad guys together. It is the only way our critical systems will become more secure.
