Earlier this month, as the coronavirus accelerated its alarming sprint across North America, NERC requested that FERC defer a number of looming deadlines for Reliability Standards. For the cybersecurity-related standards (CIP-005-6, CIP-010-3, and CIP-013-1), NERC requested a 3-month delay to “help ensure grid reliability amid the impacts posed by the coronavirus outbreak, a public health emergency that is unprecedented in modern times.”
It certainly sounds like a sensible proposal, and last Friday FERC granted the request, stating it was “... reasonable to provide them additional flexibility to properly allocate resources to address the impacts of COVID-19.” The “them” in this case refers to the utilities involved in operating our electric power grid.
We have a particular interest in the CIP-013-1 standard that focuses on supply chain risk management. It’s kind of our bread and butter here at aDolus. In fact, we delivered a training session to a great group at the last NERC CIPC meeting in early March on how to use our FACT platform to help with CIP-013 compliance without introducing onerous internal processes. (Back in the day, when people could actually sit within 6 feet of each other.)
While the need for these standards is overwhelming, I think we can all agree that the current COVID-19 emergency is an unprecedented, added strain on the operators of the electric grid. In their joint news release, FERC and NERC note the goal of helping utilities to “...focus their resources on keeping people safe and the lights on during this unprecedented public health emergency” and they specifically recognize the need to “focus on keeping their own people safe.”
I can only imagine the additional steps and processes each utility is having to develop and implement — practically overnight. Keeping their workforce adequately distanced and protected, disinfecting control equipment, vehicles, and entire substations… the list goes on. The necessary precautions will take immense planning and effort. These aren’t the kinds of jobs you can do from home, and keeping this particular workforce safe, healthy, and focused is critical.
We can wait a few months to comply with the upcoming CIP standards.
Here’s a list of the cybersecurity-specific standards that have been delayed (courtesy of John Hoffman at NERC):
This week’s warning isn’t the usual story of forged certificates or somebody using stolen keys. We all remember Stuxnet (read more on that here), but that exploit required the attackers to penetrate and then steal the code signing keys from two trusted software manufacturers. The theft was non-trivial and the stolen keys were only dangerous while the theft remained undiscovered. Once the world learned about the theft, any certificate created from the stolen keys could be revoked and rendered useless. In other words, the Stuxnet code signing problem was serious but the fix was simple.
But what happens to trust when you can’t trust the trust system? With this latest vulnerability, we’re talking about the very underpinnings of digital signing and software validation for any software running on any current Windows-based platform. And while the vulnerability doesn’t impact the actual controllers on the plant floor, I’m willing to bet that 99.9% of today’s industrial systems are running the Windows operating system for all the operator HMIs, engineering stations, data historians, and management servers. In other words, while this vulnerability doesn’t impact the actual PLCs, it will allow counterfeit and malicious software to sneak onto all the computers that communicate with, manage, or report on industrial processes.
This isn’t the first time that the limitations of code signing have been laid bare. In 2017, researchers at the University of Maryland showed that there were, at the time, over one million signed malware files in the wild. Bad guys sign these files to fool poorly written antivirus software into treating the malware as legitimate and skipping over it.
So, as I point out frequently at conferences, code signing and digital certificates are necessary but not sufficient to ensure software is tamper-free and legitimate. This is especially true in critical infrastructures, where the use of code-signing is limited* and multiple validation mechanisms are necessary to keep our industrial processes reliable and our people safe.
This all ties back to why, over a half-decade ago, I became interested in alternative methods of validating software. My current project, the Framework for Analysis and Coordinated Trust (FACT), provides a collection of validation checks for vulnerabilities, malware, and subcomponent analysis, and does a deep dive into a file’s full certificate chain. Then, after thorough scrutiny, the platform provides a “FACT trust score” that technicians and managers can use to be confident in the decision to install a package (or the decision not to).
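To make the idea of a combined trust score concrete, here is a rough sketch of how multiple independent validation checks might be folded into a single number. The check names, weights, and formula are hypothetical illustrations for this blog, not the actual FACT algorithm:

```python
# Hypothetical sketch: combining independent validation checks into one
# trust score. Check names and weights are illustrative only.
from dataclasses import dataclass

@dataclass
class CheckResult:
    name: str
    passed: bool
    weight: float  # how much this check contributes to overall trust

def trust_score(results):
    """Weighted fraction of passed checks, scaled to 0-100."""
    total = sum(r.weight for r in results)
    earned = sum(r.weight for r in results if r.passed)
    return round(100 * earned / total, 1) if total else 0.0

checks = [
    CheckResult("certificate_chain_valid", True, 3.0),
    CheckResult("no_known_malware", True, 4.0),
    CheckResult("subcomponents_free_of_cves", False, 2.0),
    CheckResult("hash_matches_vendor_advisory", True, 1.0),
]
print(trust_score(checks))  # 8.0 of 10.0 weight earned -> 80.0
```

The value of this style of scoring is that no single check is a point of failure: a forged certificate might fool one test, but it is much harder to fool all of them at once.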
Certainly, any single test that FACT performs could be misled by a vulnerability like this latest one. However, by combining multiple tests and enabling the community to share intelligence, we stand a much better chance of outing rogue packages, counterfeits, and deprecated versions.
The ICS world needs ways to trust software and firmware that cannot be signed (e.g., controller binaries) and to confirm the validity of files that are signed but carry invalid certificates. I hope you’ll join the FACT community and help make ICS safer and more secure.
* For most embedded devices in the industrial world, code signing isn’t even an option. The operating systems found in most industrial devices don’t have the ability to validate certificates. ICS vendors are making progress in having the newest controllers offer validation features, but it will be many years before we can expect code signing to be broadly deployed in ICS.
I’m writing this blog from Marrakech, a city in the western foothills of Morocco’s High Atlas Mountains. Marrakech has been a trading city since it was established by a clan of Berber warriors (the Almoravids) in the 11th century. The heart of the city (where Joann and I are staying) is the medina, a densely packed, walled medieval city with over 9000 maze-like alleys full of noisy, chaotic souks (marketplaces) that sell everything from traditional textiles, pottery, and jewelry to food and spices to motorcycle parts. There is probably nothing you can’t buy, either legal or illegal, in the Marrakech medina.
Like all unregulated marketplaces, the Marrakech medina has its share of fakes and counterfeits. Some are very obvious (Armani bags for $50), some are highly amusing (an official Louis Vuitton football, anyone?), and some are subtle. But the one fake that really interested me was counterfeit saffron.
If you aren’t familiar with saffron, it is a vivid crimson spice created by collecting the stigmas and styles of crocus flowers. According to Wikipedia, saffron is the world’s most costly spice by weight. I won’t disagree: the saffron we ended up buying in the medina cost $6 USD per gram, but we heard of some higher quality stuff selling for over four times that price.
Now, at those sorts of prices, it isn’t surprising that some crooked merchants might start making fake product to swindle the unsuspecting consumer. Joann and I wanted to learn a bit about both the real and the fake saffron, so we spoke to a reputable spice merchant in the souk. He showed us both the real and fake product and what he looks for when buying wholesale.
I won’t go into the details of selecting good authentic saffron in this blog, but the fake stuff fascinated me. While there are some good fakes that require pretty sophisticated testing, many fakes are easily spotted impostors. These are made by simply dying corn silk with either red food colouring or paprika. The tests to spot them are simple, as this YouTube video shows.
This got me wondering: how can it be profitable to make and sell such poor quality fakes? After all, if one can detect them so easily after a few minutes of education, wouldn’t anyone selling these fakes be immediately discovered and never make a sale? (Or much worse: in the Middle Ages, those found selling adulterated saffron were executed under the Safranschou code.)
Sadly, there must be a large enough cohort of people (tourists, perhaps?) who buy saffron without knowing the first thing about it. Or, to put it another way, counterfeit saffron doesn’t need to be a quality imitation to be an effective scam; it just needs to be good enough to fool a person who lacks the knowledge, time, or incentive to perform the simple tests (such as a tourist in a rush to get back to a tour bus).
Later that night I realized that the cybersecurity world has been seeing this same situation playing out in the area of digital signatures for executable files (aka code signing). In 2017, Doowon Kim, Bum Jun Kwon, and Tudor Dumitraș at the University of Maryland published a paper investigating malware that carried digital signatures.
Some of the malware they investigated had been digitally signed with keys stolen from legitimate companies: Stuxnet being the most famous example of this sort of trickery. In other cases, malware was signed using certificates that had been mistakenly issued to malicious actors impersonating legitimate companies. For example, in 2001 VeriSign issued two code signing certificates with the common name of “Microsoft Corporation” to an adversary who claimed to be a Microsoft employee. Both of these types of exploits require a considerable amount of expertise and effort to carry out.
However, the authors discovered a third unbelievably simple exploit that accounted for almost one-third of the signed malware in the wild. In the words of the authors:
We find that simply copying an Authenticode signature from a legitimate file to a known malware sample may cause anti-virus products to stop detecting it, even though the signature is invalid, as it does not match the file digest. 34 anti-virus products are affected, and this type of abuse accounts for 31.1% of malware signatures in the wild.
This is the digital equivalent of the border officer that lets you pass simply because you have a passport in your hand. Not taking the time to see if the passport actually belongs to you completely invalidates the integrity of the passport system.
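The digest-comparison step that those 34 anti-virus products skipped can be sketched in a few lines. The file contents and function names below are illustrative only; real Authenticode verification involves much more (PE parsing, hash exclusion ranges, certificate chain validation), but the core check is the same:

```python
# Sketch: why a copy-pasted signature is detectable. A code-signing
# signature binds a digest of the file; if you never recompute and
# compare that digest, a signature lifted from another file looks
# "present" but is meaningless.
import hashlib

def file_digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def signature_matches(data: bytes, signed_digest: str) -> bool:
    # This is only the digest-comparison step; a real verifier would
    # also validate the certificate chain behind the signature.
    return file_digest(data) == signed_digest

legit = b"legitimate installer bytes"
malware = b"malicious payload bytes"
stolen_signature_digest = file_digest(legit)  # digest the attacker copied

print(signature_matches(legit, stolen_signature_digest))    # True
print(signature_matches(malware, stolen_signature_digest))  # False
```

One extra hash comparison is all it takes to expose the lifted signature, which is what makes skipping it so inexcusable.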
Now, in all my travels, I’ve never actually seen a border officer do this. They are trained to follow the approved validation processes that ensure all but the most skillfully constructed fake passport is detected.
But, like saffron purchasers, much of the IT and OT world has been assuming that the mere existence of a digital signature is proof of that software being trustworthy. This is a terrible assumption that allows malicious actors an easy attack path. Unless we start to properly test software signatures, the bad guys will penetrate our systems just as quickly as the scam artist in the medina separates tourists and their money.
Earlier this year I attended the Public Safety Canada Industrial Control System Security symposium in Charlottetown, PEI (FYI the PSC ICS events are outstanding – worth attending, even if you are not Canadian). While there, I had a chance to meet with an old friend, Andrew Ginter, Vice President of Industrial Security at Waterfall Security Solutions. We chatted about an issue I’ve been interested in – or, dare I say, obsessed with – for a while now: the software supply chain in ICS and how to ensure that it’s trustworthy. Our conversation was the basis for the podcast Where Do Your Bits Really Come From? Let me fill you in on some of the points we discussed.
We began by talking about the field itself: just how widespread is the issue of supply chain integrity? The problem is actually much larger and more complex than I suspected when I started investigating it in 2017. Initially, I thought that the problem just affected the owners and operators of ICS assets: as the 2014 Dragonfly 1.0 attacks showed us, technicians in the field risk patching their control systems with harmful updates.
However, the more people I spoke with about supply chain issues, the more I realized ICS vendors had their own challenges as well. It turns out that people view the problem differently depending on which vertical they’re in and what their role is – but the same basic problem still exists for all audiences.
So what exactly is the problem? Well, that’s pretty simple: How do we, as users, trust the firmware and software that we’re loading into our industrial control systems?
And to look at it from the other side: How do vendors know that the software out in the world associated with their organization hasn’t been tampered with and did, in fact, come from them?
When dealing with software, there are multiple issues that test our trust. The two examples I discussed with Andrew were counterfeiting (injection of malicious code into the supply chain) and our inability to know exactly what components make up a software package. This latter issue is complex because of the way we develop software today: most software projects include embedded third-party and open-source code. And that embedded code has its own third-party and open-source code embedded in it. So what happens if one of those subcomponents’ subcomponents has a vulnerability? Would the ICS vendor even know about the vulnerability? Would their customers know? Unfortunately, the usual answer is “No.”
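The nesting problem is easy to see once you model a package as a tree of subcomponents and walk it recursively. The component names and CVE identifier below are made up for illustration:

```python
# Sketch: walking nested subcomponents to surface inherited
# vulnerabilities. Component names and CVE IDs are invented.
def find_vulnerable(component, vuln_db, path=()):
    """Recursively yield (path, cve) for every vulnerable subcomponent."""
    here = path + (component["name"],)
    for cve in vuln_db.get(component["name"], []):
        yield here, cve
    for sub in component.get("subcomponents", []):
        yield from find_vulnerable(sub, vuln_db, here)

firmware = {
    "name": "hmi-package",
    "subcomponents": [
        {"name": "web-server-lib",
         "subcomponents": [{"name": "old-ssl-lib"}]},
        {"name": "logging-lib"},
    ],
}
vuln_db = {"old-ssl-lib": ["CVE-XXXX-0001"]}  # hypothetical entry

for path, cve in find_vulnerable(firmware, vuln_db):
    print(" -> ".join(path), cve)
# hmi-package -> web-server-lib -> old-ssl-lib CVE-XXXX-0001
```

Without a Software Bill of Materials, nobody even has the tree to walk, and the vulnerability two levels down stays invisible to both vendor and customer.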
This may sound a bit gloomy but don’t despair: we talk about how the industry is making progress in this area. For example, code signing technology is useful to address the software tampering issue, though it won’t solve the problem on its own. Unfortunately, it is complex, it’s not widely used outside of IT software – AND malware writers have figured out how to use it to their advantage!
The key is in the ability to break down the libraries so that we can identify who built which pieces and generate a reliable Software Bill of Materials, so to speak. The solution, as is often the case, lies in having better knowledge and then being able to effectively share that knowledge with users.
It’s amazing how much ground you can cover during a single conversation. After talking extensively about the problem, I shared how we are using the information available to us to find a solution. I spoke with Andrew about aDolus, how it all started, and how our team is working to address the problem with the supply chain. Vendors are keen to come on board to help us with our solution.
I’m hoping you find the challenge of securing the supply chain as interesting as the aDolus team does. If so, jump over to the Waterfall Security Solutions podcast page and listen to the full podcast. If you stick around until the end, you’ll even get to hear where the company name aDolus comes from. Enjoy!
Last Wednesday (September 11), the U.S. Department of Defense released a draft of its Cybersecurity Maturity Model Certification (CMMC) for public comment. The idea is for the DoD to create a unified framework for defense contractor cybersecurity.
This new document attempts to make the DoD requirements fit better with the specific security needs of a given department or project by allowing contractors to operate at five different maturity levels. I won’t go into all the details, but the U.S. legal firm Arnold & Porter provided a nice in-depth analysis of both the need for and challenges of a CMMC framework in their recent blog.
With others commenting on the policy issues, I decided to look at the CMMC from a technical point of view. In other words, does each requirement seem appropriate for the maturity level it is assigned to?
For the most part, the CMMC is reasonable. For example, there were sensible requirements for the use of password managers and Multi-Factor Authentication (MFA) for contractors operating at Maturity Level 4. However, I was shocked to see a requirement that was straight out of the days of the mainframe:
Identification and Authorization (IDA) L3-5: A minimum password complexity, including change of characters, is defined and enforced.
While this requirement has its history in NIST SP 800-171, it is diametrically opposed to the current NIST guideline for password management, NIST Special Publication 800-63B. Section 5.1.1.2 of that document explicitly requires that organizations NOT force users to use a complex mix of numbers, letters, and punctuation. The UK has a similar policy on password composition rules for any organization supplying services to its government.
Why prohibit composition rules for passwords? To quote NIST:
Composition rules also inadvertently encourage people to use the same password across multiple systems since they often result in passwords that are difficult for people to memorize.
Today no one who studies cryptography thinks enforced password complexity is a good security strategy. It is not a secret that humans are poor at remembering random sequences of characters. They may be able to remember a few important sequences (such as their phone number) but as soon as they are required to memorize more than a handful, they resort to “tricks” to help out. Writing passwords on Post-it Notes is one trick. Adding a number or a special character to a simple word or an existing password is another common solution.
So when companies force users to invent complex passwords, users resort to using secrets like Password!. In other words, they use passwords that are easy to remember but are extremely insecure. The bad guys understand human nature and start with the faux-complex passwords like Password! when they are hacking a system.
Unfortunately, as this latest DoD document shows, these old-fashioned policies are still prevalent throughout many IT departments and required in many security guidelines, including NIST SP 800-171. It is time for them to go.
What should DoD be asking contractors to do instead? First, they should require that contractors instruct their staff to NEVER reuse passwords, especially across systems. Each password should be completely different from any password previously used.
Second, employees should be coached to choose passwords that will never be found in a dictionary. The UK government advises creating passwords using three random words:
You just put them together, like ‘coffeetrainfish’ or ‘walltinshirt’.
It is a good strategy and the famous techie cartoon xkcd shows why this works:
Staff should be warned not to get cute and substitute numbers for letters (such as “5” for “s”) as the bad guys are already onto them and try passwords like Pa55w0rd.
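The arithmetic behind the xkcd argument is simple to sketch. Assuming words are drawn uniformly at random from a 2048-word list (as in the strip) and random characters from the 95 printable ASCII characters, each choice contributes log2(choices) bits of entropy:

```python
# Rough entropy arithmetic behind the "random words" advice. Assumes
# uniform random selection; a word or pattern the user picks themselves
# (like "Password!") is worth far fewer bits than these figures suggest.
import math

def bits(choices: int, picks: int) -> float:
    """Entropy in bits of `picks` independent uniform selections."""
    return picks * math.log2(choices)

print(round(bits(95, 9), 1))    # 9 truly random printable chars: ~59.1 bits
print(round(bits(2048, 3), 1))  # 3 random words from a 2048-word list: 33.0
print(round(bits(2048, 4), 1))  # 4 random words: 44.0
```

Nine truly random characters score well on paper, but nobody can memorize a dozen of them, so in practice users fall back on guessable patterns. Random words deliver respectable entropy from something a human can actually remember.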
I hope passwords will die out soon, to be replaced with far better technologies like Multifactor Authentication (MFA) and Fast Identity Online (FIDO). In the meantime, if contractors train their users to make every password different, ideally from a string of random three or four words, the DoD’s IT and OT systems will be far more secure than annoying password expiry policies ever made them.
Our own Jonathan Butts and Billy Rios were interviewed this month on the CBS Morning News about their research showing that medical devices like pacemakers and insulin pumps can be hacked by… basically anybody. These devices all contain embedded controllers, but unlike most modern computer technologies, they haven’t been designed with security in mind.
“We’ve yet to find a device that we’ve looked at that we haven’t been able to hack,” said Jonathan.
Billy also speaks to the one-way nature of medical equipment exploits, noting that it’s not just a matter of issuing a new credit card or changing a password when bad guys take advantage of the flaw. Victims of these kinds of attacks can end up dead.
Poor security design is clearly widespread throughout the medical device industry. As readers of our blog know, devices with embedded controllers are found in the electrical power industry, oil & gas, manufacturing, aerospace, defense, and a host of other critical infrastructure sectors. And many of those devices have had serious security vulnerabilities exposed in the past decade. But what makes this story concerning is that the medical industry seems especially behind in its approach to vulnerability management.
Billy and Jonathan uncovered the vulnerabilities associated with a Medtronic pacemaker way back in January of last year. They then disclosed their findings in a detailed report to the vendor. Unfortunately, Medtronic denied that action was necessary and did nothing to address the problem or warn users. It took a live, very public demonstration at Black Hat USA 2018 to capture the attention of the FDA and the vendor.
That isn’t the way responsible vulnerability disclosure is supposed to work. When researchers discover a vulnerability and privately share it with the vendor (and/or appropriate government agencies), the vendor needs to take that vulnerability seriously. That way the users of its products get a chance to patch before the dark side of the cyber world starts to exploit the weakness. Requiring researchers to broadcast the news to the world to get action is simply terrible security practice.
As a former CTO of a large industrial device manufacturer, I have faced my share of researchers bringing news of vulnerabilities in my company’s products. Some of the vulnerabilities proved to be very serious, while others were simply a misunderstanding of how the product would be deployed in the field. Regardless, we took every vulnerability report seriously, immediately engaging the researchers so we could learn as much as possible about their testing techniques and findings. Sometimes, when we thought the researcher was onto a particularly serious or complex problem, we flew them into our development center so we could start addressing the issues as quickly and completely as possible.
The bottom line is that device manufacturers need to start seeing security researchers as partners, not annoyances. When a researcher finds a vulnerability, they are basically doing free QA testing that the quality and security teams should have done before the product ever shipped. It’s time that companies like Medtronic started working with security researchers, not fighting them. Instead, we should all be fighting the bad guys together. It is the only way our critical systems will become more secure.
First of all, kudos to Schneider Electric for alerting their customers and providing information on how to remedy the situation. According to Schneider, the infected files would not affect the devices themselves. Schneider also noted that the particular malware was easy to detect and remove by common virus scanning programs.
Provided that all of Schneider’s customers read these alerts, this should remain a minor security incident. Unfortunately, this is a big assumption. Due to the complexities of modern distribution channels, I’m pretty certain no one in the world knows if the Schneider notice is getting to the people who actually use the Conext product. It could be getting stuck on some purchasing manager’s desk, never to be forwarded to the technicians in the field. Or it could be languishing in the inbox of an engineering firm that is no longer working at the location where the Conext product is deployed. If ZDNet and CyberScoop had not reported on the story, it may have stayed off everyone’s radar. Clearly, both vendors and asset owners need better ways of sharing urgent security information.
The Conext Battery Monitor (Source: Schneider Electric)
But what is especially interesting is that the thumb drives were not infected at Schneider’s facilities. They were infected by a third-party supplier during the manufacturing process. As with ALL major ICS vendors, the supply chain for Schneider’s hardware, software (and even the media on which it ships) is exposed to many hands.
This situation highlights an alarming reality in the ICS world. Just because a digital file comes from a trusted vendor doesn’t mean you can trust all the other companies that touched that file.
Who knows which “third-party supplier’s facility” was involved in contaminating those USB drives? Was it the USB manufacturer… or a duplication company… or even a graphics company that added some branding? Schneider Electric will no doubt be re-thinking that relationship, but the fact remains that they have to work with third parties to get their products to market.
The worrisome question is, what other ICS vendors use that same third-party supplier? How widespread is the infection? It seems unlikely that Schneider Electric is this supplier’s only customer. Naming and shaming the supplier may be fraught with legal consequences (or perhaps they are still tracking down the specific vendor) so Schneider has remained silent for now on the source of the malware. That means all the other vendors out there and their customers may be exposed as well. Or not. We don’t know – and that is a problem.
One hopes that if other vendors have detected issues with their USB drives, they will follow Schneider Electric’s lead and issue prompt alerts. Some vendors are better than others at transparency and there will likely be some who choose to lay low instead to avoid bad publicity. It is a pity because vendors like Schneider are as much a victim in this scenario as the end users.
This is one of the reasons aDolus is developing a platform for ICS asset owners and vendors: an ecosystem of trust where they can verify software of, let’s call it, “complicated origin” and ensure it hasn’t been tampered with BEFORE they install it. We’re also looking at ways vendors can get early warnings about security issues occurring at their clients’ sites rather than waiting until hundreds or thousands of facilities have been infected.
Interested in learning more about protecting yourself from compromised software? Let us know if you are an end user interested in validating ICS software or an ICS vendor interested in protecting your distribution mechanisms to ensure they are clean.
Back in 2014, when I was managing Tofino Security, I became very interested in the Dragonfly attacks against industrial control systems (ICS). I was particularly fascinated with the ways that the attackers exploited the trust between ICS suppliers and their customers. Frankly, this scared me because, as I will explain, I knew that all the firewalls, antivirus, whitelisting, and patching in the world would do little to protect us from this threat.
If you are not familiar with the Dragonfly attacks, they were launched against the pharmaceutical industry (and likely the energy industry) in 2013 and 2014. The attacks actually started in early 2013 with a spear phishing campaign against company executives. But the part that concerned me began later, starting in June 2013 and ending in April 2014.
During that period, the Dragonfly attackers penetrated the websites of three ICS vendors: vendors who supply hardware and software to the industrial market. Once the bad guys controlled these websites, they replaced the vendors’ legitimate software/firmware packages with new packages that had Trojan malware called Havex embedded in them (Attack Stage #1).
When the vendors’ customers went to these websites they would see that there was a new version of software for their ICS products. They would then download these infected packages, believing them to be valid updates (Attack Stage #2). And because one of the messages we give in the security world is to “keep your systems patched,” these users pushed out the evil updates to the control systems in their plants (Attack Stage #3).
Once these systems were infected, the Havex malware would call back to the hacker’s command and control center, informing the attackers that they had penetrated deep into a control system. The attackers then downloaded tools for ICS reconnaissance and manipulation into the infected ICS hardware (Attack Stage #4). These new attack tools focused on the protocols we all know well in the ICS world, such as Modbus, OPC, and Ethernet/IP.
As far as we know, the attackers were most interested in stealing industrial intellectual property — not destroying equipment or endangering lives. However, there was nothing that would have restricted the attackers to just information theft. Their tool sets were extremely flexible and could have easily included software that would manipulate or destroy a process.
The Dragonfly attacks were particularly insidious because they took advantage of the trust between suppliers and end users. The engineers and technicians in industrial plants inherently trust their suppliers to provide safe, secure, and reliable software. By downloading software and installing it, the Dragonfly victims were doing what they had been told would improve their plant’s security. In effect, these users were unwittingly helping the attackers bypass all the firewalls, circumvent any whitelisting or malware detection, and go directly to the critical control systems.
This is what I call “Exploiting the Supplier-User Trust Chain” — and I think it is one of the most serious security risks facing our world today. It is not only a problem for ICS-focused industries like energy or manufacturing, but also for any person or company that uses “smart” devices… which is pretty well all of us. Aircraft, automobiles, and medical devices are all susceptible to this sort of attack.
So with the help of Billy Rios, Dr. Jonathan Butts, a great team of researchers, and the DHS Silicon Valley Initiatives Program, I’ve been working on finding a solution to the chain-of-trust challenge. aDolus and FACT™ (Framework for Analysis and Coordinated Trust) are the result of thousands of hours of systematic investigation into the problem and its possible solutions. Join me on this blog over the next few months as I share what we have learned and where we still have to go to ensure trust in our software.