Don’t Judge an Ebook by Its Cover

Interesting times lead to interesting opportunities. The current pandemic is proving no exception, though sadly the opportunity here belongs to attackers who have laid a rather cunning trap. As you no doubt know, supply chain security typically focuses on firmware and installers. However, in the course of researching vendor documentation, we uncovered a clever technique attackers are using to target critical infrastructure and industrial asset owners.

Instead of modifying software, the bad guys are going after ICS documentation. They are planting modified documents that reuse legitimate titles and content from OEM manuals, polluting search engine results so they can deliver a tampered PDF or drive-by download and potentially compromise ICS environments. What we observed began about two weeks ago, and it has been getting worse since.

Why is this scheme clever?

  • It targets specific industries susceptible to known vulnerabilities.
  • It mimics real content on Google by hijacking legitimate hosting on a valid domain, such as a university.
  • It blends in with system integrators hosting information and installers outside the official OEM ecosystem.
  • It targets end users who might not be the most cyber-savvy, especially if they are in a hurry or struggling during this pandemic.

In case you still think air gaps are a thing, this type of attack ends that debate. Any well-meaning technician seeking a manual or other deployment document, or perhaps an application installer, can download what they believe is a legitimate file right into their facility — bypassing the facility’s “secure” perimeter. The Google search results look reasonable enough, and that may seem like a faster way to get your hands on a document than navigating through a complex vendor website.

Let me share an example. On our hunt for the manual for an Emerson DeltaV DCS, we ran into a downright weird site that seemed pretty sketchy (not to pick on Emerson: we ran into variants of this issue with other vendors as well). Googling for the manual returned results that looked fairly reasonable:

We went ahead and clicked the first result (as one might do if they are in a hurry, had trouble tracking down the manual via other searches, or just generally consider .edu sites to be trustworthy). We then found ourselves here:

Suspicious, we checked out that original domain info-online.miami.edu from the Google results and something definitely looked strange (and each time we reloaded the page, the list of seemingly random titles was different):

No way were we going to click on that Click Here link.

We had a similar experience when Googling for a GE training manual. (I won’t bother with all the screen captures from that adventure!) The manual showed up in the results multiple times, using the exact text and product names from the authentic GE document — but the copies were hosted on a bunch of .ru sites. A comparison of one of the Russian versions against the real document showed that the PDF had indeed been tampered with: the XML inside had been rewritten and the trailer (tail end) removed. We haven’t yet done a full analysis of the tampering, but you obviously don’t want technicians downloading these files onto a secure plant floor.
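If you want a quick tripwire for that particular kind of tampering, note that a conventional PDF ends with a startxref keyword followed by an %%EOF marker. Here is a minimal Python sketch (our illustration, not a description of FACT internals) that flags files whose trailer has been stripped. Treat a failure as a cue for deeper analysis rather than proof of malice, since incrementally updated PDFs can legitimately vary:

```python
import sys

def pdf_trailer_intact(path: str, tail_bytes: int = 2048) -> bool:
    """Heuristic: a well-formed PDF ends with a 'startxref' keyword
    and an '%%EOF' marker. A missing trailer suggests tampering."""
    with open(path, "rb") as f:
        f.seek(0, 2)                           # jump to end of file
        f.seek(max(0, f.tell() - tail_bytes))  # inspect only the tail
        tail = f.read()
    return b"startxref" in tail and b"%%EOF" in tail

if __name__ == "__main__":
    for path in sys.argv[1:]:
        print(("ok     " if pdf_trailer_intact(path) else "SUSPECT"), path)
```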

Most OT sites are soft targets on the inside, so the impacts could be highly disruptive (e.g., delivery of ransomware) and the infected user instantly becomes an unwitting malware delivery service.

So what to do?

  • For vendors: use FACT (🙂) to fingerprint your files, including your PDFs, and encourage your customers to always validate the origins of their software and to use your OEM portal.
  • For asset owners: use FACT and authenticate files before letting them anywhere near your critical systems (a minimal hash-check sketch follows this list).
  • For everyone: deploy endpoint protection to block commodity malware and prevent unauthorized software or documentation from being installed.
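To make that authentication step concrete, here is a minimal sketch assuming a hypothetical JSON manifest mapping filenames to SHA-256 digests published by the vendor. Manifest formats vary by OEM portal, and this is an illustration rather than how FACT works internally:

```python
import hashlib
import json

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, streaming it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_against_manifest(manifest_path: str, filename: str,
                            candidate_path: str) -> bool:
    """Compare a downloaded file against the digest the vendor
    published for it. Hypothetical manifest layout:
    {"deltav_manual.pdf": "ab12..."}"""
    with open(manifest_path) as f:
        manifest = json.load(f)
    expected = manifest.get(filename)
    return expected is not None and sha256_of(candidate_path) == expected.lower()

# Example (hypothetical file names):
# verify_against_manifest("oem_manifest.json", "deltav_manual.pdf",
#                         "downloads/deltav_manual.pdf")
```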

FACT is free (and easy) to use, so there’s really no reason to take risks with files of dubious origin. Try out FACT here.

3-Month Reprieve for Utilities on Cybersecurity Supply Chain Standards

Earlier this month, as the coronavirus accelerated its alarming sprint across North America, NERC requested that FERC defer a number of looming deadlines for Reliability Standards. For the cybersecurity-related standards (CIP-005-6, CIP-010-3, and CIP-013-1), NERC requested a 3-month delay to “help ensure grid reliability amid the impacts posed by the coronavirus outbreak, a public health emergency that is unprecedented in modern times.”

It certainly sounds like a sensible proposal, and last Friday FERC granted the request, stating it was “... reasonable to provide them additional flexibility to properly allocate resources to address the impacts of COVID-19.” The “them” in this case are the utilities involved in the operation of our electric power grid.

We have a particular interest in the CIP-013-1 standard that focuses on supply chain risk management. It’s kind of our bread and butter here at aDolus. In fact, we delivered a training session to a great group at the last NERC CIPC meeting in early March on how to use our FACT platform to help with CIP-013 compliance without introducing onerous internal processes. (Back in the day, when people could actually sit within 6 feet of each other.)

aDolus NERC CIPC Group Training

While the need for these standards is undeniable, I think we can all agree that the current COVID-19 emergency is an unprecedented, added strain on the operators of the electric grid. In their joint news release, FERC and NERC note the goal of helping utilities to “...focus their resources on keeping people safe and the lights on during this unprecedented public health emergency” and specifically recognize the need to “focus on keeping their own people safe.”

I can only imagine the additional steps and processes each utility is having to develop and implement — practically overnight. Keeping their workforce adequately distanced and protected, disinfecting control equipment, vehicles, and entire substations… the list goes on. The necessary precautions will take immense planning and effort. These aren’t the kinds of jobs you can do from home, and keeping this particular workforce safe, healthy, and focused is critical.

We can wait a few months to comply with the upcoming CIP standards.

Here’s a list of the cybersecurity-specific standards that have been delayed (courtesy of John Hoffman at NERC):

  • CIP-005-6 – Cyber Security – Electronic Security Perimeter(s): delayed to October 1, 2020
  • CIP-010-3 – Cyber Security – Configuration Change Management and Vulnerability Assessments: delayed to October 1, 2020
  • CIP-013-1 – Cyber Security – Supply Chain Risk Management: delayed to October 1, 2020

You can read the full order here.

Stay safe and wash your hands!

Windows 10 Certificate Validation Bug Exposes a Fundamental Weakness

The announcement Tuesday from the NSA about the new cryptographic vulnerability in the Microsoft Windows operating system sent ripples of shock through our entire community. In case you missed it, this devastating vulnerability (CVE-2020-0601) allows attackers to bypass trust mechanisms and falsify certificates, making them appear to come from a trusted source. It also allows attackers to falsely authenticate themselves on vulnerable HTTPS connections and remotely execute code. Let’s hope everyone is on top of their Microsoft security patches or there could be some serious damage done.

This week’s warning isn’t the usual story of forged certificates or somebody using stolen keys. We all remember Stuxnet (read more on that here), but that exploit required the attackers to penetrate and then steal the code signing keys from two trusted software manufacturers. The theft was non-trivial and the stolen keys were only dangerous while the theft remained undiscovered. Once the world learned about the theft, any certificate created from the stolen keys could be revoked and rendered useless. In other words, the Stuxnet code signing problem was serious but the fix was simple.

But what happens to trust when you can’t trust the trust system? With this latest vulnerability, we’re talking about the very underpinnings of digital signing and software validation for any software running on any current Windows-based platform. And while the vulnerability doesn’t impact the actual controllers on the plant floor, I’m willing to bet that 99.9% of today’s industrial systems are running the Windows operating system for all the operator HMIs, engineering stations, data historians, and management servers. In other words, while this vulnerability doesn’t impact the actual PLCs, it will allow counterfeit and malicious software to sneak onto all the computers that communicate with, manage, or report on industrial processes.

This isn’t the first time that the limitations of code signing have been laid bare. In 2017, researchers at the University of Maryland showed that there were, at the time, over one million signed malware files in the wild. Bad actors sign these files to fool poorly written antivirus software into treating the malware as legitimate and skipping over it.

So, as I point out frequently at conferences, code signing and digital certificates are necessary but not sufficient to ensure software is tamper-free and legitimate. This is especially true in critical infrastructure, where the use of code signing is limited* and multiple validation mechanisms are necessary to keep our industrial processes reliable and our people safe.
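As one illustration of layering validation mechanisms (my sketch, not a description of FACT’s internals): rather than relying only on the operating system’s chain validation, the very thing CVE-2020-0601 subverted, you can pin the SHA-256 fingerprint of a vendor’s signing certificate, obtained out-of-band, and check it independently. The pinned value below is a placeholder:

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes

# Hypothetical known-good fingerprint, obtained from the vendor through
# a separate channel (OEM portal, datasheet, phone call). Placeholder value.
PINNED_SHA256 = bytes.fromhex(
    "0000000000000000000000000000000000000000000000000000000000000000"
)

def cert_matches_pin(pem_path: str) -> bool:
    """Compare a certificate's SHA-256 fingerprint against the pinned
    value, independent of the OS trust store."""
    with open(pem_path, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    return cert.fingerprint(hashes.SHA256()) == PINNED_SHA256
```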

This all ties back to why, over a half-decade ago, I became interested in alternative methods of validating software. My current project, the Framework for Analysis and Coordinated Trust (FACT), provides a collection of validation checks for vulnerabilities, malware, and subcomponent analysis, and does a deep dive into a file’s full certificate chain. Then, after thorough scrutiny, the platform provides a “FACT trust score” that technicians and managers can use to be confident in the decision to install a package (or the decision not to).

Certainly, any single test that FACT performs could be misled by a vulnerability like this latest one. However, by combining multiple tests and enabling the community to share intelligence, we stand a much better chance of outing rogue packages, counterfeits, and deprecated versions.
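For intuition only, here is a toy sketch of why combining checks helps: no single fooled test can vouch for a package on its own. The weights and stubbed results are invented, and this is emphatically not FACT’s actual scoring algorithm:

```python
from typing import Callable

# Each check returns a result in [0.0, 1.0]; weights are invented for the demo.
CHECKS: list[tuple[str, float, Callable[[str], float]]] = [
    ("certificate chain", 0.30, lambda path: 1.0),  # stubbed check results
    ("known malware scan", 0.40, lambda path: 1.0),
    ("subcomponent CVEs",  0.30, lambda path: 0.5),
]

def trust_score(path: str) -> float:
    """Weighted average of all checks; lower means less trustworthy."""
    return sum(weight * check(path) for _, weight, check in CHECKS)

print(f"score: {trust_score('example_package.exe'):.2f}")  # -> score: 0.85
```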

The ICS world needs ways to trust software and firmware that cannot be signed (e.g., controller binaries) and to confirm the validity of files that are signed but carry invalid certificates. I hope you’ll join the FACT community and help make ICS safer and more secure.

If you want to learn more, check out a quick video on how FACT handles Code Signing Validation.

If you want to kick the tires for yourself, try the FACT platform for free.

__

* For most embedded devices in the industrial world, code signing isn’t even an option. The operating systems found in most industrial devices don’t have the ability to validate certificates. ICS vendors are making progress in having the newest controllers offer validation features, but it will be many years before we can expect code signing to be broadly deployed in ICS.

Podcast: Where Do Your Bits Really Come From?

Earlier this year I attended the Public Safety Canada Industrial Control System Security symposium in Charlottetown, PEI (FYI the PSC ICS events are outstanding – worth attending, even if you are not Canadian). While there, I had a chance to meet with an old friend, Andrew Ginter, Vice President of Industrial Security at Waterfall Security Solutions. We chatted about an issue I’ve been interested in – or, dare I say, obsessed with – for a while now: the software supply chain in ICS and how to ensure that it’s trustworthy. Our conversation was the basis for the podcast Where Do Your Bits Really Come From? Let me fill you in on some of the points we discussed.

We began by talking about the field itself: just how widespread is the issue of supply chain integrity? The problem is actually much larger and more complex than I suspected when I started investigating it in 2017. Initially, I thought that the problem just affected the owners and operators of ICS assets: as the 2014 Dragonfly 1.0 attacks showed us, technicians in the field risk patching their control systems with harmful updates.

Dragonfly Attacks
The Dragonfly attackers penetrated the websites of ICS vendors and replaced legitimate software with packages that had trojan malware called Havex embedded in them. Customers downloaded and installed these infected packages, believing them to be valid updates.

However, the more people I spoke with about supply chain issues, the more I realized ICS vendors had their own challenges as well. It turns out that people view the problem differently depending on which vertical they’re in and what their role is – but the same basic problem still exists for all audiences.

So what exactly is the problem? Well, that’s pretty simple: How do we, as users, trust the firmware and software that we’re loading into our industrial control systems?

And to look at it from the other side: How do vendors know that the software out in the world associated with their organization hasn’t been tampered with and did, in fact, come from them?

When dealing with software, there are multiple issues that test our trust. The two examples I discussed with Andrew were counterfeiting (injection of malicious code into the supply chain) and our inability to know exactly what components make up a software package. This latter issue is complex because of the way we develop software today: most software projects include embedded third-party and open-source code. And that embedded code has its own third-party and open-source code embedded in it. So what happens if one of those subcomponents’ subcomponents has a vulnerability? Would the ICS vendor even know about the vulnerability? Would their customers know? Unfortunately, the usual answer is “No.”

Nesting dolls. Photo by Marco Verch / CC BY 2.0. Cropped, small doll face altered.

This may sound a bit gloomy but don’t despair: we talk about how the industry is making progress in this area. For example, code signing technology is useful to address the software tampering issue, though it won’t solve the problem on its own. Unfortunately, it is complex, it’s not widely used outside of IT software – AND malware writers have figured out how to use it to their advantage!

The key is in the ability to break down the libraries so that we can identify who built which pieces and generate a reliable Software Bill of Materials, so to speak. The solution, as is often the case, lies in having better knowledge and then being able to effectively share that knowledge with users.
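To make that nesting problem concrete, here is a toy Python sketch (all component names are fictitious, and this is not FACT’s data model). Each component can embed further components, so answering “does this vulnerable library appear anywhere in my package?” means walking the entire tree, not just the top level:

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    version: str
    subcomponents: list["Component"] = field(default_factory=list)

def find_vulnerable(root: Component, bad_name: str, bad_version: str) -> list[str]:
    """Return the path of every occurrence of a vulnerable
    subcomponent, however deeply it is nested."""
    hits: list[str] = []
    def walk(c: Component, path: str) -> None:
        here = f"{path}/{c.name}@{c.version}"
        if c.name == bad_name and c.version == bad_version:
            hits.append(here)
        for sub in c.subcomponents:
            walk(sub, here)
    walk(root, "")
    return hits

# Fictitious example: an HMI installer embedding a comms library that
# in turn embeds a vulnerable open-source parser.
pkg = Component("hmi-installer", "4.2", [
    Component("comms-lib", "1.9", [Component("xml-parser", "2.0.1")]),
])
print(find_vulnerable(pkg, "xml-parser", "2.0.1"))
# -> ['/hmi-installer@4.2/comms-lib@1.9/xml-parser@2.0.1']
```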

It’s amazing how much ground you can cover during a single conversation. After talking extensively about the problem, I shared how we are using the information available to us to find a solution. I spoke with Andrew about aDolus, how it all started, and how our team is working to address the problem with the supply chain. Vendors are keen to come on board to help us with our solution.

I’m hoping you find the challenge of securing the supply chain as interesting as the aDolus team does. If so, jump over to the Waterfall Security Solutions podcast page and listen to the full podcast. If you stick around until the end, you’ll even get to hear where the company name aDolus comes from. Enjoy!

Who Infected Schneider Electric’s Thumb Drives?

Infected USB Drive

On 24 August 2018, Schneider Electric issued a security notification alerting users that the Communications and Battery Monitoring devices for their Conext Solar Energy Monitoring Systems were shipped with malware-infected USB drives.

First of all, kudos to Schneider Electric for alerting their customers and providing information on how to remedy the situation. According to Schneider, the infected files would not affect the devices themselves. Schneider also noted that the particular malware was easy to detect and remove by common virus scanning programs.

Provided that all of Schneider’s customers read these alerts, this should remain a minor security incident. Unfortunately, this is a big assumption. Due to the complexities of modern distribution channels, I’m pretty certain no one in the world knows whether the Schneider notice is getting to the people who actually use the Conext product. It could be getting stuck on some purchasing manager’s desk, never to be forwarded to the technicians in the field. Or it could be languishing in the inbox of an engineering firm that is no longer working at the location where the Conext product is deployed. If ZDNet and CyberScoop had not reported on the story, it might have stayed off everyone’s radar. Clearly, both vendors and asset owners need better ways of sharing urgent security information.

The Conext Battery Monitor (source: Schneider Electric)

But what is especially interesting is that the thumb drives were not infected at Schneider’s facilities. They were infected at a third-party supplier during the manufacturing process. Like that of ALL major ICS vendors, the supply chain for Schneider hardware and software (and even the media on which it ships) is exposed to many hands.

This situation highlights an alarming reality in the ICS world. Just because a digital file comes from a trusted vendor doesn’t mean you can trust all the other companies that touched that file.

Who knows which “third-party supplier’s facility” was involved in contaminating those USB drives? Was it the USB manufacturer… or a duplication company… or even a graphics company that added some branding? Schneider Electric will no doubt be rethinking that relationship, but the fact remains that they have to work with third parties to get their products to market.

The worrisome question is: what other ICS vendors use that same third-party supplier? How widespread is the infection? It seems unlikely that Schneider Electric is this supplier’s only customer. Naming and shaming the supplier may be fraught with legal consequences (or perhaps they are still tracking down the specific supplier), so Schneider has remained silent for now on the source of the malware. That means all the other vendors out there and their customers may be exposed as well. Or not. We don’t know – and that is a problem.

One hopes that if other vendors have detected issues with their USB drives, they will follow Schneider Electric’s lead and issue prompt alerts. Some vendors are better than others at transparency, and there will likely be some who choose to lie low to avoid bad publicity. It is a pity, because vendors like Schneider are as much victims in this scenario as the end users.

This is one of the reasons aDolus is developing a platform for ICS asset owners and vendors that offers an ecosystem of trust where they can verify software of, let’s call it, “complicated origin” and ensure it hasn’t been tampered with BEFORE they install it. We’re also looking at ways vendors can get early warnings about security issues occurring at their clients’ sites rather than having to wait until hundreds or thousands of facilities have been infected.

Interested in learning more about protecting yourself from compromised software? Let us know if you are an end user interested in validating ICS software or an ICS vendor interested in protecting your distribution mechanisms to ensure they are clean.


Building (or Losing) Trust in our Software Supply Chain

Back in 2014, when I was managing Tofino Security, I became very interested in the Dragonfly attacks against industrial control systems (ICS). I was particularly fascinated with the ways that the attackers exploited the trust between ICS suppliers and their customers. Frankly, this scared me because, as I will explain, I knew that all the firewalls, antivirus, whitelisting, and patching in the world would do little to protect us from this threat.

If you are not familiar with the Dragonfly attacks, they were launched against the pharmaceutical industry (and likely the energy industry) in 2013 and 2014. The attacks actually started in early 2013 with a spear phishing campaign against company executives. But the part that concerned me began later, starting in June 2013 and ending in April 2014.

During that period, the Dragonfly attackers penetrated the websites of three ICS vendors: vendors who supply hardware and software to the industrial market. Once the bad guys controlled these websites, they replaced the vendors’ legitimate software/firmware packages with new packages that had Trojan malware called Havex embedded in them (Attack Stage #1).

When the vendors’ customers went to these websites they would see that there was a new version of software for their ICS products. They would then download these infected packages, believing them to be valid updates (Attack Stage #2). And because one of the messages we give in the security world is to “keep your systems patched,” these users pushed out the evil updates to the control systems in their plants (Attack Stage #3).

Once these systems were infected, the Havex malware would call back to the attackers’ command and control center, informing them that they had penetrated deep into a control system. The attackers then downloaded tools for ICS reconnaissance and manipulation into the infected ICS hardware (Attack Stage #4). These new attack tools focused on the protocols we all know well in the ICS world, such as Modbus, OPC, and Ethernet/IP.

As far as we know, the attackers were most interested in stealing industrial intellectual property — not destroying equipment or endangering lives. However, there was nothing that would have restricted the attackers to just information theft. Their tool sets were extremely flexible and could have easily included software that would manipulate or destroy a process.

The Dragonfly attacks were particularly insidious because they took advantage of the trust between suppliers and end users. The engineers and technicians in industrial plants inherently trust their suppliers to provide safe, secure, and reliable software. By downloading software and installing it, the Dragonfly victims were doing what they had been told would improve their plant’s security. In effect, these users were unwittingly helping the attackers bypass all the firewalls, circumvent any whitelisting or malware detection, and go directly to the critical control systems.

This is what I call “Exploiting the Supplier-User Trust Chain” — and I think it is one of the most serious security risks facing our world today. It is not only a problem for ICS-focused industries like energy or manufacturing, but also for any person or company that uses “smart” devices… which is pretty well all of us. Aircraft, automobiles, and medical devices are all susceptible to this sort of attack.

So with the help of Billy Rios, Dr. Jonathan Butts, a great team of researchers, and the DHS Silicon Valley Innovation Program, I’ve been working on finding a solution to the chain-of-trust challenge. aDolus and FACT™ (Framework for Analysis and Coordinated Trust) are the result of thousands of hours of systematic investigation into the problem and its possible solutions. Join me on this blog over the next few months as I share what we have learned and where we still have to go to ensure trust in our software.

For more on Dragonfly:
