Sniffing Out Fakes: From Saffron in Marrakech to Digital Certificates

Eric Byres in Morocco

I’m writing this blog from Marrakech, a city in the western foothills of Morocco’s High Atlas Mountains. Marrakech has been a trading city since it was established by a clan of Berber warriors (the Almoravids) in the 11th century. The heart of the city (where Joann and I are staying) is the medina, a densely packed, walled medieval city with over 9000 maze-like alleys full of noisy, chaotic souks (marketplaces) that sell everything from traditional textiles, pottery, and jewelry to food and spices to motorcycle parts. There is probably nothing you can’t buy, either legal or illegal, in the Marrakech medina.

Like all unregulated marketplaces, the Marrakech medina has its share of fakes and counterfeits. Some are very obvious (Armani bags for $50), some are highly amusing (need an official Louis Vuitton football, anyone?), and some are subtle. But the one fake that really interested me was counterfeit saffron.

If you aren’t familiar with saffron, it is a vivid crimson spice created by collecting the stigmas and styles of crocus flowers. According to Wikipedia, saffron is the world’s most costly spice by weight. I won’t disagree: the saffron we ended up buying in the medina cost $6 USD per gram, but we heard of some higher quality stuff selling for over four times that price.

Now, at those sorts of prices, it isn’t surprising that some crooked merchants might start making fake product to swindle the unsuspecting consumer. Joann and I wanted to learn a bit about both the real and the fake saffron, so we spoke to a reputable spice merchant in the souk. He showed us both the real and fake product and what he looks for when buying wholesale.

I won’t go into the details of selecting good authentic saffron in this blog, but the fake stuff fascinated me. While there are some good fakes that require pretty sophisticated testing, many fakes are easily spotted impostors. These are made by simply dyeing corn silk with either red food colouring or paprika. The tests to spot them are simple, as this YouTube video shows.

This got me wondering: how can it be profitable to make and sell such poor-quality fakes? After all, if one can detect them so easily after a few minutes of education, wouldn’t anyone selling these fakes be immediately discovered and never make a sale? (Or much worse: in the Middle Ages, those found selling adulterated saffron were executed under the Safranschou code.)

Sadly, there must be a large enough cohort of people (aka tourists?) who buy saffron without knowing the first thing about it. Or, to put it another way, counterfeit saffron clearly doesn’t need to be a quality imitation to be an effective scam; it just needs to be good enough to fool a person who lacks the knowledge, time, or incentive to perform the simple tests (such as a tourist in a rush to get back to a tour bus).

Later that night I realized that the cybersecurity world has been seeing this same situation playing out in the area of digital signatures for executable files (aka code signing). In 2017, Doowon Kim, Bum Jun Kwon, and Tudor Dumitraș at the University of Maryland published a paper investigating malware that carried digital signatures.

Some of the malware they investigated had been digitally signed with keys stolen from legitimate companies: Stuxnet being the most famous example of this sort of trickery. In other cases, malware was signed using certificates that had been mistakenly issued to malicious actors impersonating legitimate companies. For example, in 2001 VeriSign issued two code signing certificates with the common name of “Microsoft Corporation” to an adversary who claimed to be a Microsoft employee. Both of these types of exploits require a considerable amount of expertise and effort to carry out.

However, the authors discovered a third unbelievably simple exploit that accounted for almost one-third of the signed malware in the wild. In the words of the authors:

We find that simply copying an Authenticode signature from a legitimate file to a known malware sample may cause anti-virus products to stop detecting it, even though the signature is invalid, as it does not match the file digest. 34 anti-virus products are affected, and this type of abuse accounts for 31.1% of malware signatures in the wild.

This is the digital equivalent of a border officer who lets you pass simply because you have a passport in your hand. Not taking the time to check that the passport actually belongs to you completely invalidates the integrity of the passport system.

Now, in all my travels, I’ve never actually seen a border officer do this. They are trained to follow the approved validation processes that ensure all but the most skillfully constructed fake passports are detected.

But, like saffron purchasers, much of the IT and OT world has been assuming that the mere existence of a digital signature is proof that the software is trustworthy. This is a terrible assumption that gives malicious actors an easy attack path. Unless we start to properly test software signatures, the bad guys will penetrate our systems just as quickly as the scam artists in the medina separate tourists from their money.
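To make that distinction concrete, here is a minimal sketch in Python of the difference between merely noticing a signature and actually validating it against the file. This is purely illustrative: the dictionary structure and field names are hypothetical, and it is not a real Authenticode parser or any particular anti-virus product’s logic.

```python
import hashlib

def presence_only_check(package: dict) -> bool:
    """The flawed shortcut: treat any attached signature as proof of trust."""
    return package.get("signature") is not None

def proper_check(package: dict, trusted_signers: set) -> bool:
    """Verify that the signature actually belongs to this file."""
    sig = package.get("signature")
    if sig is None:
        return False
    # 1. The signer must be someone we actually trust.
    if sig["signer"] not in trusted_signers:
        return False
    # 2. Recompute the file digest and compare it to the digest that was
    #    signed. A signature copied from another file fails exactly here.
    recomputed = hashlib.sha256(package["file_bytes"]).hexdigest()
    return sig["signed_digest"] == recomputed
```

The second check is what the affected products skipped: they stopped at step zero, noticing a signature was present, without ever comparing the signed digest to the file in front of them.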

For More Information:

Doowon Kim, Bum Jun Kwon, and Tudor Dumitraș, “Certified Malware: Measuring Breaches of Trust in the Windows Code-Signing PKI,” CCS ’17: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 1435-1448, October 2017

Watch how FACT handles code signing/certificate checking:


Podcast: Where Do Your Bits Really Come From?

Earlier this year I attended the Public Safety Canada Industrial Control System Security symposium in Charlottetown, PEI (FYI the PSC ICS events are outstanding – worth attending, even if you are not Canadian). While there, I had a chance to meet with an old friend, Andrew Ginter, Vice President of Industrial Security at Waterfall Security Solutions. We chatted about an issue I’ve been interested in – or, dare I say, obsessed with – for a while now: the software supply chain in ICS and how to ensure that it’s trustworthy. Our conversation was the basis for the podcast Where Do Your Bits Really Come From? Let me fill you in on some of the points we discussed.

We began by talking about the field itself: just how widespread is the issue of supply chain integrity? The problem is actually much larger and more complex than I suspected when I started investigating it in 2017. Initially, I thought that the problem just affected the owners and operators of ICS assets: as the 2014 Dragonfly 1.0 attacks showed us, technicians in the field risk patching their control systems with harmful updates.

Dragonfly Attacks
The Dragonfly attackers penetrated the websites of ICS vendors and replaced legitimate software with packages that had trojan malware called Havex embedded in them. Customers downloaded and installed these infected packages, believing them to be valid updates.

However, the more people I spoke with about supply chain issues, the more I realized ICS vendors had their own challenges as well. It turns out that people view the problem differently depending on which vertical they’re in and what their role is – but the same basic problem still exists for all audiences.

So what exactly is the problem? Well, that’s pretty simple: How do we, as users, trust the firmware and software that we’re loading into our industrial control systems?

And to look at it from the other side: How do vendors know that the software out in the world associated with their organization hasn’t been tampered with and did, in fact, come from them?

When dealing with software, there are multiple issues that test our trust. The two examples I discussed with Andrew were counterfeiting (injection of malicious code into the supply chain) and our inability to know exactly what components make up a software package. This latter issue is complex because of the way we develop software today: most software projects include embedded third-party and open-source code. And that embedded code has its own third-party and open-source code embedded in it. So what happens if one of those subcomponent’s subcomponents has a vulnerability? Would the ICS vendor even know about the vulnerability? Would their customers know? Unfortunately, the usual answer is “No.”
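To illustrate why that visibility matters, here is a small, purely hypothetical sketch of walking a nested component tree and flagging subcomponents that appear on a known-vulnerable list. The component names, versions, and vulnerability entries are invented for the example; a real system would draw on a proper Software Bill of Materials and vulnerability feeds.

```python
# Example data only: (name, version) pairs we pretend are known to be vulnerable.
KNOWN_VULNERABLE = {("zlib", "1.2.8"), ("openssl", "1.0.1f")}

def find_vulnerable(component: dict, path=()) -> list:
    """Recursively walk a component tree and report paths to vulnerable subcomponents."""
    findings = []
    here = path + (f"{component['name']} {component['version']}",)
    if (component["name"], component["version"]) in KNOWN_VULNERABLE:
        findings.append(" -> ".join(here))
    for dep in component.get("dependencies", []):
        findings.extend(find_vulnerable(dep, here))
    return findings

# A hypothetical firmware package with a vulnerable component buried two levels down.
firmware = {
    "name": "plc-firmware", "version": "4.2",
    "dependencies": [
        {"name": "webserver", "version": "2.1",
         "dependencies": [{"name": "openssl", "version": "1.0.1f", "dependencies": []}]},
    ],
}

print(find_vulnerable(firmware))
# ['plc-firmware 4.2 -> webserver 2.1 -> openssl 1.0.1f']
```

Without something like this tree (and the data to populate it), neither the vendor nor the customer has any way to answer the question above.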

Nesting Dolls (Photo by Marco Verch / CC BY 2.0 / Cropped, small doll face altered)

This may sound a bit gloomy, but don’t despair: we talk about how the industry is making progress in this area. For example, code signing technology is useful for addressing the software tampering issue, though it won’t solve the problem on its own. Unfortunately, it is complex, it’s not widely used outside of IT software – AND malware writers have figured out how to use it to their advantage!

The key is in the ability to break down the libraries so that we can identify who built which pieces and generate a reliable Software Bill of Materials, so to speak. The solution, as is often the case, lies in having better knowledge and then being able to effectively share that knowledge with users.

It’s amazing how much ground you can cover during a single conversation. After talking extensively about the problem, I shared how we are using the information available to us to find a solution. I spoke with Andrew about aDolus, how it all started, and how our team is working to address the problem with the supply chain. Vendors are keen to come on board to help us with our solution.

I’m hoping you find the challenge of securing the supply chain as interesting as the aDolus team does. If so, jump over to the Waterfall Security Solutions podcast page and listen to the full podcast. If you stick around until the end, you’ll even get to hear where the company name aDolus comes from. Enjoy!

Will the DoD’s CMMC Encourage Bad Password Habits?

Last Wednesday (September 11), the U.S. Department of Defense released a draft of its Cybersecurity Maturity Model Certification (CMMC) for public comment. The idea is for the DoD to create a unified framework for defense contractor cybersecurity.

Now you might think that none of this is new. After all, DFARS 252.204-7012 has been in effect since December 2017 and it requires that defense contractors comply with the National Institute of Standards and Technology’s Special Publication 800-171 (NIST SP 800-171). Unfortunately, it has become obvious that full compliance with NIST SP 800-171 is overkill for many contractors and projects. It also isn’t clear who needs to comply with what portions of NIST SP 800-171 and how that is enforced.

This new document attempts to make the DoD requirements fit better with the specific security needs of a given department or project by allowing contractors to operate at one of five maturity levels. I won’t go into all the details, but the U.S. law firm Arnold & Porter provided a nice in-depth analysis of both the need for and challenges of a CMMC framework in their recent blog.

With others commenting on the policy issues, I decided to look at the CMMC from a technical point of view. In other words, does each requirement seem appropriate for the maturity level it is assigned to?

For the most part, the CMMC is reasonable. For example, there are sensible requirements for the use of password managers and Multi-Factor Authentication (MFA) for contractors operating at Maturity Level 4. However, I was shocked to see a requirement that was straight out of the days of the mainframe:

Identification and Authorization (IDA) L3-5: A minimum password complexity, including change of characters, is defined and enforced. 

While this requirement has its history in NIST SP 800-171, it is diametrically opposed to the current NIST guideline for password management, NIST Special Publication 800-63B. Section 5.1.1.2 of that document explicitly requires that organizations NOT force users to use a complex mix of numbers, letters, and punctuation. The UK has a similar policy on password composition rules for any organization supplying services to its government.

Why prohibit composition rules for passwords? To quote NIST:

Composition rules also inadvertently encourage people to use the same password across multiple systems since they often result in passwords that are difficult for people to memorize.

Today, hardly anyone who studies authentication thinks enforced password complexity is a good security strategy. It is no secret that humans are poor at remembering random sequences of characters. They may be able to remember a few important sequences (such as their phone number), but as soon as they are required to memorize more than a handful, they resort to “tricks” to help out. Writing passwords on Post-it Notes is one trick. Adding a number or a special character to a simple word or an existing password is another common solution.

So when companies force users to invent complex passwords, users resort to using secrets like Password!. In other words, they use passwords that are easy to remember but are extremely insecure. The bad guys understand human nature and start with the faux-complex passwords like Password! when they are hacking a system. 

Unfortunately, as this latest DoD document shows, these old-fashioned policies are still prevalent throughout many IT departments and required in many security guidelines, including NIST SP 800-171. It is time for them to go.

What should the DoD be asking contractors to do instead? First, they should require that contractors instruct their staff to NEVER reuse passwords, especially across systems. Each password should be completely different from any password used before.

Second, employees should be coached to choose passwords that will never be found in a dictionary. The UK government advises creating passwords using three random words:

You just put them together, like ‘coffeetrainfish’ or ‘walltinshirt’.

It is a good strategy and the famous techie cartoon xkcd shows why this works:

Bad password cartoon from xkcd
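As an illustration only (not an official implementation of the UK guidance), here is a short sketch of generating a random-word passphrase, along with the rough entropy arithmetic behind the cartoon’s argument. The tiny word list is a stand-in; a real list would contain a few thousand common words.

```python
import math
import secrets

# Stand-in word list for the example; a real list would be much longer.
WORDS = ["coffee", "train", "fish", "wall", "tin", "shirt", "river", "stone"]

def passphrase(n_words: int = 3, wordlist=WORDS) -> str:
    """Pick words uniformly at random -- the randomness is what matters, not cleverness."""
    return "".join(secrets.choice(wordlist) for _ in range(n_words))

# Rough arithmetic behind the xkcd argument (assuming a ~2048-word list and
# uniformly random choices): 3 words give 3 * log2(2048) = 33 bits, 4 words
# give 44 bits. A human-chosen "complex" password like Password! follows
# predictable patterns, so its effective entropy is far lower than its
# character count suggests.
print(passphrase())
print(3 * math.log2(2048))  # 33.0 bits for three truly random words
```

The point is not the exact bit counts; it is that uniformly random choices from a large word list are both memorable and resistant to the pattern-based guessing attackers actually use.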

Staff should be warned not to get cute and substitute numbers for letters (such as “5” for “s”) as the bad guys are already onto them and try passwords like Pa55w0rd.

I hope passwords will die out soon, to be replaced with far better technologies like Multi-Factor Authentication (MFA) and Fast Identity Online (FIDO). In the meantime, if contractors train their users to make every password different, ideally a string of three or four random words, the DoD’s IT and OT systems will be far more secure than annoying password expiry policies ever made them.

For More Reading:

NIST 800-63-3 Appendix A: The Strength of Memorized Secrets: https://pages.nist.gov/800-63-3/sp800-63b/appA_memorized.html

When the Security Researchers Come Knocking, Don’t Shoot the Messenger

Our own Jonathan Butts and Billy Rios were interviewed this month on the CBS Morning News about their research showing that medical devices like pacemakers and insulin pumps can be hacked by… basically anybody.  These devices all contain embedded controllers, but unlike most modern computer technologies, they haven’t been designed with security in mind.

“We’ve yet to find a device that we’ve looked at that we haven’t been able to hack”, said Jonathan.

Billy also speaks to the irreversible nature of medical equipment exploits, noting that when bad guys take advantage of a flaw, it’s not just a matter of issuing a new credit card or changing a password. Victims of these kinds of attacks can end up dead.

You can see the full interview here:

http://www.cbsnews.com/video/how-medical-devices-like-pacemakers-insulin-pumps-can-be-hacked/

The Washington Post did a story on the same subject, featuring Billy and Jonathan back in October.

Poor security design is clearly widespread throughout the medical device industry.  As readers of our blog know, devices with embedded controllers are found in the electrical power industry, oil & gas, manufacturing, aerospace, defense, and a host of other critical infrastructure sectors. And many of those devices have had serious security vulnerabilities exposed in the past decade. But what makes this story concerning is that the medical industry seems especially behind in its approach to vulnerability management.

Billy and Jonathan uncovered the vulnerabilities associated with a Medtronic pacemaker way back in January last year. They then disclosed their findings in a detailed report to the vendor. Unfortunately, Medtronic denied that action was necessary and did nothing to address the problem or warn users. It took a live, very public demonstration at Black Hat USA 2018 to capture the attention of the FDA and the vendor.

That isn’t the way responsible vulnerability disclosure is supposed to work. When researchers discover a vulnerability and privately share it with the vendor (and/or appropriate government agencies), the vendor needs to take that vulnerability seriously. That way the users of its products get a chance to patch before the dark side of the cyber world starts to exploit the weakness. Requiring researchers to broadcast the news to the world to get action is simply terrible security practice.

As a former CTO of a large industrial device manufacturer, I have faced my share of researchers bringing news of vulnerabilities in my company’s products. Some of the vulnerabilities proved to be very serious, while others were simply a misunderstanding of how the product would be deployed in the field. Regardless, we took every vulnerability report seriously, immediately engaging the researchers so we could learn as much as possible about their testing techniques and findings. Sometimes, when we thought a researcher was onto a particularly serious or complex problem, we flew them into our development center so we could start addressing the issues as quickly and completely as possible.

The bottom line is that device manufacturers need to start seeing security researchers as partners, not annoyances. When a researcher finds a vulnerability, they are basically doing free QA testing that the quality and security teams should have done before the product ever shipped. It’s time that companies like Medtronic started working with security researchers, not fighting them. Instead, we should all be fighting the bad guys together.  It is the only way our critical systems will become more secure.


Who Infected Schneider Electric’s Thumb Drive?

Infected USB Drive

On 24 August 2018 Schneider Electric issued a security notification alerting users that the Communications and Battery Monitoring devices for their Conext Solar Energy Monitoring Systems  were shipped with malware-infected USB drives.

First of all, kudos to Schneider Electric for alerting their customers and providing information on how to remedy the situation. According to Schneider, the infected files would not affect the devices themselves. Schneider also noted that the particular malware was easy to detect and remove by common virus scanning programs.

Provided that all of Schneider’s customers read these alerts, this should remain a minor security incident. Unfortunately, that is a big assumption. Due to the complexities of modern distribution channels, I’m pretty certain no one in the world knows if the Schneider notice is getting to the people who actually use the Conext product. It could be getting stuck on some purchasing manager’s desk, never to be forwarded to the technicians in the field. Or it could be languishing in the inbox of an engineering firm that is no longer working at the location where the Conext product is deployed. If ZDNet and CyberScoop had not reported on the story, it might have stayed off everyone’s radar. Clearly, both vendors and asset owners need better ways of sharing urgent security information.

The Conext Battery Monitor (Source: Schneider Electric)

But what is especially interesting is that the thumb drives were not infected at Schneider’s facilities. They were infected via a third-party supplier during the manufacturing process. As with ALL major ICS vendors, the supply chain for Schneider hardware, software (and even the media upon which it is shipped) passes through many hands.

This situation highlights an alarming reality in the ICS world. Just because a digital file comes from a trusted vendor doesn’t mean you can trust all the other companies that touched that file.

Who knows which “third-party supplier’s facility” was involved in contaminating those USB drives? Was it the USB manufacturer… or a duplication company… or even a graphics company who added some branding? Schneider Electric no doubt will be re-thinking that relationship, but the fact remains that they have to work with third parties to get their products to market.

The worrisome question is, what other ICS vendors use that same third-party supplier? How widespread is the infection? It seems unlikely that Schneider Electric is this supplier’s only customer. Naming and shaming the supplier may be fraught with legal consequences (or perhaps they are still tracking down the specific vendor) so Schneider has remained silent for now on the source of the malware. That means all the other vendors out there and their customers may be exposed as well. Or not. We don’t know – and that is a problem.

One hopes that if other vendors have detected issues with their USB drives, they will follow Schneider Electric’s lead and issue prompt alerts. Some vendors are better than others at transparency, and there will likely be some who choose to lie low instead to avoid bad publicity. It is a pity because vendors like Schneider are as much victims in this scenario as the end users.

This is one of the reasons aDolus is developing a platform for ICS asset owners and vendors that offers an ecosystem of trust where they can verify software of, let’s call it, “complicated origin” and ensure it hasn’t been tampered with BEFORE they install it. We’re also looking at ways vendors can get early warnings about security issues occurring at their clients’ sites and not have to wait until hundreds or thousands of facilities have been infected.
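As a generic illustration of the “verify before you install” idea (this is not the FACT mechanism, just a sketch), the simplest form is to recompute a download’s digest and compare it against a value the vendor publishes over a separate, trusted channel. The function names and the digest value are hypothetical.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def safe_to_install(path: str, published_digest: str) -> bool:
    """Only install if the file matches the digest the vendor published elsewhere."""
    return sha256_of(path) == published_digest
```

Of course, this only helps if the published digest itself comes from an uncompromised source, which is exactly the gap a broader ecosystem of trust is meant to close.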

Interested in learning more about protecting yourself from compromised software? Let us know if you are an end user interested in validating ICS software or an ICS vendor interested in protecting your distribution mechanisms to ensure they are clean.


Building (or Losing) Trust in our Software Supply Chain

Back in 2014, when I was managing Tofino Security, I became very interested in the Dragonfly attacks against industrial control systems (ICS). I was particularly fascinated with the ways that the attackers exploited the trust between ICS suppliers and their customers. Frankly, this scared me because, as I will explain, I knew that all the firewalls, antivirus, whitelisting, and patching in the world would do little to protect us from this threat.

If you are not familiar with the Dragonfly attacks, they were launched against the pharmaceutical industry (and likely the energy industry) in 2013 and 2014. The attacks actually started in early 2013 with a spear phishing campaign against company executives. But the part that concerned me began later, starting in June 2013 and ending in April 2014.


During that period, the Dragonfly attackers penetrated the websites of three ICS vendors: vendors who supply hardware and software to the industrial market. Once the bad guys controlled these websites, they replaced the vendors’ legitimate software/firmware packages with new packages that had Trojan malware called Havex embedded in them (Attack Stage #1).

When the vendors’ customers went to these websites they would see that there was a new version of software for their ICS products. They would then download these infected packages, believing them to be valid updates (Attack Stage #2). And because one of the messages we give in the security world is to “keep your systems patched,” these users pushed out the evil updates to the control systems in their plants (Attack Stage #3).

Once these systems were infected, the Havex malware would call back to the attackers’ command-and-control center, informing the attackers that they had penetrated deep into a control system. The attackers then downloaded tools for ICS reconnaissance and manipulation into the infected ICS hardware (Attack Stage #4). These new attack tools focused on the protocols we all know well in the ICS world, such as Modbus, OPC, and Ethernet/IP.

As far as we know, the attackers were most interested in stealing industrial intellectual property — not destroying equipment or endangering lives. However, there was nothing that would have restricted the attackers to just information theft. Their tool sets were extremely flexible and could have easily included software that would manipulate or destroy a process.

The Dragonfly attacks were particularly insidious because they took advantage of the trust between suppliers and end users. The engineers and technicians in industrial plants inherently trust their suppliers to provide safe, secure, and reliable software. By downloading software and installing it, the Dragonfly victims were doing what they had been told would improve their plant’s security. In effect, these users were unwittingly helping the attackers bypass all the firewalls, circumvent any whitelisting or malware detection, and go directly to the critical control systems.

This is what I call “Exploiting the Supplier-User Trust Chain” — and I think it is one of the most serious security risks facing our world today. It is not only a problem for ICS-focused industries like energy or manufacturing, but also for any person or company that uses “smart” devices… which is pretty well all of us. Aircraft, automobiles, and medical devices are all susceptible to this sort of attack.

So with the help of Billy Rios, Dr. Jonathan Butts, a great team of researchers, and the DHS Silicon Valley Initiatives Program, I’ve been working on finding a solution to the Chain-of-Trust challenge. aDolus and FACT™ (Framework for Analysis and Coordinated Trust) are the result of thousands of hours of systematic investigation into the problem and its possible solutions. Join me on this blog over the next few months as I share what we have learned and where we still have to go to ensure trust in our software.

For more on Dragonfly:
