My thanks to Charlie Stross for alerting us all to the unfortunate incident of the Russian kettle, bugged with malware intended to find unsecured Wi-Fi networks and co-opt them into a zombie botnet (1). Now Charlie’s take on this revolves around the security and privacy implications for the ‘Internet of Things’ movement: making everything smart and web-savvy may sound really cool, but not if your toaster ends up spying on you, a creepy little foretaste of the panopticon future.
What I take away from this is the implications for system and software safety. Traditionally the safety engineering of, say, a nuclear power plant or the flight management system of an aircraft didn’t consider software security, for a couple of practical reasons. The first was that such systems were architected as stand-alone, or at worst federated, systems, simply because there was no real need to plumb them into the rest of the world. The second was that the software running in these systems was almost always bespoke and written in languages like C, JOVIAL, MASCOT, Ada or even (shudder) assembler, which meant that hacking the system was a not inconsequential job, so script kiddies need not apply. Finally, the size of the security problem was constrained by the small scale and federated/isolated nature of these applications.
Consequently, pulling the phone jack out of the wall and maintaining the software in-house, or getting it done by, say, IBM’s Federal Systems Division, pretty much ensured that you didn’t have to worry about stuff running in the system that you either a) didn’t know about or b) had no real idea what it was doing. And that’s probably why software security and software safety have been seen as distinctly separate disciplines, despite the two sharing many common attributes. Unfortunately those days are dead and gone; Moore’s law is in full swing, and as chip power grows so too do the size, complexity and opacity of the software doing our bidding in embedded applications.
Which is not to say it’s all doom and gloom; for example, the use of embedded Linux, with its very low latent fault rate (2), is certainly not an inherently ‘bad thing’. The problem is that the technological firewall between embedded process control systems and the rest of the world has become much more permeable, and therefore any safety analysis needs to consider the hazard posed by exploitable security weaknesses in the system. You probably don’t want to leave that pesky exploitable vulnerability in the Linux kernel floating around in the system. Of course fixing said vulnerabilities may be a bit of a problem if your embedded system is well and truly ‘embedded’ in something like a reactor control loop, or in 100,000 smart phones, or for that matter 500 pacemakers. Your security problems become even worse if you’re deliberately allowing remote or wireless dial-in to the system, because you’ve outsourced the software support or because the system is physically inaccessible, like inside a patient’s chest.
The other problem is that you now need to be distrustful of anything that can potentially interact with your system. Using wireless keyboards? Well, that keyboard is now smart enough to have its own hackable processor and wireless connection, and could be happily sending home all your keystrokes, or worse, actually attacking your system through an exploitable weakness. In other words, the more connected and smarter traditionally dumb objects get, the greater the attack surface of your system becomes, and the more hackable that surface becomes, because anything with a brain is conceptually hackable, and anything with more of a brain is more hackable still. And yes, Virginia, people really do connect their control systems to the internet.
The final problem that I see is that as we ride along on the coat-tails of Moore’s law we’re also, unfortunately, violating a core principle of software security, that of economy of mechanism. What I mean by that is that security functions should be implemented by the fewest and simplest mechanisms necessary to do the job, because a) there’s generally less to go wrong, and b) when we get around to verifying the security properties we have a much more tractable job to do. But building an embedded system’s bespoke software is expensive, as is ASIC or CPLD design, so being able to throw something like an embedded Linux application into the fray can seem like a ‘really good idea’, commercially speaking. Of course questions like ‘what else did we let in when we opened that particular door?’ or ‘what have we let ourselves in for in verifying the security properties?’ do tend to get lost in the initial enthusiasm.
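To make the economy-of-mechanism point concrete, here’s an illustrative sketch (the command set and function names are hypothetical, not drawn from any real system) of the kind of fixed allow-list gate a bespoke embedded controller might use. A mechanism this small can be verified exhaustively; the general-purpose parsing and networking stacks you inherit with a full OS cannot.

```python
# Hypothetical, minimal command gate for an embedded controller,
# illustrating 'economy of mechanism': a closed allow-list, a hard
# length bound, and nothing else. All names here are invented for
# the example.

ALLOWED = {"START", "STOP", "STATUS"}  # the complete, closed command set

def handle_command(raw: bytes) -> str:
    """Accept only short, exact, pre-agreed commands; reject all else."""
    if len(raw) > 8:          # reject over-long input before decoding
        return "REJECT"
    try:
        cmd = raw.decode("ascii")
    except UnicodeDecodeError:  # reject anything that isn't plain ASCII
        return "REJECT"
    return "ACCEPT" if cmd in ALLOWED else "REJECT"
```

A handful of lines like this has a security argument you can state in one paragraph; the question the text raises is what the equivalent argument looks like once an entire commodity kernel sits behind the same interface.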
My prediction? The software safety community is going to have to become very familiar with, and very proficient in, securing critical systems as well as assuring their safety. And as the capabilities of off-the-shelf embedded software grow, the security of such systems will become as great a concern as their computing performance. Now the question is not just ‘is it safe?’ but also ‘is it secure?’.
1. Not to mention how annoying it would be for household appliances to randomly burst into stanzas of The East is Red.
2. A result of the open source ‘many eyes on the code’ development approach: Linux approaches the defect rate achieved by the NASA Shuttle software development team (0.14 vs 0.1 defects per ksloc). One might suppose that the security of open source Linux could potentially get as good as its defect density if the same ‘many eyes’ approach were adopted to identify and eliminate security exploits.