IoT vulnerability management: Adhering to the new laws
Vulnerability management is one of the most basic tenets of security, and a precept all IoT manufacturers should be implementing. It enables users and researchers to alert a vendor to exploitable weaknesses in a system before they’re widely abused.
Though common practice in IT security, it hasn’t traditionally been an embedded systems concern, and as such the overwhelming majority of IoT manufacturers lack it (Fig. 1). And governments are now beginning to eye legislation to solve this problem.
Last year, representatives of the Five Eyes governments (the U.S., U.K., Canada, Australia, and New Zealand) met to discuss IoT security (often described as the wild west) and measures to protect their citizens. Specifically, what should be done to improve it? And how do we ensure manufacturers start adopting some of the established good practices used in IT security?
Key among topics discussed was vulnerability disclosure and reporting protocols. The governments agreed to collaborate and advocated that IoT should be secured by design.
In January, the U.K. became the first country to announce a law specifying vulnerability reporting. In short, the law states that any company selling an IoT product in the country needs to use unique passwords for every device. It also needs to state how long devices will receive security patches and must enable vulnerability reporting.
The law is expected to come into force as soon as is allowed by the required political and legal process. And with the U.K. being the world’s 6th largest economy—a market IoT manufacturers will find hard to write off—precedent suggests the law’s requirements will be almost universally adopted, even for companies outside of the U.K.
What’s more, the U.K. isn’t alone. Australia is likely to soon follow, announcing a draft code of practice that closely mirrors the U.K.’s, mandating vulnerability disclosure policies be in place.
In addition, while the U.S. hasn’t yet set a law at the federal level (despite calls for it to mimic the U.K.), state laws are being introduced: California announced legislation demanding devices be equipped with “reasonable” security. Yes, this is vague, but vulnerability reporting is already a key recommendation in IoT system protection documentation from the Dept. of Homeland Security. On top of that, existing federal law prevents government departments from purchasing equipment with outstanding vulnerabilities, adding market-based incentives to all firms.
In Asia, Chinese legislation allows for the state to pen-test IoT devices operating in the country to identify weaknesses. In India, calls have long been made for the government to release public vulnerability reporting guidelines. And while no vulnerability reporting legislation exists in South Korea, its Personal Information Protection Act is among the world’s strictest data-protection regimes.
The Need for Such Laws is Growing
The list of IoT hacks is growing at an alarming rate. In the first half of 2019, observed IoT cyberattacks increased more than 300%, with over 2.9 billion events observed during the six months. Just last month (March), the U.K. National Cyber Security Centre (NCSC) advised owners of smart cameras and baby monitors to check the settings after buying them.
This adds to many recent high-profile alerts and hacks. They range from a life-threatening flaw in an implantable cardiac device to a fitness-watch API that revealed users’ home addresses, including those of spies, military personnel, and users who had set the device to private mode. Others include a video camera that allowed a hacker to talk to a young girl in her bedroom, saying “It’s Santa. It’s your best friend,” flaws that allowed a smart lock to be unlocked, and even the hacking of the video feed from a sex toy’s built-in camera.
Thankfully, many of the above were found by ethical pen testers, but not all of them. And there exists a large black market for vulnerabilities, meaning companies need to make it as easy as possible for researchers to alert them when a weakness is found.
State of Vulnerability Reporting in IoT
The current picture isn’t a healthy one. In March, the IoT Security Foundation released its second annual report on vulnerability disclosure. And while there’s been a slight improvement since its original study, just 13.3% of the companies analyzed implemented any level of vulnerability reporting, yielding the conclusion that “industry must do better… much better” (Fig. 2, below).
The report analyzed the vulnerability reporting protocols of 330 companies—from Google and Amazon to small IoT startups, as well as companies such as Yale (locks) and Mattel (toys) where connectivity is an add-on. Put another way: 86.7% of the IoT manufacturers analyzed did not provide a simple mechanism for researchers to alert them.
The report shows that, with few exceptions, only major brands supported vulnerability reporting. They include Amazon, Apple, FitBit, Dyson, Garmin, Google, HP, HTC, Huawei, Lenovo, LG, Motorola, Samsung, Siemens, Signify and Sony.
And while the history of our industry means we might expect an embedded engineer to be less aware of the importance of vulnerability reporting, the report also highlighted many major brands that should know better.
Adding to the bleak picture, significant variations exist among those that do implement vulnerability reporting. Many use a weakened policy; for example, nearly two-fifths (38.6%) gave no disclosure timeline.
And despite Europe (through ETSI’s standards and the U.K.’s new law) taking the lead on standards and legislation, European-headquartered firms performed the worst of those analyzed (Fig. 3). Just five of the 82 companies based in the region (6.1%) comply with incoming standards and laws. For North America-headquartered manufacturers, it’s 16.0% (23 of 144), and for those in Asia, it’s 16.3% (16 of 98).
The 7 Considerations for Implementing a Vulnerability Reporting Protocol
It’s vital that companies don’t fall into the trap of “shooting the messenger”, which reduces the willingness of people to report a vulnerability. However, a company should never encourage damaging activity.
Below is an outline of a “coordinated vulnerability disclosure” process, widely regarded as the most equitable and reasonable approach. Several grey areas remain, so it’s up to each individual provider to decide exactly what process to adopt, and we’ve highlighted some of the arguments behind them. It’s important to be clear about the process in public materials, on websites, and in communications with researchers to align expectations. It can also be beneficial to retain a certain amount of flexibility for particular cases.
The process is as follows:
1. Website and public materials
It’s essential that security researchers be channeled to the right point of contact within the provider organization. Therefore, it’s imperative that there’s an easy-to-find web page containing all of the necessary information. It’s recommended that the address https://www.companydomain/security is used (or redirects to the relevant page).
It’s also recommended that the organization’s “Contact” page contains a referring link to the Security page.
In addition, the use of security.txt is recommended. Security.txt defines a standard location and format to help organizations set out the process for security researchers to disclose vulnerabilities. It’s still in the early stages of adoption (just 1% of the companies analyzed by the IoT Security Foundation, or IoTSF, used it), but it has now been submitted for Request for Comments (RFC) review by the Internet Engineering Task Force (IETF).
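As a sketch, a minimal security.txt file served at https://www.companydomain/.well-known/security.txt might look like the following; every address and URL here is a placeholder, and the exact set of supported fields may shift while the specification is still under review:

```text
Contact: mailto:security@companydomain
Encryption: https://www.companydomain/security/pgp-key.txt
Policy: https://www.companydomain/security
Acknowledgments: https://www.companydomain/security/hall-of-fame
Preferred-Languages: en
```

The Contact field is the only one researchers strictly need; the others point them to the public key, the disclosure policy, and the acknowledgments page discussed elsewhere in this article.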
2. Means of contact
The email address securityalert@ or security@ is the de facto standard for researchers who disclose vulnerabilities to organizations. We recommend that organizations create and monitor both of these email addresses where possible.
Communications through any channel used for vulnerability reporting need to be secured, as interception might allow the information to be used maliciously. A secured web form, which doesn’t require the reporting party to install email-encryption software and manage encryption keys, is recommended for the initial contact message.
However, organizations should also consider publishing a public key with which emails can be encrypted for confidentiality.
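A minimal sketch of how a public key could be generated and exported with GnuPG (assuming gpg 2.1 or later is installed; the identity, keyring, and filenames are all hypothetical):

```shell
# Use a throwaway keyring so this demo doesn't touch any real keys
export GNUPGHOME="$(mktemp -d)"

# Generate a key for the (hypothetical) security contact, non-interactively
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Security Team <security@companydomain>" default default never

# Export the ASCII-armored public key for publication on the Security page
gpg --armor --export security@companydomain > security-pubkey.asc

# A researcher who fetches the key could then encrypt a report to it:
#   gpg --import security-pubkey.asc
#   gpg --armor --encrypt --recipient security@companydomain report.txt
```

In practice the key would be generated once, protected by a passphrase, and rotated before its expiry date rather than set to never expire.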
3. Timing of response
The text on your security contact web page should state a timeframe in which to expect a response; usually it’s a few days, or up to a week.
It’s good practice to send an automatic acknowledgement, repeating this timeframe, for emails sent to the contact email address.
4. Communications with researchers
Security researchers may have a wide variety of backgrounds and expectations, with hobbyists having different expectations from academics, who will likely want to publish their research, and from professional consultants seeking to build a reputation for expertise.
Expectations can be set in the automatic acknowledgement email response, clarifying the timing of further communications and, once a problem has been confirmed, in what timeframe a patch, fix, or other remediation is expected to be made available.
Consideration and recognition should be given to the effort the researcher has put into investigating the particular security problem, and credit should be given. It’s standard practice to confirm consent and then publicly acknowledge their efforts on the same web page as the vulnerability disclosure policy. It’s generally expected that a researcher’s Twitter handle (if available) will also be included.
5. Timeline for fixing a problem
What’s a reasonable amount of time for a security vulnerability to be fixed? This topic has been debated at length amongst the security community and continues to be a constant source of tension.
A web service involving individuals’ personal data might require just a few days. However, a complex problem with a physical product may require new hardware to be manufactured and distributed to repair centers, which could take many months.
It’s therefore important to communicate with the researcher and explain how you justify your estimated timing. Not doing so could result in a researcher feeling you’re not taking their report seriously enough, which could lead to a breakdown of the process and premature public disclosure.
6. Disclosing the vulnerability
IoT manufacturers should have a mechanism to issue security advisories to inform users of the issue once it’s fixed. This should be done via a secure webpage to authenticate the information.
Some organizations also use security announcement mailing lists. If you use this, it’s good practice to digitally sign the advisory email text so that it can be authenticated.
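As a hedged sketch of what signing an advisory might look like with GnuPG (gpg 2.1+ assumed; the key identity and advisory text are placeholders, and a real program would use the team’s existing, passphrase-protected signing key):

```shell
# Throwaway keyring and key for the demo only
export GNUPGHOME="$(mktemp -d)"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Security Team <security@companydomain>" default default never

# Placeholder advisory text
printf 'Example advisory: a fix for the reported issue is now available.\n' > advisory.txt

# Clear-sign it: the text stays human-readable with a signature appended
gpg --batch --pinentry-mode loopback --passphrase '' --clearsign advisory.txt

# Subscribers holding the published public key can then authenticate it
gpg --verify advisory.txt.asc
```

The clear-signed advisory.txt.asc can be pasted directly into a mailing-list message, so recipients can verify it even if the mail itself passes through untrusted relays.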
7. Bug bounties (rewards for alerts)
One of the most contentious elements of vulnerability reporting protocols is bug bounties, i.e., financial compensation to the researcher who alerted the company. Some argue these enable bad actors to use them as a means of extortion. Indeed, this topic is left out of the IoTSF’s scope of vulnerability reporting recommendations.
Compensation isn’t demanded by most researchers. But in the IoTSF analysis, two-fifths (40.9%) of organizations with a vulnerability reporting protocol used some form of bug bounty, including Apple, Dyson, FitBit, and Google. It should be noted that bug-bounty programs vary: Bose uses a discretionary program, while Apple’s and Dyson’s are invite-only, enabling them to reward only those who have (in theory) proven to be acting in good faith.
Bug bounties aren’t the only grey area. It has been argued that merely publishing a vulnerability disclosure policy could encourage hacking in the name of security research. This, however, is a misleading argument.
Without a published policy, the organization is turning a blind eye to research and reporting that might otherwise go on without its knowledge.
Repeated high-profile hacks, coupled with the acceleration of attacks and the lack of either awareness or willingness to implement vulnerability reporting, mean that laws such as the U.K.’s are critical in ensuring consumers are protected.
IoT manufacturers can embrace the diligence of users and researchers to help identify and report vulnerabilities that emerge during usage. Some simple steps can be taken to implement a practical disclosure process that helps vendors achieve better quality security and meet legal requirements.
More laws that strengthen these requirements will likely follow. Embedded engineers creating such devices will need to accommodate them through design and management processes if they seek to sell any system into these territories.
More detailed information on best practices for vulnerability reporting can be downloaded for free from the IoT Security Foundation website.
ETSI’s technical specifications for cybersecurity in consumer IoT devices (TS 103 645) are available on the ETSI website.
The ISO has also published requirements and recommendations. Unfortunately, the latest releases have been placed behind a paywall. Cost is a barrier to adoption, hence we’re hopeful this is a temporary situation and that ISO will, once again, make them freely available.
John Moor is the Managing Director of the IoT Security Foundation.