For generations, flying cars have been a fixture of science fiction and thus a benchmark of the future. And they’ve remained largely just that: science fiction. But if we broaden what we mean by flying cars, then they technically exist, and have for some time.
Take the Terrafugia Transition, which, though a hybrid concept, is basically a private plane whose wings can fold up for driving on the road. Likewise, the Flight Design CT series, with its characteristic tricycle undercarriage, has been around since the ’90s. Most recently, the ultralight Kitty Hawk Flyer V1 has demonstrated its vertical take-off and landing (VTOL) capabilities over bodies of water — a harbinger, perhaps, of more to come.
But these aren’t the enclosed-cabin vehicles that hurtle along three-dimensional highways and seamlessly land in heavily populated areas, as foretold by The Jetsons. Enter Uber Elevate and its endeavors to accomplish all of these feats, more or less.
Bringing the space age to this age
In April, the company unveiled plans to test flying cars in Dallas-Fort Worth, Texas, and Dubai by 2020, as part of its new project, Uber Elevate.
Its skyward initiative hopes to shuttle riders through the air using VTOL technology, and cover about 50 miles in 15 minutes. Unlike the proposed aircraft’s closest proxy, the helicopter, these machines will operate without the associated noisiness and emissions, and far more of them will be in the sky.
It goes without saying that Uber has its work cut out for it, especially given that the technology necessary to support its efforts doesn’t currently exist. On top of that, the company faces myriad other obstacles, including infrastructure, operational costs, required licensing, use of airspace and general safety.
But rather than going it alone, Uber intends to prod at manufacturers and regulatory bodies alike — in effect, catalyzing innovations so that on-demand aviation becomes a reality sooner rather than later. While the deployment of such a system may seem a tad too ambitious to some, Uber is uber-confident in its success.
Will these actually be flying cars?
The Elevate initiative, however, doesn’t actually use the term “flying car” anywhere in its white papers. What it proposes are drone-like VTOL aircraft that employ distributed electric propulsion (DEP) technology to overcome issues like noisiness and vehicle performance, plus autonomous systems to eliminate driver error and enable safe air-taxi operations in dense urban areas.
As of late, VTOL and DEP technologies have become all the rage among developers, many of whom are already flying early prototypes. Such an emerging ecosystem of electric VTOL tech — which includes the likes of Lilium, the Vahana from Airbus’ A³ and the Joby S2 — may inspire a degree of optimism in the face of naysayers.
But isn’t it expensive?
In short, yes. Uber is working with its partners to find ways to make VTOL aircraft safe, efficient and, of course, affordable. In addition, it has joined forces with the Dubai Road and Transport Authority, conducting studies on how best to optimize traffic flow and establish pricing models.
The company admits that it’ll be costly to get the ball rolling (or, in this case, soaring), but reasons that its proven rideshare model for ground vehicles will similarly assuage the initially high expenses for air taxis. It surmises that once the ride-sharing service begins, the resulting “positive feedback loop” should help reduce operational costs and thereby air-taxi fares for customers.
The Elevate project also boasts the potential cost-effectiveness of VTOL networks as compared to traditional roadway infrastructure. Such a system, the company argues, wouldn’t require roads, bridges, rails, parking garages or sound buffers.
Instead, it raises the idea of simply repurposing, say, the tops of parking garages, helipads or other similarly unused real estate or land, as vertiports. Roads, tracks and all else wouldn’t be necessary to get from point A to point B.
Won’t they be loud?
Noisiness has always been a big concern for aviation. Both airplanes and helicopters are inherently loud, and property values in neighborhoods close to airports are typically lower due to incessant sonic disturbance. The prospect of aircraft assaulting the skies a hundredfold may feel rather daunting.
Fortunately, DEP also shows promise as a noise buffer. The biggest issue with helicopters — as yet the closest existing equivalent to Uber’s intended aircraft — is their rotor design, which produces loud noise. DEP enables alternative design considerations that still bolster vehicle efficiency while drastically reducing noisiness.
One objective is to set a minimum cruising altitude of 500 feet, at which sounds should be “about one-fourth as loud as the smallest four-seat helicopter currently on the market.” But given the sheer number of aircraft in the sky, it may not suffice in reducing noisiness.
Luckily, there are existing methods for measuring sound “annoyance,” established by the FAA, which could be leveraged to analyze and tailor aircraft and vertiports around “acceptable operational noise levels.”
Further, Uber is teaming up with Siemens to conduct noisiness tests comparing electric and combustion engines. Siemens recently unveiled an electric plane that’s not only significantly quieter than a standard combustion engine, but also broke the world speed record for battery-powered aircraft at over 200 mph.
Is it safe?
In order for any of this to work, regulations regarding vehicle design standards, maintenance, pilot licensing and inter-jurisdictional travel will need to be established. Not to mention, en masse driving from on high necessarily requires reducing driver error to near zero. And, of course, the only way to eliminate driver error is to, in fact, eliminate drivers.
As mentioned earlier, a key ingredient of the Elevate initiative is autonomous technology. Just as Tesla is developing its self-driving tech in phases, so too would VTOL autonomy be integrated over time. Compared to roads on the ground, however, urban airspace is largely open and unobstructed, which could make the process faster.
When it comes to urban airspace, Uber maintains that existing air traffic control (ATC) systems would be able to accommodate hundreds of aircraft. Still, to operate at a much higher frequency, as it intends to do, ATC systems would need to scale up significantly and implement new systems.
Airspace density isn’t the only hurdle; there are also factors like bad weather, which could erratically disrupt takeoff times for large fleets. Given that Uber’s value proposition is “saving time,” that’s an area that needs careful consideration.
Will this really happen?
Uber plans on having air taxis in operation by as early as 2023, provided all critical elements successfully come together.
And if massive VTOL networks — or more colloquially, “flying cars” — do become part of our daily lives, the results could be staggering. After all, Elevate’s main goal is to alleviate congestion on the ground, which, in turn, may alleviate pollution, environmental strain, highway accidents, blood pressure and even auto insurance premiums. These, in addition to a slew of other unforeseen effects — positive or otherwise.
Unlike the futurists of the 1950s — the Golden Age of Futurism, with its hopeful and fanciful optimism — Uber has laid down a serious blueprint for making flying cars a real thing.
Next steps are to get key players on board, from regulators and cities, to designers and network operators. To be sure, it’s a project rife with excitement and possibility, exhibiting promise as a major game-changer alongside the self-driving car. For now, though, it all remains to be seen.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
Last year, we saw the Mirai botnet compromise IoT devices, including digital cameras, and use them in a distributed denial-of-service (DDoS) attack on DNS servers on the East Coast. This attack disrupted services from Netflix, Amazon and others for several hours. Mirai should be a wake-up call for our businesses and the central commerce systems of the U.S. If devices can be compromised and turned into a botnet army that brings systems to a halt, we can start to imagine what type of damage a systematic attack on businesses and U.S. commerce could trigger. Such botnet attacks could prove as costly to the U.S. economy as the Category 5 hurricanes battering it today.
As we think about cyberattacks in the IoT space, markets and verticals all share the same vulnerabilities. The attacker’s intention is to make a platform behave differently than intended. Whether through a virus similar to Mirai that launches a DDoS, or through a chipset vulnerability, the attacker can remotely control device functions, such as Wi-Fi, or exfiltrate data from the network by deploying malware through the IoT device.
The technology industry has a few areas of concentration that are priorities: device design and device monitoring. In device design, security by design must be a priority in all IoT devices, regardless of value. As an example, an IP camera produced in large volume for mass distribution and a sensor for our critical infrastructure should both, in theory, have the same security objectives — prevent third-party attacks. To be fair, a sensor monitoring a nuclear power plant may be subject to both manufacturing compliance and security evaluation. However, the basic concepts of device protection and prevention of attack should be commercialized in any IP connected device. We can argue that IP-connected lightbulbs are just as big a cyber-concern as a heart monitor. The attackers are looking for the most vulnerable connected device from which to deploy their attacks.
For example, an article this week mentioned that an infusion pump used in critical care and neonatal care was subject to a communications attack. The attack allows the pumps’ communications to be hijacked. The manufacturer’s responsiveness toward a real remediation was lackluster; the corrective action was network segmentation, static IP addresses and complex passwords. Similarly, there was a recent recall of pacemakers in which patients were asked to visit their physicians for a firmware update to correct a life-threatening flaw that let an attacker issue malicious commands to the device.
These examples are as serious in our businesses as they are in medical devices. Another concerning example is in the GPS communications segment. Cyberattacks on ships’ navigation could intentionally cause maritime accidents. Whether the targets are military, shipping-lane or commercial, the attacks are all extremely damaging. There are examples of GPS hacks for tollway fee evasion, hacks of employee tracking in telematics systems, and even more complex nefarious attacks involving GPS jamming or GPS spoofing. Attacks on radio frequency signals differ from attacks on IT systems; however, as with IT systems, assessment and monitoring are good tactics for defending GPS devices.
Attackers can issue malicious commands or enter networks to exfiltrate data. As mentioned above, the design of IoT devices is critical, as are applying patches and continuously monitoring the devices. Watching device behavior tells us when devices are acting differently, which is a sign of a cyber-event.
A recent report by the American Public Transportation Association (APTA) found that many public transit systems in major cities have suffered a decline in ridership, with metropolitan areas such as Atlanta and Baltimore seeing decreases of over 10%. Most cities — with the notable exceptions of Boston and New York — have fewer people riding trains, subways and buses, the mode that experienced the largest decline. In comparison, ride-sharing services like Uber and Lyft are seeing significant increases in ridership. Lyft reported a “breakout” 2016, with its ride-hailing service tripling rides from 53 million in 2015 to 160 million in 2016. The question before transit agencies and city officials is now a critical one: what can cities learn from innovative, technology-enabled transit alternatives like Uber and Lyft to reverse the decline in transit ridership?
Start with the rider experience
The rise of apps in the transportation sector means that riders are increasingly able to make decisions based on the mode of transit that provides the best experience. Uber and Lyft put an emphasis on customer experience with their apps, enabling riders to seamlessly plan trips, pay, share arrival times with others and rate drivers. Short of being able to control traffic, these apps have eliminated many sources of transportation friction and anxiety for riders — a goal that every transit agency needs to strive for. By enabling new technologies, transit agencies have the opportunity to offer riders an experience comparable to that of ride-sharing services without being restricted to the roads, which are sometimes subject to uncontrollable conditions like traffic or construction detours.
Rather than viewing ride-sharing apps as a threat, transit agencies should use them as inspiration to increase transit ridership. In its “Shared Mobility and the Transformation of Public Transit” report, APTA found that the more people use shared modes of transportation, the more likely they are to use public transit, own fewer cars and spend less on transportation overall. By integrating transit services with alternative modes of transportation, such as car- or bike-sharing services, and streamlining ticketing through mobile applications, transit agencies can offer smooth and uninterrupted experiences to riders. Transit agencies should replicate the behaviors that have made ride-hailing services successful and integrate technologies that streamline the rider experience, whether by using software development kits (SDKs) or by connecting services via deep linking, creating seamless experiences on people’s mobile phones. Through an SDK, transit agencies can integrate functionality from another app, like mobile ticketing, directly into new and existing applications already used by riders in their city ecosystems. This means that riders would be able to plan their routes using an agency-supplied or widely deployed journey-planning application and then purchase and validate tickets all in a single place. Linking services in this way will allow transit agencies to adapt to the evolving transportation landscape and provide a streamlined travel experience for riders.
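As a rough illustration of the deep-linking approach, a journey planner can hand a planned trip off to a ticketing app via a URL. The “transitticket” scheme and the parameter names below are invented for illustration; a real integration would use the partner app’s documented scheme or its SDK.

```python
from urllib.parse import urlencode


def build_ticketing_deeplink(route_id: str, origin: str, destination: str) -> str:
    """Build a deep link that hands a planned trip off to a ticketing app.

    The "transitticket" scheme and query parameters are hypothetical
    placeholders, not those of any real agency app.
    """
    params = urlencode({"route": route_id, "from": origin, "to": destination})
    return f"transitticket://purchase?{params}"


link = build_ticketing_deeplink("blue-line", "Airport", "Downtown")
# e.g. transitticket://purchase?route=blue-line&from=Airport&to=Downtown
```

Opening such a link on a rider’s phone would drop them directly into the ticket-purchase flow with the trip already filled in, which is the “single place” experience described above.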
Bet on innovative technology
The appeal of services like Uber and Lyft stems from streamlined digital processes, technology and innovation. In March, the popular app Transit, which shows upcoming arrival times for various public transit modes, announced a partnership with Uber. It will allow riders to view public transit options and wait times near their destination directly in the Uber app while they’re riding — eliminating the need to toggle between apps. Technology advances such as open data and integration through open APIs are giving rise to transport innovations that promise a brighter transportation future for all — agencies, commuters and private tech innovators.
Embrace best-of-breed mobile-first technologies
Agencies must adapt to and adopt new technologies that give riders a more streamlined travel experience. Strides have already been made in places like New York City, where underground Wi-Fi was introduced on the subway system, allowing commuters to check their email and send messages in previous service dead zones. Other cities have also updated technology to make the daily commute a better experience. Boston’s commuter rail introduced mobile ticketing, removing the need to wait in line at ticket vending machines to catch the train. Kansas City is in the process of implementing a data collection system for its streetcar line, updating it with information about things like utilities and current conditions. These types of improvements make a big difference in terms of boosting ridership.
Recently, Oakland-based transit authority Alameda-Contra Costa Transit (AC Transit) awarded a contract to Iteris that includes the implementation of transit signal priority, improvements to bus stops, an adaptive signal control technology system and a passenger information system updated in real time. These updates aim to improve the Bay Area transportation system by reducing delays and average travel times while improving schedule reliability. AC Transit prioritized these technological improvements understanding that they would lead to an improved rider experience. Cities experiencing a downward trend can learn from San Francisco, Boston, New York and Kansas City by leveraging mobile-first, best-of-breed technologies that address the end-to-end needs of their riders and promote seamless public transit use — something essential for smart cities to flourish.
Local governments must be involved
In conjunction with innovative transit agencies, it is crucial to have government officials who are equally attentive to the needs of their city’s riders. Understanding where opportunities for improvement lie will allow governments to be at the forefront of strengthening their transportation systems. Recent successes of various transportation systems have stemmed from the support of government agencies, Kansas City being one example. A $15.7 million initiative in 2016 allowed Kansas City to become a smart city leader. Surrounding its 2.2-mile streetcar line are 328 Wi-Fi access points, 178 smart lighting video nodes and 25 smart kiosks, enabling the ongoing collection of data. In February, Kansas City released its open data platform, giving citizens live access to streetcar arrival times, traffic flow and parking availability. Through this data collection, local governments can not only analyze trends, but also drive innovation. Collaboration between transit systems and local governments is imperative to the success of both entities, and to the continued satisfaction of the riders themselves.
A successful city is one that moves its people effectively and efficiently. One way cities can do this is by integrating apps and services for a more seamless commute. It is also crucial to have local governments and transit authorities working together to improve the rider experience. The downward trend in ridership does present a challenge, but also an opportunity for cities to become smarter, innovate and revolutionize rider experiences.
Only 35% of millennials own homes. Millennials are also the ones most likely to adopt new IoT technologies. Still, the apartments in which they live aren’t equipped for this new wave of connectivity. As the on-demand and digital economies continue to boom, millennials will continue to demand greater services from their apartment communities, and to meet these demands, building designers need to change the way they approach multifamily housing.
When you tour a new apartment building, you see similar amenities: a gym, bike storage and maybe even a dog washing station, which many buildings here in Portland, Ore., offer to attract canine-loving renters. While these are all great, they rarely meet the needs of renters’ other daily activities. Building developers need to look at what a millennial typically relies on, such as on-demand services like Amazon Prime or Seamless, and shared services like Lyft. With applications like these becoming more and more popular, renters will no longer be enticed by a gym as a selling point. They’ll be looking for an Amazon drop-off location or a designated ride-sharing zone.
To continue accommodating this new market of renters, building designs will need to change. Some of the biggest amenities impacted will be traditional mailrooms and parking garages. If, in the future, I’ll be receiving packages from Amazon Drones, does that mean an IoT-enabled dumbwaiter will bring my package to my door? Or, if ride-sharing companies truly are focusing on driverless fleets, how long will it be until my parking garage is filled with autonomous Lyft rides awaiting passengers?
The first adaptations for IoT we’re beginning to see may seem minor, but represent extensive projects for building developers. These come in the form of updated smart-enabled door locks, thermostats and light switches, as well as the installation of sensors. Each of these applications takes a major step in enabling the smart apartment.
Say you’re leaving on a business trip and realize you forgot to lock your door. With a simple command, you can quickly tell your smart lock to secure your apartment — anytime, anywhere. With the addition of sensors, your at-home experience changes dramatically. Lights will turn on and off as you move throughout your apartment, or the radio will turn on as you enter the kitchen. Over time, these sensors learn your habits and cater to your preferences, and may even adjust furniture and appliances based on your mood. This creates some fantastic conveniences, including energy-saving benefits such as eliminating electronics left on when you’re not in the room, or maintaining a comfortable temperature for your pet while you’re away at work.
It may sound far-fetched, but it wasn’t all that long ago that the idea of controlling your thermostat from your smartphone seemed more like an episode of The Jetsons than reality. As IoT continues to make its way into our lives, multifamily housing developers need to have the infrastructure in place to meet the demand of renters. It’s only a matter of time before your leasing agent tells you the prospective renter was looking for her unit’s smart thermostat, instead of the building’s dog washing station.
Recently, I came across an article in a newspaper that described a cool healthcare innovation: smart bandages. These bandages will use 5G and wireless networks to track the healing process. To me, this is super interesting, especially because it goes beyond typical healthcare. There are so many applications for this technology to treat injuries of all kinds. Even with today’s modern medicine, you don’t know if an injury is healed until you unwrap the bandage, which usually requires another trip to the doctor to give you the thumbs up.
Imagine if you could unwrap the bandage at home after an app indicates it’s safe to do so. The smart bandage app is continuously connected to a server, collecting, monitoring and correlating data — and gathering results that can be shared digitally with a healthcare provider as needed. Very cool.
However, while this technology has a lot of really interesting applications, it does make me slightly concerned. If healthcare firms and practitioners don’t apply the correct infrastructure to support IoT devices like these, they risk inadvertently exposing their corporate networks and the patient data stored within.
Healthcare tracking … everywhere you go
Years ago, I read a great short story by former Google executive Christian Baudis. This story described a day in the life of a man whose every move was tracked by devices. Life as he knew it was dominated by an environment that looked a lot like what we now call the internet of things, but obviously taken to a sci-fi extreme. For example, in this fictional world, cash didn’t exist; payment was done by a chip attached to human skin.
Another device was a health-tracking scanner that changed the way practitioners managed patient care. The story was more focused around some of the philosophical implications and obvious benefits of such technology. But, as a cybersecurity professional, I thought it was particularly interesting to imagine the people managing this technology.
For example, you would need a lot more IT managers who have crossover experience in human biology and medicine. In the story they were called DQPs, short for “digital quality providers.” But in the real world, how easy would it be to recruit staff who are competent in both areas? I could see a lot of hospitals choosing to train their existing healthcare workers in basic IT skills so they could maintain the scanners. However, this raises concerns about how well they would know the technology. Would they have the same understanding of patient data and security concerns as a fully trained IT professional?
How safe are connected healthcare devices?
If you apply those same learnings to the real world, it raises a number of questions around these new smart bandages. Let’s assume you have a smart bandage and a server that collects every patient’s data and stores it in a hosted data center. Assuming it has the latest protection from outside access, like firewalls, intrusion prevention systems and advanced threat protection systems, it sounds pretty safe, right?
But what about access from within? What if an adversary — a rogue user or disgruntled administrator — uses the LAN to gain access to the server? What if the smart bandage itself is hacked into and the data is manipulated or tainted in some way? A determined hacker could easily access your health records and possibly provide instructions based on incorrect data. That same hacker could even gain access to other sensitive data available digitally on the hospital network.
We’re seeing broad and rapid adoption of IoT devices, especially in the healthcare space, but with a profound lag in cybersecurity precautions. Being able to see, control and manage these connected devices is imperative when it comes to minimizing the vulnerabilities they can create.
IoT is today’s problem
When new technology, such as connected plasters, is announced, it always spawns a lot of discussion around the future and how the technology will impact the workplace. However, I would be remiss not to mention that millions of IoT devices are already installed on healthcare networks around the world, collecting increasingly detailed data on their patients.
It’s time to stop thinking of IoT as a problem for tomorrow. Being able to discover, classify, assess and continuously monitor devices, including personally owned and agentless medical devices, will enable healthcare firms around the world to feel safe in the knowledge that their data is secure. Additionally, regulatory measures, such as HIPAA and HITRUST, increasingly demand that healthcare firms enforce security posture and regulatory compliance policies. Such policies make it possible to notify users, restrict or block access, and automate network segmentation.
The moral of the story: IoT is cool and new use cases and innovations are becoming available daily, proving that Christian Baudis’ vision of a connected world wasn’t very far off. But keep in mind, proper visibility and control of IoT devices is key to keeping networks (and patients) safe.
The internet of things offers such exciting possibilities that it’s difficult for organizations to resist the urge to chase the latest shiny object without thinking through the business case.
Return on investment is what decides if (or when) an IoT deployment moves beyond concept to prototype to production. Without a clear ROI, IoT — or, for that matter, any initiative — may never make an impact. So, how can ROI be calculated for IoT initiatives?
Here are five questions to ask when determining the business case for an IoT initiative:
1. What problem are we trying to solve?
Having a real business problem to solve is the first step to creating business justification. A problem is either a current loss or risk — operational efficiency gaps, manual errors or safety risks, for example — or it is potentially untapped revenue, such as using contextual content to target customers better. One may argue that technologies can open hitherto unknown opportunities, and while true, such opportunities must be projected to translate into reality soon.
Once a problem has been identified, the next step is to validate it with experts. Such validation will either strengthen the case or reveal that the problem isn’t actually critical or significant. Recently, my organization was asked to propose an IoT technology to track asset location in a factory. A quick check with the shop floor revealed that most key assets were fixed, and the actual need was a periodic audit of whether these fixed assets were where they were supposed to be. Our solution subsequently changed to automate these audits instead.
2. How is it dealt with today?
Determining where operational inefficiencies are with current processes is especially important. This will provide a way to calculate the notional value of such losses.
As an example, for a factory, we noted that certain machines were stopped multiple times to reactively fix point problems. We noted that any time the machines had to be stopped and started, waste was generated. This allowed us to calculate the cost of loss. In the earlier factory asset tracking situation, the current method was to have a person do a pen-and-paper audit. The question is whether it’s worth it to fix this problem.
For the cases of revenue uplift, the equivalent is to identify what the competition is doing, if anything.
3. What are the costs of not using IoT?
There are two aspects of “cost” here. One is notional cost — “What could we have saved if we had automated this process?” The second is the actual cost of the process — “It costs us one person’s salary to take readings manually.” Additionally, there is a cost of risk — “If this valve blows, it would cost us $x in productivity loss, and $y in damages.”
The case for predictive maintenance, one of the key use cases for industrial IoT, hinges on these costs; by predicting a failure before it occurs, the cost of that failure is averted. Note that costs of risks are weighted by probability, so a very low-probability risk with a high value is not necessarily critical to a business case.
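The probability weighting can be made concrete with a simple expected-cost calculation. The failure probabilities and dollar figures below are invented purely for illustration:

```python
def expected_annual_cost(probability_per_year: float, impact_cost: float) -> float:
    """Weight a risk's impact by its likelihood to get its expected annual cost."""
    return probability_per_year * impact_cost


# Hypothetical valve failure: 5% chance per year, $200,000 in losses if it blows.
valve_risk = expected_annual_cost(0.05, 200_000)    # $10,000/year

# A rarer but far costlier event can still matter less to the business case:
rare_risk = expected_annual_cost(0.001, 1_000_000)  # $1,000/year
```

On these numbers, the mundane valve failure dominates the business case even though the rare event’s raw impact is five times larger, which is exactly why low-probability, high-value risks aren’t automatically critical.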
For new business opportunities, the cost is that of lost opportunity — “If we had data monetization, we could have an additional revenue of $x”.
4. What will I gain?
It may be straightforward to take the costs and equate them to gains to be had. However, one of the most challenging aspects is attribution — can I attribute all of these gains to this specific IoT initiative? This is most difficult where the scenario is revenue uplift. As an example, we advised a hospitality customer that targeting contextual content based on location and profile could result in an x% increase in purchase likelihood. Unfortunately, purchase likelihood is influenced by many other things as well — an ad campaign, competitor activity and so on. This is why IoT use cases involving operational efficiency are easiest to justify, while consumer-related ones are far more difficult. In the above example, we couldn’t go beyond the concept phase.
5. What will this cost?
IoT deployments have two cost components. One is upfront: solution build, procurement — both hardware and software — and underlying infrastructure. The other is the recurring cost of running the deployment: infrastructure, software and hardware maintenance, and communication. Cloud-based infrastructure helps reduce the capital expense of server hardware, but the costs of establishing local networks and so forth would still be incurred. Comparing the total cost of both components with the gains over, say, a two-year period then indicates whether there is indeed a business case.
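A back-of-the-envelope version of that two-year comparison might look like the sketch below; all of the dollar amounts are made up for illustration:

```python
def business_case(upfront: float, recurring_per_year: float,
                  gains_per_year: float, years: int = 2) -> float:
    """Net gain of an IoT deployment over the evaluation period.

    A positive result suggests there is a business case; a negative one
    suggests the deployment won't pay for itself in the period.
    """
    total_cost = upfront + recurring_per_year * years
    total_gain = gains_per_year * years
    return total_gain - total_cost


# Hypothetical deployment: $120k to build, $30k/year to run, $100k/year saved.
net = business_case(upfront=120_000, recurring_per_year=30_000,
                    gains_per_year=100_000, years=2)
# net = 200,000 - 180,000 = 20,000 over two years: a (thin) business case.
```

The same arithmetic run with a shorter horizon or higher recurring costs flips negative quickly, which is why the choice of evaluation period matters as much as the estimates themselves.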
The upfront cost of deployments, combined with IoT adoption being nascent in various sectors, increases the perceived cost of failure and reduces the possible gain. One of our customers was looking to monitor coolers remotely. The cost of the cooler was less than $500 — the depreciated value would be even less — and the cost of the monitoring hardware was about $50, or 10% of the cooler’s cost, which was unacceptable in this scenario.
Customers are looking for a fully managed, operational expense-led model, in which the vendor takes on the upfront investment and recovers it over time. This is possibly more expensive in the longer run, since the vendor may bake in the cost of taking on risk, interest on capital and so on. However, the risks for customers are mitigated, while the risks for the vendor are correspondingly increased.
From proof of concept to production
While ideally a business case should be established first, customers sometimes look to start with a proof of concept or a pilot. That’s okay, but you should at least run a back-of-the-envelope calculation to make a quick go/no-go decision first.
Given their urge to adopt IoT, enterprises must exercise caution to ensure that there is adequate justification for its adoption. Otherwise it will remain nothing more than a nice demo showpiece.
With the ever-expanding landscape of the internet of things, we are now in an environment where every semiconductor and chip IP vendor is, or soon will be, launching their own “security” chip. No matter the type of IoT product, all security is moving deeper into hardware, and ultimately down to the silicon layer. In order to explain how we arrived here, it’s necessary to first take a look back.
GPC chips to ASICs
A good place to start is general-purpose computing chips (GPC chips). One of the biggest purveyors of GPC chips is Intel, followed by AMD. These chips exploded in popularity in the 1990s because they could do everything well; consequently, billions of them were sold. But over time, especially as products began shrinking in size during the last decade, there was a gradual shift towards application-specific integrated circuits (ASICs). (Make no mistake, Intel did, does and will continue to sell millions of general-purpose processors. However, its meteoric rise of the 1990s and 2000s has been tempered of late, and its IoT and small-device forays, such as the Edison line, have failed.)
Also known as specialty chips, ASICs rose in popularity as companies realized they didn’t need very powerful GPCs, but rather only parts of them to perform basic tasks. As a result, these smaller chips became more and more common. Then, as early IoT devices were introduced, device makers found these chips ideal due to their smaller size, lower power consumption and lower cost, along with the fact that they could be produced at mass scale. ASICs fit the bill well.
This shift also led to the next phase of specialty semiconductors: power control chips from Infineon, graphics chips from Nvidia, automotive chips from NXP and so on. Soon after, manufacturers began creating specialized chips geared towards security. Mobile was also growing steadily at this time, further pushing the need for smaller chips as phones gradually added more varied sensors and capabilities.
Secure silicon and the supply chain
The market has now reached a point where most mature and respected semiconductor companies want to have a security play. For example, Infineon makes Trusted Platform Modules while other companies, like Renesas, produce secure microcontroller units (MCUs). This is a fascinating evolution: the industry began with security software running on general-purpose chips, then slowly moved down the layers to secure MCUs capable of tasks such as key generation, secure key storage and boot verification. Originally, these security functions were relegated to software; now the MCU handles them natively through APIs.
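As a rough illustration of the boot-verification flow such an MCU performs, the sketch below compares a firmware digest against a trusted value. Real parts typically verify an asymmetric signature in hardware; the bare hash comparison and firmware bytes here are hypothetical simplifications.

```python
import hashlib

# Simplified sketch of boot verification: compare a digest of the firmware image
# against a trusted value held in protected storage before allowing boot.
# Real secure MCUs typically verify an asymmetric signature in hardware; a bare
# hash comparison is used here only to illustrate the flow.
TRUSTED_DIGEST = hashlib.sha256(b"firmware-v1.2 image bytes").hexdigest()

def verify_boot(firmware_image: bytes) -> bool:
    return hashlib.sha256(firmware_image).hexdigest() == TRUSTED_DIGEST

assert verify_boot(b"firmware-v1.2 image bytes")  # untampered image boots
assert not verify_boot(b"tampered image bytes")   # modified image is rejected
```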
Companies like Xilinx also have security capabilities within more advanced chips, field-programmable gate arrays, while STMicroelectronics is releasing products like its ST-Safe line.
We are also now witnessing increased interest in secure memory. Consequently, products like Micron’s Authenta will be natively capable of various security functions, such as health monitoring, alongside previously mentioned capabilities such as secure key storage.
Thus, we have now reached the point where the industry is talking about secure silicon, a space where companies like Intrinsic-ID play a leading role. Silicon-rooted security will be used to anchor everything on top of it. As a result, you will be able to trust your silicon chip and move all the way up to applications, as well as uniquely identify devices at the hardware level.
However, as with any important development, there are also inherent risks. In this case, IoT security chip makers will need to take great pains to understand where they source their silicon. Consequently, a trust chain — and, in this case, a trusted supply chain — will be critical to ensure authenticity.
Trust chains or anchors must be strong as well as neutral. Some trust anchors are hardware based while others are rooted in software.
Some of the device hacks we are seeing today could be avoided. Implementing strong security standards early on is an important step to avert future attacks. It’s worth noting that companies don’t need to install expensive chips on every device. As long as they are doing what they can to secure their devices, it will make it that much harder for hackers to be successful. And with the new IoT Cybersecurity Improvement Act of 2017 taking shape, the reality is that compliance and regulation are not too far down the road. The smart thing to do is to stay ahead of the curve and build security into the product design.
PKI’s important role in the IoT security chip
A critical element of successful IoT security chips will be public key infrastructure (PKI). All IoT devices with these chips will require a strong identity, which will then be used for secure authentication. Devices will need to prove they are who they claim to be. They will even generate their own identity and store it safely, courtesy of PKI. In addition, it is conceivable that every device will have a certificate to prove its trustworthiness. And that is one of the biggest goals of the internet of things: to create a trustworthy global system of systems. In that world, the chances of unauthorized access will be greatly reduced.
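A minimal sketch of such an authentication exchange is a challenge-response protocol. A real PKI device would sign the challenge with its private key and present a certificate; because Python’s standard library has no asymmetric cryptography, HMAC with a per-device secret stands in here, and the provisioning step is hypothetical.

```python
import hashlib, hmac, secrets

# Challenge-response device authentication, sketched with a symmetric stand-in
# for the asymmetric signature a PKI-enabled device would produce.
device_secret = secrets.token_bytes(32)  # identity key provisioned at manufacture

def device_respond(challenge: bytes, key: bytes) -> bytes:
    """The device proves possession of its identity key."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def server_verify(challenge: bytes, response: bytes, key: bytes) -> bool:
    """The server checks the response in constant time."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = secrets.token_bytes(16)  # fresh nonce per authentication attempt
response = device_respond(challenge, device_secret)
```

A fresh nonce per attempt prevents a captured response from being replayed later.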
The chip has come a long way in the last several decades. It has gone from being massively powerful but also power hungry, to gradually dwindling to miniature size and form. Now, as we enter the age of the IoT chip, billions of devices and machines will be connecting with one another. It may take several years, but hopefully over time most device-makers will ultimately choose a secured IoT chip, ensuring that only approved devices and systems are communicating with each other. By doing so, they will be taking a bold step to certify the security of their devices — and ensure a victory against attackers.
Most of you have heard of the internet of things, the catchall phrase for common devices, such as routers, cameras, printers, refrigerators, door locks and so on, that have been enabled by their manufacturers to be controlled or to communicate over the internet.
Most of you have also probably heard that the security of most of these devices is compromised, resulting in spectacular attacks from bad actors, such as the Mirai malware that hit the Dyn DNS service provider and subsequently affected service to the Krebs on Security website in September 2016. Use of connected devices is expected to grow exponentially — and so will the security problem.
Now, the U.S. Senate also recognizes the problem and is trying to do something about it. The Internet of Things (IoT) Cybersecurity Improvement Act of 2017 is a bill before the U.S. Senate that seeks to improve the security of internet-connected devices.
What is this proposed bill, what does it do, how does it affect me, will it work and should I support it? We hope to shed light on these questions here.
The bill was introduced by Senators Cory Gardner (R-Colo.) and Mark R. Warner (D-Va.), co-chairs of the Senate Cybersecurity Caucus, and Senators Ron Wyden (D-Ore.) and Steve Daines (R-Mont.). According to one article, drafters of the bill worked with the Atlantic Council and the Berklett Cybersecurity project of the Berkman Klein Center for Internet & Society at Harvard University.
The bill defines IoT devices broadly; basically, any device that is connected to and uses the internet is an IoT device. You may be thinking that the bill would put requirements on manufacturers of such devices, but it does not; it takes a different tack. In short, the bill directs government agencies to include certain clauses in their contracts that demand security features for any internet-connected devices that will be acquired by the U.S. government. The bill outlines what these clauses are and how a waiver to these requirements can be had.
The bill further goes on to amend the Computer Fraud and Abuse Act (CFAA) and the Digital Millennium Copyright Act (DMCA) to exempt researchers “acting in good faith” and “acting in compliance with the guidelines.” The bill tasks the Office of Management and Budget (OMB) to work with other agencies, such as the National Institute of Standards and Technology (NIST), in setting such guidelines, as well as guidelines to be followed in the clauses added to government contracts as stated above.
The sponsors of the bill should be applauded for trying to tackle the security problems that the internet faces due to many of our internet-connected devices. They recognize that a problem exists and seek to rectify it through legislation.
There are obvious limitations and exceptions, but no other legislation that we are aware of comes close to trying to improve the security posture of such devices.
You can read the bill itself; it is probably no more than a half-hour read and is understandable. But you should look up what is meant by “executive agency” and the CFAA and DMCA. Below is a summary and paraphrasing of some of the details of this bill.
The bill mandates that government agencies buying such products include in their procurement contracts clauses that specify:
- The contractor (the entity selling the IoT device) provide written certification that:
- The device does not contain any known security vulnerabilities or defects that are listed in the NIST database of vulnerabilities or other such national database;
- All components are capable of being updated securely from the vendor;
- It uses only industry standard protocols and technologies; and
- It does not include any fixed or hardcoded credentials used for remote administration, delivery of updates or communication.
- That the contractor will notify the purchasing agency of any known security vulnerabilities or defects subsequently disclosed to the vendor by a security researcher or of which the vendor otherwise becomes aware for the duration of the contract.
- Software or firmware components can be updated or replaced, consistent with other provisions of the contract, in order to fix or remove a vulnerability or defect in the component in a properly authenticated and secure manner.
- A contractor requirement to provide repair or replacement in a timely manner in respect to any new security vulnerability discovered through any of the “national databases,” or from the coordinated disclosure program.
- A contractor requirement to provide the purchasing agency with information on the ability of the device to be updated, such as:
- The manner in which the device receives security updates.
- The anticipated timeline for ending security support.
- The formal notification when security support has ceased.
- Any additional information recommended by the National Telecommunications and Information Administration.
Exceptions may be granted if the executive agency reasonably believes that the device has “severely limited functionality” as defined. Within 180 days after enactment, NIST shall define what this means.
Exceptions also exist for existing third-party security standards for devices that provide an equivalent or greater level of security than that described above. These must be NIST accredited.
The same exceptions are available where agency security evaluations standards already exist.
The bill requires that not more than 180 days after enactment, the head of each executive agency establish and maintain an inventory of IoT devices used. OMB is instructed to issue guidelines for the agencies for this inventory no later than 30 days after enactment and to work with the secretary of Homeland Security to do this.
The last paragraph mentions that the director of NIST ensure that NIST establishes, maintains and uses best practices in the identification and tracking of vulnerabilities for purposes of the National Vulnerability Database of NIST.
Areas where the bill misses
What follows is the opinion of ISE and is based on certain assumptions:
- The bill only applies to vendors that sell to the U.S. government. The hope is that by using the purchasing power of the federal government there will be spin-off from the manufacturers to provide the same level of security to consumer-grade products. However, one has to ask: Are the IoT devices that the government uses the same devices that are sold to consumers? This is likely the largest area where the bill falls short. Many of the largest cyberattacks on the internet have leveraged vulnerabilities from internet-connected consumer devices. Without fixing these types of consumer devices, large attacks on and from the internet will persist.
- NIST and other government agencies will be responsible for tracking in a database vulnerabilities that pertain to internet-connected devices. Nothing is said about funding such an effort. These databases should be publicly searchable; however, if search is not robust enough and easy enough for vendors and also consumers to use, it is possible that vulnerabilities will be missed. We are not saying that this is likely, but the possibility is there. In general, we feel this requirement is a good move, but the details need to be worked out. Note that databases of vulnerabilities do exist, but the type of database considered here deals specifically with internet-connected device deficiencies, and not such general things as cross-site scripting vulnerabilities which can be found on many webpages, for example.
- Exceptions and waivers are allowed. Each executive agency has sole discretion on whether to allow such, compliant with the wording of the bill. However, the bill defers mostly to the executive agencies. There is the possibility that security could take a back seat to convenience under this clause of the bill. Perhaps more verbiage and standards in this part of the bill could take away some of this discretion.
- There are no liability or criminal penalties associated with this bill. The incentive is dollars from the government. While this is a huge incentive, not all internet-connected devices are suitable for government use. What incentives are there for a vendor that supplies an insecure device used by hundreds of thousands that is leveraged in an internet attack on, say, a bank?
- It does not address cooperation with other countries in keeping the internet safe from insecure internet-facing devices. Stipulation of such types of requirements for treaties with other nations going forward could go a long way in helping to stop the worldwide problem of insecure devices. Let us be very clear about this: The problem of insecure internet-connected devices is not a national problem, it is a global problem.
- There are no certification or due-process requirements on manufacturers when developing their products. We feel that the bill should further include provision for at least (this list is not exhaustive):
- A security model (stated threats and assets being protected)
- Risk analysis
- Design methodology
But then, this bill concerns contractors more than manufacturers — another failing of the bill. By targeting manufacturers, rather than just the vendors that sell to the U.S. government, a stronger possibility exists that manufacturers will build in security, not just for devices sold to the U.S. government, but also devices sold to average consumers. See the section below on “manufacturer’s impact.”
Security research impact
ISE puts a strong emphasis on research and we are pleased to see that the bill has tried to ease some of the verbiage from older statutes that affect the private industry in this regard.
The bill directs the National Protection and Programs Directorate in “consultation with cybersecurity researchers and private-sector industry experts” to issue guidelines for each agency for internet-connected devices in use by the U.S. government regarding “cybersecurity coordinated disclosure requirements that shall be required of contractors providing such devices.” This will include policies and procedures for conducting research. This is mandated to be based, in part, on ISO 29147 (or any successor standard), which concerns vulnerability disclosure. It also requires that the research be done on a device of the same class, model or type of device that will be or is used by the government and not on the actual device that is in use (so, no attacking the White House’s router, for example, in the interest of research).
The bill amends the Computer Fraud and Abuse Act, 18 U.S.C. § 1030, by adding a new subsection that says this section does not apply to persons who, in good faith, research the cybersecurity of an internet-connected device of the class, model or type provided by a contractor to a department or agency of the U.S. and acted in compliance with guidelines to be issued.
The bill further amends the Digital Millennium Copyright Act, 17 U.S.C. Ch. 12 sections 1203 and 1204, to say pretty much the same thing as for the CFAA above. The two sections mentioned deal with civil liabilities and criminal penalties, respectively.
We see some problems with the wording. For example, say an internet-connected door lock commonly used in households is being tested for vulnerabilities. It is not clear that such research is protected if the same type of door lock is not used by the U.S. government. The federal government tends to use much more secure and expensive door locks than are commonly found in private homes; yet is not the security of home locks important too?
Barring exceptions and waivers, this bill mandates that security be built into contracts for buying internet-facing devices. In effect, this should drive widespread adoption across the government.
Unfortunately, we do not see this spreading to the private industry, except perhaps where the device inventories intersect. While we could be wrong, we expect that intersection to be small.
Manufacturers are only indirectly impacted by this bill. The bill is primarily directed at contractors that sell to the U.S. government rather than the makers of the devices. The hope is that dollars will influence behavior of these manufacturers. Those that take security seriously will be rewarded by being able to sell to the U.S. government.
It is great to see that the contractual requirements address not just products themselves, but the whole lifetime that devices are in use, from procurement through retirement. Security is not a “one-time effort and it’s secure” sort of thing — it requires continuous testing and change; it is an ongoing process.
If there is one gap in this part of the bill, it is the lack of requirements during product development, as mentioned above. Threat modeling, enumerating trust assumptions and exploring custom attack vectors would force developers of internet-connected devices to take an adversarial view. This perspective should be maintained throughout the lifetime of the product.
Adopting the adversarial view means engaging third-party security experts to evaluate security on a frequent, periodic basis, say every six months at a minimum, and before updates to the device are rolled out.
If manufacturers are forced to provide for the security of devices they sell to the government, then it is possible this culture will carry over to their consumer-grade devices. That is the best-case scenario, however.
We also recognize that a balance is needed between legislation and manufacturer innovation. Mandating security directly of the manufacturers is more heavy-handed than the approach currently taken in the bill.
The last thing that should be mentioned about manufacturers: While the security posture of many IoT devices that we have tested is poor, ISE has found that a few vendors do take security seriously. We are hopeful that this bill will result in rewarding those particular manufacturers while weeding out those for whom security is an afterthought, or for those who just pay lip service to the idea with misleading jargon such as “military-grade encryption” or “bank-vault security,” as if security were a one-shot deal.
Connected end-user impact
Our prediction is that this bill will not impact the average consumer much, if at all. Unless the types and models of devices that the government uses are the same as consumer devices, we do not see manufacturers paying attention to security of a very large portion of the market: the consumer market.
We see U.S. government agencies buying devices such as large internet routers and telecommunications equipment, for example, and not an internet-connected toaster. However, a league of internet-connected toasters can still be leveraged to DDoS an internet provider.
ISE is working together with principals of other commercial interests, such as the entertainment and hospitality industries, in setting standards for security. As with consumer devices, we feel this bill will only affect such industries indirectly, as only those devices that are sold to the U.S. government are covered.
However, the guidelines that the bill mandates that NIST and other government agencies develop could be an exemplar for standards of other groups. It will be interesting to see how much of this legislation spills over into other commercial standards.
As far as implications abroad, it is hard to say. In many cases of technology, the world follows the United States. This may come in two areas:
- Similar legislation may be prompted by this bill in other countries; and
- The same devices that the U.S. government buys will be bought in other countries.
The extent to which this will happen is unknown.
While there are shortcomings to the bill, we feel that it is a step in the right direction. It is the first bill that we know of to address internet-facing devices specifically. It also addresses some shortcomings of the CFAA and DMCA in terms of bona fide research.
Bear in mind that things may change before the bill is voted on. It should be interesting to follow this bill as it makes its way through the United States Congress, and possibly to its signing into law by the president.
The country’s critical infrastructure is made up of massive and sprawling elements of concrete, wood, metal and other man-made and natural materials, many of which were engineered to withstand a wide range of threats, from severe natural disasters to nuclear war. However, over time, with increased usage, deferred maintenance and the threat of even greater man-made and natural disasters, much of the country’s infrastructure is at a breaking point. This includes dams, bridges, roads, highways, electric grids and pipelines. The industrial internet of things now offers a near-term solution to manage these assets as the country and policymakers debate how and when to invest in upgrading and improving our infrastructure. Yet, these new technologies used to manage stressed infrastructure, if not secured properly and in the wrong hands, have the potential to turn the critical infrastructure against its human users.
Advances in industrial wireless data communications have created an enormous opportunity to expand operator reach to even the most remote location with immediate response times. However, the adoption of these new technologies, applications and cloud-based services comes at a potentially steep price if they require sacrificing quality and security of the applications and networks they are controlling. These sacrifices could cost us more than just a minor inconvenience, as a security breach of our critical infrastructure communications systems could be devastating to the health and well-being of the general public.
Consider, for example, the automated sensor controls surrounding a nuclear power facility, used to ensure the security of individuals both within and surrounding a specified radius around the reactors. Or consider the power grid, which provides critical electricity — the lifeblood of the modern economy — to the country’s 323 million residents. Hacker activity directed towards these sectors doesn’t need to be sophisticated to have a significant impact. Even a minor distributed denial-of-service attack aimed at disrupting communications to and from a single utility substation could have devastating consequences for health and safety.
The good news is that automating critical infrastructure doesn’t necessarily have to directly correlate to substantially greater security risks. When automating any network for critical infrastructure operations, one should consider the following aspects to ensure security and quality are not compromised:
Off the shelf is easier, but that doesn’t make it better
Surprisingly, “off the shelf” Wi-Fi and cellular networks are becoming more prevalent for data communications supporting critical infrastructure — often based on short-term expediency. These technologies may work for your home automation camera or front door lock, but when it comes to the security and quality of service required for real-time industrial data communications, they don’t pass muster. These products are designed for mass adoption and purposely lack the security and quality of service features required for industrial networks. Industrial networks require unbreakable wireless connectivity often over remote areas in challenging radio frequency environments. This requires using specialized licensed radio frequencies designed for coverage over capacity.
VPN operations over public networks are too close to hacker reach
If you’re working over public networks, even via a virtual private network, the likelihood is your communications aren’t completely private and are still exposed to security and quality-of-service disruptions. One solution is for the industrial operator to deploy its own private wireless data network using licensed radio frequencies in specialized bands that are available on an exclusive basis. This is an excellent option for an operator (for example, an electric utility company) that has scaled operations over a large area (either multiple counties or a state level). For industrial customers that do not have the necessary scale, working with a private network operator dedicated to mission-critical operations is another option. These types of networks are now emerging to address the significant need.
Data is the backbone and currency of the internet of things. Whether it’s a sensor generating temperature data, a sock tracking a baby’s vitals or a vending machine that sends alerts when products are low, IoT data needs to be transmitted, processed, secured and potentially stored. One thing is clear: The choice of which data to capture and what to leave behind is a strategic business decision that widely varies depending on the company, business goals and industry. Despite how a business deals with this data, cloud storage is the way forward and a key enabler for the growth and continued innovation in IoT.
Cloud storage is gaining momentum in business overall, and with wider accessibility and lowered costs, the cloud is becoming an option to manage the copious amounts of data created by IoT. But even with lowered costs, the sheer volume of IoT data that needs to be stored, transported and processed can quickly drive up the costs beyond the budgets of many enterprises. There are different schools of thought on how to approach this, and embracing computing “on the edge” is one solution many are considering when tackling the data storage issue, particularly for industrial IoT applications.
Processing data closer to its source, at the edge or fog layer of the network rather than deep in the cloud, is a more cost-effective alternative that allows most data management and processing to happen before anything is transmitted. Edge, or fog, computing complements the cloud storage strategy by reducing transit and storage costs: repetitive tasks and computations are completed on network devices, and only key data and anomalies are transported back to the core network in the cloud.
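The pattern can be sketched as a simple filter at the edge: summarize routine readings locally and forward only the anomalies. Thresholds and readings below are hypothetical.

```python
# Edge/fog sketch: summarize routine readings locally, forward only anomalies.
readings = [21.0, 21.2, 20.9, 35.7, 21.1, 21.0, 36.2]  # e.g., hourly temperatures

def edge_filter(values, low=15.0, high=30.0):
    anomalies = [v for v in values if not (low <= v <= high)]
    summary = {"count": len(values), "mean": round(sum(values) / len(values), 2)}
    return summary, anomalies  # upstream traffic: one summary plus the outliers

summary, anomalies = edge_filter(readings)
# anomalies == [35.7, 36.2]; the other five readings never leave the edge
```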
Shifting IoT data to the cloud presents other challenges beyond just cost. To make data and costs manageable from a business perspective, there are several important questions that need to be asked:
- Do I want a single-source strategy for my data storage?
- How — if ever — do I get my data out of the cloud or switch it to another cloud, should I ever want to?
- How dependent is my business model on that one cloud provider?
Regardless of what cloud strategy a business decides to take, it needs to ensure that the device, and the data collected from it, remains safe. The most reliable way to manage the point of cloud vendor dependency is to always encrypt data before moving to the cloud, and keep the keys separate.
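A minimal sketch of encrypt-before-upload with separate key custody follows. The SHA-256 counter-mode keystream is purely illustrative, since Python’s standard library has no AES; production systems should use an authenticated cipher such as AES-GCM from a vetted crypto library.

```python
import hashlib, secrets

# "Encrypt before upload, keep the key elsewhere." The keystream construction
# below (SHA-256 in counter mode) is an illustration only, not production crypto.
def keystream_xor(key: bytes, data: bytes) -> bytes:
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

key = secrets.token_bytes(32)                        # stays in the key store/HSM
ciphertext = keystream_xor(key, b"sensor batch 42")  # only this goes to the cloud
plaintext = keystream_xor(key, ciphertext)           # decrypt = re-apply keystream
```

Because the cloud vendor only ever sees ciphertext, switching providers or revoking access reduces to managing the keys you kept separately.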
Data exploitation, storage and security remain issues
Storing and accessing the growing amounts of IoT data is quickly becoming a major issue. As companies leverage the cloud to store their data, cleaning out the cellar and moving it all offsite brings additional security concerns around storing and accessing that data. Unfortunately, many companies do not encrypt IoT data before moving it to the cloud.
According to 2017 survey data collected from Altman Vilandrie & Company, 46% of IoT security buyers have experienced an IoT-related security intrusion or breach in the last two years, which seems to suggest that while traditional cybersecurity is taking a front seat for most industries, it’s still an afterthought for IoT.
A quick look at headlines shows there has been no shortage of high-profile data breaches over the last few years, with a few racking up millions of dollars in costs. Considering how many devices are being connected in homes, businesses and hospitals, utilities and other critical infrastructures, we are widening the paths of opportunity for malicious hackers, which is why it’s so important to encrypt the IoT data before moving it into a cloud environment.
The most recent Petya and WannaCry ransomware attacks that spread globally prove that just one infected machine or system has the potential to halt production and shut down an entire factory or other critical infrastructure. If these attacks are any indication of things to come, then the industry must find better solutions to address cybersecurity before these connected IoT devices become the next attack path.
It’s all in the “keys”
Data and device security in IoT continues to be a major issue, as organizations of all sizes struggle to find the right security regimen to meet their needs. However, if the most important first step to keeping IoT data safe in a cloud environment is ensuring it’s encrypted, then the second key step is ensuring that the keys to that data are kept separate.
Many companies in the IoT ecosystem are starting to realize how important it is to implement robust encryption policies that include high-quality cryptographic keys. Strong encryption results from strong cryptographic keys generated from a quality source like a hardware security module (HSM), which can be used for the creation, storage and management of cryptographic keys. By keeping the keys separate from where the rest of the IoT data is being stored, companies are effectively safeguarding who has access to the data. In many cases, this also means restricting access to the vendor tasked with storing the IoT data.
There are a few established security and cryptographic protocols that can easily be adapted to meet the needs of IoT manufacturers, including:
- Key injection — Ensuring the secure transfer of data is a critical industry priority. Using an HSM equipped with a true random number generator, companies can inject individual digital keys into semiconductors during production. With each unique key, the connected device or thing is given a “digital identity” that authenticates it throughout its entire lifecycle, from creation all the way into the consumer’s home.
- Code signing — It’s fundamentally important that software code is delivered from its developer to its precise destination intact and unaltered. By signing the code with a private key during development and embedding the matching public key in the device, this small step goes a long way toward ensuring that the code is both genuine and correct. If an IoT device receives software whose signature doesn’t verify against the key embedded in its system, that code is automatically rejected, safeguarding the overall system from a breach or attack.
- Authentication as the basis for access — As IoT systems grow and need to communicate with each other, they may need improvements and updates along the way. One way to confirm the safety of this process is by ensuring proper authentication practices are put in place. Only those who have the digital key can make changes to the system — for example, to download necessary software updates or upgrades. For maintenance work by service staff, access can be secured using a public key infrastructure.
- Hardware security modules — Enabling the secure communication of IoT data is a critical industry priority considering the billions of devices that will be connected wirelessly by 2020. That data should only ever be stored in an encrypted database. The cryptographic key material should then be managed and stored physically separated from that database in an HSM. This protects the data against unauthorized access, even if database contents get into the hands of cybercriminals.
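The code-signing check described above reduces to a simple rule on the device: verify the tag over the exact image, or refuse to install. A hedged sketch follows; real code signing is asymmetric (a private signing key at the developer, a public verification key on the device), but to keep this runnable with only the standard library, HMAC is used here as a symmetric stand-in. The function names and the factory-injected key are illustrative, not from any real SDK.

```python
import hashlib
import hmac

def sign_firmware(signing_key: bytes, firmware: bytes) -> bytes:
    # Developer side: produce a tag over the exact firmware image.
    return hmac.new(signing_key, firmware, hashlib.sha256).digest()

def install_update(device_key: bytes, firmware: bytes, tag: bytes) -> bool:
    # Device side: reject any image whose tag doesn't match the key
    # injected at manufacture; compare_digest avoids timing leaks.
    if not hmac.compare_digest(sign_firmware(device_key, firmware), tag):
        return False  # not genuine -- automatically rejected
    # ... flash the image ...
    return True

key = b"factory-injected-device-key"   # hypothetical key-injection output
fw = b"firmware v2.1"
tag = sign_firmware(key, fw)
assert install_update(key, fw, tag) is True
assert install_update(key, fw + b"x", tag) is False  # tampered image rejected
```

A single flipped byte in the image or the tag fails verification, which is exactly the property that blocks a malicious over-the-air update.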
Keep the keys and encrypted data separate
HSMs play a crucial role in keeping your IoT-generated data safe while it is stored in the cloud. HSMs not only generate the keys, but provide a secure wrapper around the master keys, all in a secure and tamper-proof setting. Previously, using an HSM meant purchasing and maintaining a physical device on premises — either as a PCI card or a rack-mounted appliance. Like other business services that have moved to the cloud, you are now able to access the full benefits of an HSM through the cloud via an HSM as a service model.
One clear message for storing secure data in the cloud is to keep your encryption keys and encrypted data separated. Two of the largest global cloud service providers offer both cloud storage and HSM services. A one-stop shop — two services and one bill — might sound like a great idea, but it introduces risk. For example, if the master keys are stored with the encrypted data and the cloud service is hacked, those master keys may be compromised as well. If cybercriminals have the master keys, they have access to your data. Likewise, HSM as a service typically has a multi-tenancy architecture, where multiple customers are served through secure partitions on the cloud HSM. The potential issue arises if that particular server is subpoenaed or seized by legal authorities due to the actions of another customer. The cloud service provider may comply with the subpoena without your authorization, allowing others to have access to your keys and secure data.
Companies are best served by retaining full control of the keys that safeguard their data, storing them in an HSM either in the cloud, away from their data storage provider, or on premises in a physical HSM they control. Once the cryptographic keys are secure in a separate cloud HSM or on-premises HSM, not even the vendor storing the IoT data will have access to the keys.
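The key-separation arrangement can be illustrated with a short envelope-encryption sketch. Everything here is a simplified assumption for the sake of a self-contained example: `ToyHSM` stands in for a real HSM (whose master key never leaves the hardware), the XOR-based wrap stands in for AES Key Wrap, and `toy_encrypt` stands in for a real AEAD cipher. The point is the data flow: the storage vendor receives ciphertext plus a wrapped data key, and neither is usable without the master key held elsewhere.

```python
import hashlib
import hmac
import secrets

class ToyHSM:
    """Simulated HSM: the master key never leaves this object."""
    def __init__(self) -> None:
        self._master = secrets.token_bytes(32)

    def wrap(self, data_key: bytes) -> bytes:
        # Toy wrap via XOR; real HSMs use AES Key Wrap (RFC 3394).
        return bytes(a ^ b for a, b in zip(data_key, self._master))

    def unwrap(self, wrapped: bytes) -> bytes:
        return bytes(a ^ b for a, b in zip(wrapped, self._master))

def toy_encrypt(key: bytes, msg: bytes) -> bytes:
    # Toy XOR cipher from an HMAC-derived pad; use AES-GCM in practice.
    pad = hmac.new(key, b"pad", hashlib.sha256).digest() * (len(msg) // 32 + 1)
    return bytes(m ^ p for m, p in zip(msg, pad))

hsm = ToyHSM()                      # lives with you, not the storage vendor
data_key = secrets.token_bytes(32)  # fresh key per stored object
record = b"meter-0042: 4512 kWh"
stored_blob = toy_encrypt(data_key, record)  # sent to cloud storage
stored_key = hsm.wrap(data_key)              # also sent, but useless alone
# The vendor holds stored_blob and stored_key, yet can decrypt neither.
assert toy_encrypt(hsm.unwrap(stored_key), stored_blob) == record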
HSM as a service is also helping smaller cloud service providers remain competitive in an increasingly crowded market. By providing a reliable service for end-customers looking to move business-critical data to the cloud, these smaller companies no longer have to worry about building their own in-house offerings from the ground up. Instead, they provide their customers with the flexibility and accessibility of a secure cloud HSM as a service, solving what was once a major hurdle for cloud adoption.
Overall, the future of IoT looks brighter than ever. With many more connected devices in development every day, there’s no stopping its proliferation into our homes, streets and even cities. Unfortunately, as industries trust important parts of their business to software-enabled systems or services, such as connected cars, smart energy distribution grids or electronic payment infrastructures, the potential for abuse is a given.
Many IoT manufacturers are realizing how important it is to implement strong encryption policies with high-quality cryptographic keys generated by a true random source. Whether that’s through basic digital signatures or taking a bigger step toward encrypting all the data, securing IoT is imperative if companies hope to protect their customers, their data and their reputations. It’s certainly where hardware security modules can best be used, and it’s only a matter of time until the rest of the industry has adopted this technology.