Let’s face it — the hype surrounding the internet of things and how it would magically transform your home into an oasis of seamless automation hasn’t exactly delivered. Consumer products like Google’s Nest and Amazon’s Echo have yet to fully live up to the promise of the connected home. But don’t be so quick to knock IoT in general; it’s becoming more advanced every day. There are plenty of devices and machines — backed by millions of lines of code — that apply the same automation in less visible, but no less impactful, corners of the world.
Industrial spaces, in particular, are ahead of the IoT adoption curve, applying networked sensors and systems to the benefit of their facilities and organizations. From heat maps that provide insight into occupancy and traffic patterns within a warehouse or factory, to asset tracking technology that helps monitor inventory and equipment via sensors, intelligent systems within industrial spaces are invaluable tools. Simply Google “industrial internet of things (IIoT)” or “industry 4.0” and you’ll quickly understand just how prevalent and influential these systems can be.
This has paved the way for IoT infiltration in the “smart workplace.” Maybe (okay, probably) even yours, regardless of whether you work at a desk in a high-rise building or on the floor of a manufacturing plant. So, what makes workplaces ripe for the IoT revolution?
Data — and lots of it
Similar to a warehouse or other industrial environment, the office is a hive of activity and patterns, both obvious and unrecognized. People come and go, creating an ebb and flow around conference rooms, printers and other destinations, all of which can reveal something more meaningful about the state of the business at large. When wired for IoT, a building can track these behaviors, providing all sorts of data and applying analysis to turn that raw information into office-wide intelligence that improves productivity, engagement and safety — as well as the bottom line.
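As a sketch of how raw sensor events become office-wide intelligence, the snippet below aggregates motion-sensor events into a per-zone occupancy heat map. The event format and zone names are illustrative assumptions, not any particular vendor's API:

```python
from collections import Counter

def occupancy_heatmap(events):
    """Aggregate raw motion-sensor events into per-zone occupancy counts.

    Each event is a (zone, timestamp) pair; the format is an
    illustrative assumption, not a specific product's API.
    """
    return Counter(zone for zone, _ in events)

events = [
    ("conference_a", "09:00"), ("printer_bay", "09:02"),
    ("conference_a", "09:05"), ("conference_a", "09:30"),
]
heatmap = occupancy_heatmap(events)
print(heatmap.most_common(1))  # the busiest zone and its event count
```

In practice the same aggregation, bucketed by hour or day, is what drives the traffic-pattern heat maps described above.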
A new appreciation for employee engagement
Consider everything that contributes to a “good” day at work. In all likelihood, it’s not just what you accomplish, but how you accomplish it. With technology playing a larger role in our everyday lives, its ability to work with us, and not simply for us, has come to the forefront. This also means that many offices are no longer designed to be “one size fits all,” and IoT helps make this possible.
Customized individual lighting and temperature controls ensure a comfortable and optimized environment for specific desks or stretches of assembly lines. Sensors located in conference rooms can detect occupancy or usage trends and map to Outlook or Google calendars for dynamic room scheduling, creating a convenient solution to the “this conference room is always booked” problem. Collectively, these seemingly small improvements minimize distractions, improve engagement and allow employees to focus more completely on the responsibilities that matter most, increasing satisfaction and efficiency.
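The dynamic room-scheduling idea can be sketched in a few lines: if a booked room's occupancy sensor sees nobody after a grace period, the slot is handed back to the calendar. The room names and the 10-minute grace period are assumptions for illustration:

```python
from datetime import datetime, timedelta

def release_stale_bookings(bookings, occupancy, now, grace_minutes=10):
    """Free calendar slots whose rooms show no occupancy after a grace period.

    `bookings` maps room -> meeting start time; `occupancy` maps room ->
    True/False from its sensor. Names and the grace period are illustrative.
    """
    released = []
    for room, start in bookings.items():
        if not occupancy.get(room, False) and now - start > timedelta(minutes=grace_minutes):
            released.append(room)  # hand the slot back to the calendar
    return released

now = datetime(2017, 6, 1, 9, 15)
bookings = {"huddle_1": datetime(2017, 6, 1, 9, 0),
            "boardroom": datetime(2017, 6, 1, 9, 10)}
occupancy = {"huddle_1": False, "boardroom": True}
print(release_stale_bookings(bookings, occupancy, now))  # ['huddle_1']
```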
To deliver on its promise, IoT requires a few elements. It needs power. It needs a network. And it needs to be spaced appropriately so that the system is collecting data from a meaningful and representative swath of real estate and occupant behavior. Workspaces are, by nature, perfectly equipped to provide all of these things. Building management systems can integrate with the existing infrastructure, from power to HVAC, to provide some degree of automation and, in some cases, intelligence. Lighting systems can also provide a natural smart building network and, through embedded sensors, can leverage their power sources and evenly distributed spatial design to easily capture the data necessary to truly understand the way a business operates. Companies can use these systems not only to change the way employees interact with their workspace, but also to make informed decisions that optimize their operations.
Intelligence and beyond in the new smart workplace
The trifecta of data, employee engagement and existing, equipped infrastructure makes the workspace the natural next horizon for intelligence and smart building applications. And existing conduits, such as lighting, make this new world of integrated insight easier to achieve than once thought. With the workplace a daily destination for most of the population, it’s there that the benefits of this intelligence are likely to inspire new applications as well as the continued evolution of the IoT movement as a whole.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
The need to efficiently move people around dovetails perfectly with the capabilities of the internet of things, making it the ideal framework around which to develop smart cities. This is why building smart transit is so frequently a first step — after all, so much can be accomplished simply by pairing sensors and data analysis with the big-ticket items that cities already have in place: existing bus fleets, subway cars, stations and stops.
The ways in which urban centers have leveraged IoT to improve the mobility of residents and visitors are as diverse as the cities themselves. But each of them demonstrates an understanding of the power of IoT, fueled by civic data and focused on the experiences of citizens. What follows are a few real-world examples of cities using IoT to make rapid, drastic improvements in existing transportation systems.
The three-second rule
Transit systems have long provided commuters with estimated arrival times based on sensors. But in 2015, officials in Washington, D.C. recognized that the three-minute reporting cycle their buses used was simply inadequate — a two-minute traffic stoppage between those reports could go unnoticed by the tracker, leaving riders at the next stop wondering where the bus was. So the Washington Metropolitan Area Transit Authority partnered with a software company to provide buses with a smartphone app that reported their progress every three seconds. This not only gave riders much more accurate arrival times, but helped WMATA respond to traffic problems and avoid delays.
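The arithmetic behind the change is easy to check: a reporting cycle longer than a stoppage can miss it entirely, while a three-second cycle captures it dozens of times over. This is a minimal sketch of that comparison, not WMATA's actual software:

```python
def reports_during(stoppage_seconds, interval_seconds):
    """How many position reports arrive during a stoppage of the given length."""
    return stoppage_seconds // interval_seconds

stoppage = 120  # a two-minute traffic stoppage
print(reports_during(stoppage, 180))  # 3-minute cycle: 0 reports, riders see nothing
print(reports_during(stoppage, 3))    # 3-second cycle: 40 reports capture the delay
```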
Putting transit on the grid
In its winning bid for the federal Department of Transportation’s Smart City Challenge, the city of Columbus, Ohio, focused on transportation issues as the primary challenge it would address with the $50 million grant. Why? Because transportation is integral to getting people to jobs and building the local economy. Using IoT sensors and data analysis, the city is developing a “Smart Corridor” to address last-mile connections between the workforce and employment centers; partnering with public and private social services to provide better mobility and transit operations in neighborhoods with the greatest challenges; and supporting “smart grid mobility patterns” to benefit the whole transit continuum.
Paris, city of bikes
Along with other major metropolitan areas like New York and London, Paris has recognized that traffic congestion is a crisis with severe economic consequences — to the tune of 17 billion euros per year. By implementing bike-share programs with networked stations, these cities are extending the transportation options available to citizens and visitors in a way that reduces traffic congestion and addresses last-mile connectivity. By monitoring usage data based on where bikes are picked up and dropped off, cities can respond to demand by adjusting distribution and locations — whether temporarily for high-traffic events or long-term to accommodate changes in population or commuting patterns.
Each of these examples addresses city-specific needs, but the solutions are powered by common factors: an understanding of the value of civic data, for one, and the precipitous drop in the price of sensors, bandwidth and processing power over the past 10 years, which has made building networks a much more economical prospect.
But they’re successful because they all focus on a simple premise with wide-ranging benefit: getting more people where they need to be, when they need to be there, without friction and without busting municipal budgets. That makes smart transit an easy win for both urban stakeholders and the residents and visitors they serve.
A journey to IoT success is a complicated one. Most companies start out focused on three things — device hardware, connectivity and an end user app. While these are all important assets, they are only part of a broader set of building blocks. There are seven key steps that any company starting an IoT journey must plan for:
- Connected business model
- Device hardware and connectivity
- Connected product management
- User app
- Product support
- Data orchestration
- Product analytics
Let’s explore each of these further.
Connected business model
Too often companies dive right into connecting physical hardware, building custom apps and spending significant time and money on custom development work. Before committing valuable time and resources, the first step should be to create a model of the connected business to help envision its capabilities. What data could your product generate? What new features could connectivity enable? How will connecting a product increase your top and bottom lines? A model-first approach allows a company to understand how it can create new value through IoT without spending considerable amounts on custom development. It also better defines what the new model will require, such as the types of users, data and business systems involved.
Device hardware and connectivity & connected product management
The next two key building blocks for success are device hardware and connectivity, and connected product management (CPM). Hardware is often one of the most difficult aspects of connecting a product, as there are a variety of considerations — power, connectivity, sensors, form factor, environmental conditions and system integration. The good news is that there is innovation happening in this space, and a variety of solutions exist, from gateways to add-on systems to Bluetooth-based modules and fully embedded hardware. With a clear understanding of the use case, there is a hardware type that can meet almost any IoT application. Along with hardware, the selection of a CPM solution is critical. A CPM is the system of record for connected products; it manages the data and makes it actionable across a business. It will structure product data, manage users and relationship maps, update devices and firmware, and integrate that data into other business systems.
Once all of this is done, the company can focus on the user experience through an app. The most common use case we see for connecting products is to create a user app that allows for better monitoring and remote control. Any company first launching a connected product should keep its app’s feature set as small as possible. Create a hypothesis based on the key features the product will enable, and test those first. Too many companies pour significant time and energy into app features that never get used. The goal is to create new value in the first app, not to address every use case.
Another area that tends to slow a company down is lifecycle management and remote support. Too often companies treat product support in the field as an afterthought. Provisioning workflows, remote updates and device health are all critical to IoT success. The selection of a connected product management platform should also closely link to how a company plans to support its products. A simple example is how firmware will be updated across devices. While these workflows can seem simple at first, they can quickly become very complex across thousands of devices and a variety of different users.
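One way such a firmware workflow is commonly staged is in canary waves: update a small slice of the fleet, verify, then widen. The sketch below partitions a device list into cumulative waves; the 1%/10%/100% split and the device IDs are illustrative assumptions, not a prescribed policy:

```python
def rollout_waves(device_ids, wave_fractions=(0.01, 0.10, 1.0)):
    """Partition a fleet into staged firmware-update waves (canary first).

    Wave sizes are cumulative fractions of the fleet; the split is an
    illustrative assumption.
    """
    waves, taken = [], 0
    n = len(device_ids)
    for frac in wave_fractions:
        cutoff = max(taken + 1, int(n * frac))  # each wave adds at least one device
        waves.append(device_ids[taken:cutoff])
        taken = cutoff
        if taken >= n:
            break
    return waves

fleet = [f"device-{i:04d}" for i in range(1000)]
waves = rollout_waves(fleet)
print([len(w) for w in waves])  # [10, 90, 900]
```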
Data orchestration & product analytics
The final building blocks, and the most important from a business perspective, are making data actionable and gathering new insights from connected products. Too often companies create IoT projects in a vacuum, where it is costly to get the data into other business systems. Much of the data created by IoT-connected products is most valuable in other parts of the business, outside the product or innovation teams. As a company begins its IoT journey, it must understand how to make the data accessible and actionable across the business.
The internet of things seems complex when a company is just starting out; however, by managing these building blocks correctly, companies can accelerate time to market, dramatically reduce the risk and cost of IoT and ensure IoT success.
In part one of our examination of the distributed computing lexicon, we discussed the importance of ensuring data availability and usability in distributed internet of things architectures. In order that data remain always available for our applications, we must construct distributed infrastructures so that if one node or network component fails, the data it houses is still accessible from other nodes of the same system. Usability refers to limiting latency as data volume grows and demand from dispersed geographies and disparate networks expands. In this second part we’ll look at what it means to achieve data accuracy and ease of operational support while maintaining cost effectiveness.
It’s available and usable, but how do you make sure your distributed data’s accurate?
A traditional database holds data for transactional systems. Transactional systems ensure data accuracy by serializing updates to the data. This ensures data accuracy in a scenario with multiple potential writes to a data record, because the record is locked until the write is complete. This works when architectures are centralized, but a centralized infrastructure is likely to prove ineffective in most large IoT deployments.
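The locking behavior described above can be illustrated with a simple shared counter: because every write takes the lock, one hundred concurrent updates are serialized and none are lost. This is a minimal sketch of the principle, not a real database engine:

```python
import threading

balance = 0
lock = threading.Lock()

def apply_update(delta):
    """Serialize writes to a shared record, as a transactional database would:
    the record is locked until the write completes, so no update is lost."""
    global balance
    with lock:
        balance += delta

threads = [threading.Thread(target=apply_update, args=(1,)) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)  # always 100: serialization prevented lost updates
```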
Large IoT deployments look to distributed databases like NoSQL, which are built to ensure data reliability and usability and the ability to effectively deal with geographically distributed data. However, write conflicts are inevitable when using a highly available, distributed database architected for availability. The most common ways this can happen are as a result of the inevitable network partition or hardware/node failure. Additionally, the latency inherent in distributed systems — where data centers and clients may be on opposite sides of the world — can impact the order in which your system sees updates. This, in turn, can disrupt the system’s ability to understand which data value is right. The way your distributed database resolves conflicts that arise from these issues determines how accurate your data is.
It’s not a simple problem to solve, and there are tradeoffs (see Dr. Eric Brewer’s CAP theorem) that need to be made between the consistency and availability of data when looking to scale an IoT deployment. A distributed architecture — designed to ensure availability — will inevitably create a situation in which multiple writes occur that conflict with one another. When deploying any distributed database, the availability of the information in that database to your application is critical, and the control a database gives you over resolving these conflicts is a key part of this deployment.
There are many examples of situations that can generate these conflicts in IoT deployments. A sensor(s) may disconnect due to a malfunction, node failure or network partition. When the condition heals, multiple updates to the same value may conflict with each other. Another example is if an application is keeping count of how many errors an IoT deployment sees from devices over a certain time period and a network error occurs, then different nodes will provide different values for the application’s counters. Another situation that can occur when multiple copies of data are being kept is data rot on disk. If over time one copy of the data gets corrupted, how do we know which copy of the data is accurate?
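For the error-counter example, one well-known resolution technique is a grow-only counter (G-counter) CRDT: each node tracks its own increments, and replicas merge by taking the per-node maximum, so a partition can never lose or double-count an error. This is a sketch of the technique, not any specific database's implementation:

```python
def merge(counter_a, counter_b):
    """Merge two G-counter replicas by taking the per-node maximum.

    Each replica maps node_id -> increments observed locally, so merging
    after a partition never loses or double-counts an increment.
    """
    nodes = set(counter_a) | set(counter_b)
    return {n: max(counter_a.get(n, 0), counter_b.get(n, 0)) for n in nodes}

def total(counter):
    return sum(counter.values())

# Two replicas count device errors on opposite sides of a network partition.
replica_1 = {"node-a": 7, "node-b": 2}
replica_2 = {"node-a": 5, "node-b": 4, "node-c": 1}
merged = merge(replica_1, replica_2)
print(total(merged))  # 12 -- every node's errors counted exactly once
```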
To take advantage of the scale and resiliency of distributed databases, techniques have been developed to address the data accuracy challenges. Look for systems that use logical clocks vs. system or network clocks to determine the order of writes and updates, and look for systems that offer automatic read repair for corrupt or inconsistent data. These types of capabilities, once words thrown around the halls of academia, have become relevant again as enterprises look for ways to ensure their distributed IoT systems function correctly. Now that the data is accurate, it’s time to scale the system for the growth along the horizon.
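A Lamport clock is the classic logical-clock construction referenced above: each node increments a counter on local events and takes the maximum on receipt, so ordering follows causality rather than wall-clock time. A minimal sketch:

```python
class LamportClock:
    """A logical clock: event order comes from causality, not wall time."""
    def __init__(self):
        self.time = 0

    def tick(self):                  # local event (e.g., a write)
        self.time += 1
        return self.time

    def receive(self, remote_time):  # message/replication from another node
        self.time = max(self.time, remote_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
t1 = a.tick()       # node A writes: timestamp 1
t2 = b.receive(t1)  # node B sees A's write: timestamp 2
t3 = b.tick()       # node B's later write: timestamp 3
print(t1 < t2 < t3)  # causal order holds even if wall clocks disagree
```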
How do you build a distributed computing system without massive overhead costs?
The distributed computing world of old revolved around vertical architectures. When one database machine became overwhelmed, an organization added resources to the machine. In some instances, scaling vertically could mean moving from a machine with fewer resources to one with a larger resource pool. These transitions involved downtime and purchasing new components for each upgrade. Scaling vertically today would mean inducing downtime and constantly purchasing new resources to handle growing data volumes. The overhead costs of operating a vertical architecture would balloon out of proportion to the utility the system offered. Cost isn’t the only issue with traditional relational database management systems; we meet companies that can’t find hardware, at any cost, capable of scaling to meet the demands of their applications.
If a weather company has only 10 sensors in the field, it’s likely that one server will be able to handle ingesting, storing and analyzing measurements from those sensors. As the deployment grows to hundreds, even thousands of sensors, the company will have to add more servers to handle the data ingest and start regionalizing data collection. Also, as the sensor volume grows, the company will want to perform analysis closer to where the devices are deployed. These are challenges traditional databases cannot handle without administrators manually segmenting the data across a constantly growing number of servers and maintaining application logic that knows which server holds the required dataset.
Instead, IoT needs horizontal scaling to mitigate costs. Rather than update one server, horizontal architectures link new servers to existing servers in clusters. This helps disperse the resource load and add redundancy, and allows organizations to upgrade and scale without introducing planned downtime. As well as adding scale, reliability and accuracy, modern databases are often more cost effective too as they are often available as open source software and run on commodity hardware. This gives corporations the ability to trial solutions and therefore experience very little production risk.
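Consistent hashing is one common technique behind this kind of clustering: it maps keys to servers so that adding or removing a node relocates only a small fraction of the data. The sketch below assumes three servers and 100 virtual nodes each; it is an illustration of the idea, not any particular database's partitioner:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Map keys to servers so that adding a node moves only a small
    fraction of keys -- the property that makes horizontal scaling cheap."""
    def __init__(self, nodes, replicas=100):
        # Each server gets `replicas` virtual positions on the ring.
        self.ring = sorted(
            (self._hash(f"{node}:{i}"), node)
            for node in nodes for i in range(replicas)
        )
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def node_for(self, key):
        # Walk clockwise to the first virtual node at or past the key's hash.
        idx = bisect.bisect(self.keys, self._hash(key)) % len(self.keys)
        return self.ring[idx][1]

ring = ConsistentHashRing(["server-1", "server-2", "server-3"])
print(ring.node_for("sensor-0042"))  # the same sensor always lands on the same server
```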
How do you take distributed data from concept to reality?
We are in an exciting era for IoT big data. Tools exist that finally allow us to do something with all of the information we generate and pull into our pipelines. Distributed, horizontal architectures offer perhaps the best way to elicit value from the data in those pipelines. Distributed architectures enable us to move away from unwieldy data lakes and move us toward powerful edge architectures where we can incorporate data into our business processes. Yet, distributed data offers its own set of issues we must contend with.
A lot of solutions exist that offer to fulfill the promise of distributed data. Not all of these solutions offer the availability and resiliency, the conflict-resolution standards, or the bang-for-buck necessary to really generate value from IoT big data. Executives charged with making multimillion-dollar distributed data decisions must understand how to achieve data availability, usability and accuracy without incurring high overhead. Otherwise, they risk implementing a data strategy that doesn’t live up to its potential. If you see the potential that IoT applications have for helping you grow your business, I suggest you start developing distributed systems expertise in your business now.
Brick-and-mortar retailers can learn a lot from the online shopping experience provided by e-commerce heavyweights such as Amazon, especially in the upcoming holiday season. By integrating these lessons in their stores with the latest internet of things technology, retailers can enhance the in-store shopping experience for customers and be more competitive with their virtual cousins.
First of all, what are the key features that shoppers like about the online experience?
- Easy to purchase one or a handful of items quickly
- Receive shipped items within a couple of days (or same day)
- Loyalty programs for discounts on items and shipping
- Broad and deep selection — wide variety of items that are rarely out of stock
- Purchase history and wish list for easy recall
Retailers can achieve many of these online features by taking full advantage of the latest IoT technologies such as RFID. With RFID tags becoming more prevalent, retailers can leverage ceiling-mounted antennas in the stores that read the tags and track the location of every item in the store in real time.
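A minimal sketch of how such a system might resolve item locations: each antenna reports tag reads with a signal strength (RSSI), and the item is placed in the zone of its strongest read. The tuple format and zone names are illustrative assumptions, not a particular reader's API:

```python
def locate(tag_reads):
    """Resolve each RFID tag to the antenna zone with the strongest read (RSSI)."""
    best = {}
    for tag, zone, rssi in tag_reads:
        if tag not in best or rssi > best[tag][1]:
            best[tag] = (zone, rssi)
    return {tag: zone for tag, (zone, _) in best.items()}

reads = [
    ("shirt-123", "aisle-4", -61),
    ("shirt-123", "aisle-5", -48),   # stronger signal: the item is here
    ("shoes-987", "stockroom", -55),
]
print(locate(reads))  # {'shirt-123': 'aisle-5', 'shoes-987': 'stockroom'}
```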
With this knowledge, the retail customer experience and engagement can be enhanced by:
- Limiting the time spent shopping in-store for a single or several items by providing the location of an item on a kiosk or app
- Enabling customers to conduct “self-service” and order items for pick-up later or have the items delivered same day
- Finding an item not available at one store location at another store nearby for same-day shipping
- Offering loyal customers discounts with locations of specific items they are likely interested in purchasing
- Providing coupon alerts to customers near specific products in the store
- Allowing customers to view purchase history or wish list items on kiosk or in-app to identify regular purchases or those desired on their last visit
RFID-based, real-time location services would also give salespeople more time and freedom to address customer needs and find desired items anywhere in the store more quickly. Salespeople could also more easily determine what related items the customer might be interested in seeing to add to the overall “basket” of merchandise. The RFID system could make recommendations based on what other items the customer has picked up in the store, or the system could retarget them later during another in-store or online visit.
In a connected “smart” store, customers can become engaged immediately, from the mobile device they carry to the products they interact with, and get the best of both worlds. They can experience the convenience they enjoy from online shopping with the personal service and attention only an in-store experience can provide.
The good news is that insights available from real-time item level tracking are as valuable to the consumer as they are to the retailers, brands and suppliers. Inventory management, product information, customer trends and more can all converge seamlessly through an IoT dashboard.
In my next article we will look at how RFID tech is being used to streamline back-end store operations for optimum efficiency, effectiveness and profitability.
The recent cyberattacks targeting enterprise-owned connected devices have cast a spotlight on the inherent security risks the internet of things can bring to an organization. While there have been a number of IoT exploits in recent years, such as hacking into a connected car and IoT-related security breaches, what’s alarming is that this might only be the beginning as enterprises look to deploy thousands of IoT sensors and devices across their networks. According to Gartner, more than half of major new business processes and systems will incorporate some element of the internet of things by 2020. While IoT can provide major benefits in optimizing processes and garnering valuable insights, these connected devices also present a number of security challenges as they create a deluge of data and new entry points for attackers.
If the latest incidents have proven anything, it’s that security cannot be an afterthought when it comes to IoT. Companies that use connected devices must build security in from the outset; it is no longer enough to simply secure the network or back-end servers. Organizations must take a holistic approach that protects the connected devices’ applications and data, and incorporates detection and response for quick risk mitigation.
Understanding the risks of IoT
Nearly a decade ago, bring your own device turned traditional network security on its head by dissolving the enterprise perimeter and expanding the traditional attack surface to uncharted mobile territory. Today, the influx of IoT devices in the enterprise is bringing similar disruption — but with significantly more risk — as employees connect almost everything to the internet, including office door locks, thermostats, trash cans, light bulbs and more. Protecting the sheer number of connected devices is a daunting task in itself, but the ever-increasing volume of data driven by IoT introduces an entirely new ball game.
As organizations figure out how to manage and protect all of this new data, a critical step will be to identify what devices are collecting data, the type of data they are collecting, how they are consolidating that data and whether the data might be valuable to attackers. It’s important to note that not all data is meaningful to attackers on its own, but when combined with other information can become highly sensitive. This is where companies often fail to deliver adequate data protection.
Take healthcare data, for example. A connected blood pressure monitor’s readings alone have no value to an attacker, but when paired with a patient’s name, it becomes personally identifiable information (PII) that is extremely sensitive. A potential breach of both the patient’s name and healthcare records could lead to identity theft, in addition to being in violation of HIPAA regulations. To secure this information, companies should encrypt sensitive data as close to where it’s generated as possible, rendering it useless to attackers in the event of a breach. Furthermore, the blood pressure monitor itself needs to be protected to prevent attackers from tampering with it and subsequently impacting clinical care, which could potentially harm patients.
Data protection: Sharing on a need-to-know basis
Another common pitfall is allowing excessive access to sensitive data. This will likely become an even bigger issue with IoT as organizations manage and analyze volumes of new data that will often exchange hands multiple times. For instance, in the blood pressure monitor example, the machine does not need to know the patient’s identity; it simply needs to collect and transmit the data securely to the doctor so that she can make a diagnosis and communicate with the patient. Format-preserving encryption (FPE) will play an important role in securing this information because it enables organizations to derive value from the data while protecting it as it moves across the organization, instead of requiring the information to be decrypted at every point in the process.
FPE is a form of advanced encryption standard (AES), which has been in use for some time, mainly to encrypt disc drives and communications between end points such as SSL/TLS and VPNs. However, unlike AES, which encrypts data into a large block of random numbers and letters, FPE encrypts the data into something that looks exactly like the original format. For example, a credit card number still appears in the traditional format of a 16-digit credit card number, versus a long string of characters or numbers. Preserving the data format during the encryption process allows organizations to securely perform analytics and process the data, without needing to make major changes to their applications or back-end infrastructure.
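The format-preserving property itself can be illustrated with a deliberately toy cipher: the sketch below shifts each digit by a keyed pseudorandom amount, so a 16-digit number encrypts to another 16-digit number and decrypts back exactly. It is NOT secure and is not NIST FF1/FF3, which build on AES; it only demonstrates the format-in, format-out idea:

```python
import hashlib

def toy_fpe(digits, key, decrypt=False):
    """Toy illustration of format preservation ONLY -- not secure, not
    NIST FF1/FF3. Each digit is shifted by a keyed pseudorandom digit,
    so the output has exactly the same shape as the input."""
    stream = hashlib.sha256(key.encode()).hexdigest()
    out = []
    for i, d in enumerate(digits):
        shift = int(stream[i % len(stream)], 16) % 10
        if decrypt:
            shift = -shift
        out.append(str((int(d) + shift) % 10))
    return "".join(out)

card = "4111111111111111"
token = toy_fpe(card, key="demo-key")
print(len(token), token.isdigit())  # still a 16-digit string of digits
print(toy_fpe(token, key="demo-key", decrypt=True) == card)  # round-trips
```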
Applying end-to-end encryption is essential in ensuring that sensitive information captured by IoT devices is protected throughout its lifecycle, while still allowing companies to leverage that data for business purposes.
Securely adopting IoT
IoT has the potential to optimize efficiencies, create new revenue streams and enable organizations to leverage big data for smarter business decisions. However, as enterprises embrace IoT, security must be a top priority to ensure the sensitive information captured by these connected devices does not fall into the wrong hands. While IoT security will require a multipronged approach and collaboration between device manufacturers, enterprises and end-users, enterprises that implement data protection and security solutions, such as FPE, will be able to secure their high value, sensitive data assets end-to-end — whether it’s corporate intellectual property, customer PII, payment card information or sensitive employee information — making it useless for attackers.
Global internet data traffic surpassed 1 zettabyte for the first time in 2016. Data traffic has also risen fivefold in the last five years, with more than 90% of the world’s data generated in the last two years alone.
With the proliferation of smartphones and advanced multimedia capabilities, this traffic is going to increase at the same or an even faster pace. Moreover, with the almost ubiquitous nature of cellular connectivity and better last-mile connectivity approaches, we have been witnessing a new phenomenon over the last few years: the internet of things. Forecasts for the number of connected devices by 2020 range anywhere from 20 to 50 billion. The proliferation of these connected devices and “things” will lead to a flood of data emanating from them.
Those who understand data will vouch for the fact that the real value of data lies in analytics, through which actionable intelligence can be derived and decision-making processes evolved. With a rapidly growing number of connected devices emanating data, enterprises face a unique set of new challenges in assimilating that data and building efficient data analytics mechanisms.
Data at the edge
Modern-day sensors are becoming more capable. As technology advances, they are able to extract more data at greater frequency. In effect, each sensory node is becoming a source of ever-increasing data generation. Moving every chunk of that data to central processing machinery is costly and requires capable infrastructure. Also, the long-held practice of bringing all data into central processing machinery, in practice a small number of data centers, will no longer be a viable or scalable model, considering the growth numbers charted earlier for connected devices and their data-generating capabilities.
It is also quite possible that some of the data generated by sensors is erroneous. Certain data points might be borderline and hence flagged as outliers, which the business logic, or the implemented business use cases, may not consider. Given the evolution of sensor technologies and their data generation capabilities, there will always be cases in which the computing mechanism or the data analytics implementation has little use for all of the data.
Considering all these cases, there is an obvious need for a data-filtering mechanism at the location of the data generator.
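To make this concrete, here is a minimal sketch of what such a filter at the data generator might look like. The valid range and the 5% deadband threshold are illustrative assumptions, not values from any particular product: the node discards clearly erroneous readings and forwards only values that differ meaningfully from the last one sent, cutting upstream traffic.

```python
# Hypothetical edge-side filter: drop out-of-range readings and forward
# only values that change meaningfully. Thresholds are assumed values.

VALID_RANGE = (-40.0, 125.0)   # plausible physical range for a temperature sensor, in C
DEADBAND = 0.05                # forward only if the reading moved by more than 5%

def filter_readings(readings):
    """Yield only the readings worth transmitting to the central system."""
    last_sent = None
    for value in readings:
        # Discard clearly erroneous values outside the sensor's physical range
        if not (VALID_RANGE[0] <= value <= VALID_RANGE[1]):
            continue
        # Discard values that barely differ from the last transmitted one
        if last_sent is not None and abs(value - last_sent) <= abs(last_sent) * DEADBAND:
            continue
        last_sent = value
        yield value

raw = [21.0, 21.1, 21.2, 999.0, 25.0, 25.3, -80.0, 30.0]
print(list(filter_readings(raw)))  # → [21.0, 25.0, 30.0]
```

Eight raw readings shrink to three transmitted ones here; at scale, that kind of reduction is what makes the central model viable again.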
Why edge analytics?
Some edge devices, or nodes, may need decision-making capability to trigger a localized action. One example is a moisture sensor in an agricultural field that triggers a local sprinkler based on moisture levels. Another is a store or CCTV camera that, instead of sending tons of data 24/7, can be programmed with enough intelligence to spot anomalies and capture only the relevant data (e.g., whenever motion is detected by a motion sensor).
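The moisture-sensor example above can be sketched as a simple local decision rule. The thresholds and the hysteresis gap below are assumptions chosen for illustration; the point is that the node decides on its own, with no round trip to the cloud:

```python
# Illustrative localized decision making at an edge node: a moisture sensor
# decides whether to run a sprinkler. Threshold values are assumptions.

MOISTURE_ON = 30.0    # turn the sprinkler on below 30% soil moisture
MOISTURE_OFF = 45.0   # turn it off once moisture recovers above 45%

def sprinkler_decision(moisture_pct, sprinkler_on):
    """Return the sprinkler's next state; hysteresis avoids rapid toggling."""
    if moisture_pct < MOISTURE_ON:
        return True
    if moisture_pct > MOISTURE_OFF:
        return False
    return sprinkler_on  # in the dead zone, keep the current state

state = False
for reading in [50.0, 35.0, 28.0, 40.0, 47.0]:
    state = sprinkler_decision(reading, state)
```

The hysteresis band between the two thresholds is a deliberate design choice: without it, readings hovering around a single threshold would switch the sprinkler on and off repeatedly.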
Edge analytics encompasses both possibilities: filtering and decision making at the edge or node. This approach enables a certain amount of analytics to be performed on edge devices, thereby reducing the amount of data transferred from them. Not transferring all data carries the flipside risk of missing something, but blindly capturing all data is also unattractive and not a scalable answer to the big data deluge. Edge analytics makes it possible to design an optimal model that manages data transfer from the edge and data storage at data centers efficiently.
Drivers of edge analytics
Self-sufficiency: Edge analytics will prove extremely useful in remote operational use cases such as agriculture, solar farms, mining and drilling, where the surroundings are constrained by unstable connectivity or limited bandwidth. Edge analytics enables a low-latency response through a localized, automated decision-making setup, regardless of network capabilities.
Cheaper computing: Following in the footsteps of Moore’s Law, computing is becoming cheaper and the physical footprint needed to compute smaller by the day. This is another key driver for installing edge computing devices. Depending on the setup of the connected devices, computing capabilities might be enabled in the edge device itself or in the gateway that connects a set of devices to the internet.
Secure data: Edge analytics gives data and system architects an opportunity to reduce the transfer of business-critical data payloads. Some of this critical data is consumed at the source and suitable decisions taken there, reducing the need to transfer it at all.
Efficiency and lower TCO: Edge analytics enables near-real-time and real-time analytics, so such models are naturally efficient. Reduced data transfer and smaller data storage needs also mean a lower TCO.
Last month, there was a massive DDoS attack that was made possible by hacking into unsecured IoT devices, mainly home surveillance cameras. This left some homeowners questioning whether they’re better off with “dumb homes” instead of taking the risk that their smart gadgets and devices could be used again in another attack.
The truth of the matter is that the attack was the result of a vulnerability in cheap cameras and other IoT devices. These types of devices are easily hackable because they are designed to be accessed over a local network and ship with unsecured, hard-coded default passwords. Unfortunately, many people own such devices, which is why this cyberattack had such a wide reach. What many people don’t know is that there is a big difference between these devices — which leave themselves open to the network — and those that connect to the network through a secure cloud.
This raises the question of who should be held responsible: the companies creating these inherently unsecure products, or the consumers purchasing them who don’t take extra measures to secure them? In my opinion, the responsibility lies with the IoT companies, which need to do a better job of educating consumers on the difference between secure and unsecure connectivity in smart home products. Looking at the IoT DDoS that occurred in October, one of the biggest holes that enabled the attack was exactly this point — unsecure networks.
The problem here is that most consumers don’t know how to secure networks and unknowingly expose themselves to such vulnerabilities. Some IoT companies seem to operate under the assumption that consumers are technologically savvy enough to do this, but the truth is that consumers are largely uneducated about smart home security measures. Supplying a consumer with an unsecure network infrastructure is begging for a cyberattack.
The implication of this assumption is what we experienced on October 21 — attackers taking control of a device to attack other devices on the network, serving as a gateway to then attack the entire infrastructure. While it isn’t vital for consumers to understand the specific ins and outs of smart home security or this hack, it serves as a good case study to exemplify some of the biggest concerns for consumers and organizations focused on IoT and smart home technologies.
As more of these attacks happen, consumers are going to get smarter and look for products that use a secure cloud and encrypted connections. They’ll also become more aware of the implications of using these different types of products on an individual level. As soon as a consumer realizes that using an unsecure security camera puts their laptop or smartphone at risk, the personal motivation to learn about and use only secure devices becomes stronger.
On the other hand, smart home companies that want to stick around will need to change their back-end architecture and overarching technology to follow stricter security protocols to protect consumers and the internet at large. What’s more, they’re going to need to be more involved in educating their consumers on security. This is a change that has even more urgency as IoT products enable more smart devices to become connected. Eventually, this type of security vulnerability could have much bigger implications, impacting not only a few devices, but our cars and homes.
The recent DDoS attack carried out by hackers using tens of millions of unprotected IoT devices has mainstreamed what security pros have known for a long time: IoT devices are vulnerable to attacks. In this attack, millions of webcams were unknowingly recruited to conduct the attack on major ISPs on the East Coast. Many of these webcams are now being recalled.
This attack was a wakeup call for IoT device makers, who now realize that edge devices such as webcams and sensors must be secured. That said, balancing cost constraints and security requirements remains problematic for device makers. Let’s not kid ourselves: implementing security is a tax, both in terms of additional hardware and of time spent learning and implementing security solutions. Compounding this problem is the entrance of new developers with very little background or experience in low-level embedded systems programming. These developers are typically experienced in building mobile and cloud applications but need significant training in developing for IoT devices — and significant training in IoT security.
Given the above situation, how does the IoT security bar advance? To date, predictions of doom and gloom have been the norm for trying to get device makers to adopt security solutions. Fear of getting hacked is a motivator, but often after the fact. Therefore, fear alone will not yield secure devices. A different approach is needed.
Incentivizing IoT security
The answer lies in incentivizing IoT security deployment by making it easier to deploy than before. To illustrate, let’s go back a bit into the history of the PC. There was a time when every application developer was required to write their own printer drivers. This meant that not all printers were supported and each implementation was a little different. It wasn’t until the late 1980s that the OS took on the role of standardizing interactions with peripherals and other devices.
The same is true when it comes to embedded systems and IoT security. Take cryptographic hardware for instance. Each silicon vendor has a slightly different implementation ostensibly to differentiate itself in the market or to support other aspects of the hardware such as power consumption, storage constraints and so forth. The outcome is a greater burden on device makers and application developers who now must sift through reams of data sheets and specifications in order to implement that one specific piece of hardware. The cost implications are significant given that the above effort results in a one-off implementation that is hard to maintain and update. This prevailing approach also prevents any efforts to standardize security practices across different products. In other words, security remains a checkbox and not a strategy.
What if this effort could be reduced? What if it could be done while still letting silicon vendors differentiate their hardware offerings? How can IoT device makers be incentivized to deploy IoT security? One part of the answer is to provide security solutions that are foundational in nature — in other words, chips and platforms that are secure by design. ARM, whose designs power the majority of chips used for IoT products, recently made available a new security design, TrustZone® ARMv8-M, for creating secure, low-power, microcontroller-based products. The new security extensions are part of the newly released Cortex-M23 and Cortex-M33 chip architectures. This, however, is only part of the solution, as it does not address the time and labor involved in implementing security. The answer to that lies in better software tools and solutions that simplify the effort for the majority of developers.
A number of software vendors are working on products that simplify the task of creating and accessing secured functions. This generally means providing out-of-the-box, higher-level APIs and services for cryptography, notification, authentication, key management and other security functions. Several solutions are becoming available as a result of collaboration between chip makers and software publishers, including CoreLockr-TZ from Sequitur Labs, as well as offerings from NXP, Renesas, Microchip, IAR and a variety of RTOS vendors such as ExpressLogic. These collaborations signal a larger trend that replaces component-oriented selling with a solution-oriented approach. That’s good for consumers, as it will lead to devices with better security. After all, security is not a “part,” it’s a strategy.
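To give a feel for what such a higher-level API means in practice, here is a purely hypothetical sketch. It is not the actual CoreLockr-TZ API or any vendor’s interface; the class and method names are invented, and Python’s standard hmac/hashlib modules stand in for secured hardware so the example is runnable. The idea it illustrates is real, though: application code asks the secure element to sign and verify, and never handles vendor-specific registers or raw key material.

```python
# Hypothetical higher-level security API. Names and behavior are
# illustrative only; real hardware would keep keys inside the chip.

import hashlib
import hmac
import os

class SecureElement:
    """Toy stand-in for a vendor-neutral interface to secured key storage."""

    def __init__(self):
        self._keys = {}  # on real hardware, keys would never leave the chip

    def generate_key(self, key_id):
        self._keys[key_id] = os.urandom(32)

    def sign(self, key_id, message):
        # The application asks for a signature; it never sees the raw key.
        return hmac.new(self._keys[key_id], message, hashlib.sha256).digest()

    def verify(self, key_id, message, signature):
        return hmac.compare_digest(self.sign(key_id, message), signature)

se = SecureElement()
se.generate_key("device-identity")
tag = se.sign("device-identity", b"sensor reading: 21.5C")
print(se.verify("device-identity", b"sensor reading: 21.5C", tag))  # prints True
```

Contrast this with sifting through a silicon vendor’s data sheets to drive the cryptographic block directly: the same four calls could be backed by any vendor’s hardware, which is exactly the kind of standardization the printer-driver analogy points at.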
Reading the blogs, news articles and conference reports, it would be easy to conclude that the smart city was already here. There seem to be so many systems being rolled out, and so many ways in which city governments are making clever uses of IoT and other technologies to improve the lives of their citizens and the efficiency of their own municipal operations.
The reality is a little different. Not many smart city applications are fully deployed, operational, costed and budgeted solutions. Machina Research, sponsored by Nokia, has just carried out a major study of smart city deployments around the world. We looked at 22 cities and evaluated the maturity of their applications and their plans across a number of different domains.
Many of the stories about smart city deployments turned out to be about pilots, though this is not always clear. We were somewhat surprised to find that San Francisco’s much-heralded smart parking scheme, SFPark, is a pilot, and one that has not been taken to full deployment for want of a business model to justify the investment. This is despite evaluations showing that the technology works and has achieved its declared objectives.
In addition, the term “pilot” actually covers a wide range of different kinds of implementation, from small-scale proof-of-concept demonstrations, through “living lab” action research and development in a live environment, to full-scale tests of business viability.
The plethora of pilot smart city deployments is not really that surprising. It is to be expected that some solutions will be piloted and then found wanting; that is, after all, the point of doing pilots.
In cities, though, there are specific difficulties in moving from pilot to full deployment, even where the technology works and delivers the expected benefit. In some cases, this is because that benefit does not translate into an ROI that can justify rollout; a smart parking scheme, for example, might reduce traffic congestion in the city center but lead to a decline in revenue from fees and fines. This is exactly what seems to have happened in San Francisco, where the smart parking implementation succeeded in reducing the time spent cruising for parking but did not pay for itself. The UK city of Birmingham similarly found that its smart parking trial did not provide a business justification for deployment. In other words, for some smart city applications the benefit can be quantified, but deployment only makes sense as part of an overall vision for the city.
In other cases, there is an ROI that would justify roll-out, but no long-term budget that can support the investment. Here vendor financing, public private partnerships and central government financing may all have important roles to play.
The prevalence of pilot smart city deployments has led us to identify at least three routes towards a mature smart city:
- An “anchor” route, in which the city adds working applications in series. Here a city has a clear and pressing need for its anchor application, to which others are then added as priorities dictate.
- A “platform” route, in which the city focuses on deploying infrastructure first so that a number of applications can be delivered later.
- A “beta city” route, in which the city continues to experiment with multiple applications without a finalized plan for how to bring these pilots to full operational deployment. Beta cities accept that the currently available technologies and business models can only be provisional and prioritize hands-on experience over short-term or medium-term tangible benefits.
The advantages and disadvantages of each of these routes are illustrated below in Figure 1.
We do not believe that any one of these three routes is the right answer; each has something to recommend it, and the best fit will depend on a city’s resources, issues and priorities. A beta approach may deliver visible easy wins quickly. An anchor approach might be dictated by a single dominant issue, such as earthquake preparedness, which dwarfs all others.
Few cities are pursuing an absolutely pure form of one of these routes. Most combine elements of more than one; either they are hedging their bets or they are in the process of shifting from one route to another. Several are at such an early stage that they have not yet settled into one route or another.
The full report is published as “The Smart City Playbook” available here.