One of the key issues that comes up when talking with leaders in the biometric security space is who owns biometric data and where it should be stored, particularly face authentication templates, which encode the mathematical details of a person's face. Should they be kept in the cloud or stored locally?
For context, I think it is safe to say that the use of biometrics to manage authentication and identification across the globe is only going to increase. Of course, biometric data runs the gamut, covering anatomical data from fingerprints to voice scans to face profiles. We also need to keep in mind that this is a constantly evolving field and technologies developed today will most likely morph and change in the years ahead. But the challenge will be how to safely and securely manage and protect this kind of data.
Biometric identification has historically required a centralized database, which allows one person's data to be compared against many others so that an accurate decision can be made: yes, it is person A, or no, it is not. One challenge of an identification process that depends on an external database is that the user has no physical control over their biometric data, with all the privacy questions and possible security risks that brings.
Biometric authentication can in fact be delivered without such a centralized database. The data can simply be stored on a local device, such as a smartphone, laptop or tablet, as Apple and SensibleVision have done for their consumer-oriented face authentication. Decentralized storage, with full user control of the biometric data on the device, may be preferable. For users, this method inherently involves less risk because a hacker must breach each individual device. That alone is a powerful benefit given the increasing frequency of large-scale data breaches that compromise centralized password and face recognition repositories.
Today, policies related to storing and managing biometric data are still being defined. The recently enacted GDPR laid out some initial guidance, defining biometric data as “special categories of personal data” and prohibiting its “processing.” The objective is to protect people from having their information, including data like face templates, shared with third parties without their consent.
The best solution for biometric data management may be a secure hybrid storage approach, driven by use cases. Some biometric data can be stored locally and then shared with other authorized devices, with the user always maintaining control of their templates. For example, a biometric face template can easily be stored on a smartphone; there is no reason to keep it in the cloud. The exception might be an enrollment profile designed to recognize you in many different settings: you would enroll once on your phone and then be able to open the doors of your home or office with your face, all without re-enrolling.
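The on-device matching this describes can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual algorithm: real systems use high-dimensional embeddings and carefully tuned thresholds, but the shape of the decision, comparing a probe capture against a locally stored template, is the same.

```python
import math

# Illustrative on-device face matching: the enrolled template is a fixed-length
# embedding stored locally and never uploaded. The threshold is made up for
# this sketch, not tuned for any real system.

MATCH_THRESHOLD = 0.9  # cosine similarity required to accept

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def authenticate(stored_template, probe_embedding):
    """Accept if the probe is close enough to the enrolled template."""
    return cosine_similarity(stored_template, probe_embedding) >= MATCH_THRESHOLD

# Example: an enrolled template and two probes captured at unlock time.
enrolled = [0.12, 0.80, 0.55, 0.09]
same_person = [0.11, 0.82, 0.53, 0.10]
stranger = [0.90, 0.10, 0.05, 0.70]

print(authenticate(enrolled, same_person))  # True
print(authenticate(enrolled, stranger))     # False
```

Because both the template and the comparison live on the device, nothing biometric ever needs to cross the network.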
In other situations, users might be willing to have their face profile data in the cloud, with a provider assuming the responsibility and risk of protecting it, for example to gain easier access to airport screening. When stored in the cloud, there is always the risk of having your biometric data compromised by hackers, a scenario we are seeing all too often these days. On the other hand, speed and convenience sometimes offset some of the potential risks.
The bottom line is that there is no single right answer. As in every situation where businesses are trying to figure out how to exploit a leading-edge technology, there are pros and cons. Developing a viable biometric data strategy will require companies to determine what makes it easy for customers to interact and what makes them feel their data is secure, and then use that insight to choose the best place to store biometric data, including face templates: on a user's individual device at the edge of the network, or in a centralized database that the company monitors and manages.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
When deploying edge connectivity and computing technologies for large-scale commercial IoT, the challenges of complexity, cost and risk often serve as major hurdles between idea and deployment. In previous articles, I’ve explored ways to reduce cost and complexity by addressing the unique design challenges of commercial IoT environments and removing the siloed design approach common in the commercial space.
While I’ve also looked at mitigating risk in those articles, the topic of security deserves its own discussion. The reality is that securing a single IoT network can be a challenge on its own. But when you’re securing an IoT network across multiple locations and thousands of nodes and devices, you enter a level of risk that requires serious consideration of network deployment and long-term monitoring.
With end-to-end security arguably being one of the most important pieces of IoT edge infrastructure, let’s consider what it takes to truly secure the commercial IoT environment from the point of manufacture to long-term updates in the field.
What threats should we look for?
Commercial IoT comprises tens of thousands of edge computing devices and sensors deployed and networked across hundreds to thousands of locations. And because IoT devices can support multiple, simultaneous applications and connections, the IoT network is vulnerable to attack from several directions:
- Nosy neighbor: Just as a single piece of cloud hardware runs many different applications from different tenants side by side, multi-tenancy in commercial IoT environments needs the same level of security. Yet, with the limited resources of edge hardware, a lower-cost implementation is needed to allow for this.
- Tampering: With most of the devices being deployed in unsecured or only semi-secured environments, individuals could physically tamper with them and possibly change the behavior of an application. This is of particular concern in high-traffic commercial deployments such as hospitality or retail environments.
- Extracting secrets: Beyond tampering, physical access to devices provides an opportunity for an attacker to open the device and extract proprietary information, such as security keys and IP information.
- Exploiting software: IoT has become a readily available platform for remote attackers to steal data, gain access to a network or deploy distributed denial-of-service attacks.
How can IoT teams mitigate these threats?
The solution to these issues is to take a layered security approach to IoT edge infrastructure. For each class of threat mentioned above, there is a cost-effective and scalable approach.
To combat the nosy neighbor, use containers specifically designed for constrained hardware environments, such as Ubuntu's "snaps." Snaps are self-contained, read-only file systems that do not affect neighboring apps, which also makes apps easier to manage and update. A snap is confined: it has its own libraries and cannot modify the operating system. It can exchange information with other snaps through granular policies and permissions, but the confinement keeps malicious code from spreading and makes policy violations quick to investigate.
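The heart of that confinement model is a deny-by-default permission check. The sketch below is a simplified illustration of the idea, not snapd's real implementation; the app names and interface names are invented for the example.

```python
# Deny-by-default confinement policy, in miniature: each app declares the
# interfaces it needs up front, and anything not explicitly granted is refused.
# App and interface names here are hypothetical.

DECLARED_INTERFACES = {
    "sensor-reader": {"network", "serial-port"},
    "dashboard": {"network"},
}

def may_access(app, interface):
    """Grant access only if the app explicitly declared this interface."""
    return interface in DECLARED_INTERFACES.get(app, set())

print(may_access("sensor-reader", "serial-port"))  # True: explicitly granted
print(may_access("dashboard", "serial-port"))      # False: never declared
print(may_access("unknown-app", "network"))        # False: deny by default
```

The useful property is the last case: an app the broker has never heard of gets nothing, so compromised or malicious code cannot quietly reach a neighbor's resources.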
To prevent tampering and physical attacks from changing the behavior of deployed IoT devices, consider these preventative controls as crucial to design:
- Prevent root access by making it impossible to plug a serial cable into the PCB header and avoid open debug consoles.
- Don’t set default logins and passwords.
- Ensure apps are immutable by using snaps.
- Ensure that everything from public keys to kernels is signed and verified.
To keep individuals from extracting intellectual property, use automatic encryption for the entire file system, including code, configuration and credentials. Complement this with secure boot to retain the integrity of the system during power cycling and startup. Also, avoid embedding credentials in application code; use a secrets management technology instead, and ensure each device has its own unique credentials.
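Those last two practices can be sketched briefly. This is a minimal illustration with hypothetical names: the environment variable stands in for a real secrets manager, and the credential derivation is simplified (production systems would use a proper key-derivation function with a device-specific secret, not a bare hash of a serial number).

```python
import hashlib
import os

def load_api_key():
    """Fetch the credential at runtime; fail loudly if never provisioned.
    The variable name DEVICE_API_KEY is a placeholder for this sketch."""
    key = os.environ.get("DEVICE_API_KEY")
    if key is None:
        raise RuntimeError("DEVICE_API_KEY not provisioned; refusing to start")
    return key

def device_credential(hardware_serial: str) -> str:
    """Derive a per-device credential so one leaked key can't unlock a fleet.
    Simplified: real deployments would mix in a factory-injected secret."""
    return hashlib.sha256(hardware_serial.encode()).hexdigest()[:32]

# Two devices never share a credential.
print(device_credential("SN-0001") != device_credential("SN-0002"))  # True
```

The point of both functions is the same: nothing sensitive lives in the source tree, and compromising one device yields nothing reusable against the rest of the fleet.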
Finally, to avoid software exploits, the most important factor is automatic and frequent security patches. This can be challenging without a clean, clear separation of the operating system and the individual applications. With snaps, patches are easily deployed as often as they are needed. As a bonus, snap confinement combined with an automatic update service allows for flexible and immediate scheduling of patches. This allows IoT teams to concentrate on their product feature releases and upgrades, and not security patches.
No replacement for constant monitoring and consistent partners
Keep in mind that no security measure is perfect, and a plan must be in place for when security is breached. This is where monitoring becomes crucial. It is best practice to have a centralized application performance monitoring system that can quickly identify anomalies and weaknesses and give developers a fair shot at remediation. In the same vein, centralized logging can provide forensics to determine the extent of a breach and troubleshoot specific areas of concern.
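A centralized monitor's simplest useful check is comparing a device's current behavior against its own recent baseline. The sketch below is deliberately crude, a single-metric heuristic with an invented threshold, whereas real application performance monitoring systems correlate many signals; it only illustrates the kind of anomaly a central view makes visible.

```python
import statistics

def is_anomalous(history, current, factor=5.0):
    """Flag a reading that exceeds `factor` times the device's own recent
    median. The factor of 5 is an illustrative choice, not a tuned value."""
    baseline = statistics.median(history)
    return current > factor * baseline

# Outbound traffic in KB/min for one device, collected centrally.
history = [12, 14, 11, 13, 12]

print(is_anomalous(history, 13))   # False: normal traffic
print(is_anomalous(history, 980))  # True: possible data exfiltration
```

Paired with centralized logging, a flag like this tells responders which device to pull forensics from first.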
For those designing IoT applications or managing IoT networks, it is critical to find a manufacturing partner that has considered these risks. Hardware should be designed in a way that not only addresses the above threats, but also provides a high level of observability into what is occurring in your hardware and on your network.
End-to-end security of commercial IoT is an ongoing process that spans from point of manufacture to deployment and on to the updates in the field. It should be embedded throughout the edge infrastructure, with an eye toward each of the threat classes.
By finding the right partners, IoT teams can greatly reduce business risk while spending more time focusing on the competencies they excel in: developing applications that drive business value.
The IoT market continues to grow, with investments expected to top $1 trillion by 2020, according to IDC. With the rollout of 5G, Ericsson forecasts that the number of cellular IoT connections is expected to reach 3.5 billion by 2023, and DBS Asian Insights predicts that IoT devices and services will reach an inflection point of 18-20% adoption in 2019 alone.
Security continues to be one of the greatest barriers to IoT adopters in 2019. Insecure components, prevalent malware and shortsighted attempts to apply traditional security measures to IoT networks act as formidable challenges to these adopters. Sensing the shift, threat actors are also adapting, developing new attack vectors and hacking tools that will be more sophisticated and more difficult to detect and respond to. Accordingly, we can anticipate a substantial increase in supply chain attacks, IoT botnets and cryptominers alike.
We predict that device manufacturers will put an increased focus on security in 2019 versus previous years, but the number and scope of attacks will continue to rise. Microsoft reports that more than 90% of consumers want manufacturers to step up their security practices, and 74% would pay more for a product with additional security built in. This demand will drive innovation and increased adoption of trusted hardware and software systems. It will also force manufacturers to adopt and adhere to industry recommendations for data management and privacy, and bring about increased awareness of supply chain security management. Manufacturers will also look to include bug bounty and responsible disclosure programs for manufactured and deployed devices to improve the security of their products.
At the same time, consumers will pay heed to IoT security governance and adopt processes and technologies that help govern the IoT landscape, an amalgam of technologies spanning cloud, device, mobile and edge. For instance, they will look for IoT monitoring systems and platforms for better visibility and management, data protection technologies for better security and privacy, cloud protection technologies and active threat detection technologies.
Moreover, consumers and manufacturers alike will invest heavily in technologies that help them determine the maturity of their security programs. Companies will also look to cyber-risk insurance to safeguard their businesses against formidable cyberattacks.
Furthermore, as innovation in and adoption of IoT security products and services gain momentum, assisting technologies such as machine learning, artificial intelligence and blockchain will make strong inroads into IoT security products, helping to build improved trust, threat detection, identity management, and data and device management at scale. To a large extent, though, it will be government regulations that bring about a culture of shared responsibility for protecting the IoT landscape.
Forget keeping up with the Joneses. Who has time for that? We’re busy enough trying to keep tabs on ourselves. As consumers we continually build our IoT islands of connectivity augmenting our own internal home functionality. Thanks to new innovations in connected lighting, security and temperature control, coupled with our smart appliances, we’re doggedly focused on sensing activities and situations within our individual property, yet do very little to react to our adjacent neighbors with whom we share our street or building.
Imagine, for a moment, the focus of IoT shifting from individual sensing to communal connection with our neighbors. As device signals become more standardized, our homes could start looking outward and become more reactive and supportive to external-domestic stimuli. This will allow us to use IoT technology to create stronger local communities by sharing resources and information more seamlessly.
Closer communities that address people's needs in ways they truly desire sounds good, right? Then what's the hurdle to achieving change at this scale? Interoperability. To create a truly connected community on a large (at least town-sized) scale, we need to solve the challenge of interlinking our widely distributed smart devices to monitor neighborhood environments in real time. IoT infrastructure isn't there yet, but in time, local governments will be able to invest in a network of intelligent sensor nodes as well as data centers where information can be stored and shared. Despite the excitement about smart cities, enormous barriers are likely to slow or stall these initiatives. However much I dream about the promise and possibilities, the reality of our current civic infrastructure is that city- or town-linked services could be decades away.
Smart homes of the future will be infinitely more intelligent than they currently are. Soon, biometric data, such as fingerprints, body temperature and even heartbeats, will enable our homes to distinguish between family members and other home-dwellers to personalize the home based on the needs of each individual. We can pool local sensor data with neighbors to make our immediate community safer, more secure and more sustainable. We can start to interlink our resources, energy conservation, homecare, interpersonal communications, social activities and entertainment, not just for peace of mind, but for communal harmony. This is starting to happen in newer construction communities and residential buildings. We see services like i-Neighbour already pulling together resources for building management companies. Development plans, like Sidewalk Labs in Toronto, are building a smart neighborhood from the ground up as a model for future smart cities. Let's dream about how developing connected communities can help alleviate daily challenges with solutions that improve citizen engagement and neighborhood integration.
Can home climate, security, lighting and power monitoring be heightened depending on what is happening outside of our home? As our homes become more aware, they'll begin taking the pulse of our larger environmental context, adjusting indoor temperature and air moisture levels according to external factors such as natural solar gain imparted by afternoon sunshine or humidity loss brought about by a winter blizzard. Using this externalization to help bring our neighborhoods closer together, adjacent homes can react to upcoming weather events or police activity in the area through a heightened level of security, warning or internal environment change. Also, for the safety of people living alone or when neighbors are traveling, our homes could become more digitally transparent, broadcasting internal activities, when homeowners wish, to alert their neighbors. Apps like Nextdoor are already pioneering this direction, creating a neighborhood watch network that raises not only the security, but also the emotional and financial value of our neighborhoods.
As we rely more and more on IoT devices to run our daily lives, we need assurance that we have the power required to drive each device flawlessly. The ability to share power and backup energy will be important to a community, giving peace of mind if a primary power source fails. Continuity of power is obviously critical for home security and fire protection, but it's also important for emerging home applications such as patient monitoring and senior care. Communities can solve this with a collective battery backup charged with renewable energy generated by neighbors or the community as a whole, for example by harnessing the energy from a shared workout facility. Sensors can also be deployed to reveal details of unseen infrastructure, such as sewage flow, water quality and gas leaks. This information can be used by local city councils and townships to help residents gain insights about local traffic patterns, power outages, trash collection, peak power hours and so on.
What’s the IoT equivalent in the future of borrowing a cup of sugar from your neighbor? Portable items that can be shared by the community, such as bikes, home-maintenance equipment and even vehicles, have become a key element in the drive toward smart cities. Scale down this city model to a local neighborhood scale and a single community garage provides access to communal tools, vehicles and other outdoor equipment, and shifts focus from siloed ownership to a conversation between neighbors. There are many neighborhood sharing sites out there, like Streetbank, that promote this model of sharing the physical things in our lives and getting to know our neighbors in the process.
Linking up with the Joneses
A community is not smart because it's technologically advanced. A community is smart because it uses technology to improve convenience, promote resident engagement and increase the general desire to live there. IoT and smart sensors can and should help neighborhoods achieve a more connected relationship among residents. To enable better lifestyle experiences and stronger communities, we need to break down our IoT silos and discover the societal benefit of interlinking shared resources and devices. Yes, it will cost local governments a lot to deploy an open civic platform connecting a vast number of IoT devices across cities and towns. Participation must be opt-in, and the civic network must be carefully constructed and monitored, with an understanding of people's comfort zones and their mental concept of home shaping its design. The system should serve all communities, not cater to certain tech-enabled neighborhoods. Hopefully, small-scale examples of communal sharing can be the harbinger of the sophisticated connected cities of the future, as we uncover what people in our local buildings and neighborhoods actually need and test the benefits of connecting beyond our own personal domestic property.
The Wi-Fi bubble
With homes and businesses becoming smarter, the number of IoT devices operating in these areas is on the rise. In most cases, Wi-Fi is the perfect connectivity option, offering high bandwidth and very low cost due to using existing infrastructure and common tried-and-tested technology.
For most noncritical applications, such as smart appliances, Wi-Fi is fully capable and reliable enough to get by. If the bubble bursts and devices are left with no connectivity, it’s no big deal. However, for applications such as fire alarms, security and healthcare applications, Wi-Fi’s occasional unreliability is potentially dangerous. These devices will need their own connectivity option that is guaranteed to send the necessary data, whatever the conditions.
Bigger areas call for bigger bubbles
For a business that operates over large areas, such as a mine or farm, Wi-Fi is not usually the best option because it is fundamentally designed for short-range operation. Essentially, a wider area needs a bigger connectivity bubble. For these applications, low-power wide area networks like LoRa, Sigfox and cellular-based LPWANs come into their own. These networks allow for a much larger connectivity bubble and are ideal for operating sensor-based devices across a wide area at low cost. Devices correctly optimized for these kinds of networks use very little power to send and receive data, meaning batteries can last a very long time in the field without recharging.
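The battery-life claim is easy to sanity-check with a power budget. The numbers below are illustrative assumptions for this sketch, not the specs of any real radio, and the estimate ignores battery self-discharge, which dominates over such long horizons.

```python
# Back-of-the-envelope battery life for a duty-cycled LPWAN sensor.
# All figures are illustrative assumptions, not datasheet values.

BATTERY_MAH = 2400          # e.g. a pair of lithium AA cells
SLEEP_CURRENT_MA = 0.005    # deep-sleep draw
TX_CURRENT_MA = 40          # draw while the radio is transmitting
TX_SECONDS_PER_DAY = 6      # e.g. six one-second uplinks per day

def battery_life_days():
    """Days of operation: capacity divided by average daily consumption."""
    tx_hours = TX_SECONDS_PER_DAY / 3600
    sleep_hours = 24 - tx_hours
    mah_per_day = TX_CURRENT_MA * tx_hours + SLEEP_CURRENT_MA * sleep_hours
    return BATTERY_MAH / mah_per_day

print(round(battery_life_days() / 365, 1), "years")
```

The takeaway is the ratio: at these duty cycles the radio's transmit energy is comparable to the sleep current, so keeping both tiny is what makes multi-year field deployments plausible.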
These networks give devices a great deal of freedom with regards to moving around inside the coverage area of the network, but unfortunately, that might be as far as it goes. A LoRa network, although capable of covering very large areas, is still essentially a localized solution. If devices are expected to leave base, they will also be leaving the network.
A sea of Sigfox bubbles
Sigfox is a good option for moving devices, offering coverage in over 50 countries worldwide. Like LoRa, Sigfox can offer a bigger connectivity bubble, meaning devices can operate over a wider area. On top of this, Sigfox is a global network, meaning that unlike LoRa, devices will connect in London just as well as they will in Paris or Berlin.
There is, however, one drawback with Sigfox: Your devices will only work where there is a Sigfox network. Presently, Sigfox offers good coverage across many major cities in the countries covered, but not so much in rural or remote areas where there is less investment in technology and infrastructure. If your business case requires your devices to be connected and contactable everywhere they go, Sigfox might not be the answer for this reason.
Is there such a thing as a ubiquitous global LPWAN?
If the business case requires devices to move around and staying in contact is mission-critical, a truly global connectivity option is required. At present, there are a couple of ways to achieve this. One option is cellular data. Due to the ubiquity of GSM mobile, you can now find 3G or 4G LTE data coverage almost anywhere in the world. However, you may need to use and manage multiple SIMs and contracts, and relying on mobile data can lead to spiraling roaming data costs. On top of that, there are still some areas of the world where there is nothing more than 2G mobile coverage, meaning devices set up for mobile data just won’t work.
The second option is to use a global IoT connectivity service, such as that provided by Thingstream. Using the global GSM network, devices in more than 190 countries worldwide can connect as long as they are within reach of a GSM cell tower. Connected devices are ensured ubiquitous connectivity by connecting to the strongest GSM signal available. The network is accessible via 2G, 3G and 4G masts, making it possible to connect almost anywhere in the world, from big cities to the tiniest backwaters.
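"Connecting to the strongest available signal" boils down to a simple selection over scan results. The sketch below is a conceptual illustration with made-up tower names and RSSI values, not the modem logic of any real product.

```python
# Pick the best cell from a scan. RSSI is in dBm, where values closer to
# zero (less negative) mean a stronger signal. All values here are invented.

def pick_tower(scan_results):
    """Return the tower with the strongest (least negative) RSSI."""
    return max(scan_results, key=scan_results.get)

scan = {"tower-a (2G)": -97, "tower-b (4G)": -71, "tower-c (3G)": -85}
print(pick_tower(scan))  # tower-b (4G)
```

Because the selection is technology-agnostic, a device can fall back to a 2G tower in a remote area and use 4G in a city without any change in application logic.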
Happy trees: A perfect use case for Thingstream
When South African farmer Robert Carlton-Shields bought his 100-hectare macadamia nut farm in 2011, it presented him with a challenge: getting the right amount of water to each tree as and when required. To achieve this, he deployed moisture probes across the farm to collect the data needed to decide how much water to use.
Despite producing accurate data, this solution still presented Robert with problems. To know how much water was needed, he had to drive out to each sensor, download the data and process it before making a decision. The process proved time-consuming and costly. A wireless system was needed.
In remote areas such as the location of this farm, businesses are limited by what is available. In rural South Africa, that leaves very few options. Cellular data was tried first, but the process of topping up and managing multiple SIMs was still too inefficient and expensive.
A final working solution was found in systems integrator Pylot, who equipped each device with a Thingstream SIM card and software, allowing them to stay connected and report environmental conditions back to base in real time.
“I can now efficiently manage the irrigation of a 100 hectare farm and optimise machinery workflow, all from one cost effective and reliable IoT dashboard.”
– Robert Carlton-Shields, owner, R&K Estates Macadamia Farm
In finding the right solution, testing is key. It may seem obvious to test before choosing, but we've seen many cases where it happened the other way around, often at great cost. Other LPWA options were also considered and tested on the macadamia farm, but when it came to the critical topics of cost and coverage, the Thingstream-based system was the only immediately practical solution offering coverage of the whole farm.
“For IoT solutions out in the field there were previously two options: a LoRa network or a Sigfox solution. Both of these have significant challenges. With LoRa there’s a long and costly process of setting up a bespoke network. We’ve run into planning issues for the masts, and this can be just to provide a trial of the service – it’s simply not quick enough or cost effective. With Sigfox we’d be reliant on their network coverage – if you are lucky this can be available in densely populated areas, but it’s not a viable option for a remote farm covering hectares of land. Thingstream has a ubiquitous coverage footprint, is low power, cheap and can be up and running in hours. Combine that with the data collection and reporting platform they provide and this is an IoT revolution.”
– Trevor Hart-Jones, CEO, Pylot
Choose your IoT connectivity wisely
IoT connectivity is not a decision to be taken lightly. Every installation is different and must be assessed to find the best fit for the business case. Connectivity comes in many flavors and each has its merits; from the many available options, very few will actually be a perfect fit. The key takeaway here is to choose wisely based on research and testing. Making the wrong choice could bring the whole project back to square one.
A new report by the World Economic Forum found that keeping things “within” our economy, rather than letting items pass “through” it, could unlock a global growth potential of $4.5 trillion and also address the global challenges we face today, ranging from resource depletion to climate change. This is the core tenet of the circular economy, an ecosystem in which sustainability is embedded throughout a product’s lifecycle. In a circular economy, products are designed to be made with recycled components, are produced with longevity in mind, are built to be refurbished as much as possible and can ultimately be broken down into reusable metal and plastic parts at end of life.
Increasingly, tech companies are employing corporate sustainability programs that follow the principles of a circular system. Popular tech buyback programs and refurbishments remind us of the importance of the circular economy toward the end of a product’s lifecycle, but what many don’t realize is that the process begins at a product’s inception and is increasingly facilitated by emerging technologies such as the internet of things.
How IoT powers the circular economy
According to a European Commission report, 80% of a product’s environmental impact is influenced during the design process. Here, product designers and engineers make the production and manufacturing decisions that determine the energy efficiency, refurbishment costs and reusability factors that end users will face years into the future. By deploying IoT sensors, manufacturers can maintain a record of each component part’s makeup and origin to determine its environmental impact, and to help end customers make ethical decisions about the products they purchase and utilize. This attention to detail in a product’s initial stages will be critical throughout the life of the product as IT leaders use IoT sensors that are intuitive enough to enable visibility into real-time conditions, resource demands and future maintenance needs by analyzing data and performance.
From day one, end users are using IoT trackers on assets in the field to monitor key functions and essentially manage, diagnose and repair their enterprise tech products, both at the plant and remotely. These assets can include anything from machinery on a factory floor to wind turbines in an energy grid to IT equipment in a data center.
While the average lifespan of a piece of enterprise technology is about three to five years, IoT sensors can read product performance and match it with data intelligence throughout that lifespan, alerting IT leaders to potential systems issues, necessary system-wide updates or maintenance needs before anything goes haywire. This level of intelligent, autonomous diagnosis via IoT can save millions of dollars in repairs, replacements and failure costs in the long run. What would traditionally have required downtime or technical support is simplified through IoT sensing, minimizing downtime and unnecessary equipment overhauls.
Data collected through IoT sensors also provides valuable business insights to IT and finance leaders alike. The analyzed data reveals needs and inefficiencies in energy use, underutilized assets, materials consumption and stock inventories, helping leaders make informed, data-backed decisions while fueling sustainability.
Refurbishment and accountability
As a product runs through its lifecycle, it may naturally pass through a number of users, undergoing refurbishments that keep it in use in the circular economy. As security threats continue to evolve, so must the technologies to defend against them. IoT systems, combined with emerging AI and blockchain technologies, can defend against cyber-risks throughout the product lifecycle by ensuring secure authentication and advanced scanning and detection. Increasingly, value chain players are utilizing IoT-generated “product passports,” which give a comprehensive view of the product as it passes through the economy: past service tickets, recent software updates and potential future hiccups or security breaches. The product passport can also include repair and disassembly instructions to facilitate reuse and recovery in the hands of a future owner. These passports are a great alternative to shredding hard drives to protect information.
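A product passport is, at bottom, a portable history record keyed to the asset. The sketch below illustrates the shape of such a record with hypothetical field names; it does not follow any published passport standard.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical product-passport record: a history that follows the asset
# through each owner and refurbisher. Field names are illustrative only.

@dataclass
class PassportEvent:
    date: str
    kind: str    # e.g. "service", "update", "refurbishment", "transfer"
    detail: str

@dataclass
class ProductPassport:
    serial: str
    components: List[str]
    events: List[PassportEvent] = field(default_factory=list)

    def record(self, date, kind, detail):
        self.events.append(PassportEvent(date, kind, detail))

    def history(self, kind):
        """All events of one kind, e.g. every past refurbishment."""
        return [e for e in self.events if e.kind == kind]

passport = ProductPassport("SRV-1138", ["chassis", "PSU", "mainboard"])
passport.record("2019-03-01", "update", "firmware 2.4 applied")
passport.record("2019-06-12", "refurbishment", "PSU replaced, drive wiped")
print(len(passport.history("refurbishment")))  # 1
```

A future owner reading this record knows what was serviced, when, and how to disassemble the unit, which is exactly the visibility that makes refurbishment preferable to destruction.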
While the prospect of IoT has delivered unprecedented connectivity and data integrity, it has also helped pave the path for a more sustainable product lifecycle. This not only spares the environment from unnecessary waste and production byproducts, but it also helps organizations and IT leaders maximize their technology and trim costs associated with product repairs, unnecessary downtime and underutilization.
With the rapid growth of connected IoT devices, device management strategies have become increasingly important for industrial IoT deployments. In fact, Gartner is predicting that the number of IoT devices will surpass 20 billion by 2020. This explosive growth across industries such as manufacturing, healthcare and more has generated greater demand for scalable, turnkey IoT device management technologies.
Industrial organizations are looking for better ways to monitor and control fleets of intelligent devices, with seamless connectivity and the flexibility to accommodate an ever-changing device landscape. Some organizations have anticipated this growth and developed a correspondingly robust device management strategy. For these organizations, this rapid proliferation — both in number of devices deployed and their diversity — isn’t daunting. But if you fall into the category of an organization that has yet to adopt a device management strategy, then it is time to act.
Before diving headfirst into developing a skunkworks solution or asking IT to allocate budget, set yourself up for success by first addressing the basics. These most commonly asked IoT device management questions can help you to evaluate the structure of your organization and determine the best device management strategy to achieve your IoT goals.
1. What is the nature of IoT devices?
IoT device management often deals with mission-critical, industrial equipment, where uptime is crucial. These are the types of devices that perform functions that are key to a business’ core operations. So, if one is compromised or fails, the repercussions are felt throughout the entire organization. The variety and complexity of IoT devices is also extremely broad. The spectrum stretches from two-dollar temperature sensors to million-dollar wind turbines, which is why IoT device management systems are designed to control a range of device types with varying levels of sophistication.
2. What is the focus of IoT device management?
Organizations that turn to IoT device management are looking to take their IoT to the next level and enable advanced functionality. For example, some companies use IoT device management to get more from their digital twins — virtual representations of physical objects in the digital domain, whose information is typically stored and updated in a device registry. Advanced digital twin design allows companies to analyze devices as a group and model their behavior as a collective.
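The registry-plus-twin pattern described above can be sketched briefly. This is a toy illustration under assumed names (`DigitalTwin`, `DeviceRegistry`, the `pump-*` IDs are all hypothetical), showing how per-device state reported from the field supports fleet-level queries.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    """Virtual stand-in for one physical device (names are illustrative)."""
    device_id: str
    state: dict = field(default_factory=dict)      # last reported state
    telemetry: list = field(default_factory=list)  # recent measurements

class DeviceRegistry:
    """Keeps twins up to date and supports queries across the fleet."""
    def __init__(self):
        self.twins = {}

    def report(self, device_id, **state):
        twin = self.twins.setdefault(device_id, DigitalTwin(device_id))
        twin.state.update(state)

    def fleet_where(self, key, value):
        """Analyze devices as a group: all twins whose state matches."""
        return [t.device_id for t in self.twins.values()
                if t.state.get(key) == value]

registry = DeviceRegistry()
registry.report("pump-1", status="ok")
registry.report("pump-2", status="fault")
print(registry.fleet_where("status", "fault"))
```

Commercial platforms add synchronization, access control and history, but the core idea of modeling the collective through a registry of twins is the same.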
IoT device management can also help businesses unlock predictive capabilities by extending its reach into the field. They can analyze historical data such as device state, telemetry and prior failure information, which can be matched against current fault data and against comparable devices for root cause analysis. For example, an oil refinery can use data insights about a pump's health, as well as like assets across a population, to foresee an upcoming failure.
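The matching step in that pump example can be sketched as comparing current telemetry against failure signatures distilled from historical data. The signatures, metric names and thresholds below are entirely hypothetical, chosen only to make the shape of the technique concrete.

```python
# Hypothetical failure signatures distilled from historical telemetry;
# metric names and thresholds are illustrative only.
FAILURE_SIGNATURES = {
    "bearing_wear": {"vibration_mm_s": 7.0},
    "cavitation":   {"pressure_kpa_drop": 50.0},
}

def likely_failures(current):
    """Match a device's current telemetry against known failure patterns."""
    matches = []
    for name, pattern in FAILURE_SIGNATURES.items():
        if all(current.get(metric, 0) >= threshold
               for metric, threshold in pattern.items()):
            matches.append(name)
    return matches

print(likely_failures({"vibration_mm_s": 8.2, "pressure_kpa_drop": 10}))
```

A production system would learn these signatures statistically across the whole population of like assets rather than hard-coding them, but the lookup from symptoms to probable root cause works the same way.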
3. How much can IoT devices scale?
As the population of connected devices grows exponentially by the day, there's seemingly no limit to how far the ratio of devices to people can skew in favor of the former. In a modern IoT deployment, it is not uncommon for the scale of devices to grow to hundreds of thousands, millions or even tens of millions of devices. The sheer number of devices generates countless extra details and issues, which can lead to a host of scalability problems that only a purpose-built device management strategy can solve.
4. How frequently do IoT devices need to be updated?
In a modern IoT deployment, cloud-based systems need to update IoT devices often. Although devices vary broadly, many types include human-facing aspects used for marketing or training purposes. For example, a connected vending machine may require content updates that can include images or videos, which go well beyond the capabilities required for your average configuration change or software update. It’s not inconceivable that this content will need to be updated as frequently as daily.
As connected devices become more prevalent and IoT adoption continues to spread, the challenges surrounding device management will only grow. The best way a business can equip itself to navigate the changing landscape is to approach digital transformation efforts with careful planning and the right strategies. To set yourself up for success, continue to innovate on features that keep you focused on business goals and make sure key enablers are in place to support the long-term scalability of your device management strategy.
Connected cars are moving past commonplace and becoming standard. According to data from Statista, by 2020 connected cars will account for 98% of the global car market, with 100% market saturation by 2025. That’s just six years from now. The implications for connectivity at this scale are immense, especially on the security infrastructure side.
Within the connected transportation ecosystem, the actual vehicles are the most visible part. But behind the scenes, the most important part of such an environment is the underlying electronic standards that make it possible. Any distributed and connected system requires everyone to work from the same page, and connected cars are no exception; the engineers, auto manufacturers, maintenance personnel and regulators all need to be on the same technical page so they can work together seamlessly.
Making standards standard
There's a precedent for useful and connective standards in the automotive industry. For example, there's the on-board diagnostics II (OBD-II) standard, which has been mandatory for all cars made or sold in the United States since 1996. This standard specifies the diagnostic connector (typically located near the driver's left knee), its pinout, the signaling protocols available and the messaging format. Vehicle parameters and guidance on encoding the data are also included.
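That encoding guidance is what lets any scan tool talk to any car. As a small illustration, here is a decoder for two widely documented mode 01 parameter IDs: engine RPM (PID 0x0C, computed as ((A * 256) + B) / 4) and vehicle speed (PID 0x0D, the raw byte in km/h). The sketch covers only these two PIDs; a real tool would handle the full parameter table.

```python
def decode_pid(pid, data):
    """Decode a mode 01 OBD-II response for two well-known PIDs.

    0x0C: engine RPM    = ((A * 256) + B) / 4
    0x0D: vehicle speed = A  (km/h)
    `data` holds the payload bytes (A, B, ...) from the response.
    """
    if pid == 0x0C:
        a, b = data[0], data[1]
        return ((a * 256) + b) / 4.0
    if pid == 0x0D:
        return data[0]
    raise ValueError("PID not handled in this sketch: 0x%02X" % pid)

# ((0x1A * 256) + 0xF8) / 4 = 1726 rpm
print(decode_pid(0x0C, [0x1A, 0xF8]))
print(decode_pid(0x0D, [0x3C]))  # 60 km/h
```

Because every compliant vehicle encodes these bytes the same way, a generic tool works at any repair shop, which is exactly the interoperability payoff the article describes.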
The results of the OBD-II standard were and remain profound. Drivers were no longer tied to the dealer for repairs; they now had the flexibility to take their car to any experienced mechanic, at any repair shop, to have their car serviced and maintained using OBD-II standard diagnostic tools. Within IT there are analogous standards. For example, the standardization of the Transport Layer Security (TLS) protocol enables users of any browser — Chrome, Firefox, Safari, Edge, etc. — to reasonably expect that they can securely navigate to any webpage hosted by any webserver — Apache, NGINX, Tomcat, IIS, LiteSpeed, etc. — without worrying about compatibility.
Open standards are necessary for connected cars to provide enhanced safety and better use of data. Manufacturers, regulators and service providers need such standards to build vehicles independently yet still create interoperable products. The vehicles need to seamlessly and securely communicate not only with each other, but also with other elements in a smart city, regulatory agencies and other outside services.
How do we get these standards? It doesn't require a lightbulb moment of discovery, but instead diligent and patient work. Teams of people will need to collaborate in a similar fashion to the Internet Engineering Task Force (IETF), whose members generously donate large amounts of their time developing the standards that make the internet work.
Current vulnerabilities and potential safeguards
While there are stories of engineers hacking connected cars, it's important to remember that these attacks in non-lab conditions are statistically insignificant. There's no epidemic of breaches, as we're at the very nascent stages of the smart car era. This means the industry still has time to step back and assess. It's during this period that the industry must get things right. This means working with the government and organizations such as the IETF and the Society of Automotive Engineers (SAE, whose J1939 standards define how vehicle systems communicate over the CAN bus). Academia needs to jump deeper into automotive IoT to help create meaningful and verifiable standards for connected vehicles. The connectivity, security, maintainability and reliability of the vehicles all warrant protection through smart design and universal standards. Without such agreement from manufacturers, subcontractors, OEM vendors and regulatory organizations, the connected car promise is no longer practical and certainly not economical for those involved.
There's a convergence of operational technology (OT) and IT within the industry. We have OT-related technology such as the CAN bus, a standard allowing manufacturers to build vehicle control systems that enable microcontrollers and devices to communicate without a host computer. Cars are essentially moving industrial control systems, and the requirements for these types of systems are reliability and safety. Contrast that with IT, such as that found in mobile phones. Consumers accept that their phones work flawlessly 96% of the time in exchange for occasional restarts and better graphics. Automakers can't take that remaining 4% risk of brake or transmission failure.
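Two properties of classical CAN explain how it achieves host-free reliability: frames carry an 11-bit identifier and at most 8 data bytes, and when two nodes transmit at once, the lower identifier wins arbitration, so safety-critical messages can be given priority by design. A minimal sketch (the specific identifiers for the brake and infotainment frames are hypothetical, not real assignments):

```python
from dataclasses import dataclass

@dataclass
class CanFrame:
    """Simplified classical CAN frame: 11-bit ID, up to 8 data bytes."""
    can_id: int   # 11-bit identifier; lower value = higher priority
    data: bytes   # payload, 0-8 bytes

    def __post_init__(self):
        if not 0 <= self.can_id < 2 ** 11:
            raise ValueError("identifier must fit in 11 bits")
        if len(self.data) > 8:
            raise ValueError("classical CAN carries at most 8 data bytes")

def arbitrate(frames):
    """On a shared bus, the frame with the lowest identifier wins."""
    return min(frames, key=lambda f: f.can_id)

brake = CanFrame(0x010, b"\x01")        # hypothetical ID assignments
infotainment = CanFrame(0x500, b"\x42")
print(hex(arbitrate([infotainment, brake]).can_id))
```

The arbitration rule is why, on a well-designed vehicle network, a chatty entertainment node cannot starve the brake controller of bus time.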
Problems arise when automotive manufacturers take shortcuts. For example, they might converge IT and OT onto a single "super-bus" network. IT must be open to communicate with the outside world. Consumers want apps such as Pandora or Waze in their cars. The problem is that the open interface making those apps possible also opens the system to bad actors. When IT and OT systems are converged, the avenue of attack extends beyond audio controls and navigation into steering, braking or engine control.
Securing device-to-device communications and interoperability presents multiple challenges in the IoT environment. Without a prevailing standard for automotive security, many industry and technology experts struggle to navigate a variety of protocols, processes and compatibility issues. The auto industry outpaces many others in the IoT space with a comparatively high number of in-automobile devices (all requiring security technologies) that are now being connected. In cases where traditional encryption methods simply don’t fit well, such as public/private key certificate management between small components, a reliable lightweight commercial encryption mechanism is needed.
A non-PKI framework for exchanging strong symmetric encryption keys between devices (within the car), while also capable of scaling to cloud and global services levels, would be a real game-changing and disruptive technology that meets critical security needs head-on. Integrating scalable security systems in the automotive and telematics spaces is still a loosely defined practice, ready for a next-generation solution.
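One building block such a non-PKI framework might use is deriving fresh per-device symmetric session keys from a factory-provisioned master key, so small components never exchange certificates. The sketch below is an HMAC-SHA256, HKDF-style derivation using only the Python standard library; it illustrates the idea, and is not a vetted automotive protocol. All key material and device names are made up.

```python
import hmac
import hashlib

def derive_session_key(pre_shared_key: bytes, device_id: bytes,
                       nonce: bytes) -> bytes:
    """Derive a symmetric session key from a pre-shared master key.

    HKDF-style HMAC-SHA256 derivation: each (device, nonce) pair yields
    a distinct 32-byte key without any public-key infrastructure.
    """
    return hmac.new(pre_shared_key, device_id + nonce,
                    hashlib.sha256).digest()

master = b"factory-provisioned master key"   # illustrative value only
k1 = derive_session_key(master, b"ecu-brake", b"nonce-0001")
k2 = derive_session_key(master, b"ecu-infotainment", b"nonce-0001")
print(k1.hex() != k2.hex())  # each component gets its own key
```

Because derivation is deterministic, two parties that share the master key and the nonce compute the same session key independently, which is what makes the approach lightweight enough for small components; protecting the master key then becomes the central design problem.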
Mundane and memorable at the same time? That’s how I felt on my virgin self-driving voyage at CES in Las Vegas earlier this month. Lyft, the rideshare company, and Aptiv, the mobility company building driverless technologies, offered me the chance to ride the Strip and nearby highways in their new retrofit BMW Series 5. My ride was both underwhelming (I’ve had more clutch moments in a Vegas cab) and overwhelming (all that technology) at the same time.
Turns out that I’m not so special either. There are 30 of these cars in Las Vegas that are now part of the Lyft fleet. Lyft riders in Vegas can actually opt-in for a self-driving ride right on the app. If there’s one nearby, it’s yours. And Lyft has already completed 30,000 of these rides around the country.
The self-driving experience turns out to be a bit oxymoronic. Driverless cars in Vegas have drivers. Actually two. I had a human driver behind the wheel and an Aptiv representative in the passenger seat. Plus, I had a Lyft PR person and my photographer. That made for a crowded ride.
The human driver is there as a precaution should the system fail. It’s also still illegal to have driverless cars on private property in Vegas, which meant that we were on human control while entering and exiting the hotel grounds. Once we got into the driverless mode, I had to peer over the driver’s shoulder to make sure that there was no hanky-panky with a foot on the brake or a hand on the wheel. The driver was pretty chill and into the experience, especially when you consider he’s on his way to obsolescence.
You have to be a bit of a geek to appreciate the exciting part. The driverless experience is like a bunch of IoT devices that forgot to include you in the party. There are 21 different sensor devices. The car's lidar sensors, which work like radar but use laser light, are mounted at the front beneath the grille and on the mirrors, making it look like any other normal car, unlike the bizarre lidar-topped Google cars that were so popular a few years back. There are also radar sensors mounted on either side of the vehicle.
State-of-the-art GPS, along with IMU (inertial measurement unit) sensors, works to pinpoint the location of the car to within a quarter of an inch at all times. Altimeters, gyroscopes and magnetometers work in unison to constantly transmit the car's exact position. At the same time, the cameras are surveying the landscape for lane-changers, crazy pedestrians and other obstacles. The combined effort of these thousands of dollars' worth of sensors basically digitizes the world around you and makes it actionable for your car.
You get to watch the whole driving show on a visual display that looks a lot like SimCity. Buildings appear as squares, cars appear as moving car-blobs and roads show their traffic stops, turn lanes and more. It’s by no means realistic, and most of us, including our guides, couldn’t quite explain what the color-coding meant on the display. In the back seat of the car, however, was a full-screen GPS display showing where we were at all times.
Of course, there are lots more features, some of them already found in semi-autonomous cars. Real-time data analysis can find the fastest route through Vegas traffic, and these systems are even beginning to predict potential problems faster than a human could, like a driver cutting across lanes.
My ride was short and totally uneventful. We kept within the excruciatingly slow speed limit. We passed one car (yeah!) and had zero close calls. Any bets on when I’ll be taking my first self-driving ride without the human driver backup? Not sure when, but I’m pretty sure it will be at CES.
Blockchain has been encroaching into all areas of life for more than five years now. Financial services are at the forefront of blockchain development, but logistics, utilities, security and government services are quickly gaining ground. An ever-increasing focus on efficiency and security makes this the technology to watch. However, the interaction between the all-encompassing IoT and blockchain has been the subject of much debate, and this uncertainty is the reason many companies are reluctant to invest in blockchain technology.
A look at the challenges
Centralization vs. decentralization
The central values of both IoT and blockchain seem incompatible. IoT gravitates towards centralization, ensuring all IoT devices are connected to one main server or network for all of the benefits this affords, while the premise of blockchain is decentralization. Blockchain ensures that not all participants in an interaction have to trust each other by utilizing a disparate network of nodes, thus creating the legendary security for which blockchain is known.
IoT devices rely on speed, one of the major benefits of the centralized processing system. Whether it’s a phone, a smart card or some other customer-oriented device, timeliness is central to the usefulness of IoT devices. However, blockchain is inherently slower. It takes time for a decentralized system to come to a consensus, and this kind of delay would be impractical for many IoT devices. For example, you can’t have a smart car waiting 10 minutes to make a decision that needs immediate attention.
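A toy proof-of-work loop makes the latency point concrete: reaching agreement in a decentralized system typically means doing deliberately expensive work, and each added difficulty digit multiplies the expected search time roughly 16-fold. This sketch illustrates the cost, not any particular blockchain's consensus rules.

```python
import hashlib

def mine(payload: str, difficulty: int) -> int:
    """Find a nonce whose SHA-256 hash of payload+nonce starts with
    `difficulty` zero hex digits. A toy proof of work: the brute-force
    search is the deliberate cost that slows decentralized consensus."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{payload}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

print(mine("sensor reading: 21.5C", 3))
```

At difficulty 3 this finishes in milliseconds, but production networks tune difficulty so blocks take minutes, which is exactly the delay a latency-sensitive IoT device cannot tolerate.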
Blockchain is widely lauded for its security benefits achieved through asymmetric cryptography. Users hold a private key to create a digital signature of every asset on the blockchain, proving ownership. If anyone gains access to this key, they effectively own your property. In IoT, each device performs its own business, and no human interaction is needed between the device and the network it connects to. How to marry these two concepts is one of the major hurdles of any IoT blockchain technology: Would each device store its own private key? What happens when that device is hacked?
Finally, there is the disparity caused by the age of the blockchain market versus the age of the IoT market. Many IoT sensors are designed to be operational for years at a time, but blockchain technology is in its infancy, constantly changing and, at times, quite volatile in its development. Any IoT device incorporating blockchain technology must take this into account to ensure that it doesn’t immediately become obsolete.
A look at the possibilities
If the logistics can be managed, blockchain implementation in IoT devices promises a solution to the difficult problem of running secure, efficient and independent IoT systems. Privacy and security in IoT is a fundamental concern, especially given the vast amounts of personal data IoT collects and transfers on a daily basis. Ensuring a secure process between validated, legitimate stakeholders is of primary importance, and blockchain ensures this through user verification and provenance checks.
Blockchain also allows for consensus and agreement models to detect hackers and unverified users, thereby mitigating cyberthreats. And on a system with tens of thousands of IoT nodes, the possibility of hacking the network is remote.
Blockchain allows for automated interactions to occur between different nodes in the system, predicated on preset embedded criteria, without the need for communicating with the central network. This means that business logic can be executed automatically, without human intervention, significantly streamlining processes and increasing efficiency of IoT devices while still maintaining high levels of security.
Model tracking allows a system to record metadata and the results of logic executed by the network, creating an immutable history of what has happened and what (and why) certain decisions were made during IoT processing. This allows regulatory compliance, troubleshooting and model improvement to become an integral part of every system.
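The immutable-history property comes from chaining each record to the hash of the one before it, so tampering with any earlier entry invalidates everything after it. A minimal standard-library sketch of such an audit log (the example records are invented):

```python
import hashlib
import json

def append_block(chain, record):
    """Append a record linked to the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    chain.append({"record": record, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})
    return chain

def verify(chain):
    """Recompute every link; returns False if any block was altered."""
    prev = "0" * 64
    for block in chain:
        body = json.dumps({"record": block["record"], "prev": prev},
                          sort_keys=True)
        if block["prev"] != prev or \
           hashlib.sha256(body.encode()).hexdigest() != block["hash"]:
            return False
        prev = block["hash"]
    return True

log = []
append_block(log, {"decision": "reroute", "reason": "sensor fault"})
append_block(log, {"decision": "resume", "reason": "fault cleared"})
print(verify(log))           # True
log[0]["record"]["reason"] = "tampered"
print(verify(log))           # False
```

A real deployment replicates this chain across many nodes so no single party can rewrite it, which is what turns a simple hash chain into the auditable record the paragraph describes.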
IDC estimated that by 2019 as many as 20% of IoT deployments would include blockchain technology. This growth isn't coincidental, as many large companies have been exhibiting their IoT blockchain technologies for several years.
In 2017, SAP and IBM partnered to demonstrate how IoT and blockchain can automate a pharmaceutical supply chain. Individual units needed to be refrigerated to specific temperatures which were measured via sensors, and that data was fed back to the main network. SAP’s Leonardo software platform was combined with IBM’s blockchain cloud service to create a system that could track these sensors, manage unit refrigeration and ensure unit security from shipping through to final retail delivery, with the transactional history of the process visible to all parties.
This same methodology is being replicated in IBM’s newly launched IBM Food Trust, which is already being used by Walmart to track vegetables from farm to table. IoT provides the monitoring and tracking infrastructure, and blockchain provides a trusted way to audit whether the movement of goods complies with policy.
The promise of IoT blockchain benefits, unfortunately, does not mean the challenges listed above are solved. A lot of infrastructure, integration, technology and governance work is needed before integrated IoT and blockchain technologies are widely available. Small steps taken by companies like IBM are the first in a long journey toward designing an IoT system with built-in blockchain capabilities.