Almost four years ago, I wrote two posts in my IoT blog — “Are you prepared to answer the M2M/IoT security questions of your customers?” and “There is no consensus on how best to implement security in IoT” — explaining the importance that security has to fulfill the promise of the internet of things.
I have been sharing my opinion about the key role of security in IoT with other international experts in articles including “What is the danger of taking M2M communications to the internet of things?” and at events including Cycon and the IoT Global Innovation Forum 2016.
Security has always been a tradeoff between cost and benefit; the opportunities generated by IoT far outweigh the risks.
But who cares about security in IoT?
A decade of breaches and the biggest attack target yet is looming
We all know the negative impact that news about cyberattacks has on society and enterprises. In less than a decade and according to ICS- CERT, incidents have increased from 39 in 2010 to 295 incidents in 2015.
In a survey published by AT&T, the company logged a 458% increase in vulnerability scans of IoT devices in the last two years.
It is a temptation for hackers to test their skills on connected objects, whether they are connected cars or smart homes appliances. But I’m afraid they will go far beyond attacking smart factories or smart transportation infrastructure or smart grids. With millions of unprotected devices out there, the multitude of IoT networks, IoT platforms and developers with lack of security, I believe the biggest attack target yet is looming.
With the internet of things, we should be prepared for new attacks and we must design new essential defenses.
The OWASP Internet of Things Project is designed to help manufacturers, developers and consumers better understand the issues associated with security in IoT, and to enable users in any context to make better security decisions when building, deploying or assessing IoT technologies.
Who owns the problem?
With IoT, we are creating a very complicated supply chain with lots of stakeholders, so it’s not always clear who “owns” the problem.
Manufacturers can’t divest themselves of responsibility simply because the home owner bought several component parts from different retailers. As a manufacturer, you have a responsibility to ensure that your product is secure and reliable when used in any possible scenario and use case, which means that manufacturers need to work together to ensure interoperability — we all own the problem!
This might come as a shock to some companies or industries, but at some level even competitors have to work together to agree and implement architectures and connectivity that is secure and reliable. Standardization is a good example of this. If you look at the companies actively working together in ISO, ETSI, Bluetooth SIG and so on, they are often fierce competitors, but they all recognize the need to work together to define common, secure and reliable platforms around which they can build interoperable products.
If cybersecurity is already top of mind for many organizations, why the lack of security in IoT?
According to the AT&T State of IoT Security 2015 survey, 85% of global organizations are considering exploring or implementing an IoT strategy, but the bad news is that only 10% are fully confident that their connected devices are secure.
It scares me that only 10% of developers believe that most IoT devices on the market right now have the necessary security in place.
In a publication from Ernst & Young titled “Cybersecurity and the IoT,” the company defines three stages to classify the current status of organizations in the implementation of IoT security:
- Stage 1: Activate — Organizations need to have a solid foundation of cybersecurity.
- Stage 2: Adapt — Organizations must adapt to keep pace and match the changing business requirements and dynamics, otherwise they will become less and less effective over time.
- Stage 3: Anticipate — Organizations need to develop tactics to detect and detract potential cyberattacks.
What enterprises need to do
If you are thinking only about the benefits of IoT without considering security as a key component in your strategy, you will probably regret it very soon. Here are some recommendations to consider before you start your IoT journey; or if you are already started, I hope it is not too late for wise advice:
- Adopt a comprehensive framework and strategy for IoT with end-to-end security and prioritize security as a key IoT technology element.
- Conduct a full audit and assess likely risks within IoT initiatives. Prioritize the opportunities and risks of deploying IoT.
- Bake security into devices and processes early. Include embedded device testing, firmware, protocols, cloud and application security assessments.
- Mobilize the larger workforce around IoT security.
- Bring partners up to rigorous security standards. Evaluate third-party partners with expertise.
- Rethink the roles of IT and OT.
With the proliferation and variety of IoT devices, IoT networks, IoT platforms, clouds and applications, we will see new vulnerabilities and a variety of new attacks over the next few years. The progress in security technologies and processes that prevent these attacks will be key for the adoption of IoT by both enterprises and consumers.
In the future IoT world, an end-to-end security approach is critical to protect physical and digital assets. The ecosystems of this fragmented market must understand the need of security by design and avoid the temptation to reduce costs at the expense of security.
Do not stop asking for security when you buy a connected product or use an IoT service; the temptation of time to market, competitive prices and lack of resources must not be an excuse to offer secure IoT solutions to enterprises, consumers and citizens.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
I recently had the privilege of speaking at the SIIA’s conference on “Deciphering the internet of things” in San Francisco. The session started with a report that I had come across two years back — the World Economic Forum’s 2015 report on the industrial internet. The report’s framework concerning the adoption of IoT is still applicable today. Here is a brief summary:
While the world is recognizing the potential for IoT to deliver value, it is still in the early stages. We are only beginning to unlock the full potential of IoT, specifically industrial IoT. Many large industrial companies, while well-established, are not quick to change. Startup and tech companies move at a more rapid pace and strive for constant innovation. These two types of companies must come together for IIoT to move further along.
The four waves of IIoT adoption
There are four waves of adopting IIoT, and currently we are still in the first wave: operational efficiency. This includes activities such as asset utilization, operational cost reduction and worker productivity. These types of activities can produce fast results. Therefore, ROI can be readily determined and it is easier to fund these types of projects.
The first wave lays the foundation for the infrastructure required to drive the next wave. The second wave is new products and services. This consists of new business models, software-based services and data monetization. For example, Michelin sells tires. But if the company sells tire as a service, it takes on monitoring the usage of the tires as well as replacing them when they exceed the maximum mileage. Michelin now creates more value and generates higher revenue. The second wave connects both products and services in a way that improves each.
The third wave of adoption is the outcome-based economy. This wave drives unconventional revenue from products and services. It spans three areas: products, equipment services and information services. For example, the traditional offering is a product with a service contract attached to it. And there is usually a set of information services that come with it, such as maintenance, inspection and monitoring. However, products are increasingly digital. For instance, tractors are connected with sensors. Once connected, a tractor can now be enabled with remote diagnostics and optimization services. This is the very beginning of digital services. With the sensors, the farm equipment is capable of soil, plant and equipment analysis. As this kind of example grows, there will be a new marketplace for agricultural information services.
The final wave of IIoT adoption will be the autonomous pull economy. This is an economy where automation occurs end to end. It results in resource optimization, waste reduction and even continuous demand sensing. This final wave will unlock value on multiple fronts. Consider the example of the Rio Tinto autonomous mine. The mine covers all the different aspects of IIoT — heavy machinery, transportation, critical operations and high downtime costs. See the video for more info:
Even though some of the leaders like Rio Tinto have taken the leap, this wave is still under development. Every single company will have its own unique approach as operations move from manual to self-organizing and demand-driven.
Technology and IoT
According to a recent report published by ReadWrite, there are about 2,888 companies focused on delivering IoT technology and solutions. This follows a report by IoT Analytics covering the 450 IoT platforms that are available today. Yet, according to a survey done by the World Economic Forum, only a small fraction of the companies that could use this technology have a funded budget for IIoT.
The technology vendors, new as well as established, are investing heavily in addressing the technology challenges that arise because of IoT. As a result, even though the ecosystem and standards are still in flux, technology is available for buyers who are looking to get ahead of their competitors. The biggest emerging IoT opportunities are created by companies that can deliver value today by combining deep domain expertise while also laying the foundation for the later adoption waves of IoT. Successful companies have in-house experts with domain, industry and technical knowledge that is combined together to deliver solutions to the problems that these industries are facing.
As IoT adoption grows, the value of IoT in the near future will no longer be based on how many billions of things are connected. It is not even the terabytes of data that are being generated from these connected things. It is the valuable insights that are being unlocked to generate new revenues, reduce costs or mitigate risks.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
With IDC predicting that 30 billion “things” will be in use by 2020, it’s clear that the internet of things is playing a critical role in many companies’ digital transformation strategies — whether to enhance customer experiences or improve operational efficiencies.
While most enterprises are already enjoying some benefits, they often struggle with a series of challenges when attempting to aggregate available technologies and devices to maximize the potential of their IoT ecosystems. Most of these IoT device management challenges are common, irrespective of the industry vertical or the specific problem they’re trying to solve. Working with some early adopters, we’ve identified five key challenges and ways to address these in your organization:
Defining device needs. First you need to identify what you want to measure to obtain the necessary insights to support your business decisions and define the device types required to do this. Picking the right devices depends on the atmospheric conditions/locations where they will be deployed, connectivity options available, source of power, local data processing capabilities, remote management and monitoring capabilities, and a way to extract and analyze the data.
For instance, consider an organization that wants to monitor and control the temperature of its geographically distributed warehouses. Key criteria include the sensor’s coverage range; warehouse size; inter-sensor connectivity options, such as BLE, XBee and Z-Wave; and the sensor’s internet connectivity options, such as gateway support. Some deployments might even require local computing (edge computing) capabilities to enable these to work in offline mode when connectivity to the cloud is unavailable. Devices may not be limited to sensing and will have actuators that can do things on command.
IoT data integration. Once you’ve deployed the required devices, the next challenge is to seamlessly integrate them with existing applications to ensure the data collected and transmitted is sanitized and error free. An IoT gateway helps bridge the internal network of sensor nodes and the organization’s external infrastructure.
Data from a deployment, such as a warehouse, could end up going through several physical layers before reaching the cloud application layer given the communication and computing capabilities of sensors. Once devices are connected to the IoT platform, making these available externally is a challenge. On one side, devices bring in sensor data that needs to be stored, summarized and grouped or that requires real-time decisions.
Meanwhile, there are device actuations exposed externally via proper authorization, and some may have management interfaces that will allow its functions to be controlled. For example, a cooling plant inside the warehouse can be programmed to operate from 6:00 am to 6:00 pm. Once the data is received and control functionality and management capabilities of devices are connected to an IoT platform, you need to expose these via managed APIs for external parties to make use of this data.
Device management. Once a properly working system is established, the next challenge is to streamline the day-to-day management of these devices given the business’ dependence on the available data. This includes the ability to monitor a device’s outgoing performance, push updates to remote devices, and carry out resets as needed to ensure proper maintenance. To do this, you need to ensure your cloud-based server has detailed records of every device that’s connected so they can be programmed.
Sensors in the warehouse, for instance, will have a preconfigured pattern of pushing data into the cloud, and the data will have a networking route through which it reaches the IoT platform where failure and anomaly detections are done. It also must be able to perform predictions based on historical data to support business demand. The type of operating system that runs on the device/gateway, too, has a direct impact on how you sync the device within a distributed device deployment. For example, a fairly powerful device running Android or a lightweight Linux variant will have existing platforms through which they can be updated.
Scalability. Another challenge is the deployed device network system’s ability to scale to accommodate future needs. Scaling, however, can be multifaceted. It could be device deployment at the ground level, computing in the edge gateways or related to the IoT platform that facilitates all communication. Given the high cost of deploying devices, it is essential for you to plan for failover and scaling for future needs. Failover is usually achieved through duplicate devices or reconfiguring another device to take on additional load. Scaling involves demanding increased physical actions from devices or an increase in computing actions by the edge gateway or central IoT platform. In both these scenarios, the IoT platform or edge computing platform’s ability to scale and remote re-configurability will play a vital role.
Security. The last, and possibly most critical challenge, is to ensure the organization’s now fully functional system is completely secure and not vulnerable to threats like tampering or loss of sensitive data. Security threat levels might not be the same for all devices, but the technology platform should include a security layer that will prevent potential risks in each instance.
Security in an IoT platform is applicable at multiple levels. It can be between devices to the platform via multiple communication hops, communication between components within the platform, data received from devices, and how devices are stored and shared with other systems. The security layer will typically address these scenarios with policy-driven device management by enabling compliance monitoring for applied policies and role-based access control.
While connected devices are playing a central role in organizations’ digital transformation, the key is to ensure a centralized management environment for efficiently managing all of these devices and extending its benefits. And to do this, in addition to just deploying IoT devices, enterprises should incorporate a complete technology platform with seamless integration, smart analytics and security capabilities to address the common issues that arise when managing these devices.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
The internet of things promises a simpler, more intuitive world. Instead of asking people to do special things for the sake of technology, like pushing buttons, navigating screens or following a specific sequence of steps, products can now be designed around natural human experiences. Google Home and Amazon Echo do away with the interface entirely: Just speak into the air, and your wish is granted. (Steve Wilson, Citrix VP of product for Citrix Cloud and IoT, has a great blog on this: “4th gen user interface“). It’s a radical transformation, and one with thrilling potential — but the first step is to change the way engineers think about product design.
I don’t mean to bad-mouth engineers — I am one myself. But it’s our nature to approach things from an engineering perspective: How do I make these technologies fit together to accomplish a purpose? How do I get them to perform at a consistently high level? How much functionality can I deliver? These are all good intentions, and they make plenty of sense in many applications. But when it comes to IoT, it’s the consumer’s perspective that matters most — their context, their perspective.
Think about electricity. Consumers don’t care about the technical challenges that had to be solved to make it flow through their walls, and they couldn’t tell an amp from an ohm if their life depended on it. All that matters to them is that when they flip a switch (or trigger a motion sensor), the lights come on. That’s the kind of natural simplicity and invisibility IoT needs to achieve.
As an engineer, I also understand our inclination to figure things out ourselves. Solving problems is what we do. But in this case, it’s important for us to understand and accept the value of reaching out for real design expertise and following a structured design process. Instead of just engineering a product to work 150% better, you’ll end up creating an experience that delivers 10 times better for the consumer.
Here are a few of the principles I’ve learned by collaborating with the designers at Citrix.
Know what problem you’re solving
You’d be surprised how often people design products without a clear idea of the problem they’re solving. They’ve got a technical innovation that they’re eager to productize, or they’re on a mission to squeeze even more functionality into an existing product. One good reality check is to see if the product manager can tell a simple narrative about the proposed product in a consumer’s daily life. If it seems contrived or farfetched, you’ve got a problem.
To begin with, pay attention to people’s behaviors today so you can document the friction they encounter. What frustrates them? What gets in the way of more interesting or important things? What would they like to be able to do more easily? Meeting technology is a classic case; we’ve all suffered the agony of watching someone fumble with computers, projectors and videoconferencing gear while valuable minutes tick away. But remember, the goal isn’t to make it easier to connect a computer — the goal is to make it easier for people to share information and collaborate. Don’t mistake the tool for its purpose.
Empathize with the consumer
As you’re researching the right problems to solve, remember that the beauty of your technology will be in the eye of the consumer. It’s their priorities and needs that matter, not yours. How many applications end up with barely usable interfaces because they’ve been assembled from an engineering perspective instead of a user’s point of view?
Take the Nest thermostat, for example. Did I buy one because of its technical horsepower or engineering brilliance, or because of purely rational considerations of energy conservation and money savings? I’d like to say yes, but in reality, I just couldn’t resist the way its elegant design called out to me and made me want to interact with it. Like so many Apple products, the Nest thermostat put design front and center while getting technology out of the consumer’s way. And as with Apple, it didn’t even matter how much more expensive the Nest was than the alternatives. Now, this beautiful widget has led the way for a whole host of home automation products from cameras to smoke detectors under the Google umbrella.
Citrix applied this kind of approach in designing the interface for our Octoblu IoT platform. IoT automation can get complicated quickly, from the devices you need to connect to the protocols that make it work, but the goal of Octoblu users is to create experiences, not write code. We made a point of providing a drag-and-drop interface that lets people build complex automations simply by specifying a sequence of actions — when your car pulls into the driveway, your garage door opens, the lights come on and your house unlocks. Remember, elegance wins.
Use a design brief
So, you’ve identified the problem you’re going to solve, and you’ve put yourself in the mindset of the consumer. How are you going to deliver the product? A formal design brief can ensure focus and discipline so you can avoid getting carried away with extraneous features or mission creep. It’s also a good vehicle for collaboration between engineers and designers — it’s an opportunity to check each other’s thinking, so that designers work within the realm of engineering reality, and engineers maintain a design-thinking orientation.
The brief should encompass:
- A problem statement. What are you solving? What’s the narrative from the consumer’s perspective?
- The business rationale. From both a design and an engineering perspective, why is this the right problem to be solving?
- The “before” picture. How are people doing it today? Where does the friction reside?
- The “after.” What is the kind of experience you’re seeking to design? See if you can tell a few stories about people interacting with the experience. How are you meeting their needs?
My colleague Todd Rosenthal, Citrix director of product design for IoT, analytics, mobility and app management, likes to think of this in terms of creating a better relationship between technology and people. Your goal is to support the user’s ability to smoothly move between activities (such as driving, walking and sitting), places (a room, a car, a campus) and things (devices, apps, sensors). Your goal is to ensure that the user’s needs are met in the context of these three variables — Todd represents them as points of a triangle in the diagram below. You’ll note that the user is always at the center of the experience.
You can find more of Todd’s design insights here.
It doesn’t necessarily take a designer to create a design brief. The important thing is to get the product manager and engineer together to agree on the design principles that will guide the project, with a common language, equal ownership and the flexibility to evolve as needed to get the product right. At Citrix, we’ve used a Slack channel to complement weekly meetings with real-time communication between engineers, product managers and designers, and I’ve been struck that the more we work together, the more our thinking comes into sync, so we end up having similar ideas at the same time.
If you think you don’t have time for a design brief — that you’ll just tweak the design as you go — remember that the world is full of $30 thermostats that may have even more features than Nest, but don’t have a fraction of its appeal or sense of purpose. Engineers want to see how many feature bullets they can put on the box, but people are digitally distracted enough as it is — they want simplicity. With this process, you’ll find the one or two features people will benefit most from right away; you can always add more in future releases. Remember, the original iPhone didn’t even come with the App Store ecosystem — it was laughably under-featured in today’s terms. But it became one of the most important and successful consumer products in history.
Products that don’t consult design aren’t maximizing their full potential and opportunity. By bringing design into the process from the very beginning, you have a chance to deliver a product that’s 10 times better from the consumer’s perspective — while bringing in 10 times as much revenue per unit for your business.
Your goal isn’t to impress the consumer. It’s to help them. Sometimes, that means leaving some things in their own hands and resisting the temptation to over-automate. When people walk into a conference room, they don’t necessarily want all of its systems to fire up right way — that might feel pushy or annoying. They’d prefer to spend a few minutes shaking hands and making small talk before Skype starts capturing every word they say. Don’t assume that more automation is always better, and don’t engineer in a silo. Social norms may not be an engineering principle, but they should be a key part of your design context.
Of course, IoT design can have a way of humbling any engineer. Adding features is easy; simplicity is hard. It takes discipline to deliver experiences designed around human needs and quirks rather than technical wizardry. But when you get it right, you can change the world.
In our previous blog post, we discussed how concerns over online security and privacy began to work its way into the public consciousness during the early days of the PC revolution. Those concerns never really went away with smartphones and tablets, and have only multiplied as IoT devices continue to proliferate. At the same time, many industry players are now starting to wonder if the traditional way of addressing security concerns with frequent software patches and updates makes sense for IoT.
There is a growing awareness that IoT security shouldn’t be treated as an afterthought, but rather as a first-class design parameter. In a best-case scenario, this new approach to security for IoT will shape up to be a holistic one, with semiconductor companies seeing devices secured throughout their lifecycle from chip manufacture through day-to-day deployment and all the way to end-of-life decommissioning.
One of the most effective ways of achieving this goal is to equip IoT devices with a silicon-based hardware root of trust. And while hardware-based security may have previously carried a steep price tag, the relentless progression of Moore’s Law over several decades has helped to significantly reduce transistor costs, making this type of implementation quite feasible. So we can now think of IoT as having entered a transitional stage, with the industry actively reevaluating security strategies.
This isn’t surprising, as petabytes of sensitive data are being generated by a wide range of diverse IoT devices and platforms, including wearables, connected vehicles, medical equipment, maker boards and intelligent appliances in smart homes. An additional challenge is to avoid vulnerabilities in products that may be deployed in the field for 10 years or more. It’s difficult to contemplate every possible attack that might happen over a device’s lifetime, which makes it complicated to protect against newly discovered vulnerabilities and fresh exploits.
Differential power analysis (DPA) side-channel attacks are a relatively new method of compromising silicon that has been gaining a lot of attention in recent months. These attacks involve monitoring variations in the electrical power consumption or electromagnetic emissions from a target device. These measurements can then be used to derive cryptographic keys and other sensitive information from chips.
The threat of DPA side-channel attacks is quite real, as even a simple radio can gather side-channel information by eavesdropping on frequencies emitted by electronic devices. In fact, in certain scenarios, secret keys can be recovered from a single transaction secretly performed by a device several feet away. The internet of things already comprises billions of connected endpoints powered by chips, many of which are vulnerable to DPA side-channel attacks. Fortunately, a number of countermeasures are available to help protect chips from DPA attacks.
In conclusion, securing IoT will require a holistic approach that offers robust protection against a wide range of threats through carefully thought out system design using techniques like hardware roots of trust. This paradigm will allow companies to see devices secured throughout the product lifecycle from chip manufacture all the way to end-of-life decommissioning.
Adoption of artificial intelligence in different fields is growing at a rapid pace. AI-based systems are going way beyond the usual expectations from machines, as they can rival, even better, human capabilities in certain areas. AI can now outwit and outperform humans in various comprehension and image-recognition tasks. Apart from a robot’s ability to survive deadly environments like deep space, deep learning has been widely used to teach AI-based system fine motor skills for doing tasks such as removing a nail and placing caps on bottles.
AI is also helping machines develop their reasoning skills, with the potential level matching that of a PhD scholar. Biologists at Tufts University made a system that combined genetic algorithms and genetic pathway simulations. The system enables AI to devise a scientific theory on how the planaria (flatworms) species can regenerate body parts.
Transforming images into art
Google Brain team has also advanced AI’s capability towards art. The Google Deep Dream program uses a machine learning algorithm to produce its own artwork. The images resemble paintings from the surrealism movement, mixed media works or colorful renditions of abstract art.
But how was the program able to render such artistic impressions? It began by scanning millions of photos for it to distinguish between various shades and colors. It then proceeded to differentiating the objects from one another. Eventually the program made itself a catalog of objects from the scanned images and recreated various combinations of these items. A prompt enables the AI to place the object composites to a landscape, leading to a work of art that appears to be made by a human being.
Deep learning technologies: Getting better than humans
Deep learning is the AI field responsible for these progressive leaps in image interpretations. The technologies employ a convolutional neural network (CNN) to instantly recognize specific image features. This capability has led to CNN finding application in facial identification programs, self-driving cars, measurable predictions in agriculture, such as crop yield, and machines diagnosing diseases. CNNs aren’t your typical AI programs. The deep learning approach utilizes improved algorithms, stronger CPU power and increased data availability. The internet feeds the necessary high volume of data, particularly the tagging and labeling functions of Facebook and Google. These companies use the collective massive uploads by users all over the world to provide the data needed for improving their deep learning networks.
CNNs don’t rely on hand-written rules — instead they are trained to recognize the distinctions and nuances among images. Say you want a CNN to spot dog breeds. You would begin by providing the system thousands of animal images along with labeled examples of their breeds. The CNN learns to decipher the breeds through its layer-based organization: it begins by distinguishing basic shapes, then gradually moves on to features particular to individual breeds, such as fur textures, tails and ears. Layer by layer, the network gathers the evidence that lets it identify the breed from the recognized characteristics.
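The layer-based feature extraction described above starts with convolution filters sliding over the image. As a minimal sketch in pure Python (no deep learning framework; the hand-crafted “vertical edge” filter below is illustrative — a real CNN learns its filter values from labeled examples):

```python
# Minimal sketch of the convolution step inside a CNN layer.
# The 3x3 "vertical edge" kernel is illustrative; real CNNs *learn*
# their kernel values from thousands of labeled examples.

def convolve2d(image, kernel):
    """Valid (no padding) 2D convolution of a grayscale image."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            acc = sum(
                image[y + j][x + i] * kernel[j][i]
                for j in range(kh)
                for i in range(kw)
            )
            row.append(acc)
        out.append(row)
    return out

# A 5x5 image with a bright vertical stripe in the middle.
image = [
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
]

# Hand-crafted vertical-edge detector (Sobel-like).
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

# The feature map responds strongly (+/-) at the stripe's edges
# and is zero elsewhere: [[3, 0, -3], [3, 0, -3], [3, 0, -3]].
feature_map = convolve2d(image, kernel)
```

Early layers detect such simple edges; deeper layers combine edge responses into textures and shapes, which is how “fur textures, tails and ears” eventually emerge.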
CNNs’ complex processing capabilities enable deep learning algorithms employed in IoT technologies to identify not just images, but also speech, behaviors and patterns. Better recognition of pedestrians through deep learning is improving self-driving cars. The insurance industry uses deep learning for faster assessment of car damage. Crowd control can be improved through behavioral recognition in security cameras.
Bringing deep learning to everyday living
The industrial internet of things is witnessing a myriad of deep learning applications. Companies such as Facebook even plan to build systems “better than people in perception,” showcasing an image-recognition technology that can describe a photo to the blind. Other IIoT applications are also enriching gaming, bioinformatics and natural language processing. The computer vision field is improving rapidly thanks to deep learning technologies that offer user-friendly programming tools and reasonably priced computing.
One of the most exciting areas seeing a lot of action is medicine. AI-based vision systems can rival doctors in reading scans quickly or examining pathology slides in detail, enabling better diagnosis and screening. The U.S. Food and Drug Administration is already working with a deep learning approach to help diagnose heart disease. At Stanford University, researchers are working on an AI system that can recognize skin cancer as accurately as dermatologists. Installed on a smartphone, such a program could provide universal, low-cost diagnostic care to individuals anywhere in the world. Other systems address the assessment of conditions such as bone fractures, strokes and even Alzheimer’s disease.
A progressive partner for humanity’s future
All these deep learning technologies hinge their value on purposeful applications. Today’s vision technologies outperform human beings in some respects, but general reasoning remains a human function. These developing IIoT applications are meant to do discrete tasks — in this case, visual recognition and categorization — better than a person, but no AI has yet been able to handle multiple functions at once. A deep learning system might identify individuals in photos, but it has yet to recognize emotions such as sadness.
With time, AI systems will develop such capabilities, but for now we should appreciate the numerous advantages they already provide. They are not meant to replace human skills, but to remove the burden of low-level tasks so that we can focus on the more important, reasoning-based work that requires human attention. Martin Smith, a professor of robotics at Middlesex University, uses spreadsheets as an example: the software has hastened computations, but the analysis still comes from human experts.
The possibilities are just beginning to emerge with AI and deep learning. It is ultimately up to researchers, innovators and practitioners to transform these technological advances to something that contributes to humanity’s progressive goals.
The current state of identity verification produces two types of problems. On one hand, people need accurate and accessible records, so that the old-school problem of misplacing a simple card never creates an issue. On the other, there is the giant challenge of crafting a system that can do this while minimizing fraud, theft and other abuse. Bridging this gap is where many security technologies are currently focused, with a digital future foreseen for records such as government IDs, medical records and other critical documents. This would expand the ways personal information can be protected and transmitted, creating a more efficient system.
Without the proper security measures, though, such data is left vulnerable to all kinds of nefarious exploitation. Minorities, for example, can benefit from proper representation if this type of data is secure and accurate; exposed to the wrong hands, however, it can lead to targeted threats and discrimination. In developing countries, exposing identification records to security risks means falsified records can fuel problems such as human trafficking.
Clearly the need is there to find a digital solution, one that is permanent, accessible and accurate — all while creating the necessary security. Such a system needs to create a safe space for accessing basic services, such as health care and education, while protecting identity and defending against discrimination. Blockchain technology has been identified as a possible way to meet these needs, and while there’s still work to be done on making the technology functional and scalable, it provides the traits necessary to be a foundation for such a revolution in identification technology.
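The trait that makes blockchain attractive here is tamper evidence through cryptographic chaining: each record embeds a hash of the one before it, so altering any entry invalidates every record after it. A minimal sketch in Python (the identity records are hypothetical, and this omits the consensus, signatures and distribution that a real blockchain requires):

```python
import hashlib
import json

def record_hash(record):
    """Deterministic SHA-256 hash of a record's contents."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_record(chain, data):
    """Append a record that embeds the hash of the previous one."""
    prev = record_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "data": data})

def chain_is_valid(chain):
    """Verify every record still points at its predecessor's hash."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != record_hash(chain[i - 1]):
            return False
    return True

# Hypothetical identity records, purely for illustration.
ledger = []
append_record(ledger, {"id": "A-100", "event": "birth certificate issued"})
append_record(ledger, {"id": "A-100", "event": "national ID issued"})
assert chain_is_valid(ledger)

# Tampering with an early record breaks every later link.
ledger[0]["data"]["event"] = "forged certificate"
assert not chain_is_valid(ledger)
```

This is why falsifying a single historical record — a forged birth certificate, say — is detectable: the forgery no longer matches the hash stored in the next record.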
In emerging countries, governments are looking at new technology as a means to accomplish more with fewer resources. Estonia recently launched an initiative to use blockchain technology to authenticate e-voting in conjunction with its advanced electronic ID program. Estonia’s ID cards include electronic tokens that enable two-factor authentication in combination with a PIN. For a recent corporate shareholder election, blockchain tech was used to authenticate and record data as part of a pilot e-voting program. With the pilot successfully completed, the group behind the initiative (NASDAQ) is looking to push the boundaries of digital ID capabilities. Everything from voting to smart contracts could be accomplished faster, easier and, most importantly, more accurately with a permanent and transparent solution acting as the backbone.
Using blockchain as the foundation for legal data transactions in the internet of things age makes sense in many ways. However, there are still many steps to go, the biggest being the relative immaturity of the technology. Blockchain has existed for less than a decade, and while pundits across industries hailed 2017 as the year of the blockchain, that doesn’t fix all of its issues overnight. Organizations have already been built around the idea of making blockchain accessible to all types of industries — not just digital currency, but any industry requiring permanent and secure records.
In the grand scheme of things, these initiatives are still in relatively early stages. Security flaws are still being tracked and no unified standards exist, which limits the ability to integrate universally. Along the same lines, scalability is a concern. Blockchain developers are examining this issue and recognize that it is the key to widespread acceptance around the globe.
As with any emerging tech, questions about scalability and security provide hurdles to mass adoption and implementation. The good news, though, is that the core traits of the blockchain fulfill the requirements for any type of secure transaction, be it currency, identification or medical records. With that foundation, proponents know where the path is taking them.
Businesses should be very concerned about industrial IoT security. Cybercrime is on the rise and could cost businesses upwards of $6 trillion annually by 2021, according to research firm Cybersecurity Ventures. The threat to IIoT is sizable, but it doesn’t have to be.
IIoT presents huge opportunities for makers and providers of industrial equipment and related systems. By connecting machines to the cloud, revolutionary new approaches to customer service and process automation can begin to thrive, predictive maintenance being one of the fastest-growing business lines.
Critical to the success of disciplines such as predictive maintenance or process automation is the ability to connect these machines to the cloud. The majority of machines are not designed with native internet connectivity built in, and certainly not wireless connectivity. They are typically designed to be securely connected to control systems (such as SCADA) that monitor and manage them via fixed cable connections.
For machines and devices that could benefit from being remotely connected via a wireless network, the issue of securely bridging the air gap between operational technology (the machine) and IT systems (the cloud) is a major challenge holding back progress.
There is a wide assumption, often justified, that many firms overlook security when designing industrial internet of things products. Connectivity products are often sold with outdated software and glaring holes in their operating systems, which makes it easier for hackers to get hold of data and sometimes take control of devices. On top of this, customers often fail to implement the safeguards that come with the technology: as many as half of employees reuse the same two or three passwords to access confidential information. The result is, inevitably, breaches, which in turn make customers skeptical about integrating IoT into efforts to automate key business applications. Research by Forrester argued that for this reason, among others, 2017 was likely to see a wide-scale IoT breach.
As a result, it is critical for organizations to find a new framework to deliver secure industrial IoT. The security sector has an important role to play. The high level of coverage and potentially damaging results of breaches have helped turn “cyber” into a negatively perceived term. The moment someone questions the cybersecurity credentials of a product, panic ensues. Equally, when someone claims they can “fix” cyber-issues, those claims are heavily scrutinized by penetration testers around the globe.
If progress is going to be made, we need to shift this stigma while introducing a better, more secure means of connectivity. Part of the challenge is complexity; for example, a core application of IIoT is predictive maintenance. In order to predict whether a mobile piece of machinery is going to break down, the IoT device must transfer data over the internet back to the customer, who can then resolve the issue. The problem is that the data has to pass through multiple layers and will ultimately require the aid of a network provider. A solution with this many levels to secure is both expensive and difficult to guarantee safe. As a result, any effort to reduce the cost of devices in this example could leave them more susceptible to interception and to distributed denial-of-service or botnet attacks.
Simpler connectivity could therefore reduce both the threat and the likelihood of breaches. The common view is that the cloud is the problem; in fact, it is during transmission to the cloud that the majority of breaches happen and information is stolen.
Many existing technologies have tried to prevent breaches by wrapping existing communication channels in security technology. In the home, for example, consumers can purchase network access products that restrict who and what can access their devices. The problem these pose in industrial environments is twofold: they can be hacked, and they add complexity. What is required is a means of connection that doesn’t depend on heavyweight security products. A connection that moves directly between device and server, leaving no opening for interception, is the ideal happy medium.
A potential solution could be USSD (Unstructured Supplementary Service Data). This technology, present in all mobile GSM networks, can be used to provide unprecedented security as there is effectively no “internet” present when connecting a machine or IoT device to a cloud system. It is therefore impervious to internet-related security threats such as botnets, distributed denial-of-service attacks and, more recently, WannaCry.
To ensure future growth and evolution of the sector, removing security as a barrier to applications of industrial IoT is crucial. Arguably, IoT has enormous potential to transform how industry operates, from improving monitoring to simplifying processes. It also presents a significant opportunity for the security sector to innovate and develop simple and secure processes rather than simply securing existing ones. In short, hacking is draining businesses of trillions of dollars, but adopting safe and secure technologies can ensure the future growth of the entire IoT sector.
When it comes to interoperability, the tech industry is well versed in the benefits it can bring. Despite this, BI Intelligence’s 2016 U.S. Smart Home Market report found that smart home devices were stuck between the early-adoption and mass-market phases due to fragmentation. Fragmentation occurs when the numerous operators and service providers launching IoT services each use different equipment and technology, with well-known drawbacks including overly complex and time-consuming operations, vendor lock-in and reduced innovation, hindering overall progress.
However, these are not the only barriers when dealing with a lack of interoperability in smart systems, especially those deployed on a large scale, for example smart cities.
Why semantic interoperability?
For IoT to deliver true value to consumers, businesses and city planners, the data delivered by smart technology needs to have meaning, so that numerous applications can interpret the data and use it to respond correctly.
This is semantic interoperability — a key factor in the future success of the IoT market. It uses metadata and ontologies to allow different applications to share information that is “meaningful.” Using meta-tagged data ensures all information can be understood and reused, avoiding the need for multiple standalone systems of sensor devices and applications gathering the same data for different purposes.
To give a simple example, roadside sensors generate various numbers, such as temperature values in Celsius that might drive local ice-warning electronic signs. Unless we know what these figures stand for, the information has little meaning. If meta-tagged data is used, though, a consumer can see what the information represents and what it can be used for. It can also be shared with other apps — for example, ones monitoring and forecasting weather. Semantic interoperability is therefore necessary across many smart technology industries: as more applications are developed, integration costs will soar if data formats require as much integration work as communication technologies do.
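The roadside-sensor example can be made concrete. The field names below are invented for illustration (a real deployment would draw them from a shared ontology such as SAREF or SSN), but the principle holds: the consuming application interprets the metadata rather than relying on hard-coded assumptions about the source:

```python
# A bare reading: the number alone means nothing without
# out-of-band knowledge of what the sensor measures.
bare_reading = -2.0

# A meta-tagged reading: any application can interpret it.
# Field names are illustrative, not from a real ontology.
tagged_reading = {
    "value": -2.0,
    "unit": "Cel",                    # degrees Celsius
    "quantity": "air_temperature",
    "source": "roadside_sensor_17",   # hypothetical sensor ID
    "timestamp": "2017-11-02T06:15:00Z",
}

def needs_ice_warning(reading):
    """An ice-warning sign reacts only to sub-zero air temperatures."""
    return (
        reading["quantity"] == "air_temperature"
        and reading["unit"] == "Cel"
        and reading["value"] <= 0.0
    )

# The same tagged reading can be reused by a weather-forecasting
# app, because the metadata says what the number represents.
```

The ice-warning sign and the forecasting app consume the identical payload; neither needs a bespoke integration with the sensor.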
On a wider scale, consider the thousands of potential data sources which could be found in a smart city. While many of these will generate data to be exploited by only one application, wouldn’t a smart city be even smarter if all of this data could be combined, cross-compiled and reused by many applications?
Semantic interoperability and oneM2M
Semantic interoperability was introduced in oneM2M’s latest set of specifications, Release 2, to allow meaningful, secure data distribution and reuse. Building semantic capabilities into the standard now will allow integration to be significantly easier in the future as the number of devices and applications in use increases.
The oneM2M standard enables posting meta-tagged data to a oneM2M resource on a gateway, which can then notify interested entities or be found via semantic discovery.
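As a rough illustration of what that posting looks like over oneM2M’s HTTP binding, a sensor might create a content instance on a gateway container. The gateway URL, originator ID and container path below are made up; the header names and resource type (`ty=4`, a contentInstance) follow the oneM2M HTTP binding, but treat the details as a sketch rather than a conformant request:

```python
import json

# Illustrative values: the gateway URL, originator and container
# path are hypothetical. Header names and ty=4 (contentInstance)
# follow the oneM2M HTTP protocol binding.
gateway_url = "http://gateway.example.com:8080/cse-base/road-sensors/temp"
headers = {
    "X-M2M-Origin": "C-roadside-sensor-17",   # originator
    "X-M2M-RI": "req-0001",                   # request identifier
    "Content-Type": "application/json;ty=4",  # ty=4 -> contentInstance
}
body = {
    "m2m:cin": {
        "cnf": "application/json",  # content format of the payload
        "con": json.dumps({"value": -2.0, "unit": "Cel"}),
    }
}
# An HTTP client (urllib, requests, etc.) would POST `body` with
# `headers` to `gateway_url`; entities subscribed to the container
# are then notified, and semantic discovery can locate the resource.
```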
Making semantic interoperability a reality
In a small IoT setting, it might not be necessary to attach meaning to the data, as the meaning is often implied by apps developed for a specific purpose. City planners seeking to fully exploit data assets, however, will be greatly restricted without semantic interoperability.
While there will be some initial cost in bringing apps up to speed with semantic interoperability, achieving similar levels of interaction via traditional data integration processes will see costs rise exponentially as apps and devices grow in number. Information available for multiple uses is also likely to be limited in such a scenario.
With the number of IoT devices increasing every year, cities serious about getting smart know they can no longer rely on traditional methods if their IoT projects are going to deliver true value. Semantic interoperability is just a small part of the standardization, but it will be integral to enabling this new way of working.
You have a great idea for an IoT initiative. Maybe improving your insight into your business operations. Maybe increasing the productivity and satisfaction of your workforce. Maybe building customer loyalty with exceptional experiences. Maybe getting a leg up on the competition with a new digital business model. In any case, selecting your IoT platform is an important choice with long-term ramifications.
The market is awash with IoT platform options today. Some are proprietary platforms, some are cloud-based, some are general-purpose PaaS with IoT features along the edges, and a few are open source. This article is intended to help you think about the options and consequences of your choice — and highlight the strategic advantages of choosing an open source option.
Think long term
The life of a software project gets shorter every year. It’s common for a software package to be obsolete and replaced by a new version every couple of years, and any software older than four to five years is considered a dinosaur. However, hardware devices typically operate over a much longer lifespan. Appliances, automobiles, home and office infrastructure all have expected lifespans measured in decades. How can you assure your customers that the digital components of these devices will enjoy similar lifespans?
With the long term in mind, the problem of lock-in to a specific system and vendor looms larger. Are you confident the platform you choose today will still be available in a decade or two? If not, what are the costs of moving from one platform to another — especially for a diminishing set of legacy customers? Can you opt out of changes in a platform that may have negative impacts on your customers, or require you to invest in costly re-architecture and implementation? What if a platform is discontinued or becomes commercially non-viable for you? What risks to your business could result?
Open source protects you in the long term. It is licensed perpetually and gives you the possibility of locking a system down in a solid working state for an extended legacy support period. Freedom to access the platform’s source code allows you long-term freedom to support the system yourself or to seek out alternative vendors for support.
Think retaining control
As IoT supports your digital transformation, your digital assets — software, systems and data — will increasingly represent the core competitive advantage of your business. Handing control of your core competencies to any third party will increase risks and limit your future options. Your IoT platform is likely to become one of the core business assets that you should own instead of outsource.
With open source (especially a permissive license such as Apache License 2.0), you have many of the same rights that an owner does — you can use, adapt, evolve, make derivatives, develop intellectual property around, redistribute, commercialize, relicense and support the platform. (Note: About the only thing the Apache 2.0 license requires of licensees is to maintain copyright and other notices during redistribution.) Building your core systems on open source retains strategic ownership-like advantages that proprietary licenses or cloud service terms cannot.
Today you may be a “user” of the IoT platform to build your connected product. But tomorrow you might open up your platform to a wider ecosystem, and even evolve into a “provider” of an IoT platform that others can use. Open source licensing terms preserve your ability to commercialize and productize your product as a platform.
“Owning” your platform can free you from:
- Technical divergence from your platform provider. What if the provider discontinues the platform, or makes unilateral changes to the features, capabilities or qualities of service that impact your ability to serve your customers? What if you find a need to customize the platform in unique ways that the platform provider is unwilling to support?
- Commercial divergence. What if the provider changes the commercial terms in a way that has a negative impact on your business model, or is incompatible with the long-term assurances you have made to your customers? Or what if the commercial terms don’t change, but your business reality does?
- Strategic divergence. What if the provider becomes strategically problematic as a result of adverse acquisition, changing market position, reputation or regulation?
Open source is specifically designed to give you options and retain your independence in the face of changes large and small, technological or commercial, incidental or strategic.
Cost is relative. Commercial open source is generally thought of as the most cost-effective option, but there are many circumstances that can affect ROI. Many of these factors can change over time. Here are some examples:
- Cloud-hosted options can accelerate early prototyping and development, and offer great agility for projects in early and iterative stages — at low entry costs. However, a cloud product offering per-device pricing that is attractive when the number of devices is low can quickly scale to unreasonable levels as the number of devices soars.
- For organizations lacking operations expertise needed to manage a scalable, highly available on-premises system, a cloud product can be attractive. However, for organizations that already have or find it cost-effective to build the capacity to self-manage open source deployments, on-premises software may offer lower long-term costs.
- IoT platform features added (presumably at low cost) to an existing IaaS platform might be very attractive when positioned as an incremental cost. However, in the long term, mixing IaaS and PaaS layers limits your ability to migrate to other IaaS platforms or into your own data center and limits your negotiating power. Clean architecture layers allow more possibilities to adapt to take advantage of the lowest cost at each layer.
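The per-device pricing point is easy to make concrete with a toy calculation. All figures below are invented for illustration — compare a hosted plan charging per device per month against a self-managed open source deployment with a fixed monthly operations cost:

```python
# Hypothetical figures, purely illustrative.
CLOUD_PER_DEVICE_MONTHLY = 0.50      # $ per device per month, hosted plan
SELF_HOSTED_FIXED_MONTHLY = 20_000   # $ per month: staff, servers, support

def monthly_cost_cloud(devices):
    """Hosted plan: cost grows linearly with fleet size."""
    return devices * CLOUD_PER_DEVICE_MONTHLY

def crossover_devices():
    """Fleet size at which self-hosting becomes cheaper than the plan."""
    return int(SELF_HOSTED_FIXED_MONTHLY / CLOUD_PER_DEVICE_MONTHLY)

# With these numbers the hosted plan wins below 40,000 devices
# and the self-managed deployment wins above it.
```

The actual crossover depends entirely on real pricing and operations costs, but the shape of the curve — linear per-device cost versus flat self-hosting cost — is what makes the ability to migrate between deployment forms valuable.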
Migration between different platforms can be quite expensive and time-consuming. Structural capacity to migrate between different deployment forms (cloud hosted or self-hosted open source) is much easier than migrating to a completely different platform.
The best balance between these models is achieved with full fidelity migration between public cloud, managed cloud and on-premises or self-hosted systems. Under this model you retain your ability to adapt as needed to the best price appropriate for your current conditions.
Open source allows many eyes on the code, helping to detect and address security vulnerabilities promptly. Devices historically followed a “security by obscurity” approach; the revelation of many attacks through remote devices shows that approach is insufficient. With open source hardware and software, the design and implementation are open to scrutiny, making early detection of vulnerabilities possible.
Commercially supported open source vendors can be engaged at reasonable cost to provide support services and maintain vigilance on security threats and mitigations.
Open source gives you the option to contribute back to the platform, which can have valuable benefits for your business. It ensures you can obtain specific features of importance to you, on your own timeline, under a sweat equity model. By contributing a particular feature of importance to you back into the code base, you are relieved of long-term maintenance of the feature, which gets picked up, improved and maintained by the open source community.
Network effects are at play in open source communities. Each contribution helps build a vibrant ecosystem that can benefit your business. Android is a great example: As a fully functional open source device operating system it allowed many device manufacturers to come up with creative hardware designs and fostered innovation worldwide. The more contributors, the more possibilities emerge in the platform; for instance, the number of devices it supports. You can do your part to protect the vibrancy of the platform with a strategic commitment to participation.
Open source offers benefits closer to home as well. It is well known to attract and retain quality employees — many top engineers see the use of and participation in open source communities as an exciting benefit and good for their careers, and they often seek out employers who support that participation. These employees can help your organization adopt the open source distributed development and governance practices that have proven effective at spurring the collaboration and sharing behind outstanding efficiencies and innovation.
Contributing to open source is viewed as a form of corporate social responsibility, increasing stocks of goodwill among customers and the industry.
Build your future with open source
Open source will be an important force in the internet of things. Many devices already run open source operating systems such as Linux and Android, and pairing these device platforms with an open source IoT platform for device management, security and analytics offers natural synergies. Visibility into the deep code, the development activity and roadmap, and security features can provide insights that will improve your decision-making power.
Business and technical leaders would be well-served by defining a strategy with respect to open source, and by seeking out open source IoT platforms for proofs of concept, for evaluation matrices and for any IoT project of strategic importance.