The internet of things promises a simpler, more intuitive world. Instead of asking people to do special things for the sake of technology, like pushing buttons, navigating screens or following a specific sequence of steps, products can now be designed around natural human experiences. Google Home and Amazon Echo do away with the interface entirely: Just speak into the air, and your wish is granted. (Steve Wilson, Citrix VP of product for Citrix Cloud and IoT, has a great blog on this: “4th gen user interface”). It’s a radical transformation, and one with thrilling potential — but the first step is to change the way engineers think about product design.
I don’t mean to bad-mouth engineers — I am one myself. But it’s our nature to approach things from an engineering perspective: How do I make these technologies fit together to accomplish a purpose? How do I get them to perform at a consistently high level? How much functionality can I deliver? These are all good intentions, and they make plenty of sense in many applications. But when it comes to IoT, it’s the consumer’s perspective that matters most — their context, their perspective.
Think about electricity. Consumers don’t care about the technical challenges that had to be solved to make it flow through their walls, and they couldn’t tell an amp from an ohm if their life depended on it. All that matters to them is that when they flip a switch (or trigger a motion sensor), the lights come on. That’s the kind of natural simplicity and invisibility IoT needs to achieve.
As an engineer, I also understand our inclination to figure things out ourselves. Solving problems is what we do. But in this case, it’s important for us to understand and accept the value of reaching out for real design expertise and following a structured design process. Instead of just engineering a product to work 150% better, you’ll end up creating an experience that’s 10 times better for the consumer.
Here are a few of the principles I’ve learned by collaborating with the designers at Citrix.
Know what problem you’re solving
You’d be surprised how often people design products without a clear idea of the problem they’re solving. They’ve got a technical innovation that they’re eager to productize, or they’re on a mission to squeeze even more functionality into an existing product. One good reality check is to see if the product manager can tell a simple narrative about the proposed product in a consumer’s daily life. If it seems contrived or farfetched, you’ve got a problem.
To begin with, pay attention to people’s behaviors today so you can document the friction they encounter. What frustrates them? What gets in the way of more interesting or important things? What would they like to be able to do more easily? Meeting technology is a classic case; we’ve all suffered the agony of watching someone fumble with computers, projectors and videoconferencing gear while valuable minutes tick away. But remember, the goal isn’t to make it easier to connect a computer — the goal is to make it easier for people to share information and collaborate. Don’t mistake the tool for its purpose.
Empathize with the consumer
As you’re researching the right problems to solve, remember that the beauty of your technology will be in the eye of the consumer. It’s their priorities and needs that matter, not yours. How many applications end up with barely usable interfaces because they’ve been assembled from an engineering perspective instead of a user’s point of view?
Take the Nest thermostat, for example. Did I buy one because of its technical horsepower or engineering brilliance, or because of purely rational considerations of energy conservation and money savings? I’d like to say yes, but in reality, I just couldn’t resist the way its elegant design called out to me and made me want to interact with it. Like so many Apple products, the Nest thermostat put design front and center while getting technology out of the consumer’s way. And as with Apple, it didn’t even matter how much more expensive the Nest was than the alternatives. Now, this beautiful widget has led the way for a whole host of home automation products from cameras to smoke detectors under the Google umbrella.
Citrix applied this kind of approach in designing the interface for our Octoblu IoT platform. IoT automation can get complicated quickly, from the devices you need to connect to the protocols that make it work, but the goal of Octoblu users is to create experiences, not write code. We made a point of providing a drag-and-drop interface that lets people build complex automations simply by specifying a sequence of actions — when your car pulls into the driveway, your garage door opens, the lights come on and your house unlocks. Remember, elegance wins.
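To make the drag-and-drop idea concrete, here’s a minimal sketch of an event-triggered automation flow like the driveway example above. The class, device names and actions are all hypothetical illustrations, not the Octoblu API — the point is that the user specifies a trigger and a sequence of actions, and never touches protocol-level code.

```python
# Illustrative sketch of an event-triggered automation flow, in the spirit of
# "car arrives -> garage opens -> lights on -> house unlocks."
# All names here are hypothetical; this is not the Octoblu API.

class Flow:
    def __init__(self, trigger):
        self.trigger = trigger    # event name that starts the flow
        self.actions = []         # ordered list of callables

    def then(self, action):
        self.actions.append(action)
        return self               # allow chaining, like wiring nodes on a canvas

    def handle(self, event, log):
        if event == self.trigger:
            for action in self.actions:
                log.append(action())

log = []
arrival = (Flow("car_in_driveway")
           .then(lambda: "garage door opened")
           .then(lambda: "lights on")
           .then(lambda: "front door unlocked"))
arrival.handle("car_in_driveway", log)
print(log)  # ['garage door opened', 'lights on', 'front door unlocked']
```

The consumer-facing interface maps each `.then()` call to a node dragged onto a canvas; the sequencing logic stays hidden.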
Use a design brief
So, you’ve identified the problem you’re going to solve, and you’ve put yourself in the mindset of the consumer. How are you going to deliver the product? A formal design brief can ensure focus and discipline so you can avoid getting carried away with extraneous features or mission creep. It’s also a good vehicle for collaboration between engineers and designers — it’s an opportunity to check each other’s thinking, so that designers work within the realm of engineering reality, and engineers maintain a design-thinking orientation.
The brief should encompass:
- A problem statement. What are you solving? What’s the narrative from the consumer’s perspective?
- The business rationale. From both a design and an engineering perspective, why is this the right problem to be solving?
- The “before” picture. How are people doing it today? Where does the friction reside?
- The “after.” What is the kind of experience you’re seeking to design? See if you can tell a few stories about people interacting with the experience. How are you meeting their needs?
My colleague Todd Rosenthal, Citrix director of product design for IoT, analytics, mobility and app management, likes to think of this in terms of creating a better relationship between technology and people. Your goal is to support the user’s ability to smoothly move between activities (such as driving, walking and sitting), places (a room, a car, a campus) and things (devices, apps, sensors). Your goal is to ensure that the user’s needs are met in the context of these three variables — Todd represents them as the points of a triangle, with the user always at the center of the experience.
It doesn’t necessarily take a designer to create a design brief. The important thing is to get the product manager and engineer together to agree on the design principles that will guide the project, with a common language, equal ownership and the flexibility to evolve as needed to get the product right. At Citrix, we’ve used a Slack channel to complement weekly meetings with real-time communication between engineers, product managers and designers, and I’ve been struck that the more we work together, the more our thinking comes into sync, so we end up having similar ideas at the same time.
If you think you don’t have time for a design brief — that you’ll just tweak the design as you go — remember that the world is full of $30 thermostats that may have even more features than Nest, but don’t have a fraction of its appeal or sense of purpose. Engineers want to see how many feature bullets they can put on the box, but people are digitally distracted enough as it is — they want simplicity. With this process, you’ll find the one or two features people will benefit most from right away; you can always add more in future releases. Remember, the original iPhone didn’t even come with the App Store ecosystem — it was laughably under-featured in today’s terms. But it became one of the most important and successful consumer products in history.
Products built without design input don’t reach their full potential. By bringing design into the process from the very beginning, you have a chance to deliver a product that’s 10 times better from the consumer’s perspective — while bringing in 10 times as much revenue per unit for your business.
Your goal isn’t to impress the consumer. It’s to help them. Sometimes, that means leaving some things in their own hands and resisting the temptation to over-automate. When people walk into a conference room, they don’t necessarily want all of its systems to fire up right away — that might feel pushy or annoying. They’d prefer to spend a few minutes shaking hands and making small talk before Skype starts capturing every word they say. Don’t assume that more automation is always better, and don’t engineer in a silo. Social norms may not be an engineering principle, but they should be a key part of your design context.
Of course, IoT design can have a way of humbling any engineer. Adding features is easy; simplicity is hard. It takes discipline to deliver experiences designed around human needs and quirks rather than technical wizardry. But when you get it right, you can change the world.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
In our previous blog post, we discussed how concerns over online security and privacy began to work their way into the public consciousness during the early days of the PC revolution. Those concerns never really went away with smartphones and tablets, and have only multiplied as IoT devices continue to proliferate. At the same time, many industry players are now starting to wonder if the traditional way of addressing security concerns with frequent software patches and updates makes sense for IoT.
There is a growing awareness that IoT security shouldn’t be treated as an afterthought, but rather as a first-class design parameter. In a best-case scenario, this new approach to security for IoT will shape up to be a holistic one, with semiconductor companies seeing devices secured throughout their lifecycle from chip manufacture through day-to-day deployment and all the way to end-of-life decommissioning.
One of the most effective ways of achieving this goal is to equip IoT devices with a silicon-based hardware root of trust. And while hardware-based security may have previously carried a steep price tag, the relentless progression of Moore’s Law over several decades has helped to significantly reduce transistor costs, making this type of implementation quite feasible. So we can now think of IoT as having entered a transitional stage, with the industry actively reevaluating security strategies.
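To illustrate what a hardware root of trust buys you, here’s a minimal sketch of a boot-time integrity check. The “fused” digest stands in for a reference value burned into silicon at manufacture; a real secure-boot chain would verify cryptographic signatures against keys anchored in hardware, not a bare hash, so treat this strictly as a conceptual illustration.

```python
import hashlib

# Minimal sketch of a boot-time integrity check anchored in a hardware root of
# trust. FUSED_DIGEST stands in for a value burned into silicon at manufacture;
# a production design would verify a signature chain instead of a raw hash.

FUSED_DIGEST = hashlib.sha256(b"firmware-v1").hexdigest()  # immutable reference

def secure_boot(firmware_image: bytes) -> bool:
    """Boot only if the firmware matches the digest anchored in hardware."""
    return hashlib.sha256(firmware_image).hexdigest() == FUSED_DIGEST

print(secure_boot(b"firmware-v1"))  # True  -> boot proceeds
print(secure_boot(b"tampered"))     # False -> boot halts
```

Because the reference value lives in silicon rather than in rewritable storage, malware that modifies the firmware image cannot also modify the yardstick it is measured against.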
This isn’t surprising, as petabytes of sensitive data are being generated by a diverse range of IoT devices and platforms, including wearables, connected vehicles, medical equipment, maker boards and intelligent appliances in smart homes. An additional challenge is to avoid vulnerabilities in products that may be deployed in the field for 10 years or more. It’s difficult to contemplate every possible attack that might happen over a device’s lifetime, which makes it complicated to protect against newly discovered vulnerabilities and fresh exploits.
Differential power analysis (DPA) side-channel attacks are a relatively new method of compromising silicon that has been gaining a lot of attention in recent months. These attacks involve monitoring variations in the electrical power consumption or electromagnetic emissions from a target device. These measurements can then be used to derive cryptographic keys and other sensitive information from chips.
The threat of DPA side-channel attacks is quite real, as even a simple radio can gather side-channel information by eavesdropping on frequencies emitted by electronic devices. In fact, in certain scenarios, secret keys can be recovered from a single transaction secretly performed by a device several feet away. The internet of things already comprises billions of connected endpoints powered by chips, many of which are vulnerable to DPA side-channel attacks. Fortunately, a number of countermeasures are available to help protect chips from DPA attacks.
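One family of countermeasures works by making the device’s work independent of the secret. A simple software-level example is constant-time comparison: a naive byte-by-byte check bails out at the first mismatch, so the amount of work done (and hence the power and timing profile) leaks how many leading bytes of a guess were correct. The sketch below contrasts the two; note that real DPA defenses go further, adding masking and hiding techniques in the silicon itself.

```python
import hmac

# A naive comparison exits at the first mismatching byte, so its power and
# timing footprint depends on the secret -- exactly the kind of data-dependent
# behavior side-channel attacks exploit. Constant-time comparison does the
# same work regardless of where (or whether) the inputs differ.

def leaky_compare(secret: bytes, guess: bytes) -> bool:
    if len(secret) != len(guess):
        return False
    for a, b in zip(secret, guess):
        if a != b:
            return False  # early exit: work done depends on the secret
    return True

def constant_time_compare(secret: bytes, guess: bytes) -> bool:
    return hmac.compare_digest(secret, guess)  # uniform work for any inputs

print(constant_time_compare(b"key123", b"key123"))  # True
print(constant_time_compare(b"key123", b"key999"))  # False
```

Both functions return the same answers; the difference is what an attacker with an oscilloscope or a radio can infer while they run.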
In conclusion, securing IoT will require a holistic approach that offers robust protection against a wide range of threats through carefully thought out system design using techniques like hardware roots of trust. This paradigm will allow companies to see devices secured throughout the product lifecycle from chip manufacture all the way to end-of-life decommissioning.
Adoption of artificial intelligence in different fields is growing at a rapid pace. AI-based systems are going way beyond the usual expectations of machines, as they can rival, and even surpass, human capabilities in certain areas. AI can now outwit and outperform humans in various comprehension and image-recognition tasks. Apart from a robot’s ability to survive deadly environments like deep space, deep learning has been widely used to teach AI-based systems fine motor skills for tasks such as removing a nail and placing caps on bottles.
AI is also helping machines develop their reasoning skills, potentially to a level matching that of a PhD scholar. Biologists at Tufts University built a system combining genetic algorithms with genetic pathway simulations, which enabled an AI to devise a scientific theory of how planaria (flatworms) regenerate body parts.
Transforming images into art
The Google Brain team has also advanced AI’s capabilities toward art. The Google Deep Dream program uses a machine learning algorithm to produce its own artwork. The images resemble paintings from the surrealist movement, mixed-media works or colorful renditions of abstract art.
But how was the program able to render such artistic impressions? It began by scanning millions of photos to distinguish between various shades and colors. It then proceeded to differentiate the objects from one another. Eventually the program built itself a catalog of objects from the scanned images and recreated various combinations of these items. A prompt enables the AI to place the object composites onto a landscape, leading to a work of art that appears to be made by a human being.
Deep learning technologies: Getting better than humans
Deep learning is the AI field responsible for these progressive leaps in image interpretation. The technologies employ a convolutional neural network (CNN) to instantly recognize specific image features. This capability has led to CNNs finding applications in facial identification programs, self-driving cars, measurable predictions in agriculture such as crop yield, and machines diagnosing diseases. CNNs aren’t your typical AI programs. The deep learning approach combines improved algorithms, greater processing power and increased data availability. The internet feeds in the necessary high volume of data, particularly through the tagging and labeling functions of Facebook and Google. These companies use the massive collective uploads of users all over the world to provide the data needed to improve their deep learning networks.
CNNs don’t rely on explicit programming — instead, they are trained to recognize the distinctions and nuances among images. Let’s say you want a CNN to spot dog breeds. Training would begin with providing the system thousands of animal images along with specific examples of their breeds. The CNN learns to decipher the breeds through its layer-based organization: it begins by understanding the distinctions among basic shapes, then gradually moves on to features particular to individual breeds such as fur textures, tails and ears. The network can then infer the breed from the recognized characteristics.
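The layered idea rests on one core operation: convolution. A small filter slides over the image and responds strongly wherever its pattern appears — edges in early layers, fur textures and ears in later ones. Here’s a toy, framework-free sketch of that operation (plain Python rather than a real CNN library, and the image and kernel are made-up examples), detecting a vertical edge:

```python
# Toy convolution, the core operation of a CNN's early layers: a small kernel
# slides over the image and produces a large response wherever its pattern
# (here, a dark-to-bright vertical edge) appears.

def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# 4x4 image: dark left half, bright right half -> vertical edge in the middle
image = [[0, 0, 1, 1]] * 4
edge_kernel = [[-1, 1]] * 2      # responds to dark-to-bright transitions

response = convolve2d(image, edge_kernel)
print(response[0])  # [0, 2, 0] -- strongest response where the edge sits
```

A trained CNN learns thousands of such kernels from data rather than having them hand-written, and stacks many layers so that responses to simple patterns combine into detectors for complex ones.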
CNNs’ complex processing capabilities enable deep learning algorithms employed in IoT technologies to identify not just images, but also speech, behaviors and patterns. Better recognition of pedestrians using deep learning is improving self-driving cars. The insurance industry uses deep learning for better assessment of car damage. Crowd control can be improved through behavioral recognition in security cameras.
Bringing deep learning to everyday living
The industrial internet of things is witnessing a myriad of deep learning applications. Companies such as Facebook even have plans to build systems “better than people in perception,” showcasing an image-recognition technology that can actually visualize a photo for the blind. Other IIoT applications are also enriching gaming, bioinformatics and natural language processing. The computer vision field is also improving vastly through deep learning technologies that also offer user-friendly programming tools and reasonably priced computing.
One of the most exciting areas that is witnessing a lot of action is medicine. AI-based vision systems can rival doctors in reading scans faster or taking a more detailed look at pathology slides, enabling better diagnosis and screening. The U.S. Food and Drug Administration is already working on a deep learning approach to help diagnose heart disease. At Stanford University, researchers are working on an AI system that could recognize skin cancer as accurately as dermatologists. Such a program installed on one’s smartphone could provide universal, low-cost diagnostic care to individuals anywhere in the world. Other systems are addressing the assessment of problematic conditions such as bone fractures, strokes and even Alzheimer’s disease.
A progressive partner for humanity’s future
All these deep learning technologies hinge their value on purposeful applications. Today’s vision technologies are performing better than human beings in some aspects, but general reasoning remains a human function. These developing IIoT applications are meant to do separate tasks — in this case, visual recognition and categorization — better than a person, but no AI has been able to do multiple functions at the same time. A deep learning system might identify individuals in photos, but it has yet to recognize emotions such as sadness.
With time, AI systems will develop such capabilities, but for now we should appreciate the numerous advantages they provide. They’re not meant to replace human skills, but to remove the burden of low-level tasks from us so we can focus on the more important, reasoning-based tasks that require human attention. Martin Smith, a professor of robotics at Middlesex University, uses spreadsheets as an example: the software has hastened computations, but the analysis still comes from human experts.
The possibilities are just beginning to emerge with AI and deep learning. It is ultimately up to researchers, innovators and practitioners to transform these technological advances into something that contributes to humanity’s progressive goals.
The current state of identity verification presents two types of problems. On the one hand, people need accurate and accessible records so that the old-school problem of misplacing a simple card never creates an issue. On the other, it is a giant challenge to craft a system that can do this while minimizing fraud, theft and other such risks. Bridging this gap is where many security technologies are currently focused, with a digital future seen for records such as government IDs, medical records and other critical documents. This would expand the ways personal information can be protected and transmitted, crafting a more efficient system.
Without the proper security measures, though, data is left vulnerable to all types of nefarious exploitation. For example, minorities can benefit from proper representation if this type of data is secure and accurate; exposed to the wrong hands, however, it can lead to targeted threats and discrimination. In developing countries, identification records left open to security risks can be falsified, enabling abuses such as human trafficking.
Clearly the need is there to find a digital solution, one that is permanent, accessible and accurate — all while creating the necessary security. Such a system needs to create a safe space for accessing basic services, such as health care and education, while protecting identity and defending against discrimination. Blockchain technology has been identified as a possible way to meet these needs, and while there’s still work to be done on making the technology functional and scalable, it provides the traits necessary to be a foundation for such a revolution in identification technology.
In emerging countries, governments are looking at new technology as a means to accomplish more with fewer resources. Estonia recently launched an initiative to use blockchain technology to authenticate e-voting in conjunction with its advanced electronic ID program. Estonia’s ID cards include electronic tokens that enable two-factor authentication in conjunction with a PIN. For a recent corporate shareholder election, blockchain tech was used to authenticate and record data as part of a pilot e-voting program. With the pilot successfully completed, the group behind the initiative (Nasdaq) is looking to push the boundaries of digital ID capabilities. Everything from voting to smart contracts could be accomplished faster, easier and, most importantly, more accurately with a permanent and transparent solution acting as the backbone.
Using blockchain as the foundation for legal data transactions in the internet of things age makes sense in many ways. However, there are still many steps to go, with the biggest one being the relative immaturity of the technology. Blockchain has existed for less than a decade, and while pundits across industries hailed 2017 as the year of the blockchain, that doesn’t fix all of its issues overnight. Organizations have already been built around the idea of making the blockchain accessible to all types of industries — not just digital currency, but any industry requiring permanent and secure records.
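The core property that makes blockchain attractive for identity records can be shown in a few lines: each block’s hash covers the previous block’s hash, so altering any historical record invalidates everything after it. The sketch below is a deliberately minimal illustration — real chains add digital signatures, distributed consensus and replication on top of this hash-linking:

```python
import hashlib
import json

# Minimal hash-linked ledger: each block records the hash of its predecessor,
# so tampering with any historical record breaks the chain from that point on.
# Real blockchains layer signatures, consensus and replication over this.

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, record):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"record": record, "prev_hash": prev})

def verify(chain):
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, {"id": "A123", "event": "issued"})
append_block(chain, {"id": "A123", "event": "renewed"})
print(verify(chain))                    # True: history is intact
chain[0]["record"]["event"] = "forged"  # tamper with an old record...
print(verify(chain))                    # False: the chain detects it
```

For identity documents, this means a falsified historical entry is detectable by anyone holding the chain, rather than trusted on the say-so of whoever stores the database.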
In the grand scheme of things, these initiatives are still in relatively early stages. Security flaws are still being tracked and no unified standards exist, which limits the ability to integrate universally. Along the same lines, scalability is a concern. Blockchain developers are examining this issue and recognize that it is the key to widespread acceptance around the globe.
As with any emerging tech, questions about scalability and security provide hurdles to mass adoption and implementation. The good news, though, is that the core traits of the blockchain fulfill the requirements for any type of secure transaction, be it currency, identification or medical records. With that foundation, proponents know where the path is taking them.
Businesses should be very concerned about industrial IoT security. Cybercrime is on the rise and could cost businesses upwards of $6 trillion annually by 2021, according to research firm Cybersecurity Ventures. The threat to IIoT is sizable, but it doesn’t have to stay that way.
IIoT presents huge opportunities for makers and providers of industrial equipment and related systems. By connecting machines to the cloud, revolutionary new approaches to customer service and process automation can begin to thrive, predictive maintenance being one of the fastest-growing business lines.
Critical to the success of disciplines such as predictive maintenance or process automation is the ability to connect these machines to the cloud. The majority of machines are not designed with native internet connectivity built in, and certainly not wireless connectivity. They are typically designed to be securely connected to control systems (such as SCADA) which monitor and manage them via fixed cable connectivity.
For machines and devices which could benefit from being remotely connected via a wireless network, the issue of securely bridging the air gap between an operational technology (the machine) and an IT system (the cloud) is a major challenge holding back progress.
There is a widespread assumption, often true, that many firms overlook security when designing industrial internet of things products. Connectivity products are often sold with old software and glaring holes in their operating systems, which ultimately makes it easier for hackers to get hold of data and sometimes take control of devices. On top of this, customers often fail to implement the proper safeguards that come with the technology. As many as half of employees use the same two or three passwords to access confidential information. The result of these issues is inevitably breaches, which in turn make customers skeptical about integrating IoT as part of efforts to automate key business applications. Research by Forrester argued that for this reason, among others, 2017 is likely to see a wide-scale IoT breach.
As a result, it is critical for organizations to find a new framework to deliver secure industrial IoT. The security sector has an important role to play. The high levels of coverage and potentially damaging results of breaches have helped to turn “cyber” into a negatively perceived term. The moment someone questions the cybersecurity credentials of a product, panic ensues. Equally, when someone else says they can “fix” cyber-issues, claims are heavily scrutinized by penetration testers from around the globe.
If progress is going to be made, we need to shift this stigma while introducing a better, more secure means of connectivity. Part of this challenge is in complexity; for example, a core application of IIoT is predictive maintenance. In order to predict whether a mobile piece of machinery is going to break down, the IoT device must transfer data via the internet back to the customer, who can then resolve the issue. The problem with this, however, is that the data has to go through multiple layers and will ultimately require the aid of a network provider. This type of solution includes multiple levels that need to be secured, making it both expensive and difficult to guarantee safety. As a result, any effort to reduce the cost of devices in this example could leave them more susceptible to interception or to distributed denial-of-service and botnet attacks.
Simpler connectivity could therefore reduce the threat and likelihood of breaches. The common view is that the cloud is the problem; in fact, it is the transmission to the cloud where the majority of breaches happen and information is stolen.
Many of the existing technologies have looked to prevent breaches by wrapping existing communication means with security technology. In the home, for example, consumers can purchase network access products that restrict who and what can access devices. The problem these pose in industrial environments is twofold: first, they can themselves be hacked; second, they add complexity. What is required is a means of connection that doesn’t require heavy security products. A connection that moves directly between device and server, without allowing for interception, is the ideal happy medium.
A potential solution could be USSD (Unstructured Supplementary Service Data). This technology, present in all mobile GSM networks, can be used to provide unprecedented security as there is effectively no “internet” present when connecting a machine or IoT device to a cloud system. It is therefore impervious to internet-related security threats such as botnets, distributed denial-of-service attacks and, more recently, WannaCry.
To ensure future growth and evolution of the sector, removing security as a barrier to applications of industrial IoT is crucial. Arguably, IoT has enormous potential to transform how industry operates, from improving monitoring to simplifying processes. It also presents a significant opportunity for the security sector to innovate and develop simple and secure processes rather than simply securing existing ones. In short, hacking is draining businesses of trillions of dollars, but adopting safe and secure technologies can ensure the future growth of the entire IoT sector.
When it comes to interoperability, the tech industry is well versed in the benefits it can bring. Despite this, BI Intelligence’s U.S. Smart Home Market report in 2016 found that smart home devices were stuck between the early adoption phase and mass-market phase due to fragmentation. This occurs when different equipment and technology are used by the numerous operators and service providers launching IoT services, with well-known drawbacks including overly complex and time-consuming operations, vendor lock-in and reduced innovation, hindering overall progress.
However, these are not the only barriers when dealing with a lack of interoperability in smart systems, especially those deployed on a large scale, for example smart cities.
Why semantic interoperability?
For IoT to deliver true value to consumers, businesses and city planners, the data delivered by smart technology needs to have meaning, so that numerous applications can interpret the data and use it to respond correctly.
This is semantic interoperability — a key factor in the future success of the IoT market. It uses metadata and ontologies to allow different applications to share information that is “meaningful.” Using meta-tagged data ensures all information can be understood and reused. This avoids the need for multiple standalone systems of sensor devices and their applications trying to gather the same data but for different purposes.
To give a simple example, roadside sensors would generate various numbers, such as temperature values in Celsius, which might be used for local ice-warning electronic signs. But unless we know what these figures stand for, the information has little meaning. If meta-tagged data is used, though, the user can see what the information represents and what it can be used for. It can also be shared with other apps, for example, ones monitoring and forecasting weather. Semantic interoperability is therefore significant and necessary to many different smart technology industries. As an increasing number of applications are developed, integration costs will rise if data formats require as much integration as communication technologies.
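The roadside-sensor example can be sketched in a few lines. The reading travels with machine-readable metadata, so an ice-warning sign, a weather forecaster or any other application can interpret it without bespoke integration. The tag vocabulary below is hypothetical, invented for illustration — a real deployment would draw its terms from a shared ontology:

```python
# Illustrative meta-tagged sensor reading: the raw number carries metadata
# describing what it is, so different applications can reuse it.
# The tag names are hypothetical, not drawn from a real ontology.

reading = {
    "value": -2.5,
    "metadata": {
        "quantity": "temperature",
        "unit": "Celsius",
        "sensor_type": "roadside",
        "location": {"lat": 51.5074, "lon": -0.1278},
    },
}

def usable_for_ice_warning(r):
    """An ice-warning app selects readings by meaning, not by device name."""
    m = r["metadata"]
    return (m["quantity"] == "temperature"
            and m["unit"] == "Celsius"
            and m["sensor_type"] == "roadside"
            and r["value"] <= 0)

print(usable_for_ice_warning(reading))  # True: the sign can warn of ice
```

Without the metadata, `-2.5` is just a number; with it, any application that understands the shared vocabulary can decide for itself whether the reading is relevant.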
On a wider scale, consider the thousands of potential data sources which could be found in a smart city. While many of these will generate data to be exploited by only one application, wouldn’t a smart city be even smarter if all of this data could be combined, cross-compiled and reused by many applications?
Semantic interoperability and oneM2M
Semantic interoperability was introduced in oneM2M’s latest set of specifications, Release 2, to allow meaningful, secure data distribution and reuse. Building semantic capabilities into the standard now will allow integration to be significantly easier in the future as the number of devices and applications in use increases.
The oneM2M standard enables the posting of meta-tagged data to a oneM2M resource on a gateway, which notifies interested entities, or which can be found by semantic discovery.
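Semantic discovery can be illustrated with a toy registry: applications query by meaning (tags) rather than by device name. This is a hypothetical sketch, not the oneM2M API — the standard defines its own resource types, announcement and discovery mechanisms — but it shows the shape of the idea:

```python
# Hypothetical sketch of semantic discovery over meta-tagged resources.
# Applications ask for "any roadside temperature source" rather than naming
# a specific device. Tag names are illustrative, not oneM2M-defined.

registry = [
    {"name": "sensor-17", "tags": {"quantity": "temperature", "area": "road"}},
    {"name": "sensor-42", "tags": {"quantity": "humidity", "area": "road"}},
    {"name": "cam-03", "tags": {"quantity": "traffic_flow", "area": "junction"}},
]

def discover(registry, **wanted):
    """Return resources whose tags match every requested key/value pair."""
    return [r["name"] for r in registry
            if all(r["tags"].get(k) == v for k, v in wanted.items())]

print(discover(registry, quantity="temperature"))  # ['sensor-17']
print(discover(registry, area="road"))             # ['sensor-17', 'sensor-42']
```

The payoff is that new applications can find and reuse existing data sources without any per-device integration work, which is exactly the scaling property a smart city needs.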
Making semantic interoperability a reality
In a small IoT setting, it might not be necessary to attach meaning to what the data represents as it is often implied by apps developed for a purpose. City planners seeking to fully exploit data assets, however, will be greatly restricted without semantic interoperability.
While there will be some initial costs in bringing apps up to speed with semantic interoperability, achieving similar levels of interaction via traditional data integration processes will see costs shoot up exponentially as apps and devices grow in numbers. Information available for multiple uses is also likely to be limited in such a scenario.
With the number of IoT devices increasing every year, cities serious about getting smart know they can no longer rely on traditional methods if their IoT projects are going to deliver true value. Semantic interoperability is just a small part of the standardization, but it will be integral to enabling this new way of working.
You have a great idea for an IoT initiative. Maybe improving your insight into your business operations. Maybe increasing the productivity and satisfaction of your workforce. Maybe building customer loyalty with exceptional experiences. Maybe getting a leg up on the competition with a new digital business model. In any case, selecting your IoT platform is an important choice with long-term ramifications.
The market is awash with IoT platform options today. Some are proprietary platforms, some are cloud-based, some are general-purpose PaaS with IoT features along the edges, and a few are open source. This article is intended to help you think about the options and consequences of your choice — and highlight the strategic advantages of choosing an open source option.
Think long term
The life of a software project gets shorter every year. It’s common for a software package to be obsolete and replaced by a new version every couple of years, and any software older than four to five years is considered a dinosaur. However, hardware devices typically operate over a much longer lifespan. Appliances, automobiles, home and office infrastructure all have expected lifespans measured in decades. How can you assure your customers that the digital components of these devices will enjoy similar lifespans?
With the long term in mind, the problem of lock-in to a specific system and vendor looms larger. Are you confident the platform you choose today will still be available in a decade or two? If not, what are the costs of moving from one platform to another — especially for a diminishing set of legacy customers? Can you opt out of changes in a platform that may have negative impacts on your customers, or require you to invest in costly re-architecture and implementation? What if a platform is discontinued or becomes commercially non-viable for you? What risks to your business could result?
Open source protects you in the long term. It is licensed perpetually and gives you the possibility of locking a system down in a solid working state for an extended legacy support period. Freedom to access the platform’s source code allows you long-term freedom to support the system yourself or to seek out alternative vendors for support.
Think retaining control
As IoT supports your digital transformation, your digital assets — software, systems and data — will increasingly represent the core competitive advantage of your business. Handing control of your core competencies to any third party will increase risks and limit your future options. Your IoT platform is likely to become one of the core business assets that you should own instead of outsource.
With open source (especially a permissive license such as Apache License 2.0), you have many of the same rights that an owner does — you can use, adapt, evolve, make derivatives, develop intellectual property around, redistribute, commercialize, relicense and support the platform. (Note: About the only thing the Apache 2.0 license requires of licensees is to maintain copyright and other notices during redistribution.) Building your core systems on open source retains strategic ownership-like advantages that proprietary licenses or cloud service terms cannot.
Today you may be a “user” of the IoT platform to build your connected product. But tomorrow you might open up your platform to a wider ecosystem, and even evolve into a “provider” of an IoT platform that others can use. Open source licensing terms preserve your ability to commercialize and productize your product as a platform.
“Owning” your platform can free you from:
- Technical divergence from your platform provider. What if the provider discontinues the platform, or makes unilateral changes to the features, capabilities or qualities of service that impact your ability to serve your customers? What if you find a need to customize the platform in unique ways that the platform provider is unwilling to support?
- Commercial divergence. What if the provider changes the commercial terms in a way that has a negative impact on your business model, or is incompatible with the long-term assurances you have made to your customers? Or what if the commercial terms don’t change, but your business reality does?
- Strategic divergence. What if the provider becomes strategically problematic as a result of adverse acquisition, changing market position, reputation or regulation?
Open source is specifically designed to give you options and retain your independence in the face of changes large and small, technological or commercial, incidental or strategic.
Think cost
Cost is relative. Commercial open source is generally thought of as the most cost-effective option, but there are many circumstances that can affect ROI, and many of these factors change over time. Here are some examples:
- Cloud-hosted options can accelerate early prototyping and development, and offer great agility for projects in early and iterative stages — at low entry costs. However, a cloud product offering per-device pricing that is attractive when the number of devices is low can quickly scale to unreasonable levels as the number of devices soars.
- For organizations lacking operations expertise needed to manage a scalable, highly available on-premises system, a cloud product can be attractive. However, for organizations that already have or find it cost-effective to build the capacity to self-manage open source deployments, on-premises software may offer lower long-term costs.
- IoT platform features added (presumably at low cost) to an existing IaaS platform might be very attractive when positioned as an incremental cost. However, in the long term, mixing IaaS and PaaS layers limits your ability to migrate to other IaaS platforms or into your own data center and limits your negotiating power. Clean architecture layers allow more possibilities to adapt to take advantage of the lowest cost at each layer.
Migration between entirely different platforms can be quite expensive and time-consuming; moving between deployment forms of the same platform (cloud-hosted or self-hosted open source) is much easier.
The best balance between these models is achieved with full fidelity migration between public cloud, managed cloud and on-premises or self-hosted systems. Under this model you retain your ability to adapt as needed to the best price appropriate for your current conditions.
Devices have historically followed a “security by obscurity” approach, and the revelations of many attacks mounted through remote devices show that this approach is insufficient. Open source hardware and software puts many eyes on both the design and the implementation, making early detection and remediation of security vulnerabilities possible.
Commercially supported open source vendors can be engaged at a reasonable cost to provide support services and maintain vigilance on security threats and mitigations.
Open source gives you the option to contribute back to the platform, which can have valuable benefits for your business. It ensures you can obtain specific features of importance to you, on your own timeline, under a sweat equity model. By contributing a particular feature of importance to you back into the code base, you are relieved of long-term maintenance of the feature, which gets picked up, improved and maintained by the open source community.
Network effects are at play in open source communities. Each contribution helps build a vibrant ecosystem that can benefit your business. Android is a great example: As a fully functional open source device operating system it allowed many device manufacturers to come up with creative hardware designs and fostered innovation worldwide. The more contributors, the more possibilities emerge in the platform; for instance, the number of devices it supports. You can do your part to protect the vibrancy of the platform with a strategic commitment to participation.
Open source offers benefits closer to home as well. It is well-known to attract and retain quality employees — many top engineers see use of and participation in open source communities as an exciting benefit and good for their careers. They often seek out employers who support participation in open source communities. These employees can help your organization adopt open source distributed development and governance practices that have proven effective at spurring collaboration and sharing that lead to outstanding efficiencies and innovation.
Contributing to open source is viewed as a form of corporate social responsibility, increasing stocks of goodwill among customers and the industry.
Build your future with open source
Open source will be an important force in the internet of things. Many devices already run open source operating systems such as Linux and Android, and pairing these device platforms with an open source IoT platform for device management, security and analytics offers natural synergies. Deep visibility into the code, the development activity and roadmap, and the security features can provide insights that will improve your decision-making power.
Business and technical leaders would be well-served by considering their strategy with respect to open source, and by seeking out open source IoT platforms for proofs of concept, for evaluation matrices and for any IoT project of strategic importance.
In the industrial world, and specifically the energy sector, the number of connected devices, sensors and machines is continuously growing, resulting in the internet of energy, or IoE. IoE can be broadly defined as the upgrading and automating of electricity infrastructures, making energy production cleaner and more efficient, and putting more power in the hands of the consumer.
Given the vast amount of data the energy sector generates and the increasing number of sensors added, it is the perfect environment for machine learning applications. Artificial intelligence (AI) excels at finding subtle patterns in data sets of all shapes and sizes, particularly under complex or changing conditions.
Although data within IoE is growing at exponential rates, much of that data is traditionally siloed across business units (generation, transmission and distribution, energy trading and risk management, and cybersecurity). Extracting the wealth of data from each of these silos and putting it to work is needed to promote a better IoE experience and realize the benefits of machine learning. Artificial intelligence capabilities can be incorporated to gain insight from all the data uniformly, allowing business units to transform into a collaborative system.
Generation: Prescriptive maintenance of turbines
Generation, the first major silo in the energy sector, is largely dependent on the work of turbines. Turbines consist of thousands of moving parts, and even the smallest disturbance can create major problems, causing unscheduled downtime, loss of power, safety concerns and other issues.
Applying AI and machine learning techniques to prevent unplanned downtime and catastrophic breakdowns is revolutionizing how utility companies operate. A standard approach of subject matter experts (SMEs) developing static, first-principle models places a tremendous burden on organizations to maintain and update them. Furthermore, the static nature of traditional models means operators are only able to view steady-state operation of turbines, whereas the most meaningful data comes from transient events like startups and coastdowns.
Transient conditions are where critical issues first materialize, but they are challenging to monitor because they occur over indeterminate lengths of time. Where a static model-based system is unable to solve this issue, an AI-based technology can. An artificial intelligence approach can start analyzing data and providing insights on day one, and continue to improve upon its own accuracy and effectiveness by learning from SME input.
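A rolling statistical check is a deliberately simple stand-in for the adaptive models described above, but it illustrates the key property: no fixed steady-state baseline is needed, so it can run over transient events of indeterminate length. The data and thresholds below are synthetic:

```python
from statistics import mean, stdev

# Toy example: vibration amplitudes sampled during a turbine startup
# (synthetic data; a real system would stream thousands of channels).
startup = [0.10, 0.12, 0.11, 0.13, 0.12, 0.14, 0.13, 0.95, 0.15, 0.14]

def transient_anomalies(samples, window=5, threshold=3.0):
    """Flag samples that deviate sharply from the recent trend.

    A rolling z-score adapts to whatever regime the machine is in,
    so it works during startups and coastdowns as well as at steady
    state.
    """
    flagged = []
    for i in range(window, len(samples)):
        recent = samples[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(samples[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

print(transient_anomalies(startup))  # → [7], the index of the 0.95 spike
```

A production system would replace the z-score with learned models and fold in SME feedback, as the article describes, but the detect-relative-to-recent-behavior shape is the same.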
Transmission and distribution: More than just smart meters
For the second silo of transmission and distribution, AI is able to tackle much larger problems. While smart meters and end user control of home appliances have generated excitement, they are not the most challenging big data problems being solved by machine learning.
Three specific areas in transmission and distribution where AI is playing a key role are:
- Energy disaggregation
- Power voltage instability monitoring
- Grid maintenance
In these areas, collecting, ingesting and acting on the data has created cost and operational efficiencies for companies using machine learning and AI technologies, as well as for their customers.
Energy disaggregation requires machine learning because thousands of energy “signatures” must be analyzed to find patterns of usage. An analysis of energy signatures can flag suspicious consumption values, for example those caused by physically or digitally manipulated devices, sophisticated thefts or meter malfunctions.
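The shape of that analysis can be sketched with a much simpler technique. The snippet below uses median absolute deviation (MAD), a basic robust-statistics tool rather than the signature models the article refers to, on invented meter data: learn what “normal” looks like across the population, then surface outliers that may indicate theft, tampering or malfunction:

```python
from statistics import median

# Synthetic daily consumption totals (kWh) for a neighborhood of meters.
# Meter "m42" reports implausibly low usage, as a tampered device might.
daily_kwh = {
    "m40": 31.0, "m41": 29.5, "m42": 2.1, "m43": 30.2,
    "m44": 28.8, "m45": 32.4, "m46": 30.9, "m47": 29.1,
}

def suspicious_meters(readings, cutoff=5.0):
    """Flag meters whose usage deviates wildly from the population.

    Median absolute deviation is robust to the very outliers it is
    trying to find, unlike a plain mean/stdev test.
    """
    values = list(readings.values())
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:  # all meters identical; nothing stands out
        return []
    return [m for m, v in readings.items() if abs(v - med) / mad > cutoff]

print(suspicious_meters(daily_kwh))  # → ['m42']
```

Real disaggregation works on high-frequency waveforms and learned appliance signatures, but the outlier-versus-population logic scales up the same way.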
The second area, power voltage instability monitoring, faces an explosion of dynamic data surrounding minute instabilities in which human analysis falls short. Researchers can utilize machine learning techniques to identify voltage instabilities, thus preventing brownouts and blackouts on the grid.
The last area of transmission and distribution where AI is playing a key role is grid maintenance. While many companies are still struggling to use the data they are collecting, a machine learning algorithm can use the data or features to classify and ultimately predict failures well in advance. Because machine learning algorithms can automatically break features down into additional data and analyze them at machine speed, previously unseen correlations in the data are leading to new discoveries.
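As a toy illustration of classifying assets toward failure prediction, here is a nearest-centroid classifier, roughly the simplest trainable model there is, and only a stand-in for the richer algorithms the article alludes to. The feature vectors and labels are invented:

```python
# Labeled historical feature vectors for grid assets:
# (avg load factor, fault events last year, age in years) -> failed?
history = [
    ((0.62, 1, 8),  False),
    ((0.58, 0, 5),  False),
    ((0.91, 6, 22), True),
    ((0.88, 4, 19), True),
    ((0.60, 1, 7),  False),
    ((0.93, 5, 25), True),
]

def centroid(rows):
    """Component-wise mean of a list of equal-length tuples."""
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(len(rows[0])))

# "Train" by computing one centroid per class.
failed_c = centroid([x for x, y in history if y])
healthy_c = centroid([x for x, y in history if not y])

def predict_failure(asset):
    """Classify an asset by which class centroid it sits closer to."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return dist(asset, failed_c) < dist(asset, healthy_c)

print(predict_failure((0.90, 5, 21)))  # → True
print(predict_failure((0.55, 0, 4)))   # → False
```

Features are left unscaled for brevity; a real pipeline would normalize them and use far richer models, but the learn-from-history-then-predict loop is the one described above.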
Cybersecurity: The modern battleground
The third major data silo in utilities is cybersecurity. The recent and continuous onset of attacks on critical infrastructure makes new cybersecurity methods vital. An AI offering can identify, categorize and remediate a variety of threats, including loss of personally identifiable information, zero-day malware and advanced persistent threat attacks.
To a mathematical algorithm, there is little difference between the aforementioned data and cybersecurity data. All input, regardless of source (a vibration sensor or a firewall log, for example), is simply a piece of information with unique patterns to an algorithm.
To combat the cyber front of industrial threats, an artificial intelligence product can automate the threat research process, prioritize threats based on confidence and display corroborating evidence to the analyst, significantly reducing both time to threat remediation and overall risk.
Energy trading and risk management
Energy trading and risk management is the final data silo in the energy sector. In the highly competitive and regulated utility business, there is a clear link between the company’s bottom line and forecast accuracy and reliability. If new techniques can provide more accurate forecasting, utilities can begin to offer better pricing to their customers.
AI techniques are providing insight into this process. With thousands of features from hundreds of sources, there are infinite ways to combine and correlate information. Looking for subtle, transient movements of price data on an hourly or even a second-by-second basis with millions of combinations is where AI excels.
Because utility companies need to buy oil, gas, coal, nuclear fuel and electricity, they are constantly at the mercy of volatile commodity prices. For this reason, utilities are using AI techniques to develop methodologies for market and credit risk aggregation.
With improvements in the sharing of data from data silos, the utilities industry can reap the wealth of new knowledge. From prescriptive maintenance to energy trading to cybersecurity, analytics will play an important role in how energy is produced and provided to consumers long into the future. As adoption increases, AI technologies will continue to learn and adapt, providing more value in the internet of energy.
Since the early days of the internet of things, those of us who work in the world of vulnerabilities and threats have been warning about the risks associated with IoT.
When the Mirai botnet attacks came in late 2016, many felt that IoT attacks were finally here and started looking at the past for parallels. We didn’t have to look far: Over sixteen years after the distributed denial-of-service attacks that took down Yahoo, Fifa.com, Amazon.com, Dell, E-Trade, eBay and CNN in February 2000, here was another massive DDoS attack.
These early attacks came at the beginning of what turned out to be years of large-scale attacks against PCs. So the logical question is: Does Mirai represent the same thing? Are IoT attacks here and are we looking at the beginning of another era of large-scale attacks?
At first glance, this would look to be the case. After all, one thing that enabled the large-scale PC attacks was the lack of truly effective patching against vulnerabilities. It’s notable that major attacks like Code Red, Nimda, Blaster, Sasser, Zotob and Conficker all attacked vulnerabilities that patches were available for when the attacks hit. When we look at IoT, and the fact that in many cases vulnerable devices will never be patchable, let alone patched, it’s reasonable to think that this problem will be even worse. Add to this the sheer scale of IoT compared to PCs in the early 2000s, and not only does it seem reasonable to conclude that IoT attacks will be like those that we saw in the PC era, it already seems like a foregone conclusion.
And the specifics of the Mirai attacks seem to support this conclusion. One thing that made everyone take notice of Mirai was, again, the sheer scale. The Mirai attack against Brian Krebs’ site was clocked at up to 620 gigabits per second of network traffic, and a follow-on attack against French web host OVH peaked at 1.1 terabits per second. As the world was reeling from these attacks, the Mirai source code went public and everyone was bracing for the worst.
But then something funny, and important, happened.
In the months since Mirai, there have been no additional follow-on attacks. The security press has moved off IoT altogether, focusing in the spring and summer of 2017 on WannaCry and then Petya. You’d be excused if you happened to miss Mirai last fall and thought that we were still waiting for IoT attacks to begin.
Much like the dog that didn’t bark in Conan Doyle’s “Silver Blaze” Sherlock Holmes story, the post-Mirai non-events tell us a lot about what the world of IoT attacks on the internet may look like. And it’s looking less dire than it did during the PC-era internet.
Check back to this column for part two in the series, which will take a closer look at the Amnesia botnet as another recent example of a large-scale IoT attack that can be leveraged for lessons learned when securing the IoT.
In my conversations with industrial companies looking to start or accelerate their journey toward the industrial internet of things, I’ve begun to see a phenomenon among the ranks of industrial technologists that’s not all that different from Darwin’s theory of evolution. Adaptation is the key to this theory, something industrial technologists need to do well as their environment is changing around them.
In the past, there has been a clear divide between IT teams, which control the data center, and operational technology (OT) teams, which are responsible for the care and feeding of operational automation systems. These two distinct teams had different skill sets, backgrounds and priorities. Today, to bridge the gap that has traditionally separated the two, a new breed of what I like to call “hybrid OT” professionals is emerging: IT and OT responsibilities and skill sets are converging, making the individual who can do both a valuable technologist.
What is causing this shift? There are two big things I see driving this change:
New responsibilities breed new roles — As more computing power and data collection have made their way to the edge of industrial networks, a new combination of skills is required to manage these assets (historically the domain of OT), giving “birth” to the IT/OT hybrid. We saw a similar shift occur with the rise of cloud computing: developers struggled to get IT to respond to their needs, so they turned instead to public cloud services for answers. As developers took on the responsibility of provisioning the IT infrastructure needed to run their applications in clouds, the role of DevOps was born.
A generation ready for 21st century challenges — Many OT professionals who have been in the industry a long time are now approaching retirement and a new generation is taking their place. This generation of younger, digital natives is not intimidated by technology — they were in fact raised on it. They see the potential of IIoT and will look to realize its potential as they push intelligence out to the edge and leverage data and analytics in new ways.
The most forward-looking industrial enterprises are the ones that see the value in hiring professionals who are just as comfortable working with servers as they are working with machine tools, packaging lines, pumps and valves. Enterprises actively recruiting these hybrid OT professionals value skills that can manage both IT and OT technologies. Whatever their background — IT, data science, industrial engineering — these individuals share a passion for the intersection of technology and industrial operations.
New expectations for the technology they use will also come along with the role. System availability has become an absolute necessity for business continuity and is something hybrid OT professionals will expect out of their systems and technology vendors. Similarly, these hybrid OT professionals see the value in the data produced at the edge and will lean on technology vendors to help make data protection a top priority for the enterprise.
When will we see this new breed emerge? The answer is that the evolution will happen soon — much more quickly than it occurs in the natural world. Over the next two to three years, I believe the industry will see a major influx of this hybrid breed.