If IoT is deeply concerned with data and “data is the new oil” (as UK mathematician Clive Humby asserted in 2006), does that make IoT the new oil?
The answer, perhaps unsurprisingly, is that it’s complicated. To see why, let’s return to the part of Humby’s original statement that’s less frequently quoted. He went on to say that “it’s valuable, but if unrefined it cannot really be used. It has to be changed into gas, plastic, chemicals, etc. to create a valuable entity that drives profitable activity; so must data be broken down, analyzed for it to have value.”
How IoT uses data
IoT does, in fact, often involve vast amounts of data. Indeed, an important aspect of architecting IoT systems is deciding how and where the data is processed and filtered. Some data requires an urgent response. Stop the train! Other data may just need to be appropriately filtered and sent back to headquarters for later analysis. Perhaps a part from a particular supplier is failing more often than the historical norm.
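To make the train example concrete, here is a minimal sketch of that split between data needing an urgent local response and data deferred for back-office analysis. The field names and the 50-meter threshold are illustrative assumptions, not any real system's schema:

```python
# Sketch: splitting IoT readings into "act now" and "analyze later" paths
# at the edge. Field names and thresholds are illustrative assumptions.
deferred = []  # readings queued for later analysis at headquarters

def handle_reading(reading, brake_cb):
    """Act immediately on safety-critical readings; defer the rest."""
    if reading.get("type") == "obstacle" and reading.get("distance_m", 1e9) < 50:
        brake_cb()                 # urgent: stop the train
    else:
        deferred.append(reading)   # e.g., part-failure stats for HQ

alerts = []
handle_reading({"type": "obstacle", "distance_m": 20},
               lambda: alerts.append("BRAKE"))
handle_reading({"type": "vibration", "part_id": "A-17", "rms": 0.4},
               lambda: None)
```

The point of the sketch is the placement of the decision: the brake callback fires at the edge, while the vibration reading simply joins a batch headed back for trend analysis.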
But there’s a common thread here. It’s making intelligent use of data for business outcomes, whether it’s preventing accidents or optimizing operations in some way.
An obvious point? Possibly. But it highlights why simply aggregating data naively doesn’t create value. The right data needs to be used in the right way and in the right place.
Thinking about IoT data in this light has a more subtle implication as well.
A common platform?
When enterprise IoT was first getting buzzy in the 2000s, many people assumed that IoT would develop in the form of standard platforms sold across a variety of industries. After all, while vertical applications for industries like retail were common, much of what we thought of as platforms (such as operating systems and enterprise middleware) ran more or less unchanged everywhere from banks to home improvement stores.
We do see some industry-specific IoT development. This is particularly true when there are specific legislative or regulatory mandates that an industry needs to comply with, such as in the case of positive train control in the U.S.
However, it’s proven challenging to generalize IoT to a platform that can be deployed across a range of companies and a variety of industries.
That’s because IoT and its associated data don’t exist in isolation within an organization. Instead, IoT is often the bridge between existing information technology (IT) and operational technology (OT) systems. Perhaps it connects a maintenance dispatch system to sensors that inform a decision about when a repair technician should be sent and with which part.
Of course, there are common patterns and building blocks. Business rules engines, messaging, IoT gateways, enterprise service buses and data caching are just a few of the pieces that typical industrial IoT systems will need. But the arrangement of those building blocks and their interconnections will typically be customized based on the systems already in place and the specific business problems that an organization has prioritized solving.
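As a hedged illustration of one such building block, a business rules engine at its simplest is an ordered set of predicates routing events to existing systems. The destination names below are hypothetical; a real deployment would wire them to the IT and OT systems already in place:

```python
# Sketch: a minimal business-rules layer that routes IoT events to
# existing IT/OT systems. Rule order and destination names
# ("maintenance_dispatch", "historian") are hypothetical.
RULES = [
    (lambda e: e["temp_c"] > 90, "maintenance_dispatch"),  # urgent repair
    (lambda e: e["temp_c"] > 70, "historian"),             # trend analysis
]

def route(event):
    """Return the first destination whose rule matches the event."""
    for predicate, destination in RULES:
        if predicate(event):
            return destination
    return "archive"  # default: keep for batch analytics
```

The customization the paragraph describes lives in the rule list and the destinations, which differ from one organization to the next even when the pattern is identical.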
Data is one of the means by which that can happen. But it can’t do so in isolation. It needs the context of IT, OT and business strategy. It’s within that context that data can have great value. Deprived of it? It probably has negative ROI.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
Technology innovations and enablement are flourishing, and companies are working quickly to implement relevant systems while letting their employees use the tools they know and love best — enabling flexibility, choice, security and more.
Over the last decade, there has been a lot of hype around the internet of things as a silver bullet technology for improving, well, everything. From smarter and more efficient use of natural resources to safer driving conditions to lighter and trendier wearables — the benefits have seemed endless. However, it’s important to acknowledge that IoT is not one technology or system. It’s really an umbrella term for the growing range of computing devices and channels through which organizations can accomplish more functions.
Let’s break down the different types of IoT for work to demonstrate the benefits of connected systems and technologies, and then explore the associated challenges and best practices for working through those obstacles to make modern work a reality in your enterprise.
IoT at work
IoT is a disruptive, ever-evolving concept that is becoming increasingly important to the workforce and a company’s ability to build a competitive advantage. When it comes to empowering the workforce via IoT, there are three different buckets to consider: employee, industrial and mobile IoT.
Employee IoT encompasses everything from the Fitbits employees wear to improve their health to the connected refrigerator in the office. It is the individual computing elements companies are introducing to the workplace to not only augment their employees’ productivity, but their everyday lives as well. Industrial IoT refers to the machine-to-machine perspective. The next generation of sensors connected to analytics is improving business processes without the need for human interaction (think temperature and humidity sensors at a chemical plant that alert employees there is an error). Finally, mobile IoT consists of a company’s smartphones, smartwatches and tablets. These computing devices allow employees to be more productive and have evolved tremendously in the past 10 years.
While we haven’t seen the full potential of personal and industrial IoT at work just yet, mobile IoT is having a big impact on employees in the workplace. In fact, the global mobile workforce is set to increase from 1.5 billion in 2016 to 1.87 billion in 2022 — accounting for 43% of the global workforce. Employees are seeing several benefits from mobile IoT, especially when it comes to increasing productivity, through reduced human efforts, decision analytics, higher quality data and real-time feedback.
The barriers to entry
While IoT has enabled employees to work more effectively, the technology has changed a lot of the ways companies operate, which creates some challenges. As a result, the role of CIOs is changing daily as they are working to create a model to showcase IoT’s business value to stakeholders.
To demonstrate the business value, IT needs to be at the forefront of these new technologies, influencing other groups in the organization and serving as a resource for general understanding of how these new systems deliver value. Serving as that resource means effectively explaining how the application of new technologies can help streamline operations and taking on the role of subject matter expert to appropriately vet and critically assess new technologies.
Though there is a gold mine of business value in IoT, there is also an endless amount of security challenges. And as companies race toward innovation, security often falls to the wayside. Breaches have become a commonality and organizations that don’t make security a priority will be left vulnerable and wide open to exploitation. Even with proper security precautions, regulations like GDPR mean IT will need to tread carefully and strike a balance between collecting useful data to drive business value while ensuring security protocol to avoid potential fines.
While IoT devices are generally lower powered than their desktop or laptop counterparts, they are still powerful. Unfortunately, they are also easier to break into because they don’t ship with secure default configurations. As with any technology, though, as long as IT departments have done their research, hardened the OS and applications, and written good application code, there is no reason these configurations can’t be secure.
Four tips to successfully use IoT
To successfully implement IoT in the workplace — whether it be employee, industrial, mobile IoT or a combination of the three — consider the following best practices:
- Don’t think about your IoT strategy; think about your data and business strategy. IoT isn’t one specific thing; it’s a journey your company embarks on to figure out how ubiquitous computing can impact your business.
- Understand how your data flows, what kind of data helps your company make decisions, from where that data is collected, and when that data drives action.
- Prioritize the data flows that are most business critical, and then consider IoT-style approaches that make those data flows faster and better understood through analytics.
- Secure, secure, secure! There are endless amounts of ways for hackers to compromise proprietary data. Design a structured model of deploying endpoints that all have a shared security framework to ensure you have a central way to protect and monitor your endpoints.
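The “shared security framework” in the last tip can be sketched as a central baseline that every deployed endpoint is audited against. The control names here are illustrative assumptions, not a formal standard:

```python
# Sketch: auditing a fleet of endpoints against one shared security
# baseline, giving a central place to spot configuration drift.
# The control names are assumptions, not an established standard.
BASELINE = {"tls": True, "default_password_changed": True, "auto_patch": True}

def audit(endpoints):
    """Return IDs of endpoints missing any baseline control."""
    return [eid for eid, cfg in endpoints.items()
            if any(cfg.get(key) != value for key, value in BASELINE.items())]

fleet = {
    "cam-01":    {"tls": True, "default_password_changed": True,  "auto_patch": True},
    "sensor-07": {"tls": True, "default_password_changed": False, "auto_patch": True},
}
flagged = audit(fleet)  # sensor-07 still has its default password
```

A central audit like this is what makes monitoring tractable at fleet scale: one baseline, one report, rather than per-device ad hoc checks.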
Find the right use cases for your business
There continues to be lots of hype around IoT and sometimes it is difficult to wade through the noise to determine how an enterprise IT organization should be using this broad array of technologies. The number of devices with connectivity will continue to grow exponentially over the next decade. Some of those devices will have a human interface and be multipurpose, like a smartwatch. Others will never interface with a person and be highly specialized, like a motion sensor. Ensuring your company is productively using IoT — and effectively navigating challenges along the way — is key to tapping into the innovation your employees want and your business needs.
The power of facial recognition has knocked at my door. My Nest Cam IQ Outdoor sends me a notification when a stranger stops at my front door. As security cameras scale across our neighborhoods and cities, facial recognition has the power to transform them into tightly integrated, connected communities.
I have Nest Cam IQ Outdoors all around my home. Previously, the Nest Cam was doing machine learning to identify a human from a cat using image recognition. It now uses facial recognition to recognize faces and gives us the choice to tag people. It then can use anomaly detection to spot people who are not regular visitors and alerts us.
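Here is a rough sketch of that tag-and-alert logic — not Nest’s actual implementation, just the general idea of matching a face embedding against tagged people and treating poor matches as strangers. The 3-D vectors and threshold are toy stand-ins; a real camera derives embeddings from a trained face-recognition model:

```python
# Sketch: compare a face embedding against tagged household members and
# flag anyone unknown. Vectors and threshold are toy assumptions.
import math

KNOWN = {"alice": [0.9, 0.1, 0.2], "bob": [0.1, 0.8, 0.3]}
THRESHOLD = 0.5  # max Euclidean distance to count as a match (assumed)

def identify(embedding):
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    name, d = min(((n, dist(v, embedding)) for n, v in KNOWN.items()),
                  key=lambda t: t[1])
    return name if d <= THRESHOLD else "stranger"  # stranger -> send alert
```

The anomaly-detection step is just the `else` branch: anyone too far from every tagged face is, by definition, not a regular visitor.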
As connected cameras become more prevalent in our lives and cities, the images and videos will be used for facial recognition to identify people. This will create many opportunities and challenges.
Facial recognition: Consumer applications
We have facial recognition in our phones and social networks. When Google Photos or Facebook tags pictures of us and maps it to our past pictures, it is training facial recognition algorithms to get better.
My iPhone X uses facial recognition to unlock my phone. Security cameras in cities use facial recognition software to help law enforcement catch fugitives in real time, identifying people by comparing faces to a criminal database. Lufthansa in Los Angeles and Qantas in Sydney use facial recognition to improve the airport experience, verifying faces against databases to speed travel check-ins.
China has taken it a step further and put cameras in schools to watch kids’ expressions to see if they appear attentive or bored during a lesson. This is an example where facial recognition can go past identifying faces to combine with machine learning to create new applications which might invade our privacy.
Industrial application of facial recognition
Robots use facial recognition to identify people, providing security to banks and allowing customers to check in within seconds by scanning their faces. Security robots patrol corporate campuses and malls in Silicon Valley and could be used to spot intruders.
Surveillance drones combine their video footage with that of body cameras used by police in America, using facial recognition to track people without their knowledge. These systems can protect lives and property, but today there is no transparency on what data is being stored or how it is used.
The challenge with facial recognition
Accuracy and biases
Amazon offers facial recognition software called Rekognition that was piloted by the city of Orlando. Orlando discontinued the pilot after Amazon employees protested that the algorithm was inaccurate and biased. Biases are fed into the algorithm when the training data used to improve it is biased against certain groups of people, which can lead to false positives. An ACLU test showed the limitations of Amazon’s Rekognition algorithm when it incorrectly matched 28 members of the U.S. Congress to criminal mugshots, identifying them as known criminals.
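A toy example of why thresholds matter for false positives: with made-up similarity scores for photo pairs that do not actually match, the choice of confidence threshold directly controls how many false matches get through. (The scores below are hypothetical, not the ACLU's data.)

```python
# Sketch: how the match threshold drives false positives. The scores are
# made-up similarity values for photo pairs that do NOT actually match;
# any score at or above the threshold becomes a false positive.
scores = [0.95, 0.83, 0.81, 0.74, 0.62]  # hypothetical non-matching pairs

def false_positives(threshold):
    return sum(score >= threshold for score in scores)
```

Raising the threshold suppresses false positives at the cost of more missed matches; the debate around Rekognition turned partly on which threshold is appropriate for law enforcement use.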
In the U.S. and Europe, privacy is a huge concern: such AI software can use our data in different situations without our permission.
In the Westfield mall in San Jose, a security robot from Knightscope scans people’s faces to watch for security violations. Since a mall is a public space, we do not know what data is captured from people and kids, or what uses it will be put to in the future. Without regulation, there is no transparency on what is captured and how it is interpreted or saved for future use.
What is possible in the future with facial recognition?
Today, security cameras at homes and airports perform facial recognition with the promise of speeding up airport check-ins and keeping our homes and offices safe. As large volumes of people’s faces are captured, that data can be put to new uses we are not able to conceive of today.
When Nest Cam has enough cameras in a community with well-trained facial recognition data, it can help with the search and rescue of missing kids or track intruders on the run. On the dark side, “eye in the sky” type of surveillance could be done by AI operating on facial recognition data.
What do you think about facial recognition applications? Are you excited about them or are you concerned about big brother monitoring of our lives? Do you see some new applications for the future?
Post a comment. I would love to continue the discussion.
One purpose of computing is to accomplish tasks. Applications are a fundamental part of computing, and their use is seemingly ubiquitous today. Users often want them to be faster and cheaper. Containers can help drive agile software development processes such as those described by the 12 principles behind the Agile Manifesto. They also help with lean software management and configuration.
Containers can make a lot of sense within development environments. Yet, understanding why folks are using containers at the edge of an IoT network can require a little more thought. Unlike developers who are often looking for quick and frequent updates, operators in the field tend to iterate more slowly. They tend to prefer as little change as possible, given the millions of endpoint devices that could be affected. So, why the interest in containers here?
Let’s talk a little about containers first
A key to today’s agile method of software development is the concept of continuous integration and continuous delivery (CI/CD). Continuous integration enables developers to validate their changes by merging them back into the main branch for testing. Continuous delivery enables them to release new changes to customers more quickly through an automated release process.
Linux containers can aid in this process by giving each application its own isolated environment in which to run, while sharing the host server’s operating system. Containers offer environments in which developers can write, test and iterate more frequently. Changes are constrained within individual services, which can improve protection for the rest of the code, speed up debugging and improve time to market. Containers are enablers for agile software development and delivery, and, depending on how they are configured, they can conserve bandwidth to help reduce transmission costs.
Containers are a lot like a slice of pizza
Bear with me while I explain why pizza can be analogous to containers. At a recent lunch with a friend of mine, I suggested we try Hawaiian pizza (yes, the kind with bacon and pineapple toppings), my kids’ favorite.
My friend had never tried Hawaiian pizza and wasn’t sure if she’d like it. So, I suggested we make only half the pizza Hawaiian. And instead of cutting the pie in the traditional way, it could be cut into smaller slices. While she agreed that limiting the Hawaiian to only half seemed like a good idea, she was puzzled as to why we should cut the entire pie into smaller slices. I explained that if she didn’t like it, she only had to try a small piece. Then I could package up the rest as a surprise for my kids.
When the pizza was put on the table, each slice was like a container. Each included different things, but was based on a common foundation of ingredients. With pieces of pepperoni on some, peppers on others, and pineapple and bacon on still others, each slice also delivered a different number of calories per “bite” based on its components.
The benefit of dealing with slices rather than a whole pie
Now, imagine that there are pizzas at each of your IoT endpoints. You are looking to update each endpoint with a new, tastier pizza. The calories found within the pizza represent all the bandwidth required to deliver the pizza update to the edge of the network. How long is it going to take to make full pizza deliveries to each of those endpoints?
When you deal with application development and delivery from a container perspective, you shouldn’t have to replace the whole pizza. You should just need to deliver a new slice. A slice has fewer calories than a pie. Like a slice, a container can take up less bandwidth to deliver to the edge. That can save money and help streamline the delivery process. There’s another benefit as well: If the consumers at the endpoints don’t like the new taste, it should be easier to roll back to a previous version without scrapping too much of your investment.
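The pizza arithmetic translates directly. With illustrative layer sizes (the MB figures below are assumptions, not measurements of any real image), shipping only the changed layer instead of the whole image is the difference between a slice and a pie per endpoint:

```python
# Sketch: the "slice vs. whole pie" math for container updates.
# Layer sizes in MB are illustrative assumptions.
image_layers = {"base_os": 200, "runtime": 150, "app": 25}

def update_cost(changed_layers):
    """MB to push to an endpoint when only these layers changed."""
    return sum(image_layers[layer] for layer in changed_layers)

full_pie = sum(image_layers.values())   # ship everything: 375 MB per endpoint
one_slice = update_cost(["app"])        # ship just the new slice: 25 MB
```

Multiply that per-endpoint difference by millions of edge devices and the bandwidth (and cost) argument for layered, container-based delivery becomes clear.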
More on using containers at the edge
If you’re interested in more detail on the subject, take a look at “Containers and Clusters for Edge Cloud Architectures,” a report by Claus Pahl on this topic. For a deeper dive, I recommend you check out the working group for the IoT edge.
So, you’re probably wondering if my friend liked the new taste of Hawaiian pizza. Actually, she didn’t. It’s a good thing we cut it up into smaller slices; I was able to bring the rest home in a takeout (ugh) container.
While IoT-connected devices began in fits and starts several years ago, today they’re gathering momentum.
That’s because the internet of things offers immense possibility by connecting hitherto “dumb” devices to the internet to provide intelligence and decision-making capability in the moment. These sensors could send data on products’ operational state and location as well as a host of other key data to help companies gather insight to make business decisions.
IoT devices lay the foundation for so many efficiencies and innovations today, from connected cars and smart refrigerators that tell us how to get where we need to go to sensors that can alert us about product malfunctions to healthcare wearables and mobile apps that can guide our health regimen to the supply chain, where IoT devices can track a shipment start to finish.
But over the past several years, while connected devices offered insight, the data volumes generated were fast and furious, and enterprises’ ability to structure their insights into forward-looking behavior remained incremental. With machine learning and artificial intelligence maturing at exponential rates and combining with IoT devices, true business insight is on the way.
AI-enabled IoT may finally yield the speed, insight and scale that IoT needs to flourish in enterprises.
“The reality is we’re moving from connectivity to intelligence,” said Anthony Passemard, head of product management for Google’s Cloud and IoT, speaking at Google Cloud Next 2018 in late July. “Without actionable insights on the data, it’s hard to get return on investment. Intelligence is key to those investments.”
Harnessing that intelligence has continued to be a challenge, though. By 2025, IDC predicts that there will be over 80 billion connected things creating and replicating more than 180 zettabytes of data every year.
But the speed and accuracy of IoT data needs to improve to be ready for the enterprise given the massive influx of data. With AI-enabled IoT, companies can whittle down the billions of data points they have into truly meaningful kernels.
Aker BP, an oil and gas company in Norway with some 2,000 employees, produces nearly 150,000 barrels of oil a day. It uses IoT to monitor its equipment, to protect its workers from harm and to reduce costs.
The company can pull data quickly, then turn it into meaningful action through AI-enabled IoT activities.
“One of the things we see quite good results from, we are pulling up to 1 million data points per second into a data store,” said Kjartan Nesse, SVP of operations at Aker BP. “Based on this data, once we get it contextualized in the right way and push it back to the operators, that really helps drive decisions.”
Further, Nesse said, AI-enabled IoT data can provide predictive maintenance insights for equipment or provide insight on conditions in environments and “drives opportunities to move people out of dangerous zones.”
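The underlying pattern — compare live readings against recent history and alert on drift — can be sketched in a few lines. This is a generic illustration under assumed parameters, not Aker BP’s actual pipeline:

```python
# Sketch of the predictive-maintenance idea: flag a sensor stream when a
# reading drifts far from its recent average. The window size and sigma
# limit are assumptions for illustration only.
from collections import deque
import statistics

def make_monitor(window=20, sigmas=3.0):
    history = deque(maxlen=window)
    def check(value):
        """Return True if `value` is an outlier vs. recent history."""
        alarm = (len(history) == window and
                 abs(value - statistics.mean(history)) >
                 sigmas * (statistics.pstdev(history) or 1e-9))
        history.append(value)
        return alarm
    return check
```

Real systems layer contextualization on top — which asset, which failure mode, which operator to notify — but the core of turning a million points per second into a handful of decisions is exactly this kind of reduction.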
Companies are also using artificial intelligence and IoT to digitally recreate what’s happening in the real world. Freight Farms, which creates year-round agricultural environments globally, uses environments outfitted with IoT sensors to generate ideal farming conditions, such as soil, air and CO2 levels.
“We’re looking to lower the barrier of entry [to get into the farming industry] and make food supply a lot more accessible,” said Jon Friedman, cofounder of Freight Farms. With IoT, it’s possible to use data to create consistency and to optimize farming conditions.
“IoT can set the environment for a certain crop,” Friedman explained. “You can create the perfect day of summer in that environment. You can take the nutrients of Italy, pair that with the air quality of Salinas Valley and combine that with the CO2 near an active volcano, and give a plant the light spectrum it wants.”
In this way, Freight Farms isn’t just mimicking real-world conditions through IoT. Rather, it’s creating an ideal, other-worldly set of optimized conditions. With a mobile app, farmers can optimize conditions beyond the barriers of the physical world.
“IoT is central to build these environmental recipes to match what is out there in this world, but also what isn’t available in this world,” Friedman said.
AI and IoT devices push enterprises to the edge
But the proliferation of devices and data making round trips to the cloud only floods this infrastructure with volumes of data it can’t necessarily handle.
That’s why IoT providers — indeed IT infrastructure providers of all stripes — are moving compute-intensive processes such as AI-enabled IoT processes to the edge — edge computing, to be exact.
“All the things that are producing data, and the people interacting with each other and with things, it’s going to push data to the edge,” said Gartner’s Thomas Bittman at a conference on infrastructure in late 2017. “We can’t have enough pipes cheap enough to accommodate the amount of data out there.”
Accordingly, Gartner estimates that, as data is pushed to the edge, enterprise adoption of edge computing will follow suit. “While today some 10% of enterprise-generated data is processed outside a traditional centralized data center or cloud, by 2022, Gartner predicts this figure will reach 50%,” the research firm estimated in “What edge computing means for infrastructure and operations leaders.”
Consider data-intensive capabilities such as augmented reality-enabled games, using videoconferencing in meetings or asking queries of voice-based digital assistants. All of these data-intensive processes have to execute tasks in fractions of a second and call on massive amounts of data. Such tasks are best executed at the edge, where devices can access resources without having to call on the cloud. Further, edge computing architectures are better suited to industries with compliance requirements that prohibit their data from being sent to the cloud.
This convergence of IoT, AI and edge computing might be known as the intelligent edge, with AI-enabled connected devices that don’t rely on centralized architecture. It’s bringing the data insight, the compute resources, and the users and devices right where they need to be to take intelligent action in real time. Experts say this intelligent edge is inevitable given our current data and device proliferation.
“In the next few years,” Bittman predicted at the conference, “you will have edge strategies — you’ll have to.”
By now, we have an understanding that IoT devices, while useful and convenient, come with many security concerns. The simplicity that consumers love about IoT devices is, in fact, what makes them so risky. IoT devices are easy to connect to remotely by just about anyone — and, unfortunately, not just by the people one would wish to share access with. IoT devices are found everywhere — on your wrist, in the office, driving down the street — putting corporate and personal data at risk. It’s time for government agencies and organizations to get involved and enforce regulations around the security of these devices.
The good news is that June was an important month in moving the conversation forward with regard to IoT security legislation in the United States. We’ve witnessed huge strides toward addressing the ever-increasing need for IoT security regulation and management, in the United States and globally.
- An IoT bill cleared the House of Representatives Commerce Panel. This bill directs the Commerce Department to study and report to Congress within one year on the U.S. internet-connected device industry, including voluntary and mandatory standards that are being developed around the world for the IoT sector, clarifying which federal agencies have jurisdiction over the sector and any regulations or standards those agencies have put in place that would impact the IoT industry.
Insight: This seemingly small directive is actually a great step in the right direction. It will advance the conversation on IoT by drawing attention to the areas that currently lack legislation on minimum IoT security standards, and by clarifying which agencies have jurisdiction when it comes to (the currently nonexistent) enforcement.
- The staff of the Federal Trade Commission’s Bureau of Consumer Protection submitted a comment to the Consumer Product Safety Commission about the potential privacy and security issues associated with internet-connected consumer products. The FTC warned that poorly secured IoT devices could pose a consumer safety hazard and outlined ways to mitigate such risks. For instance, a car’s braking system could fail if infected with malware, and carbon monoxide or fire detectors could stop working if they lost their internet connection.
Insight: Beyond the obvious safety concerns this comment raises — and the outlined mitigations, which could save lives — it also advances the general conversation about the lack of security standards in the IoT industry.
- The FTC suggested considering security disclosure rules for connected device makers.
Insight: If IoT makers are mandated to clearly disclose their security protections, it would presumably help consumers in making better decisions when purchasing IoT devices of different types. It would also provide an enforcement option for the FTC to go after IoT manufacturers that misrepresent their security protections.
What does this progress mean for consumers and manufacturers?
Most consumers may not yet understand the importance of assessing the potential damages that can be caused by their many IoT devices. This is primarily because the damage in many cases is not personal. An army of bots will cause damage to society at large and not just to the individual using the device (although clearly a bug that crashes a car would be considered personal damage).
From the viewpoint of manufacturers, once they start investing in security patching, the price of IoT devices will go up and a competitive edge on pricing could be lost. Without legislation leveling the playing field, there would be substantial inequality in the prices vendors charge; legislation would thus provide an equalizing force in the market.
On the flip side, strong internet security regulations on manufacturers of IoT devices would encourage security upgrades among competing companies that would like to sell in the U.S., and any software improvements would be available in the devices anywhere they are sold, as it makes sense to maintain one consistent version of the software.
Bruce Schneier, CTO at IBM Resilient and longtime advocate for IoT security regulation, has said on many occasions that the government should impose basic security standards on IoT manufacturers, forcing them to make their devices secure, despite the fact that most customers are not aware of security’s importance. He rightly notes that this is an international issue and that IoT devices made in other countries could still be used in distributed denial-of-service attacks to bring down U.S. websites in botnet attacks.
Once governments see the benefits regulation will offer society, regulations will be enacted, requiring vendors to provide a basic standard of security measures. These would include investing in centralized security patch updates (security by default) and a standardized number of years during which each device must receive software patches. Government should actively seek legislation to regulate and manage IoT risks and threats by expanding device security measures. There are so many vulnerabilities in IoT, and hacking IoT devices is so easy, that we must proactively seek solutions rather than wait for disasters or emergency situations to force reactive responses.
Now with a firm hold on the industrial world, the internet of things is being used by companies across industries to transform their business operations with digitization. As the industrial process becomes increasingly digital, a more complete convergence for the virtual and physical worlds is imminent, offering companies the opportunity for a complete digital footprint of their products, facilities and plants. We’ve entered the age of the digital twin.
Forty-eight percent of organizations that implement IoT said they are already using or plan to use digital twins in 2018, according to a recent IoT implementation survey by Gartner. Further, the number of surveyed organizations using digital twins will triple by 2022. As manufacturers look to digitalization to transform operational effectiveness, improve safety and increase production, and simply build better products, interest in digital twin technologies is rising.
Despite the growing awareness of the benefits to manufacturers and other industrial entities, many questions remain in the minds of decision-makers who are considering a digital twin strategy: What does the lifecycle look like? What sort of environment needs to be in place before implementation? What are the best practices for continuing to derive business value?
First, let’s take a step back and outline what we mean by digital twin. Born from growing connection points brought on by IoT, a digital twin is a digital representation of a physical asset — such as a pump, motor, turbine or an entire plant — that represents the structure and behavior of that asset in real life. Digital twins provide a near-real-time digital image of the physical object, process or plant to help optimize performance.
The power of a digital twin is that it allows the manager or operator to observe the behavior of the asset and learn from past and present operations to make predictions about future operations. This knowledge is incredibly valuable because it allows you to anticipate problems before they happen and uncover new opportunities, resulting in safer, more efficient and profitable operations.
The maximum value of a digital twin is derived through the continuous and consistent capture of relevant data throughout the entire lifecycle, from engineering to operations; converting that data into business-contextual information; and enabling a closed digital loop that returns value to every stage of the lifecycle.
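To make the idea concrete, here is a minimal sketch of a twin that mirrors telemetry against its design intent. The `PumpTwin` class, the nominal flow value and the drift tolerance are all invented for illustration; nothing here comes from a real vendor API.

```python
from dataclasses import dataclass, field

@dataclass
class PumpTwin:
    """Toy digital twin of a pump: holds the engineering design value
    and mirrors runtime telemetry against it."""
    asset_id: str
    nominal_flow: float            # design value from engineering (m^3/h)
    tolerance: float = 0.05       # allowed relative drift before flagging
    history: list = field(default_factory=list)

    def ingest(self, timestamp: float, flow: float) -> bool:
        """Record a telemetry sample; return True if it drifts out of spec."""
        self.history.append((timestamp, flow))
        drift = abs(flow - self.nominal_flow) / self.nominal_flow
        return drift > self.tolerance

    def mean_flow(self) -> float:
        """Observed behavior, which can feed back into the engineering model."""
        return sum(f for _, f in self.history) / len(self.history)
```

For example, a twin created with `PumpTwin("P-101", nominal_flow=120.0)` would accept a reading of 121.0 as within spec but flag a reading of 100.0, since it drifts well past the 5% tolerance.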
Until recently, digital twin capabilities have been limited by the massive amounts of data required to create and maintain the digital replica. However, driven by advances in IT/OT convergence and more powerful computing and storage capabilities, the technology is now available to the masses, at scale.
Using the example of a manufacturing plant, I’ll illustrate what a complete digital twin lifecycle looks like and why ensuring the continuity of the digital value loop is key to maximizing one’s investment in the strategy.
Closing the loop on the digital twin lifecycle enables a 360-degree view into operations
- As part of process design early in the lifecycle, a “first born” digital twin of the plant is created as a model that simulates the entire plant before the assets are even designed.
- The processes, equipment and operations are analyzed through multiple simulations for optimal safety, reliability and profitability.
- This digital twin matures further to incorporate the physical assets’ design information associated with the process design of the entire plant. Aspects of the plant digital model are used to train operators with virtual reality-based immersive training environments.
- During the operational stage of the plant, variations from optimal process and asset design are captured during runtime, and the digital twin is updated with this information.
- Operations and maintenance personnel use augmented and virtual reality technologies with mobile devices to address plant- and field-based maintenance issues for the involved assets. Given the current state of an asset, the digital model with predictive learning technology enables proactive identification of asset failures before they occur.
- Using artificial intelligence and predictive learning technologies together with advanced process control, control strategy design and process optimization, the necessary variations from process and asset design are fed back to the engineering stage of the lifecycle, enabling a complete and efficient digital value loop.
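The steps above can be reduced to a single feedback pass: capture the runtime variation from the design, then return an updated setpoint to engineering. The function below is a toy sketch of that loop; the setpoint values and the damping factor are invented for illustration.

```python
def run_value_loop(design_setpoint: float, observed_samples: list,
                   damping: float = 0.5) -> float:
    """One pass of the digital value loop: compare runtime behavior to the
    engineering design, and feed the variation back as an updated setpoint."""
    observed_mean = sum(observed_samples) / len(observed_samples)
    variation = observed_mean - design_setpoint    # captured during operations
    return design_setpoint + damping * variation   # fed back to engineering
```

With a design setpoint of 100.0 and observed runtime samples averaging 105.0, a damped update moves the next engineering cycle halfway toward what the plant actually does.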
Closing the loop through the digital link across process engineering, augmented reality-based asset performance, cloud-based predictive analytics and artificial intelligence-based process optimization enables you to tap into the high-value reservoir of information provided by a true 360-degree digital twin of the plant.
The success of digital twin technologies relies on the standardization of open IoT
While many players have emerged in the digital twin space, there’s still work to be done to ensure the technology realizes its full potential. A digital twin can only be as good as the data on which it is built and the real-time data it continuously collects. But if the various hardware and systems involved are proprietary and not able to integrate, the amount of meaningful insights generated by the available data is significantly reduced. Interoperability is a critical factor in the power of digital twin technologies to drive effective, business-driven decisions.
Most digital twin implementations are not greenfield projects. In reality, digital twins are being introduced at many different stages in the asset’s lifecycle or in the overall process. That’s why it’s important that we arrive at a common industry standard of open, interoperable and hardware-agnostic IoT systems, so operators don’t have to worry about replacing operationally sound assets to take advantage of new technological developments such as digital twins.
Greater benefit also comes from open collaboration and co-innovation between technology leaders, not one player working to capture the market with a single magic bullet solution. Customers win bigger with interoperable systems that can take data from multiple vendor products and systems to create more sophisticated insights. It also encourages developers to spur innovation faster: Companies can build their own industrial applications to make their existing assets perform better.
Having a more robust vendor ecosystem and empowering developers to create more value around digital twin technology will speed the adoption of the strategy across industries. This will lead to even more efficiencies when you consider that the explosion of digital twins can ultimately help create an open database of ready-made digital assets. Rather than having to spend time and resources to develop digital twins of common equipment or parts, companies can use the database as a starting point to roll out digital twin strategies faster.
We’ve only reached the tip of the iceberg in realizing the full potential of digital twins, especially as technologies such as machine learning, AR/VR and edge computing evolve alongside best practices for modeling and implementation. One thing is for certain — having an open, interoperable standard for digital twin technologies will be paramount to reaping future benefits and achieving true digital transformation.
This is the second of a five-part blog series.
Ok, so now that we’ve covered the importance of digital transformation, a new product mindset and modern software development methodologies, let’s shift focus to IoT.
You’ve got mail (until you prefer Gmail)
Do you remember those AOL CDs you used to get? And, once you were set up, those three exciting words when you got your email? It took the world by storm, enough so that Tom Hanks and Meg Ryan made a whole movie about it.
In the beginning of internet scale (for people)
Many people used to think that AOL was the only way to get connected to the internet. AOL excelled because it dumbed the experience down and made it easy to connect, learn and build communities. In the greater scheme of things, sure, it was relatively limited in function, but it worked well for a lot of people.
Then people started to learn about Google and its powerful search capabilities and realized you could type a keyword into a search engine instead and get even better results. Later, people shifted to direct internet connections and explored increasingly rich websites and along came those little trends of e-commerce, mobile and social.
IoT is about more than cheap devices and ubiquitous connectivity
Of course, lots of people had been building IoT-like technologies since before it was even called IoT, but the term really reflects embedded computing hitting scale. IoT is being driven by lower silicon costs, increased connectivity and the rise of the cloud, but equally important are other trends that have created increasing demand for real-time information, an ever-growing network effect and a pressing need to stay competitive. Combined with the maker movement and the likes of Kickstarter and Indiegogo, these trends have really accelerated the innovation cycle.
And yet, despite an increasing ramp in IoT projects and interest from key stakeholders, I like to say we’re in the “AOL stage of IoT,” just getting things online in scale.
OT and IT — the preeminent IoT conference Venn diagram
Speaking of stakeholders, it seems like you can’t go to an IoT event without seeing some variation of a Venn diagram with circles labeled OT and IT intersecting. Maybe some cute graphics of people with hardhats versus laptops for good measure. Before we go deeper on the technology side, it’s important to touch on the dynamics between these two key organizations because it’s a part of the reason why I say we’re in the AOL stage.
OT is all about uptime
Operations technology (OT) is historically about overseeing operations rooted in the physical world, and for OT teams it’s all about uptime, efficiency and quality. As a result, the OT mindset has historically been “don’t touch it as long as the process is running.” Despite running a highly sophisticated operation, an OT organization often has no idea what’s happening with that process in the moment (product quality, for example). Instead, the team typically finds out about an issue later in some summary report, something that’s increasingly expensive the further a defective part gets from the factory, especially if it makes its way to a customer.
Frankly, the notion of continuous software delivery freaks the typical OT person out. And for good reason: downtime in the OT world typically has an immediate impact on production or safety. This is also why security is paramount and why security by obscurity (i.e., segmenting operations off from broader networks, much less the internet) has been the norm.
IT is about security, governance and reducing costs
On the other hand, you have IT — an organization that has historically been a cost center whose efforts are increasingly being commoditized. As such, IT has been embracing cloud to reduce costs for the past 10-plus years while savvy IT professionals have been adapting to help their business counterparts and developers modernize the way they build and deploy applications.
Security is also paramount to IT, but unlike in OT, the impacts of IT security breaches play out over long periods of time and often at great scale. For a breach leading to stolen credit card numbers, for example, the financial implications can be long-lasting and far-reaching. IT understands security, manageability and scale.
OT and IT organizations have historically been at odds with one another because they’re motivated by different things. However, convergence of their skill sets is important for scaling IoT systems and embracing digital transformation in general.
Of course, another key stakeholder here (who typically gets overlooked on the aforementioned preeminent IoT conference Venn diagram) is the line of business. This is often the group that ultimately owns the keys to the IoT castle because it drives the business and ultimately controls the money flow. It would get a snazzy power suit if included on the diagram.
Pi and the sky
To get the power of IoT, you need to connect the physical world to the digital, with the evolution typically starting with real-time monitoring for visibility and then moving to optimization through analytics and eventually automation.
OT folks are increasingly experimenting to get this visibility. I referenced the maker movement earlier, and I see many proof-of-concept (PoC) projects out there starting with a Raspberry Pi-class device and the public cloud. I call this “Pi and the sky,” and developers and engineers do it because it’s easy to get started. Often this involves shadow IT: completely bypassing the company’s IT networks and connecting to the cloud through cellular connectivity.
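A “Pi and the sky” PoC often looks something like the sketch below: read a sensor locally and push only anomalous readings upstream so the cellular data meter doesn’t hum. Here `read_sensor()` and `publish()` are hypothetical stand-ins for the device and cloud sides; the thresholds are illustrative.

```python
import json
import random
import statistics

def read_sensor():
    """Stand-in for real device I/O: a simulated temperature in Celsius."""
    return 20.0 + random.gauss(0, 0.5)

def publish(payload):
    """Stand-in for the cloud side (e.g., an MQTT or HTTPS call)."""
    print("->", payload)

def monitor(samples, threshold=2.0, window=20):
    """Publish only samples more than `threshold` standard deviations
    away from the mean of the last `window` readings."""
    history = []
    published = []
    for value in samples:
        if len(history) >= window:
            recent = history[-window:]
            mean = statistics.fmean(recent)
            stdev = statistics.stdev(recent) or 1e-9  # guard flat windows
            if abs(value - mean) / stdev > threshold:
                publish(json.dumps({"temp_c": round(value, 2)}))
                published.append(value)
        history.append(value)
    return published
```

Twenty readings hovering near 20 °C produce no traffic at all, while a single 25 °C spike afterward is published, exactly the kind of edge filtering that keeps a cellular-connected PoC affordable.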
Making the business case and lining up stakeholders
I also call this trend Pi and the sky because many of these early efforts start with no real business case. That’s why we hear about so many failed IoT projects, and why these sorts of PoCs tend to get us technology providers stuck in what I call the dreaded “PoC friend zone.” In general, the first and second biggest challenges in IoT are business case and stakeholder alignment, well before technology.
Those that get pilots approved for production often quickly realize that they need to re-architect for scale, both in the hardware they use and in how the software is built. Maybe it wasn’t such a good idea to hardcode to one public cloud when the data meter really gets humming!
Starting and scaling
We always recommend that all stakeholders work together from the outset. When projects are ready to hit scale, OT can especially benefit from partnering with IT for its knowledge in scaling compute infrastructure, security, manageability and modern software development principles.
In summary, I like to say IoT starts with OT but scales with IT. In my next blog, I’ll talk about the inevitable shift to the edge due to the sheer number of things coming online. In the meantime, I’d love to hear your comments and questions.
Cyberwarfare can devastate economies with connected infrastructure. Military strategists favor such attacks because they’re hard to defend against and highly cost-effective.
The Allies bombed German ball-bearing plants in World War II because destroying them would degrade German production of tanks and fighter planes. Cyberwarfare today can devastate an entire economy. As former Homeland Security Secretary Michael Chertoff recently explained, “Cyberattacks on critical infrastructure from state or state-sponsored actors are the biggest threat right now.”
Cyberattacks also have political, military and economic dimensions. What’s an adversary’s purpose behind an attack? How are targets chosen? What are asymmetrical warfare and the ROI of a cyberattack? How are such attacks conducted?
Picking a high value target
Public information from the U.S. Department of the Army frames it this way: “The emphasis of targeting is on identifying resources (targets) the enemy can least afford to lose or that provide him with the greatest advantage … Denying these resources to the enemy makes him vulnerable … an electronic attack could potentially deny essential services to a local populace, which in turn could result in loss of life and/or political ramifications.”
- Military — Exploit an adversary’s weakness and degrade their capability and/or will to fight
- Political or diplomatic — Weaken adversary’s status or power in the world or region
- Informational — Generate favorable press, gain information superiority
- Economic — Undercut adversary’s ability to sustain operations
The ROI of asymmetrical warfare
Cyberwarfare is asymmetrical. The parties in conflict use the means available to them to inflict as much damage as possible by carefully exploiting their adversary’s weaknesses. One example of this tactic is Palestinians flying “fire balloons” into Israel, which has resulted in thousands of acres of valuable farmland and nature preserves being burned, the Times of Israel reported. A $10 incendiary balloon can cause thousands of dollars in economic damage. Cyberattacks similarly target the key infrastructure elements whose loss will cause the most damage.
Cyberattacks on infrastructure
IoT devices, shared communications and cloud computing infrastructure expand the attack surface available to hackers. Electric power grids are especially vulnerable given the broad impact a power outage has on the economy. An electric utility with multiple partners has a broad and diverse attack surface — places where an attacker could attempt to access internal networks from the outside.
Employees at an electric utility or its partners are often unsuspecting targets. Spear-phishing attacks target a specific victim, and messages are modified to specifically address that victim, purportedly coming from an entity that they are familiar with and containing personal information. It’s the go‐to technique in the cybercriminal and nation state attackers’ arsenal. It is an effective and inexpensive way to harvest user credentials, implant various forms of malware, impersonate trusted people and collect intelligence on the target organization.
In cases described by the Department of Homeland Security to electric utilities and outside experts, Russian hackers got into power plants through the networks of contractors, some of whom were ill-protected. Those contractors provided software to the utility companies’ systems. The hackers then used spear-phishing emails to try to trick utility operators into revealing their passwords. Here are two other data points:
- In “Experts: North Korea targeted U.S. electric power companies,” it was reported that hackers linked to North Korea targeted U.S. electric power companies with spear-phishing emails that used fake invitations to a fundraiser, FireEye said. A victim who downloaded the invitation attached to the email would also be downloading malware into their computer network.
- In “The Ukrainian power grid was hacked again,” it was reported that experts say the country appears to be a testbed for cyberattacks that could be used around the world. The hackers conducted a coordinated attack against three power distribution companies, which began as part of a massive phishing campaign. The attackers sat on systems silently for months, conducting reconnaissance before making their presence known. They overwrote firmware on remote-terminal units that controlled substation breakers. This essentially bricked the devices and prevented engineers from restoring power remotely.
The financial damage of a cyberattack
The Hartsfield-Jackson Atlanta International Airport generates more than $34 billion in direct business revenue to metro Atlanta. A recent power outage, where more than 400 flights were canceled, wasn’t caused by a cyberattack, but illustrates how even a single disruption can have a huge financial impact. A simple projection reveals that a cyberattack on a large airport’s power systems, assuming that planes were forced to be idled, could exceed $100 million.
- Passenger time: 400 flights x 100 passengers x $50 per hour x 10 hours = $20 million
- Equipment cost (jet lease at $20,000 per hour): 400 jets x 10 hours x $20,000 = $80 million
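The two line items above are easy to verify with simple arithmetic. These are the article's own illustrative figures, not audited costs:

```python
# Back-of-the-envelope outage cost, using the figures quoted in the text.
flights = 400
passengers_per_flight = 100
passenger_hour_value = 50          # $ per passenger per hour
outage_hours = 10
jet_lease_per_hour = 20_000        # $ per jet per hour

passenger_cost = flights * passengers_per_flight * passenger_hour_value * outage_hours
equipment_cost = flights * outage_hours * jet_lease_per_hour
total = passenger_cost + equipment_cost

print(f"${total / 1e6:.0f} million")   # prints "$100 million"
```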
Beefing up cyberdefenses
The U.S. is outgunned in electronic warfare, says the country’s cyber commander. Two military leaders admitted at the TechNet conference in Augusta, Georgia this week that the country is falling behind in its electronic warfare capability. “When it comes to electronic warfare, we are outgunned,” Maj. Gen. John Morrison, the commander of Fort Gordon and the Army Cyber Center of Excellence, said during a Tuesday presentation. “We are plain outgunned by peer and near-peer competitors.”
With so much at stake, investments in cyberdefense have to be a high priority.
Battery life is the top concern for smartphone consumers and it’s easy to understand why. Who hasn’t experienced a smartphone battery draining at an unexpectedly fast rate? It’s a frustration that translates to the many smart devices that make up the IoT ecosystem today, especially smart medical devices.
When the battery in these devices drains too quickly or even dies, the outcome can be much more than a frustration (see Figure 1). A low battery in an implanted medical device might manifest itself as patient fatigue. In a patient monitoring system, it could result in a critical alert or medical data not being delivered to a healthcare professional, and that could be fatal.
With 40% of all IoT technology expected to be health related by 2020, and a sizable amount of that battery operated, it’s now more critical than ever for IoT product makers to pay close attention to battery runtime.
Need another reason why battery life should be a critical design consideration? How’s this: The cost of replacing batteries is often higher than the cost of the IoT device itself.
Make no mistake, developing battery-powered IoT devices doesn’t just involve replacing a power plug. To truly maximize battery life, product makers must perform battery drain measurements and obtain a thorough understanding of the device’s power-consumption patterns. That testing must be accurate and simulate the real world in which the device will operate. Failure to do so can come at a great price in terms of the company’s brand and pocketbook.
Here’s a look at four tips today’s product makers can take to ensure their IoT devices have a long battery life.
1. Take your device to the extreme … environments, that is
Battery life is dependent on several factors, including temperature, humidity and user behavior. Heat, for example, will kill a battery. And if the battery is stressed with frequent discharge, service life can drop dramatically.
To ensure environmental conditions don’t adversely affect battery life, the device’s power consumption must be measured across the range of temperature, humidity and other conditions it will experience. The environmental and temperature extremes it will encounter during shipping and storage also must be considered. Having to change a device’s battery in an extremely hot or cold location is never a desirable task.
2. Test in difficult electromagnetic environments
A device may work perfectly when tested on a bench in a laboratory only to fail miserably in the field. That’s because transmission retries drain batteries and a transmission power that seems sufficient in the lab may be drowned out in a congested electromagnetic (EM) environment. Crowded spectrum can also impact transmission efficiency.
To prevent these issues:
- Test for co-channel and adjacent channel interference rejection, and for immunity to hostile and inadvertent interferers, including EM fields produced by motors and other heavy industrial equipment
- Ensure compliance with FCC requirements
- Keep frequency accuracy as tight as reasonably possible
- Avoid unnecessarily powerful transmissions
- Ensure the device transmits only when it has useful data
3. Don’t overlook cybersecurity
Cybersecurity is one reason consumers are hesitant to embrace the IoT, and with good reason. These days even a smart thermometer is susceptible to hacking. But there’s another reason security should be top of mind for today’s product makers: Lax security impacts battery life. A person with bad intent can intentionally try to destroy an IoT installation by draining the batteries of its sensors. Even if that is not the hacker’s explicit intent, any unexpected activity places an additional load on the battery and can cause it to drain more quickly.
Avoiding this outcome means making device security a top priority. Relying on users to change a device’s default password can’t be the only security measure implemented. Rigorous device testing using an appropriate security test solution can help product makers quickly identify potential security gaps and increase their confidence in a device’s ability to fend off cyberattacks.
4. Make the right measurements using the right technology
IoT devices have highly dynamic current draw, with sleep or hibernate modes in the nA or μA range and transmit modes in the mA or A range. Such variability makes accurately capturing current difficult. To do so, engineers must be able to measure low currents and switch to high-current measurements quickly.
An instrument with demonstrated precision that offers either dual ranges or seamless ranging is often the best bet for avoiding measurement errors due to range changing. Instruments with insufficient measurement bandwidth should also be avoided: they severely degrade current measurements and may even cause the engineer to miss fast transient events that briefly draw an amp or more. Either scenario can result in a device failing prematurely.
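As a rough illustration of why that dynamic range matters, a simple duty-cycle model shows how the brief transmit bursts, not the sleep current, dominate the battery budget. The current and timing values below are illustrative, not from any datasheet:

```python
def average_current_ma(phases):
    """phases: (current_mA, seconds_per_cycle) pairs.
    Returns the time-weighted average current over one cycle, in mA."""
    charge = sum(i * t for i, t in phases)   # mA-seconds per cycle
    period = sum(t for _, t in phases)
    return charge / period

def runtime_hours(battery_mah, phases):
    """Idealized runtime: battery capacity divided by average draw."""
    return battery_mah / average_current_ma(phases)

# Sleep at 5 uA for 59.9 s, then transmit at 120 mA for 0.1 s, each minute.
profile = [(0.005, 59.9), (120.0, 0.1)]
```

With a 1,000 mAh cell, this profile averages roughly 0.2 mA, on the order of 200 days of runtime, and the 0.1-second transmit burst accounts for nearly all of the draw. An instrument too slow to catch those bursts would wildly overestimate battery life.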
Without question, battery life can make or break an IoT device. Fortunately, by following these four tips, product makers can begin to make the smart choices needed to ensure their devices have a long battery life and the greatest chance of success in IoT.