Industrial companies that have successfully embraced IoT are now convinced of its benefits. Harley-Davidson’s smart factory lifted profits by 4% and cut its build-to-order cycle by a factor of 36. Rolls-Royce, Toyota and Royal Dutch Shell are a few other examples of companies whose smart plants have realized the return on IoT investment within months through improved efficiencies.
Insights from IoT-sensor-generated data have helped industrial companies optimize asset performance, detect anomalies and predict repairs to significantly reduce downtime. Manufacturers are also realizing new revenue streams with subscription-based services on the products they sell.
However, that’s only a fraction of what industrial IoT can really offer to manufacturers. Time-sensitive networking (TSN), an emerging IIoT connectivity paradigm, extends the promise of IoT well beyond predictive maintenance into the core of industrial control and actuation systems. As a Layer 2 technology, TSN helps enterprises establish a solid digital footprint for IoT-enabled services and position themselves for higher returns.
In any smart factory, IoT sensor data from the field and factory floors includes both control and telemetry/diagnostics information. Today, IoT applications are mainly designed to gain insights from the telemetry and diagnostics data. Control data is another matter: because it sits inside tight control loops, it must be acted upon immediately (typically within sub-millisecond response windows).
Besides, every industrial use case (think of an oil and gas drilling rig versus a jet engine plant) has its unique timing requirements. Open IP/Ethernet standards were never designed for these time-sensitive and highly deterministic control loops.
For control and actuation functions, enterprises still rely on proprietary fieldbus protocols and simply can’t afford to move away from them, extending their dependence on age-old legacy systems and fragmented networks. This predicament is highlighted in the excerpt below from Practical Industrial Internet of Things Security:
IEEE 802.1 Ethernet, although a widely deployed low-cost Layer 2 technology, fails to match the deterministic performance requirements of industrial automation and control applications. To achieve deterministic performance, most industrial enterprises still continue to use fieldbus technologies and their proprietary enhancements to Ethernet, such as EtherCat, Profinet or Sercos III. These proprietary protocols are not built for security and interoperability. The result has been fragmented industrial networks that are incapable of integrating with advanced analytics services of the industrial internet and Industrie 4.0.
TSN, an evolution of the IEEE 802.1 Ethernet standard, addresses this problem head-on. Its time synchronization and traffic scheduling capabilities are designed to cater to highly deterministic, tight control loops. Both early adopters and enterprises that are new to IoT can use TSN to upgrade legacy operational technology (OT) infrastructure with open, standards-based IP/Ethernet technologies. This translates to efficiencies from IT/OT convergence across the entire organization for better productivity and innovation.
It’s also worth noting that TSN is not simply about upgrading industrial operations with standards-based technologies for greater efficiencies. TSN uses a software-defined networking concept for the automated setup and configuration of devices and network equipment (defined in IEEE 802.1Qcc), so TSN-enabled infrastructure can apply intelligent, software-defined capabilities across an enterprise’s entire digital fabric. Last but not least, TSN can help enterprises migrate away from fieldbus technologies, which were never designed with security in mind, making industrial IoT more secure.
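To make the scheduling idea concrete, here is a minimal, illustrative sketch of how a TSN time-aware shaper (per IEEE 802.1Qbv) carves a repeating cycle into windows during which only certain traffic classes may transmit. The cycle length, window sizes and class numbers below are invented for illustration, not drawn from any standard profile or product.

```python
# Minimal sketch of an IEEE 802.1Qbv-style gate control list (GCL).
# All numbers here are illustrative, not taken from any real profile.

CYCLE_US = 1000  # 1 ms cycle, repeated indefinitely

# Each entry: (duration in microseconds, traffic classes whose gate is open)
GATE_CONTROL_LIST = [
    (200, {7}),        # first 200 us: only class 7 (control traffic)
    (800, {0, 1, 2}),  # remaining 800 us: best-effort classes
]

def open_classes(t_us: int) -> set:
    """Return the traffic classes whose gates are open at time t_us."""
    offset = t_us % CYCLE_US
    for duration, classes in GATE_CONTROL_LIST:
        if offset < duration:
            return classes
        offset -= duration
    return set()
```

In this toy schedule, a class 7 control frame arriving inside the first 200 µs window is forwarded at once, while best-effort traffic is held until its own window later in the cycle, which is what makes end-to-end latency deterministic.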
As an emerging innovation, much of TSN’s promise is still hidden behind incubation testbeds and proofs of concept. However, it may not be too far in the future before we start seeing its mass adoption.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
IoT systems will present themselves at one point or another to many organizations. A successful IoT deployment has some key characteristics — such as partnerships across your organization, easy device management and intelligence about the data delivered through the IoT system — but there are also cautions to take now. Key implementation mistakes can put the entire benefit of a system at risk. Many avoidable mistakes surface in an established list of concerns from decision-makers, and those can be addressed during implementation. Common concerns generally gravitate toward the security, scale and complexity of a system. Addressing these concerns and more going into a project can drive your organization toward a successful IoT deployment.
Here are seven mistakes to avoid if you are considering an IoT solution:
Not putting a business benefit first. We have all at some point wanted to engage with a new technology simply because it was new, novel or cool. When it comes to an IoT system for a modern organization today, the driver must be to solve a business problem or introduce an efficiency that otherwise would not be possible. Consider even writing a mission statement or equivalent that addresses why the IoT project should happen.
Not addressing security from the start. Implementing any new technology introduces security concerns. These can be as simple as using separate Wi-Fi networks or as advanced as using the cloud for IoT certificate management. End-to-end security is a concern at deployment and in ongoing use. In one classic example, an internet-connected carwash was left on a default password; a security issue of that kind can also become a safety issue. One recommendation is to perform an external security audit of an IoT implementation; this is a new class of service and can give real confidence in new implementations.
Not having a good forecast of the data flow. This mistake can come as a surprise as an IoT project scales. Factors like network traffic (throughput and types) and storage requirements add up over time with many devices. The mistake is failing to take proof-of-concept pilot numbers and extrapolate them to production deployment size and scale, then using that result to forecast network and storage needs over time, such as in a two- or three-year plan.
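The extrapolation described above can be sketched in a few lines. All of the figures here are placeholders; substitute the numbers observed in your own pilot.

```python
# Rough capacity forecast: extrapolate pilot telemetry numbers out to a
# production fleet over a multi-year plan. All figures are placeholders.

PILOT_BYTES_PER_DEVICE_PER_DAY = 2_000_000  # ~2 MB/day observed in a pilot

def forecast_storage_gb(devices: int, days: int) -> float:
    """Storage (GB) needed if production devices behave like pilot devices."""
    total_bytes = devices * PILOT_BYTES_PER_DEVICE_PER_DAY * days
    return total_bytes / 1e9

# Example: a 5,000-device rollout over a three-year plan.
three_year_gb = forecast_storage_gb(5_000, 3 * 365)
```

Even with these modest per-device numbers, the hypothetical fleet above needs on the order of 10 TB over three years, which is exactly the kind of surprise this forecast is meant to surface before production.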
Not having a plan for device updates and replacements. In the data center world, I used to jokingly say never underestimate the value of a good firmware update for storage and server systems. In a way, the same applies to an IoT deployment. Bugs will be fixed, new capabilities will be implemented and security issues can be addressed through updates. Avoid the mistake of not having a plan to update — and insist that not updating is not an option. Also, what is the set lifespan of a device? How are spares managed? How are purchases going to be made through the years?
Making cost the only factor. Like many other technology decisions, if price is the sole reason for a decision, a mistake may be coming. It is definitely wise to shop around and look at other systems, but simply taking the lowest-priced option may not be a good idea.
Not having a plan for outages. This is a tricky mistake to avoid. Many processes today simply cannot exist in a “manual mode” or in an offline situation. Will this IoT system be mission-critical and possibly prohibit the organization from making revenue or doing whatever it does? If there is an outage, how would that be managed from a troubleshooting perspective?
Not having a monitoring system. I have this discussion with many organizations when it comes to doing something different and new in technology: If it is important, it needs to be managed and made available. These responsibilities transcend different technology implementations and models.
IoT projects will mean different things to different people, and this list is a start. Maybe some of these you have thought of, and maybe some you have not addressed. Regardless, IoT is a very dynamic space right now and will continue to grow. Fundamental responsibilities like project management, security as part of the design and implementation, and training go into a successful IoT project.
Modernizing healthcare facilities and hospitals poses a unique challenge. With multiple operations supporting patient health and a large number of maintenance efforts creating the clean, fully operational environment these facilities require, there is a significant opportunity to attain greater efficiencies using technology from the world of the internet of things.
Advancements in LED lighting and the integration of IoT controls have made it possible to maximize facility efficiency without compromising patient experience. As IoT continues to add increasing value within healthcare establishments, LED lighting provides an easier way to integrate IoT and networking, ultimately creating a new future for patient care, facility management and operations.
IoT and controls enhance efficiency for maintenance and energy use
Unlike their fluorescent or metal halide predecessors, LEDs inherently offer a higher degree of controllability, making it even easier for healthcare facilities to become more precise with their light levels, ambiance and energy consumption. Smart fixtures and internet-based control systems provide a more transparent look into facility operations.
Smart fixtures and technologies can gather information important to facility operations, such as wattage, broken and burnt-out lighting, and other maintenance issues, all in real time, and then communicate that information to control centers. Furthermore, smart fixtures can be designed to work as beacons for IoT technologies, extending their benefits beyond energy use. They can communicate other information about room use or required maintenance; for example, when a patient checks out, the beacon can signal to staff that the room needs to be turned over. With all of this information sent back to the control center, healthcare facilities can be much more efficient, identifying and addressing maintenance demands in a timely fashion.
In addition to the smart fixtures, LED controls have the ability to either be global or localized, which can customize energy use throughout the facility as a whole and within specific sections, rooms or locations. These networked controls can be used to monitor and manage operations of LED fixtures and adjust energy use to accommodate for occupancy, external light levels and time of day. For example, while external lighting systems may utilize preset timer controls to turn LED lights on in the evening when it gets dark outside, select locations inside the facility may run on different sensor-based controls that turn off or dim down power in rooms that receive less foot traffic when they’re not in use.
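The occupancy- and daylight-based control logic described above might look something like the following sketch. The thresholds and dim levels are invented for illustration; a real system would use the building's own commissioning values.

```python
# Toy decision logic for one networked LED fixture: dim based on occupancy,
# ambient daylight and time of day. All thresholds are invented.

def target_level(occupied: bool, daylight_lux: float, hour: int) -> int:
    """Return a dim level from 0 to 100 for a single fixture."""
    if not occupied:
        return 10 if 6 <= hour < 22 else 0   # low standby by day, off at night
    if daylight_lux > 500:
        return 40                             # plenty of daylight: dim down
    return 100                                # occupied and dark: full output
```

In practice a controller would evaluate rules like these per zone rather than per fixture, but the shape of the decision is the same: occupancy gates the light on, and daylight harvesting trims the energy it uses.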
Catering to patient comfort with LED controls
The lighting in senior living facilities must serve the needs of both the residents and the healthcare professionals. Residents often do better with lighting that best reflects natural light as it helps to synchronize their circadian rhythm and creates a comfortable, low-stress environment. Simultaneously, visiting doctors and nurses need high-quality light to perform examinations and provide proper care for residents. Smart lighting controls are a critical component to establishing this balance while curtailing energy overuse.
LEDs are tunable in light output, brightness and color, enabling healthcare facilities to strike an optimal balance between comfort for patients and usability for staff. In resting areas, for example, lighting may be warmer and dimmer to create a subconscious sense of comfort as residents fall asleep, while waiting and common areas may use cooler, full-intensity lighting for greater visual acuity.
The tunability of LED lighting with regard to output and color can be used for visual communication with patients, signaling start and end times for routines and triggering safety warnings. For example, facility operators can set LEDs to illuminate gracefully from 0 to 100% over a predesignated period in the morning, acting as a wake-up alarm for patients. Likewise, for patients who are deaf or hard of hearing, the color of LEDs can be adjusted to signal emergency situations or notify patients when they need to leave an area. If a facility is receiving patients into its emergency room, for example, lights may shift from a blue or white hue to a red hue to alert existing patients that they need to clear the halls and common areas.
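The graceful wake-up ramp can be sketched as a simple interpolation over the configured period. The 30-minute default below is an assumption for illustration only.

```python
# Sketch of a graceful 0-to-100% morning wake-up ramp. The default
# 30-minute duration is an illustrative assumption.

def ramp_level(elapsed_min: float, ramp_min: float = 30.0) -> float:
    """Linear brightness (percent) elapsed_min into a ramp_min-long ramp."""
    if elapsed_min <= 0:
        return 0.0
    if elapsed_min >= ramp_min:
        return 100.0
    return 100.0 * elapsed_min / ramp_min
```

A gentler, nonlinear curve (e.g., easing in slowly at first) is a common refinement, but the clamped-linear version shows the mechanism.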
Each facility will differ based on the health of its residents, and lighting controls are completely customizable to the facility’s current needs and the regulatory demands of the future.
LED technologies provide simpler benefits
Automation and IoT have helped to add a certain degree of precision to facility operations, increasing the efficiency of energy management. LEDs on their own, however, can better support the well-being of residents through their durability, longevity and cost-saving capabilities.
A simple shift from fluorescent, metal halide and halogen lighting to high-efficiency LED lighting with controls will, on its own, boost efficiency by reducing lighting energy use by up to 65%. Coupled with the 100,000-hour lifespan of LEDs, this means less lighting maintenance for healthcare facility managers.
LEDs have become a clear option for facilities due to their higher-quality light output and longer lifespan; the integration of IoT and controls, however, adds further levels of ambiance control, energy management and responsiveness to events. A simple change in lighting can create a more pleasant environment for residents, a lower-cost operation for facilities and a better-informed workplace for managers.
Despite all of the promises offered by the internet of things, the biggest limitation of this fast-moving space could prove to be security and deep network visibility. To get the most value out of an IoT investment, organizations need to appreciate the capabilities and resources that must be in place and respond accordingly.
And there’s no time like the present. McKinsey predicts that by 2020 the IoT market will be worth $581 billion for information and communications technology-based spending alone, growing at a compound annual growth rate between 7% and 15%. By 2020, the discrete manufacturing, transportation and logistics, and utility industries are projected to spend $40 billion each on IoT platforms, systems and services.
The value proposition of IoT for these verticals is well known. Sensors embedded in manufacturing equipment can transmit data about their condition so machines can be serviced before a breakdown, for example. But to remotely control a vast array of devices transmitting data over the internet, corporate networks that provide deep visibility and analytics become an important area of focus. The more devices that come online, the broader the surface area becomes for potential cyberattacks. This exposure is difficult to curb with thousands or even millions of IoT devices to manage and monitor. Plus, as the list of connected devices grows, enterprise network traffic increases exponentially, leaving IT teams to decide how best to support the increased demand.
Here are the strategies and tools you need to overcome these challenges and become an IoT disruptor.
Create an agile foundation with software-defined networking
Enterprises can improve network agility and visibility by looking to technologies such as software-defined networking (SDN), which centralizes control of the network through intelligent software.
Advanced segmentation allows you to easily “spin up” new virtual environments where IoT network traffic is separated from critical network infrastructure for security and management purposes. Plus, centralized control allows you to seamlessly provision resources for peak-load demands and streamline processes. And a complete history of all network activity that is both readily searchable and sortable in an easy-to-use console is the secret to deploying and managing IoT while minimizing security risks. With SDN, enterprises can allocate dedicated resources to IoT infrastructures without impeding other business processes.
Implement deep network visibility
For effective management, IoT-created data needs to be stored, processed, analyzed and acted upon. IoT devices constantly generate and transmit streams of data, and being able to view all of it and effectively process and analyze it creates efficiencies. This approach focuses resources on data that could have security implications while overlooking superfluous information that presents less of a threat.
Continuously monitor and secure
Enterprises implementing IoT programs need to take into consideration a list of security precautions including policies for connected devices, security monitoring, protection against botnets and security patches for all connected devices.
Coordinated segmentation strategies and security policies make it possible to dynamically monitor network behavior in response to typical IoT events and report on anomalies that could indicate a security threat. And comprehensive security ecosystems based on machine learning and behavioral analytics detect early indicators of network infiltration. Additionally, 24/7 security monitoring by a team of experts can accelerate the processes needed to identify attackers, watch for lateral moves and mitigate threats before they cause extensive damage.
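A minimal sketch of the kind of per-metric anomaly flag such monitoring might start from is a rolling-statistics check. The three-sigma rule used here is a generic heuristic, not any vendor's detection algorithm, and real behavioral analytics would combine many such signals.

```python
# Flag a metric reading that deviates sharply from its recent history.
# The 3-sigma threshold is a common heuristic, shown for illustration.
from statistics import mean, stdev

def is_anomalous(history: list, value: float, k: float = 3.0) -> bool:
    """True if value lies more than k standard deviations from the
    mean of the recent history for this metric."""
    if len(history) < 2:
        return False  # not enough data to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return value != mu  # flat history: any change is notable
    return abs(value - mu) > k * sigma
```

Per-device baselines like this are what let a monitoring system report the unusual reading (a possible infiltration indicator) while ignoring normal jitter.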
IoT in action: How Teknion became an IoT disruptor
Using global cloud and IoT strategies, high-end workplace furniture manufacturer Teknion shifted to a new analytics-based business model. With a secure software-defined platform, the company established industrial IoT capabilities to collect massive amounts of data from equipment and apply advanced analytics. The project resulted in dashboards that help leadership make real-time decisions about usage and repair scheduling. These insights, alongside the network’s agility to create an unlimited number of virtual environments where R&D teams can safely test and improve the latest technologies, are part of Teknion’s innovation incubator. Key outcomes included:
- Agile IT infrastructure, accelerating cloud migration and empowering a sophisticated IoT strategy;
- Simplified IT complexity and management with a global software-defined network platform;
- Enhanced network efficiency with on-demand bandwidth and deep analytics; and
- Reduced equipment, maintenance and support costs.
Ensuring a network can adequately support the advanced needs of IoT is critical, particularly as global markets become increasingly competitive. Making decisions based on data and applying this intelligence will be a prerequisite for industry domination. In fact, 53% of enterprises expect IoT data to assist in increasing revenues in the next year. With numbers like that, the necessity and ultimate advantage of investing in software-defined networks are clear.
Rest assured, IoT is still in its infancy — but the impact across consumer and commercial fronts is obvious. The goal of this column is to highlight the various applications of IoT, the practicalities of implementation and the technology that will be necessary to make it all happen. It’s an exciting time, as IoT is a hotbed for innovation.
Based on that, there’s a new era on the horizon that will change the way we store, manage and protect data, as well as challenge us to rethink the way we view the cloud, security and big data. It should be a thought-provoking exercise as we break it down.
Demand for enterprise applications is on the rise, with many IT departments staring down the barrel of a massive app backlog that they can’t possibly work through fast enough. And so, the gauntlet has been thrown — IT needs to rethink its approach to app dev to increase the speed of delivery. Enter two unique solutions to this problem: low-code and serverless.
But what are low-code and serverless? And on top of that, what does this all have to do with IoT? Sit back and relax — I’ll get there eventually. But first, let’s take a look at low-code and serverless and get you up to speed if you aren’t familiar with these technologies.
A song of ice and fire
Low-code and serverless technologies were both designed to simplify the app development process, thus speeding the delivery of new apps. Serverless does this by relieving developers of the burden of server management. While the name may imply there aren’t any servers, servers ARE involved. It’s just that, from the developer’s perspective, the platform appears serverless: they can simply focus on developing the app instead of worrying about provisioning, managing or scaling servers.
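To illustrate, the developer's entire deployment unit on a function-as-a-service platform is roughly one function like the sketch below (an AWS-Lambda-style Python signature is assumed here; other providers differ in the details). Notice that no provisioning, scaling or server-management code appears anywhere.

```python
# Roughly the whole deployment unit in a FaaS platform: one handler.
# The (event, context) signature follows the AWS Lambda Python convention;
# the platform, not the developer, handles servers and scaling.
import json

def handler(event, context=None):
    """Respond to a single invocation with a simple JSON body."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Everything around this function, including how many instances run and on what hardware, is the platform's problem, which is exactly the burden serverless removes.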
Low-code, on the other hand, is all about simplifying app development by abstracting the developer from the code. The thinking is that if a developer can drag and drop GUI components to create a user interface and then use flowchart-like diagrams to create business logic, they will be able to deliver apps much faster.
Both of these technologies exist to solve what is basically the same issue — accelerating app development. However, the companies behind these technologies have taken dramatically different approaches, which makes serverless and low-code seem more like ice and fire.
Public cloud vendors like AWS, Google, Azure and IBM all provide serverless options, but for the most part they focus on lower-level capabilities, and most organizations can’t handle the complexity of building directly on these technologies. Organizations that work directly with these vendors gain greater control over the output, but it requires much more development effort.
Meanwhile, the traditional low-code vendors are heralding the rise of the “citizen developer” by making app development accessible to business users. Given that most business users don’t have computer science degrees, the low-code approach is perfect for them. Unlike serverless offerings, low-code enables faster application delivery, but at the cost of control: the developer is greatly limited by the vendor’s low-code environment in what they can do.
Opposites attract: Combining ice and fire
With pressure mounting to find a solution to the app dev challenge, there is no reason that these technologies can’t coexist. It’s just that traditional low-code vendors predate the serverless concept and use older technologies that require an app server. Yes, app servers are considered legacy technology, even if they are open source! And providing a low-code approach for citizen developers is just not in the DNA for AWS, Google or IBM. Microsoft is a little different, but its business development efforts are not currently tied to its serverless work.
So, what problem does this pose? People evaluating low-code options should carefully consider the architecture of the system. This can be difficult because vendors like to throw technology names around, which complicates the research. Unfortunately, many traditional low-code vendors rely on older technologies. Compared to serverless, you can think of them as monolithic architectures — which means you don’t have the flexibility to design, develop, test, deploy and scale capabilities independently.
The good news is that there are now low-code options that are also based on serverless. These options take a different approach, focusing on making professional developers more productive instead of shifting the responsibility of app dev to citizen developers. In essence, they are designed to give developers a higher level of control by building on common web skills and fitting into existing tools and processes.
Serverless, low-code and the IoT app experience
So, why is all of this important to IoT? Traditional low-code vendors tout the ability to support IoT applications, but their platforms are limited to calling packaged services (e.g., analytics) and consuming the results within the application.
Serverless, on the other hand, is a great architecture for IoT because event-based workloads run very well in a serverless environment. And event-based is key to IoT applications, as events that are derived from the analysis of sensor data can enable a more natural, interrupt-driven application experience. This allows the application to act on behalf of the user, minimizing the work that the user has to do.
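A sketch of that event-based pattern: raw sensor readings are reduced to discrete threshold-crossing events, and only those events need to invoke downstream (serverless) functions. The threshold and event names below are invented for illustration.

```python
# Reduce a stream of raw temperature readings to discrete events so that
# downstream functions run only when something actually happens.
# The 80 C threshold and event names are illustrative assumptions.

OVERHEAT_C = 80.0

def derive_events(readings: list) -> list:
    """Emit (event_name, reading) pairs at threshold crossings only."""
    events, hot = [], False
    for r in readings:
        if r >= OVERHEAT_C and not hot:
            events.append(("overheat_start", r))
            hot = True
        elif r < OVERHEAT_C and hot:
            events.append(("overheat_end", r))
            hot = False
    return events
```

Four readings in, two events out: the application (and the user) is interrupted only when the state changes, not on every sample.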
So, if you are considering low-code, make sure you also consider the architecture of the system carefully instead of focusing only on drag-and-drop UI — it can make a big difference to the final product.
If there’s one truth to operational security that many don’t want to hear, it’s that any system can be compromised. As a multitude of industries like utilities, manufacturing, and oil and gas are adopting industrial internet of things devices — a market set to boom to 100 billion devices over the next five years, according to PricewaterhouseCoopers — ensuring security in these systems is a challenge that will grow exponentially in the near future.
While industrial companies that are taking a cutting-edge approach to IIoT to transform their industry and businesses are out there, many are still stuck in a more traditional IT cybersecurity mindset — focusing on network security defenses. If operational technology (OT) professionals don’t reimagine how to actually protect IoT endpoint devices, they could hamper the IIoT revolution.
Layered networks in information technology environments have traditionally allowed institutions to monitor and rapidly respond to any security threat. But when it comes to OT security, those defenses are simply not enough. Many critical infrastructure industries cannot tolerate downtime or risks to human safety — detect-and-respond approaches arrive too late. Cybersecurity breaches can result in millions of dollars lost and, more devastatingly, the loss of life. Because of this, it is vital that industrial businesses take a proactive approach to IIoT security. Unlike IT security, OT security must ensure an attack doesn’t happen in the first place — and that means protecting only the network isn’t enough anymore.
There is a solution to this problem: Critical infrastructure operators must seek out IoT and IIoT devices with security built into these systems — not bolted on. At the chip level, extensively tested and secured cryptography can prove the trustworthiness of a device, ensuring a far more secure system, from boot to application execution. It’s impossible to guarantee 100% security on a device, but IoT and IIoT devices that have embedded security software integrated at the chip level create a nearly impenetrable system regardless of the network or environment.
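The chain-of-trust idea behind chip-level security can be illustrated with a toy measured-boot check: each stage is hashed and compared against an expected value before control passes to it. Real roots of trust perform this in silicon with cryptographically signed measurements; this sketch only shows the principle of refusing to run tampered code.

```python
# Toy model of a hash-chained verified boot: measure each stage and
# compare against an expected digest before handing over control.
# Real devices use signed measurements anchored in hardware.
import hashlib

def measure(blob: bytes) -> str:
    """Return the SHA-256 digest of a boot-stage image."""
    return hashlib.sha256(blob).hexdigest()

def verified_boot(stages: list, expected: list) -> bool:
    """Boot only if every stage matches its expected measurement, in order."""
    if len(stages) != len(expected):
        return False
    for blob, digest in zip(stages, expected):
        if measure(blob) != digest:
            return False  # halt: this stage has been tampered with
    return True
```

Because each stage is checked before it runs, a modified kernel or bootloader fails verification and the device refuses to execute it, regardless of what the surrounding network looks like.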
Trustworthy operation in OT security is no longer just a target concept — it’s an achievable, measurable and demonstrable end state with built-in device security. The advantage of built-in security in the OT environment is that each of these platforms has been built for a specific purpose. These aren’t general-purpose chips, and thus they can be built to combat security issues at an explicit level.
NIST, the International Electrotechnical Commission and the Industrial Internet Consortium provide excellent guidelines on cybersecurity and processes that will hopefully rise to the level of auditable and enforceable measures instead of merely guidelines. And in the meantime, silicon vendors and original equipment manufacturers (OEMs) are taking their own steps to ensure the fidelity of these devices right now. Frameworks like the Platform Security Architecture for Arm-designed processors provide silicon vendors with the guidance to protect a multitude of connected devices. By using cryptographic controls built into these processors and coprocessor subsystems, the OT community has a new starting point, whereby endpoints, gateways and communications operate in a trustworthy state.
Security in OT doesn’t just mean data privacy. Security means preventing the unimaginable, and that must start at device protection. The future is uncertain, but the recent HatMan malware attack, also known as Triton or Trisis, proves that critical infrastructure will continue to be a primary target of bad actors. To address these threats, it’s imperative that silicon vendors and OEMs take a leading role in embedding security into their systems. It’s far easier, cheaper and safer than imagining security can be bolted onto a device later. Gone are the days when OT security meant guns, guards and gates. It’s time to engineer OT systems that are tasked for these new challenges.
Interconnected homes, smart refrigerators and digital assistants — all promising technologies that have come to fruition in the last 10 years. With the exception of the flying car, many devices that were once science fiction have become reality.
It’s easy to see why IoT is so appealing. Consumers have become accustomed to information at their fingertips, in real time. As organizations across all industries embrace digital transformation as the means to deliver new benefits and competitive advantage, the danger exists of creating security vulnerabilities that could erase those benefits and, worse yet, jeopardize the business.
An open window to the internet
When shopping for a home appliance — like a toaster or TV — it is getting harder to find one without a Wi-Fi connection or Bluetooth. Despite the quickening embrace of technologies that provide modern convenience, there are in many cases lurking security vulnerabilities to consider.
This isn’t to say that adopters of these devices should halt in fear, but consumers must educate themselves and understand basic protection. The easiest way to think of these devices is as windows into the internet. When people go on vacation, they don’t leave the windows of their home open; they close and lock them to protect against intruders. The same considerations should be made when connecting new devices in your home. If one of these new devices is compromised, everything else that it touches is at risk.
The risk of connected devices doesn’t end in the home. The U.S. and global population are seeing a rise in the remote workforce, people working entirely or partially from home. This connected workforce poses more risks to both homes and employers. Imagine a hacker exploiting an unchanged default password on your latest connected IoT gizmo and eventually nesting some malware onto your company-issued laptop. Because this laptop travels with you and connects to multiple networks, the malware can travel and spread with relative ease.
Today’s enterprise IT and OT teams are scrambling to make sure they know what devices are connected and to adjust their defenses appropriately. Even the most innocuous connected device can provide a path into a valuable resource, as the operators of a casino in Nevada found out when their high-roller database was compromised through a connected thermometer in a lobby fish tank.
PKI can, and will, help
So how can we address these issues with IoT connectivity and security? Well, you can’t manage what you don’t know about, so device discovery is an important first step. Once you know a device is on the network, a few of the important fundamentals are authenticating it (i.e., proving its identity), keeping it updated with security patches and updates throughout its lifecycle, protecting the data it collects and transmits, and monitoring its behavior. Existing, proven technology like public key infrastructure (PKI) is ready and able to play a key role in authentication by issuing unique identities and digital certificates to devices. It is also the linchpin of secure code signing systems that can ensure the authenticity and integrity of security patches and other updates that devices need — which is important because unsecured update mechanisms are a quick and easy path in for malware. Finally, PKI techniques enable the negotiation and creation of encryption keys to protect IoT data at rest on devices, in motion on networks and in its ultimate storage location.
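The code-signing role described above can be sketched with a toy example. This uses textbook RSA with deliberately tiny, hand-checkable numbers; the parameters (p, q, e, d), the firmware payload and the function names are all illustrative assumptions, hopelessly insecure at this size. A real deployment would use a vetted cryptography library, 2048-bit-plus keys and proper signature padding.

```python
# Toy illustration of PKI-style code signing: a vendor signs an update
# digest with its private key; the device verifies with the public key.
import hashlib

p, q = 61, 53
n = p * q          # public modulus (3233 -- absurdly small, demo only)
e = 17             # public exponent, shipped with the device
d = 2753           # private exponent, kept by the vendor (e*d = 1 mod (p-1)(q-1))

def digest(update: bytes) -> int:
    # Reduce a SHA-256 digest into the tiny modulus for this toy.
    return int.from_bytes(hashlib.sha256(update).digest(), "big") % n

def sign(update: bytes) -> int:
    # Vendor side: sign the digest with the private exponent.
    return pow(digest(update), d, n)

def verify(update: bytes, signature: int) -> bool:
    # Device side: recover the digest with the public exponent and compare.
    return pow(signature, e, n) == digest(update)

firmware = b"firmware v1.2 payload"
sig = sign(firmware)
assert verify(firmware, sig)             # untampered update is accepted
assert not verify(firmware + b"!", sig)  # modified update is rejected
```

An unsecured update mechanism is one that skips the `verify` step: the device installs whatever arrives, which is exactly the "quick and easy path in for malware" the text warns about.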
PKI, specifically the creation and injection of keys and digital certificates into devices, helps device makers guard against counterfeiting and provides eventual device buyers with assurance that they’ve received the device in its initial, verified state. Although its role is typically “behind the scenes,” the majority of enterprises deploy PKI to help secure their most important enterprise applications — sometimes 10 or more different applications. A recent report found that IoT is the fastest-growing influence on PKI planning, indicating the pivotal role it will soon play.
So, where do we go from here?
PKI is well positioned to address some of the fundamental issues of security and trust in IoT — not all of them, but some of the pretty important ones. If you can’t trust the devices and the data they produce, all those benefits that you charted out for your IoT projects might never come to fruition. The best approach is to understand the risks an IoT project poses to your business and choose proven security protections of a strength that matches the risk. And don’t get caught in the trap of thinking your IoT device isn’t a threat just because of what it does; it can simply be the entry point to a more interesting — and dangerous — destination.
As 2018 draws to a close, industry would be wise to acknowledge the now-urgent necessity of prioritizing security across the industrial internet of things.
Since the Industry 4.0 movement began sweeping across the globe, it’s been firmly established that IIoT initiatives generate enormous efficiencies and cost savings in everything from government infrastructure to manufacturing to energy production. But several factors indicate that shortcomings in IIoT security threaten the upward trajectory of connected automation, casting a pall over the positive potential of deployments moving forward. In 2019, it’s time to get serious about IIoT security.
Industrial cybersecurity firm CyberX recently released its second annual “Global ICS & IIoT Risk Analysis Report” detailing the state of industrial control systems and IIoT deployments. The study spans all sectors and analyzes data obtained from over 850 production networks assessed from September 2017 to September 2018 across North and South America, EMEA and Asia-Pacific. The results paint a grim picture of IIoT networks that are easy pickings for cybercriminals and malicious intrusion. Among the findings:
- 84% of industrial sites have at least one remotely accessible device
- 69% of sites have plaintext passwords traversing their networks
- 57% of sites aren’t running automatically updating antivirus protections
- 40% of industrial sites have at least one direct connection to the public internet
- 16% of sites have at least one wireless access point
Separately, the cybersecurity firm Vectra coordinated observations and data for the 2018 Black Hat Edition of its “Attacker Behavior Industry Report,” which examines cyberattack trends across more than 250 opt-in Vectra customers, spanning over four million devices and workloads in manufacturing and eight other industries. It noted a sharp threat increase in 2018 over 2017, with an average of 2,354 attacker behavior detections per 10,000 devices. Drilling down, examination of IIoT networks in its “2018 Spotlight Report on Manufacturing” found that:
“The monthly volume of attacker detections per 10,000 host devices in the manufacturing industry shows a much higher volume of malicious internal behaviors [than in other industries]. In many instances, there is a 2:1 ratio of malicious behaviors for lateral movement over command-and-control. These behaviors reflect the ease and speed with which attacks can proliferate inside manufacturing networks due to the large volume of unsecured IIoT devices and insufficient internal access controls.”
The report further concluded that “IIoT devices collectively represent a vast, easy-to-penetrate attack surface that enables cybercriminals to perform internal reconnaissance, with the goal of stealing critical assets and destroying infrastructure.”
And if easy IP theft and infrastructure interference and/or damage aren’t warning enough on their own, government is now also entering the fray.
While the United States federal bill known as the IoT Cybersecurity Improvement Act of 2017 remains stalled in committee, one state just enacted the first U.S. law mandating IoT device manufacturing security provisions, effective as of January 1, 2020. California’s SB 327 states:
“A manufacturer of a connected device shall equip the device with a reasonable security feature or features that are all of the following: appropriate to the nature and function of the device; appropriate to the information it may collect, contain or transmit; and designed to protect the device and any information contained therein from unauthorized access, destruction, use, modification or disclosure.”
A “reasonable security feature” for any connected device equipped with a means for authentication outside a local area network requires either that preprogrammed passwords are unique to every device manufactured or that the device contains a security feature that forces a user to generate a new means of authentication before access is granted to it for the first time. While the legislation has been criticized for superficiality, neglecting encryption and failing to address the myriad underlying bad practices identified in the aforementioned cybersecurity reports, it reflects a new reality. This is the first U.S. law stipulating security specific to “things,” and more are sure to follow.
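The law’s second option, forcing the user to replace the factory credential before first access, can be sketched as follows. Everything here (the `Device` class, `DEFAULT_PASSWORD`, the method names) is hypothetical and not drawn from any vendor SDK; it merely illustrates the gating behavior SB 327 describes.

```python
# Sketch of SB 327's second option: a device that refuses service until
# the user replaces the factory default credential with their own.
import hashlib
import os

DEFAULT_PASSWORD = "admin"  # the shared factory default the law targets

class Device:
    def __init__(self) -> None:
        self._salt = os.urandom(16)
        self._cred = self._hash(DEFAULT_PASSWORD)
        self._provisioned = False  # still on the factory default

    def _hash(self, password: str) -> bytes:
        # Salted, slow hash so stored credentials aren't trivially cracked.
        return hashlib.pbkdf2_hmac("sha256", password.encode(), self._salt, 100_000)

    def set_password(self, old: str, new: str) -> bool:
        # Reject a wrong old credential, or "changing" back to the default.
        if self._hash(old) != self._cred or new == DEFAULT_PASSWORD:
            return False
        self._cred = self._hash(new)
        self._provisioned = True
        return True

    def login(self, password: str) -> bool:
        # Access is denied until the default credential has been replaced.
        return self._provisioned and self._hash(password) == self._cred

d = Device()
assert not d.login(DEFAULT_PASSWORD)         # default never grants access
assert d.set_password("admin", "s3cret!")    # first-boot credential change
assert d.login("s3cret!")                    # only the new credential works
```

The alternative the law allows, a preprogrammed password unique to every unit, achieves the same end by ensuring no single leaked default unlocks a whole fleet.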
There is hope on the horizon. Blockchain technology, for example, works as a distributed database that cryptographically and immutably records every “block” of data moving through a system — and it may point to a more secure future for our connected devices. As cybersecurity firm Trend Micro noted, “Given its decentralized nature, blockchain, in theory, can prevent a vulnerable device from pushing false information and disrupting the network environment, whether it’s a smart home or a smart factory.” There are experiments already underway using blockchain to validate and secure smart city functions in Europe. On a separate front, in the semiconductor space, new chip designs are being explored that embed artificial intelligence functionality into devices and applications, with better security at every point of computation from the edge to the cloud.
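The hash-chaining property that makes blockchain records tamper-evident can be shown in a few lines. This is a minimal, single-node sketch under assumed names (`append_block`, `chain_is_valid`, the sensor readings): each block commits to the hash of its predecessor, so altering any recorded reading invalidates every later block. Real blockchains add signatures, consensus and replication across nodes, none of which is modeled here.

```python
# Minimal hash chain: each block stores the hash of the previous block,
# so rewriting history breaks the chain's internal consistency.
import hashlib
import json

def block_hash(block: dict) -> str:
    # Canonical serialization so the hash is deterministic.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, data: dict) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "data": data})

def chain_is_valid(chain: list) -> bool:
    # Every block's stored "prev" must match its predecessor's actual hash.
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain: list = []
append_block(chain, {"sensor": "thermostat-7", "temp_c": 21.5})
append_block(chain, {"sensor": "thermostat-7", "temp_c": 21.7})
assert chain_is_valid(chain)

chain[0]["data"]["temp_c"] = 99.0  # a compromised device rewrites a reading...
assert not chain_is_valid(chain)   # ...and the tampering is detectable
```

This is the sense in which a vulnerable device "pushing false information" can be caught: the falsified block no longer matches the hashes the rest of the network holds.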
These are promising developments, but they don’t negate the present danger. Serious review, investment and a renewed commitment to security best practices are required across IIoT now. That’s a 2019 resolution worth making — and keeping.
According to a report by the National Safety Council, there were more than 40,000 motor vehicle fatalities in the U.S. in 2017, 90% of which were the result of driver error. However, it may soon be possible to replace fallible human drivers, all too often subject to distraction, with autonomous self-driving vehicles.
The benefits of this are considerable. The University of Florida’s Professor Peter Hancock suggested that eliminating this capacity for error could save more lives in two years than were lost in the entire Vietnam War, while KPMG estimated that crash frequencies could drop by up to 80% by 2040.
And when these vehicles arrive, they will bring a wealth of valuable data with them.
New and emergent infrastructures such as the internet of things and 5G are being viewed by businesses across all industries as opportunities for driving new business value, of which the development of driverless cars is but one example. According to Intel, autonomous cars alone could provide a $7 trillion boost to the economy over the coming decades.
But it won’t necessarily all be smooth sailing…
Technology failure used to be little more than an annoyance. Today, however, the failure of an IoT device can have potentially serious life or death consequences, which can’t be fixed by a simple reboot. An autonomous vehicle responsible for a fatal accident can’t just be dusted off, restarted and put back on the road.
As a result, liability and the allocation of responsibility are now deeply intertwined with applications and service dependencies. The continuous addition of moving parts, such as the thousands of microservices that touch millions of sensors through the IT infrastructure, is making the relationship between these dependencies increasingly opaque, and making borderless visibility powered by smart data all the more strategic.
Liability and risk
While most businesses welcome digital transformation as a way of improving their operational efficiencies and customer experience, its inherent complexity now makes the potential for harm that much greater. There are very real consequences when new, sophisticated IT architectures and systems unlock access to new data frontiers; enterprises may achieve greater speed and agility, but there is the potential for more to go wrong.
The Center for Democracy & Technology noted in a report that the accumulation of software defects in emerging smart technologies can put the commercial viability of enterprises at risk, not to mention the well-being of the people who use these technologies. As these technologies become ever more ubiquitous, this issue will only grow. In Los Angeles recently, a class-action suit was brought against two e-scooter operators, whose cloud-based apps enable riders to identify and unlock available scooters, along with manufacturers Xiaomi and Segway, following claims that the companies were responsible for personal injury and property damage.
Indeed, aware of the potential vulnerability of IoT technology, the State of California recently passed a bill that requires manufacturers of devices that connect “directly or indirectly” to the internet to equip them with “reasonable” security features, designed to prevent unauthorized access, modification or information disclosure.
While the issue of liability has thus far focused on device manufacturers, it’s worth examining the direct and indirect implications to the teams responsible for the creation of an application or service that are part of the product, such as an organization’s DevSecOps team.
Visibility and situational awareness
DevSecOps teams employ a security-focused continuous development, integration and deployment lifecycle model. The pace with which new functions and features are released when using this model presents inherent business risks. For example, when a function fails — due to load, latency or errors — it is tiny from a software perspective, but has a big impact on application performance. Microservices connectivity sprawl not only adds more traffic, but increases application time-out problems due to scale or logic. And as the innovation and deployment pipeline accelerates, bottlenecks within or between teams can restrict the overall flow of value to customers, increase the mean time to resolution (MTTR) and add operational costs. An effective way to reduce those risks is to have DevSecOps teams extract value from wire data — the traffic flow that comprises every action and transaction that traverses the enterprise. Continuously monitoring wire data and forging it into smart data during a development cycle and beyond — in real time — will provide unrestricted visibility into how applications and services work across the entire infrastructure and deliver meaningful and actionable insights for DevSecOps teams. The same smart data allows a common situational awareness to improve agility, reduce MTTR and keep up with the pace of change.
By providing relevant, actionable and intelligent data sets on events as they happen, smart data enables all teams — from developers to operations, security, QA and everyone in between — to work closely together while parameters continue to evolve throughout the development process, and while traffic flows from — and to — data centers, clouds and the network edge. Not only will this visibility provide enterprises with a “line of sight” into various interdependencies, the common situational awareness will go some way to containing product liability. Businesses will enjoy greater speed and agility, and be confident that any issues that arise will be dealt with before they can harm their brand or their users’ experience.
Two of the main themes echoing throughout the halls of the recent Money20/20 finance conference in Las Vegas were trust and identity. Knowing that you are who you say you are has become trickier than ever. According to Javelin Strategy & Research’s 2018 study, identity theft and fraud hit a new record high, costing U.S. consumers $16.8 billion. In 2018 we watched wild-eyed as everything came under attack, from our Facebook accounts to our mobile devices to our election votes.
The fastest-growing fraud is in card-not-present transactions. Online purchasing, whether by typing in your credit card number or making a mobile payment, is prime fraud frontier. Fraudsters have honed their “digital synth” skills to create fake digital consumers with false identities. Aite Group predicted that synthetic identity fraud will continue to grow, from at least $820 million in payment card losses in 2017 to over $1.25 billion in 2020. Faster payment platforms will be prime targets as they continue to gain traction. The trick for retailers and institutions is to make the verification process feel natural and easy for consumers while tightening fraud detection behind the scenes.
As I walked the exhibit floor of Money20/20, I threw caution to the wind and became a personal experiment in just how many ways there are to verify your identity, especially for financial and retail transactions. I forked over my bodily credentials to more companies during that three-day conference than I had in the previous six months. I had at least two iris scans, four fingerprint checks, a voice check or two, a couple of face-detection sessions and numerous instant assessments of my creditworthiness.
In all seriousness, I walked away feeling somewhat reassured that new ways to use biometric, behavioral, big data and AI in combo will help us detect fraud with increasing precision and speed. Here’s some of what I learned:
- Liveness is important. Jumio is a biometrics company that’s implemented reputable facial detection systems. It compares your government-issued ID to your selfie as part of the onboarding process for companies that include Airbnb. Now, through a partnership with Facetec, the company can suss fakesters with more accuracy. Facetec adds a sort of video selfie to the process, creating a 3D face map that’s much more difficult to spoof.
- Two can be better than one. Many of the products combined biometric factors. Sensory cleverly combines face recognition and voice detection: a voiceprint, along with a video of the customer speaking a word, is their authentication. Consumers can use this on their own devices; to try it out, go to the Google Play store and download AppLock by Sensory.
- Malware can be beautiful. My inner artist was intrigued by BioCatch’s “The Art of Fraud,” an online installation of art depicting the spread of malware and fraud. The company is best known for using about 400 behavioral biometrics, including your handedness and which way you swipe on your phone, to separate potential fraudsters from the real McCoys.
- When to verify makes all the difference. Uniken takes a different attack. “Today,” said Bimal Gandhi, the company’s CEO, “our business MO is to connect and then verify, but we should be verifying and then connecting.” Uniken makes sure you are who you claim you are before the connection is made, hence minimizing the chance of damage.
- A smarter network. At Cisco, financial services industry lead Al Slamecka focused on how, over time, more and more intelligence will make its way into the network with products like the company’s Clarity. Scanning the network will reveal data flows, so we’ll more quickly understand where the flaws are.
- Am I an alien? It could have been the whirlwind speed of my “identity trip,” but I could not manage to get Princeton Identity’s iris scan to read my irises — it reminded me that iris scanning needs a well-lit room and wide eyeballs. Fingerprints are also problematic. Anyone who uses the fingerprint scanner on their mobile device knows there’s about a 20% failure rate whether it’s because of the angle of your finger or the smudge on your scanner.
- Finally, a little-known security fact. Socure assesses your identity through a variety of machine learning rules and data sources that go far beyond traditional credit scores. I happily gave them the usual name, address and date of birth, but when I looked at my worthiness, I had lost points for having a VoIP line. Fraudsters, it turns out, have a higher likelihood of using a VoIP line.
Bottom line? There’s a saying that goes “If you want a job, go into IT; if you want a job for life, go into cybersecurity.” It’s always going to be an arms race to keep one step ahead of the unsavory, especially as more payment devices join the internet of things. The best defense is all defense. Personal protection on your devices, big data culling for anomalies, more intelligent networks and faster reporting systems all play their parts.