IoT Agenda


April 17, 2019  12:16 PM

What you need to know about robotic process automation security

Pat Geary Profile: Pat Geary
connected RPA, connected workers, Digital transformation, digital workers, Internet of Things, IoT applications, IoT software, RDA, robotic desktop automation, robotic process automation, RPA

Organizations now face a choice of over 45 claimed robotic process automation products — all varying significantly in quality, design and approach. So, picking the right option is critical to achieving long-term success. However, with approximately 30% of all data breaches occurring as a result of vulnerabilities at the application layer, purchasers clearly need greater insight to correctly gauge the security credentials of the various RPA vendors.

Gaining clarity on RPA security is a major issue, especially as the majority of newer offerings, such as robotic desktop automation (RDA), or desktop robots, don’t offer the same security capabilities as connected-RPA. These RDA tools promise quick wins that may sound compelling, but as organizations attempt to scale them to achieve greater business goals, their design limitations become increasingly apparent. For example, organizations get little business benefit if there is an inherent lack of central process design control, security, audit and governance.

Security problems with desktop automation

Unfortunately, the majority of newer RPA-labeled offerings, such as RDA, involve multiple short, record-and-replay tactical automations for navigating systems on desktops, which can create security risks. This is because with desktop recording, a single human user is given autonomy over a part of the technology estate, which introduces a lack of central control. This obscures a robot’s transparency and hides process steps, which, when duplicated over time, become a potential security and compliance threat while limiting scale.

If a software robot and a human share a desktop login, no one knows who’s responsible for the process. This creates a massive security and audit hole and introduces shadow IT into a business. Restricting automation to a multi-desktop environment outside of the IT department or any central control means that RDA vendors are effectively sanctioning shadow IT as part of their deployment methodology. This is potentially very damaging for an organization, because shadow IT, in the context of RDA, means unstructured, undocumented and uncontrolled systems becoming part of a business’s process flows.

For example, say the creator of a desktop automated process leaves the company or the organization changes. This can lead to audit failure due to an unknown fulfillment activity taking place, as well as security holes, such as passwords embedded in these lost processes, fraud and denial of service. If your business allows departments to build these recorded RDA scripts, then over time you not only create a shadow IT nightmare without realizing it, but you also create a massive technical debt that your business will have to resolve.

Why connected-RPA is more secure

Connected-RPA is different as it was designed from the start to carry out tasks securely, in the same way humans do: via an easy-to-control, automated “digital worker.” These digital workers are trusted to operate within the most demanding enterprise environments, as although they are run by business users through a collaborative platform, they still operate within the full governance and security of the IT department.

With connected-RPA, business users train digital workers without coding, so the system infrastructure remains intact. That’s not to say that APIs, web services and other traditional components can’t be used on the platform, but they are gated — controlled and provisioned by technologists for the business to consume, but not change.

For connected-RPA to deliver security, longevity and resilience at scale, automations should be carefully planned, modeled and designed. This means that business users can create automated processes by drawing and designing process flow charts that are intuitive and then used by the digital worker to automate a task. Documentation of a task becomes the actual task — change the documentation and the task is instantly changed.

The process models run by the digital worker are made explicit in the process flow chart for each process automated. The process flow chart is subject to audit and change control, as well as security with dual-key authentication. This approach is highly secure and compliant, as all documentation is securely managed within a connected-RPA platform, and protects the business from rogue employees, rogue robots and rogue shadow IT.

Connected-RPA also enables business users to collaborate by adding their automations into a central pool of capability managed and reused by the whole business. Digital workers’ decisions and actions are centrally captured and audited, too, and so is their training history conducted by humans. Crucially, this gives a comprehensive, cast-iron audit of all activity across the entire connected-RPA platform.

Organizations should also only consider RPA vendors that can demonstrate the highest level of Veracode Verified, a program that validates a company’s secure software development processes. This certification not only demonstrates an RPA vendor’s focus on providing an authentically built, enterprise-grade, secure system, but also shows that security is part of the company’s intrinsic product development methodology.

By completing and passing the program’s rigorous testing, an RPA vendor moves beyond point-in-time security testing into a mature application security program that enforces secure development practices across the entire software development lifecycle.

Ultimately, RDA tools limit the scale and potential of RPA solely to the confines of the desktop and introduce a variety of risks too. Connected-RPA, however, provides a platform for collaboration — securely and at scale — where, across many large organizations, human workers, systems and applications are already creating a powerful, intelligent, safe ecosystem of partners that enables real digital transformation.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

April 15, 2019  12:48 PM

Securing the IoT edge

John Maddison Profile: John Maddison
Authentication, credentials, Edge computing, edge security, firewall, intent-based segmentation, Internet of Things, iot, IoT communications, IoT devices, IoT edge, IoT encryption, iot security, network access control, securing IoT, Segmentation

The network has undergone a remarkable amount of change over a remarkably short period of time. The clearly defined perimeters of traditional networks have been eroded away by BYOD, mobile computing, migration to the cloud, IoT adoption and the new WAN edge. Of course, this sort of evolution is normal, but the process has been accelerated by digital transformation designed to enable organizations to compete more effectively in today’s digital economy. 5G and the advent of edge computing and networking promise to change things even further and faster.

No element has played a larger role in this transformation than IoT. These devices are smarter, faster and increasingly mobile. They are also present in nearly every new networking environment being adopted by organizations, from branch offices and retail stores to the core network, and from manufacturing floors to the extreme edge of the network where they mingle with user endpoint devices in collecting, generating and sharing information.

Even though these devices are woven into our larger, distributed network environments, in many ways, IoT has become its own network edge. Devices have their own communications channels and protocols, interact to accomplish complex tasks, and generate massive amounts of data while performing critical functions — from monitoring systems to managing inventory to collecting and distributing data.

They have also become highly specialized. Medical IoT and industrial IoT are just the first of a variety of IoT devices designed for specific purposes that we have now come to rely on. Going forward, they will also play a critical role in enabling the ecosystem that supports autonomous vehicles, making smart buildings and cities possible, and reinventing critical infrastructures to be more responsive to the demands of the communities they serve.

They are also beginning to bridge the gaps between traditionally separate networks, such as IT and OT, and between personal, public and business networks. Smart appliances, alarm systems and even entertainment systems connect back to a corporate network to deliver data and receive instructions. And they are integrated into personal devices that blend private, social and business profiles and data into a single component.

This is why the persistent challenge of IoT security requires redoubled efforts to resolve. An alarming majority of these devices remain inherently insecure — many can’t even be updated or patched — which is why they have become a preferred target for cybercriminals pursuing ransomware, cryptomining, distributed denial-of-service attacks and the delivery of malicious payloads.

Given the pervasive nature of these devices, the unprecedented rate at which they are being adopted and integrated into our networks, and how quickly we have come to rely on them, security has to be a top priority.

IoT security strategies

Because IoT devices can be placed anywhere across the distributed network, operate in different environments and connect from a variety of locations, consistent IoT security requires a consistent and comprehensive security strategy:

Assessment
Before an IoT device is even selected, an administrator should evaluate its inherent security settings. Devices that can be secured and patched should be appropriately hardened. Devices that cannot be hardened need to be secured using proximity controls, which means they need to be placed behind a firewall and all traffic needs to be inspected and behaviors monitored.

Once devices are in place, two additional things need to be considered before they begin communicating. The first is determining what sort of data a device will generate and the relative value of that data. The second is clearly understanding what other devices this IoT device will be able to connect to and, as a result, what resources and data it can see, access and potentially exploit.

Encryption
The next step is to secure communications. The kind and amount of traffic generated by IoT devices can vary greatly. Not only can they use different communications protocols, but the devices themselves can range from sharing only essential information to being very chatty. Encryption needs to be applied on or as close to an IoT device as possible.
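
As a minimal sketch of what encrypting at the device can look like, the following Python snippet wraps a plain TCP connection in TLS using only the standard library; the collector host, port and payload are hypothetical stand-ins for whatever endpoint a real deployment would use.

    import json
    import socket
    import ssl

    # Hypothetical telemetry collector -- substitute your own endpoint.
    HOST, PORT = "telemetry.example.com", 8883

    # TLS context that validates the server's certificate chain against
    # the system trust store.
    context = ssl.create_default_context()

    reading = {"sensor_id": "pump-7", "temp_c": 41.3}

    with socket.create_connection((HOST, PORT)) as sock:
        # Wrap the plain socket so the payload is encrypted in transit.
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            tls.sendall(json.dumps(reading).encode("utf-8"))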

Inspection
However, because encrypted tunnels provide an excellent way to securely transmit malware, they also need to be inspected. This requires implementing a firewall that can handle the volume of traffic that IoT devices create, has the processing headroom required to inspect encrypted traffic at network speeds — a weakness even the most popular firewalls are notorious for — and can implement additional advanced inspection, such as sandboxing, to detect unknown or evasive threats.

Network access control
Once IoT devices begin communicating, it is essential that they be accurately identified at the moment of network access. Network access control enables an organization to identify IoT devices, maintain an inventory of connected devices and ensure that policies meet device requirements. It can classify devices, assess them for risks and tag them with appropriate policies.

Intent-based segmentation
The best way to manage IoT traffic after access has been granted is by using intent-based network segmentation. This advanced segmentation strategy can automatically translate business requirements for an IoT device into a security policy that determines the sort of protection an IoT transaction stream requires. IoT devices might be assigned to a segment dedicated to a class of devices or functions, a segment based on the level of security required, or even a separate segment just for a specific device, application or workflow. When properly applied, these segments should be able to seamlessly protect any traffic generated by that device, even if it traverses multiple network environments or cloud ecosystems.
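
To make the access-control and segmentation steps concrete, here is a deliberately simplified Python sketch of the pipeline: fingerprint a device at connect time, then map the classification to a segment policy. The profiles, tags and fingerprinting signals are all invented for illustration; real NAC and segmentation products use far richer models.

    # Hypothetical device profiles mapped to segment policies.
    DEVICE_PROFILES = {
        "ip-camera":   {"segment": "video-iot",    "inspect": True, "internet": False},
        "hvac-sensor": {"segment": "building-iot", "inspect": True, "internet": False},
        "unknown":     {"segment": "quarantine",   "inspect": True, "internet": False},
    }

    def classify(mac_oui: str, protocol: str) -> str:
        # Crude fingerprint; real NAC products combine many more signals.
        if protocol == "rtsp":
            return "ip-camera"
        if mac_oui == "00:1A:2B":  # invented HVAC vendor OUI
            return "hvac-sensor"
        return "unknown"

    def admit(mac_oui: str, protocol: str) -> dict:
        # The classification drives the segment the device lands in.
        profile = classify(mac_oui, protocol)
        return {"profile": profile, **DEVICE_PROFILES[profile]}

    print(admit("00:1A:2B", "modbus"))
    # -> {'profile': 'hvac-sensor', 'segment': 'building-iot', ...}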

Conclusion

The most essential foundation for securing the IoT edge is building a flexible and integrated security fabric that is able to tie together and orchestrate the disparate security elements that span your networked ecosystems into a unified, interconnected and responsive system. This enables the effective monitoring of legitimate traffic and the checking of authentication and credentials, while enforcing access management across the distributed environment.

Such an approach expands and ensures resilience, secures and isolates distributed IoT resources, and enables the synchronization and correlation of intelligence for effective, automated threat response.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


April 12, 2019  12:42 PM

How accurate is accurate enough when it comes to location data?

Fabio Belloni Profile: Fabio Belloni
accuracy, Enterprise IoT, gps, Internet of Things, iot, IoT applications, IoT data, IoT use cases, location data, real-time location services, real-time location system, RTLS

Location has been a huge enabler for a wealth of applications impacting consumers and businesses, from mobile marketing to asset tracking. GPS is an early example of how combining a smartphone with positioning would change the way both consumer and commercial vehicles navigate the roadways. Drivers kept their expectations low, assuming that during a trip of any length — or even in the concrete jungle of a parking garage — the GPS would recalculate a few times as the signal from the positioning satellites deteriorated and the smartphone or other terminal, like a dash-mounted Garmin or TomTom device, lost track. GPS was accurate, and for the most part was “accurate enough” for general civilian location use.

Through the years, new technologies have entered the market as location-based services, machine-to-machine communications and IoT began to require location capabilities, primarily starting from outdoor and then extending to indoor asset-tracking use cases in a wide range of industries. Wi-Fi, active RFID, Bluetooth beacons and other technologies emerged with rudimentary capabilities to meet this need, in essence analyzing the received signal strength indication (RSSI). The problem? These technologies weren’t built specifically for positioning, never mind the real-time nature that emerging applications required, limiting their effectiveness and their accuracy. Still, for the most part, they were “accurate enough,” with a “tolerable latency” for the requirements of the applications for which they were being used.

Over the last few years, however, the growth of IoT and its emergence in nonindustrial B2B markets has changed the mindset about what is required in terms of location accuracy. Spurred by the efficiencies they began to see across their businesses, organizations began to envision a wide range of applications for which they could use IoT, such as tracking smaller items and even people via embedded sensors on ID badges, with the aim of interacting with the environment. Effectively, several use cases started to emerge for every type of environment, often with different stakeholders within the same area. At the same time, huge technological advances across a wealth of technologies have emerged in the form of real-time location systems (RTLS) to deliver sub-meter location capabilities. And even more recently, the industry is abuzz over centimeter-level — and in some cases even smaller — positioning for emerging cutting-edge applications.

But is centimeter-level positioning necessary for IoT and other applications? First, it makes sense to take a look at what location accuracy really means for applications.

Understanding accuracy

Accuracy in the RTLS sense can be defined as a combination of precision and delay, or latency. High accuracy refers to the ability of an RTLS to achieve sub-meter (less than 1 meter) to centimeter-level precision while still performing in real time, with latency down to a fraction of a second when tracking moving targets. However, achieving accuracy with low latency comes at a cost — regardless of technology. In general, high-accuracy real-time tracking is solved by covering the area of interest with equipment and creating data redundancy, which increases the initial system cost and, in some cases, the total cost of ownership as well.

Delay is another factor in RTLS accuracy. Not every application requires real-time location capabilities. For example, slow-moving heavy equipment may require location data at intervals of minutes — a 10-ton object does not move without a crane — whereas when tracking athletes, a delay of longer than 300 milliseconds is inadequate for augmented reality applications.

In most IoT applications today, neither centimeter-level precision nor real-time tracking is a key requirement. For example:

  • Locating a forklift in a warehouse: Accuracy within a few meters is acceptable, as is receiving the location within a few seconds instead of real-time.
  • Locating a container in a shipyard: Accuracy within a few meters is acceptable, as is receiving the location within a minute.
  • Moving large equipment around an oil field: This application may require location data with intervals of minutes, and understanding location within a few meters is generally acceptable.

However, there are emerging applications where high-accuracy tracking is a requirement. These may or may not include a requirement for real-time capabilities. Some examples of applications where a high level of accuracy is required include:

  • Deriving game analytics: Tracking the movements of athletes or objects, like pucks as they zip around an arena. This requires real-time tracking down to a few centimeters as players and equipment are always in motion and the relative position to each other is essential for characterizing the game dynamics and isolating specific events.
  • Smart buildings: This could relate to optimizing workflow in hospitals while digitizing the ambient environment with a rules engine that mimics real-world logic; interacting with domotics for home automation; or deriving metrics from contextual information. Examples include turning on the lights when someone enters a meeting room or analyzing a shopper’s path through a supermarket to derive dwell-time metrics and product interactions.
  • Employee safety in an industrial environment: In warehouses, where workers and autonomous equipment move rapidly from place to place, determining location may require higher-accuracy tracking in real time; for example, for collision avoidance between forklifts and workers.
  • Security and monitoring: This applies to any mission-critical scenario where high-reliability data and consistency are required; e.g., surveillance and access control.

Finally, the error percentile — characterized by the cumulative distribution function — is another key aspect of location accuracy. To say that a location system is highly accurate in real time means that it meets those criteria of high-accuracy positioning with low latency consistently — for example, less than 1 meter for 90% of the time.
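
That consistency requirement is easy to check numerically. The short Python sketch below tests a trace of positioning errors against a “sub-meter, 90% of the time” target; the error samples are invented for illustration.

    # Positioning errors, in meters, from a hypothetical RTLS trace.
    errors_m = [0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.85, 0.9, 0.95, 1.1]

    target_m, target_fraction = 1.0, 0.90

    # Fraction of fixes that land inside the target error bound.
    within = sum(1 for e in errors_m if e < target_m) / len(errors_m)
    print(f"{within:.0%} of fixes under {target_m} m")  # 90% of fixes under 1.0 m
    print("Meets spec:", within >= target_fraction)     # True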

The bottom line

The accuracy needed to locate a person or object depends on the specific requirements of the application itself and the business needs it supports. Looking at the examples above, it’s clear that in some cases, certain applications will require more accurate, lower-latency location capabilities than others. Organizations will determine what their requirements are for real-time location based on the specific applications they are developing, and advancements in precision will continue to open the door to a wealth of new applications.

It’s important to note that while organizations are determining their needs today, they also need to consider future applications, what level of accuracy these applications will require as they emerge and at what scale. This is a critical aspect for minimizing costs, improving profitability and ensuring a healthy long-term investment. Utilizing an RTLS that can easily scale and incorporate these new requirements as business needs dictate is paramount. That calls for the implementation of very flexible RTLS technologies where the system can be configured to operate across borders and from low to high accuracy. This makes it suitable for a wide range of applications, including security, safety and reliable workflow management, as well as pushing toward augmented reality or virtual reality applications as those needs emerge.

Determining the precise location of a person or object, consistently and in real time, is complex. It is often more difficult to track static objects than moving ones. There is no silver bullet that optimally solves all use cases. Organizations must weigh their specific requirements against system costs — considering both initial investments and total cost of ownership — required to achieve the location capabilities they target to deliver a return on investment that satisfies their business goals.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


April 11, 2019  4:42 PM

How will enterprises consistently adopt IoT and edge computing technologies?

Rick Vanover Profile: Rick Vanover
Edge computing, edge vendors, Internet of Things, iot, IoT edge, IoT edge computing, IoT partners, IoT vendors, Microsoft, VMware

The IoT and edge computing space is one that will continue to evolve, but organizations may be struggling now to find the right way to address it. What the edge means to the industry at large doesn’t always translate to an individual organization. The same goes for IoT systems, where I’ve always advocated that each organization will find its own best way and have an “aha” moment.

The good news is that the how isn’t as daunting as it may seem. Some familiar faces are in place to make this much more reasonable than it may appear at face value. VMware and Microsoft, among others, are making significant investments in this space that will pave the way for specific business systems to be easy to implement. This will allow each organization to find its way with brands with which it already has an established relationship.

VMware Pulse IoT Center and Azure IoT Edge are two technologies that can make this transition start to make sense. Consider that as IoT devices grow in popularity, it is natural that storage and networking considerations will need to be rethought. Businesses will see IoT devices become more integrated, modern options in the systems they use — for example, machinery, autonomous vehicles, smart buildings, appliances and more — and that will pose an IoT conundrum: What used to be a forklift is now a system generating multiple terabytes of data a day. And the location has 25 of them. And your organization has 12 distribution centers. You can quickly see how storage, bandwidth and compute needs make this a tricky tradeoff between smarter devices and unwanted network and storage problems. That example was a forklift; the next could be a tractor-trailer, an air-conditioning unit or scores of other examples.

Looking at consistent adoption for edge and IoT, organizations need to use key technologies to do the business-benefit-inducing work at the edge. Take the forklift example. If analytics can be performed on the device, wouldn’t it make sense to interpret the data there, at the source? Let the analytics be defined in the cloud or central management, but do the hard work close to the data. Aggregating the results is the most important part, and organizations can manage that relatively smaller amount of data in a much more scalable fashion.
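
Some rough, back-of-the-envelope Python arithmetic shows why this matters for the forklift scenario above. All figures are assumptions for illustration: 2 TB per forklift per day, 25 forklifts per site, 12 distribution centers and a 50 MB daily analytics summary per device.

    TB = 10**12  # bytes

    raw_per_device = 2 * TB   # "multi-terabyte a day" forklift, assumed 2 TB
    devices_per_site, sites = 25, 12

    raw_total = raw_per_device * devices_per_site * sites
    print(f"Raw telemetry per day: {raw_total / TB:.0f} TB")  # 600 TB

    # If each device ships only a summarized analytics result upstream...
    summary_per_device = 50 * 10**6  # 50 MB of results
    summary_total = summary_per_device * devices_per_site * sites
    print(f"Summaries per day: {summary_total / 10**9:.0f} GB")  # 15 GB

Moving 15 GB of results a day is a manageable networking problem; moving 600 TB a day is not.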

Many IoT and edge use cases today are around photo and video surveillance, but these are just a start. Organizations will have plenty of options for more complete systems, and when multiple systems are in place, management and scale will become more important than ever.

A safe bet from IT practices of the past is to use key platform partners for the technology services to drive your business; IoT and edge are no different. Just finding the right place to start is the most important step.

Still not sure? Azure IoT can even walk organizations through a few questions to help them get started in the right direction.

What do you look for in an IoT and edge deployment today? Are platforms from established brands part of your requirements? Share your comments below.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


April 11, 2019  4:31 PM

Industrial IoT design insights: Five factors to consider

Mitch Maiman Profile: Mitch Maiman
IIoT, IIoT design, IIoT platform, IIoT security, IIoT software, industrial internet of things, Industrial IoT, Internet of Things, iot, IoT communications, IoT design, iot security, IoT software, IoT strategy, product development

IoT is becoming ubiquitous in all types of product categories, from consumer goods and medical products to commercial and industrial systems. Industrial IoT applications bring unique challenges. Issues that are minor annoyances in consumer products can cause outright system failures in the industrial space. For industrial systems to succeed, designs need to be dependable and highly secure. With IIoT, downtime in mission-critical applications can’t be tolerated, and security breaches can cost millions of dollars and erode customer confidence. And the technology is expanding rapidly: By 2020, global manufacturers are expected to invest $70 billion in IIoT, up from $29 billion in 2015. Here are five critical, make-or-break considerations in the industrial IoT space.

1. Connect or not?

Technology adds a cost layer to traditional non-tech-oriented products. In particular, adding sensing and communication technology can incur both nonrecurring and monthly recurring costs. While it is “de rigueur” these days to want to create new IoT products or add an IoT technology layer to existing products, it is important to understand the business case and value. Adding this layer involves embedding cost into the product, with possible monthly subscription costs as well as an initial and continuing stream of expenditures on product development and lifecycle support. While adopters in the consumer space may be willing to experiment with IoT technology of unclear long-term value, clear economic impact needs to be demonstrable in the industrial space. The costs of deployment are simply too high to allow for large-scale deployments of dubious utility.

2. Pick the right platform

When adding intelligence to a product that wasn’t connected before, many startups select hobbyist-grade boards. The trouble is that these developer platforms are not suitable for large-scale industrial-grade deployments. If the device proves successful and starts generating serious demand, production can’t scale because you can’t source thousands of that type of hobbyist board. Off-the-shelf platforms are useful for proofs of concept and as platforms for software developers, but do not confuse these POC systems with those that are production-ready. Any experienced hardware developer who has been creating industrial systems will know a development system lacks the reliability, security and durability required for mission-critical applications. You should only source components and modules for your product that will be available and appropriately costed now and in the future.

3. Pick the right communication platform

Today, developers are able to choose from a plethora of communication technologies for industrial IoT applications. There are a wide variety of wireless platforms to choose from in the cellular, Wi-Fi, Bluetooth and other major arenas — and there are subcategory options for each. The selection does not start with the radio. Rather, it ends with the radio. It starts with understanding the amount of data being acquired, the frequency of acquisition and communication, and where data is processed. These factors can then be balanced against things like the communications bandwidth, cost of storage and transmission, range and hardware cost. In developing a communication strategy, a bottom-up approach is required in order to avoid implementing the wrong wireless technology.
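
A simple data-budget calculation is a good way to start from the data rather than the radio. The Python sketch below sizes a device’s daily uplink needs and compares them against illustrative daily capacities for a few link types; the sample rate, payload size, duty cycle and capacity figures are all assumptions.

    samples_per_sec = 10
    bytes_per_sample = 64
    duty_cycle = 0.25  # the device transmits a quarter of the day

    bytes_per_day = samples_per_sec * bytes_per_sample * 86_400 * duty_cycle
    print(f"~{bytes_per_day / 1e6:.1f} MB/day per device")  # ~13.8 MB/day

    # Illustrative daily carrying capacity per link type, in MB.
    links_mb_per_day = {"NB-IoT": 10, "LTE Cat-1": 1_000, "Wi-Fi": 10_000}
    for link, capacity_mb in links_mb_per_day.items():
        verdict = "fits" if capacity_mb >= bytes_per_day / 1e6 else "too small"
        print(f"{link:10s} {verdict}")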

4. Security is job one

Security needs to be baked into your IoT product design process, not added on as an afterthought, particularly in the industrial space. The stories of hackers breaking into commercial systems through insecure connected devices are legendary. Security is a must-have, not simply a nice-to-have. The potential for a breach is enormous, and the results could be devastating. Bad guys often scan for poor or misconfigured security. Consider end-to-end security mechanisms, end-to-end data encryption, access and authorization control, and activity auditing. A security chain is only as strong as the weakest link. Low-end and poorly protected IoT endpoints are a frequent point of entry for attacks when they are not carefully and intentionally secured.
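
One concrete baked-in control worth illustrating is verifying the origin of code before executing it. The hedged Python sketch below checks a firmware image’s signature against the vendor’s public key using the widely used third-party cryptography package; the file names and key-distribution details are hypothetical.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    # Hypothetical paths: the vendor's public key, the image, its signature.
    with open("vendor_pub.pem", "rb") as f:
        vendor_key = serialization.load_pem_public_key(f.read())
    with open("fw.bin", "rb") as f:
        firmware = f.read()
    with open("fw.bin.sig", "rb") as f:
        signature = f.read()

    try:
        # Raises InvalidSignature unless the image was signed by the vendor.
        vendor_key.verify(signature, firmware,
                          padding.PKCS1v15(), hashes.SHA256())
        print("Signature valid -- safe to flash.")
    except InvalidSignature:
        print("Rejected: firmware does not come from the trusted vendor.")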

5. Get a top product development team

Oftentimes, engineering organizations in mature industrial spaces do not have the particular skills in-house to add an IoT layer to their product, even though their internal expertise may be more than up to the task of developing and sustaining the core product technology. Certainly, a company can embark on a campaign to recruit the talent for an internal team. However, in the current job market, the competition for such talent is fierce, and it could take months or years to find and onboard an internal team. Many companies instead seek the assistance of an outside product development organization. By doing so, a company can get the best of both worlds: Combining the internal team’s core product and market knowledge with an external team’s expertise in RF communications, cloud architectures, mobile applications, and sensors and sensor integration can be extremely powerful.

While the prospects for industrial IoT deployments can be exciting, it is important to remember the basics as well. You need a sound business case, as with any investment. Solid project management is just as important as avoiding the above mistakes when shepherding a leading-edge technology device from inception to the manufacturing floor. Selecting the right engineers for the design team, who have technical as well as communications skills, is also critical to success. Finally, staying within budget parameters and meeting deadlines ensures the plan will be completed successfully, increasing the chances of the business’s success.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


April 10, 2019  4:11 PM

Why both cloud and edge computing are essential to IoT

Dave McCarthy Profile: Dave McCarthy
Distributed computing, Edge computing, Industrial IoT, Internet of Things, iot, IoT analytics, IoT cloud, IoT edge, IoT edge computing, IOT Network

Many of today’s industrial businesses are weighing the option between cloud and edge computing for their IoT deployments and finding it difficult to decide what best suits their data architecture and business goals. For every cloud benefit, there’s an equally tempting advantage to processing data at the edge. So, why not choose both? While most companies today view the cloud and edge as two separate entities, there is a great advantage to layering edge computing into cloud workflows. Because cloud and edge computing offer different systems for different types of environments, a distributed computing framework is often the best approach for IoT.

What is a distributed computing framework?

A distributed computing framework is a data processing approach that forgoes the practice of processing all of a business’ data in one place — e.g., all in the cloud or all at the edge — in favor of distributing the load across multiple locations. Here, it’s important to dispel the commonly held view that edge computing is singular — meaning a business only has one edge. In reality, companies can and often do have more than one. The edge is simply the point of data generation, so anywhere that happens is effectively an edge.

The simplest distributed computing framework involves three layers: the cloud, the site and the individual equipment at that site. It can be subdivided into even more layers depending on the environment. These separate layers allow industrial businesses to process and manage their data wherever it makes the most sense for their operation and objectives, whether that’s in the cloud or at one of the edges.

A distributed computing framework in a factory

A large deployment, such as a factory with hundreds of pieces of equipment, is a great environment in which to examine this framework in action. Each piece of equipment in the factory is considered an edge endpoint because it generates data. The factory itself constitutes an edge aggregation point, as it consolidates data from all enclosed equipment. The business would then have processing capacity in the cloud, reserved for instances when it has something specific to report. Cloud computing also becomes especially helpful when a business has multiple factory sites.

In this example, the business could first compile data generated from the equipment on the factory floor before sending it to the cloud. Adding this step helps prevent a cluttered data repository, which often results from sending information from hundreds of pieces of equipment straight to the cloud. Incorporating edge and cloud computing into a factory can offer several benefits, but what would happen if the factory were to use only edge or cloud computing?

  • Cloud computing only: In traditional IoT architectures, all collected data is transported, combined and processed in a central data store. This has worked well in instances where only data collection is necessary, but for businesses that need to analyze information from each individual piece of equipment, this approach is no longer viable. Relying solely on cloud computing for some of these larger deployments — such as the factory example — would make it very difficult to react to the data generated on the equipment quickly enough to have a positive business impact. In fact, these kinds of delays can make a huge difference in scenarios that involve safety and quality. Including edge computing in a distributed framework allows businesses to move faster than they would if the data had to travel to the cloud and back, which opens the door for real-time analysis right on the equipment itself.
  • Edge computing only: Alternatively, edge devices only process data that is locally collected, and only on a short-term basis, meaning a factory relying exclusively on edge computing would lack the ability to get a full view of its operation and easily store data for identifying trends over time. Locally collected data provides a great picture of what is happening at the site and with the equipment, but not of how they relate to each other. To get this higher level of analytics without the cloud, the business would have to manually combine all factory data, which would be inefficient and time-consuming.

Why both the cloud and the edge are essential for IoT

An industrial business that uses both cloud and edge computing for its IoT deployments will not only be able to take advantage of the low-latency and device responsiveness that comes with edge computing, but it will also benefit from the scalability, cost effectiveness and low maintenance of cloud computing. A multi-tiered approach fuses the strengths of both types of computing instead of picking one over the other. For example, a manufacturer of heavy-duty trucks might use edge computing to predict when individual trucks need maintenance, but can turn to cloud computing to make decisions about the fleet as a whole. Data about the types of repairs implemented and exact time spent on repairs can be stored in the cloud to help mechanics eliminate unnecessary diagnostics or steps for future repair situations.
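
As a toy illustration of that split, the Python sketch below runs a simple maintenance rule on each truck and rolls the compact verdicts up into a fleet view. The field names, thresholds and rule are invented; a real deployment would use proper predictive models.

    def edge_check(telemetry: dict) -> dict:
        # Runs on the truck: decide locally, emit only a small verdict.
        needs_service = (telemetry["brake_wear_pct"] > 80
                         or telemetry["engine_hours"] > 5_000)
        return {"truck_id": telemetry["truck_id"], "needs_service": needs_service}

    def fleet_rollup(verdicts: list) -> dict:
        # Runs in the cloud: fleet-wide view built from the compact verdicts.
        due = [v["truck_id"] for v in verdicts if v["needs_service"]]
        return {"fleet_size": len(verdicts), "due_for_service": due}

    verdicts = [
        edge_check({"truck_id": "T-1", "brake_wear_pct": 85, "engine_hours": 3_000}),
        edge_check({"truck_id": "T-2", "brake_wear_pct": 40, "engine_hours": 1_200}),
    ]
    print(fleet_rollup(verdicts))  # {'fleet_size': 2, 'due_for_service': ['T-1']}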

The IoT landscape continues to change, introducing more devices every day, and with them, more data that businesses need to process and manage. No matter where an organization falls on the edge-to-cloud scale, it’s important that they choose a computing approach that best fits their business needs in order to gain a competitive advantage. Businesses that perform analytics both at the edge and in the cloud can use real-time data to make faster, more accurate decisions that create real operational value, such as minimizing costs and maximizing performance.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


April 9, 2019  2:00 PM

Today’s PKI is purpose-built and ready for IoT

Jason Soroko Profile: Jason Soroko
attack detection, attack prevention, Authentication, Internet of Things, iot, IoT attacks, IoT legislation, IoT regulations, iot security, Mirai, PKI, Public-key infrastructure, securing IoT, Stuxnet

As connected devices become common throughout a wide range of industries, it should come as no surprise that we are reading about increasingly frequent attacks targeting IoT devices and systems. Particularly alarming is the fact that many of these attacks now target devices linked to safety rather than to fraud, which has typically been the case with attacks against traditional IT systems. And while the headlines might suggest that the attacks are random, employing complex and varied techniques, what is often overlooked is that these attacks nearly always exploit weak or nonexistent security measures.

Upon close examination, it becomes clear that these attacks have something in common. If the connected IoT device had been able to verify the origin of the instructions it executed, and had also been able to protect the code which executed those instructions, the attacker’s path to success would have been severely limited. Identity-based security is the key to solving this problem. Public key infrastructure (PKI) has been around a long time, and is able to establish identity-based security measures in the most effective way possible.

A connected computer does not automatically know the origin of the instructions it will enact. The Stuxnet attack, which famously targeted an Iranian nuclear enrichment plant, took advantage of the fact that, when asked, the industrial controllers in the plant would change their control logic without question. This is a weakness exploited by almost all attacks against industrial systems. The research attacks against Tesla took advantage of the fact that the electronic control units allowed their firmware to be changed.

When the Mirai botnet first appeared, infecting cameras, televisions, routers and any other connected devices, the headlines did not make it clear that these connected devices allowed authentication through very weak controls. An attacker able to find or guess the static credential values — username/password or token value — was able to control millions of these devices, highlighting the danger posed by these ineffective security measures.

The shortcomings of overreliance on attack detection

For many cybersecurity teams, the answer to this problem is to protect their devices by detecting and rooting out anomalies that might indicate that the device has been compromised by a hostile botnet or other intruder. Putting code on these devices to help identify potential warning signs is one way to address the issue, but many enterprise IT providers have found that blacklisting anomalies in this manner is extremely difficult and often ineffective.

The fact remains that if someone can authenticate into a device due to weak authentication controls, it is hard to discern that a root-level user might actually be an intruder. Any activity that occurs after an attacker is already within the system will be nearly impossible to identify as anything other than legitimate.

Regulation is coming — but slowly

A recently tabled federal regulation calls for NIST to provide guidance on IoT security, including secure identity management, firmware patching and configuration management. The fact that the federal government seems to agree with us, recognizing the importance of securing device identities and firmware, is a very positive sign — though NIST’s guidance will not likely be prescriptive in nature.

California is currently the state with the strongest IoT protections, and those regulations put much of the onus on device manufacturers, requiring them to assign unique credentials to each device they produce. Unfortunately, many still do not, and less-discerning buyers may be unaware of the vulnerabilities that these unsecured devices create.

Other regulations are coming, but the rollouts will be slow, and it is important for organizations to independently take the steps necessary to protect themselves.

Securing connected devices with PKI

So, what are organizations to do? The truth is that we have known PKI is the answer to this question for a long time. PKI is a set of roles and policies for creating, managing, distributing and revoking digital certificates and managing public key encryption. Its procedures extend far beyond simple username and password credentials. The digital certificates are issued and validated by a separate certificate authority, and incoming requests are verified by a registration authority, creating a chain of trust that is extremely difficult to compromise.

In order to provide effective communications security, these certificates are used with TLS cryptographic protocols, which are capable of supporting many different methods for encrypting data and authenticating the integrity of a message. PKI is the only authentication approach that can deliver a single strong digital identity for a person or device across every use case and platform.
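
As a minimal sketch of certificate-based device identity in practice, the Python snippet below establishes a mutually authenticated TLS connection using the standard library: the device presents its PKI-issued certificate and verifies the server against the issuing CA. The endpoint and file paths are hypothetical.

    import socket
    import ssl

    # Validate the server against the issuing CA, not just any public root.
    context = ssl.create_default_context(cafile="issuing_ca.pem")
    # Present the device's own PKI-issued identity for client authentication.
    context.load_cert_chain(certfile="device.crt", keyfile="device.key")

    with socket.create_connection(("iot.example.com", 8443)) as sock:
        with context.wrap_socket(sock, server_hostname="iot.example.com") as tls:
            # Both ends are now authenticated by certificates chained to a
            # trusted CA -- no shared passwords or static tokens involved.
            print("Server identity verified:", tls.getpeercert()["subject"])

Because trust derives from the certificate chain rather than a stored secret, a compromised device identity can be revoked centrally without touching every other endpoint.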

IoT is an area in which PKI particularly thrives. While botnets like Mirai are often able to infiltrate devices secured by simple username and password combinations, PKI presents an identity-based security mechanism that cannot be easily compromised. By turning to a trusted PKI provider, organizations can also enjoy interoperability with verified third parties. Enabling interoperability without compromising information security is a major benefit for those who choose PKI — particularly as emerging and highly interactive frontiers like IoT continue to grow at an exponential rate.

This is not your grandfather’s PKI

If that all sounds great — and it should — why aren’t more companies adopting PKI? At a time when insufficient authentication is a frequent cause of breaches and botnet takeovers, you might expect organizations to be rushing to adopt PKI as quickly as possible.

Unfortunately, the term PKI carries baggage. As surprising as it sounds, the technology’s origins date all the way back to the 1970s, and while the basic idea behind PKI has remained consistent, its implementation was not always so simple. Years ago, PKI took too long to implement. It was risky. It was costly. It required a high level of expertise to operate. Even though PKI has long been the best available mechanism from a security standpoint, the term itself carries negative connotations that have proven difficult to exorcise.

Some industries have been quicker to recognize the improvements in PKI than others. The automotive industry began to recognize that hacked devices posed a particular danger to them as far back as 2009. If a phone or a router becomes infected, nobody dies — but if a vehicle on the road is infiltrated with malware, real human lives may be in danger. As a result, many automotive companies now use PKI to ensure communications between vehicles, phones, servers and other connected devices are as secure as possible. The industry serves as an excellent proof point for the changing face of PKI.

Today’s PKI is not your grandfather’s PKI. Purpose-built certificate authorities and PKI management systems have driven costs down, and implementation has been massively streamlined. User friendliness and programmatic capability, including the use of protocols such as EST and REST, are now in place. But one thing remains the same: PKI is simply the best technology available for authenticating communication between devices. In today’s increasingly interconnected world, there has never been a better time to stop focusing on detection and start emphasizing prevention.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


April 9, 2019  11:51 AM

Break out of IoT proof-of-concept purgatory

Lou Lutostanski Profile: Lou Lutostanski
Internet of Things, iot, IoT business model, IoT data, IoT partners, IoT partnerships, IoT pilot, IoT pilots, IoT proof of concept, IoT strategy, IoT use case, poc

Businesses are spending $745 billion worldwide on IoT hardware and software in 2019 alone. Yet, three out of every four IoT implementations are failing. Why?

One big reason: Leaders are failing to go all in.

To make IoT successful, you need to transform not just some hardware and software, but the way your business works. These dynamic deployments require an entirely new approach, far beyond the traditional push to get new business applications off the page and into production.

If the right steps aren’t taken in the beginning (say, you don’t think far enough beyond the IT infrastructure), you end up in limbo: caught between the dream of what IoT could do for your business and the reality of today’s ROI. That spot is called proof-of-concept (POC) purgatory.

Sound familiar? Here are five signs you might be in IoT POC purgatory — and tips on how to escape it.

1. You have a lot of data … and not much else

There’s no surer sign of POC purgatory than an IoT technology that produces only dashboards. Making data visible is an effort in futility if you aren’t applying AI to make it smart — to truly drive insights in your organization. To do this, though, you need a clear and well-communicated business objective from the earliest stages of your IoT project.

That objective — whether it’s operational efficiency, better customer service or bottom-line revenue generation — allows you to use the right technology to develop actionable insights from your data. That’s when a mixture of cloud, customer, employee, public and real-time data sitting in a repository meets the analytics that point to actions that can change your business.

Without a clear business objective, the lights of IoT might be on, but no one is truly home.

2. You keep getting pushback from unexpected places

All the new technologies within IoT mean one thing: people — and lots of them.

It’s surprising how many stakeholders come out of the woodwork along the path to implementation. These stakeholders can include project managers, system integrators, operations specialists, installers and business stakeholders from HR, marketing, sales or customer service — all of whom will have questions, comments and critiques.

Of course, looping in all areas of the business is crucial to get a proof of concept going. To keep that momentum up and avoid proof-of-concept purgatory, however, you need solid change management plans that push IoT into your business. Customizing communications to each stakeholder group will ensure they understand exactly what’s in it for them when your implementation succeeds.

3. Your teams aren’t speaking the same language

Once you have the right people in the game, you have to make sure everyone is working off the same playbook.

For example, IT and operational technology (OT) teams have long had their separate realms to play in. This siloed approach stands in the way of not only a smooth deployment, but also a scalable one as additional features get added and the system matures. IoT requires the skills of both teams to succeed. Not only do these teams need to work together, they also need to trust each other. IT professionals need to trust OT devices to connect to their carefully constructed networks, while OT leaders need to feel comfortable with a new security stack interacting with their hardware.

This collaboration is just the start of the people challenge: It can take up to 10 partners to get an IoT system to market, not including your internal stakeholders, so it’s crucial that everyone’s moving in lockstep.

4. You can’t get the CEO on board

Big initiatives take big support. Frequently, we see businesses get stuck when line of business managers love the idea, but can’t get the C-suite to sign on the dotted line and put it into production. Here’s where pilots drag on and leaders become disillusioned with the project.

To avoid spinning your wheels here, it’s crucial to ensure that all the appropriate C-suite stakeholders, up to and including the CEO, understand what your IoT deployment will give to the business.

Construct a simple product roadmap that takes busy business leaders from robotic arms and sensors to factory floor insights to what really matters: dollars-and-cents impact on the bottom line.

5. You want to be in IoT, but you don’t know why

A good use case is like a lighthouse. With it, it’s easy to keep the boat steered in the right direction. Without it, it’s easy to get lost. Often, business leaders read articles about the opportunity and know only that they need to jump into a boat. But everything above — the alignment of teams, the insights driven from data and operationalizing of the system — is driven by one major thing: the use case.

Lack of an apparent use case is by far the most common reason why people are in POC purgatory. Considering IoT implementations can take a year or more to get off the ground, it is mission-critical to have clarity on what the deployment is trying to achieve from the very start.

If this sounds like you, there’s a way out. Go all in. Align around your use case first. Align your in-house capabilities and external collaboration around the technologies best suited for your objective. Align your data intelligence to ensure it is delivering insights that transform your business. Align your employees around the business transformation you are trying to complete — and retrain and reskill them continuously as the system evolves.

When you align internal stakeholders to the business objective and ensure that any external partner is functioning as an extension of your team, you can truly go from stuck in purgatory to full production and tangible payoff for your business.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


April 8, 2019  12:36 PM

Protecting the exploding attack surface: A blueprint for government agencies

Reggie Best Profile: Reggie Best
Attack surface, cyber-situational awareness, government security, intent-based network security, Internet of Things, iot, IoT compliance, IoT devices, IoT risks, iot security, IoT threats, Network security, securing IoT

No industry possesses more confidential, sensitive and proprietary information than government agencies. From citizen data and employee files to military plans and details about national laws and regulations, federal, state and local government agencies are a gold mine for nation-states and other criminal groups.

This is nothing new, of course. Over the past few years, we’ve seen no shortage of government-targeted attacks. What is fairly new, however, is the rapidly expanding attack surface, which is giving attackers more ways than ever to infiltrate government networks and get their hands on the nation’s most sensitive data.

IPv6 explodes the attack surface

In 2012, the U.S. government mandated that all government agencies transition to IPv6, which was designed to overcome the problem of IPv4 address exhaustion. With IPv6, there are more than enough IP addresses to accommodate every connected device, which, in the age of IoT, cloud computing and digital transformation, is a necessity.

From a government perspective, this transition to IPv6 — which some agencies are just beginning — along with the accelerating rate of cloud adoption means that almost everything — from military weaponry to building management control systems to voting machines and census-collection tablets — is IP-enabled and part of the network ecosystem. And while this is driving greater effectiveness and efficiencies from an operations standpoint, it’s also introducing tremendous security risk.

First, more network devices mean more endpoints susceptible to attack. Second, thanks to cloud computing and digital transformation, applications and systems are deployed, changed and removed at a faster rate than ever before, leaving security teams constantly trying to understand the state of their network infrastructure. And last but certainly not least, security teams are struggling to bring all network assets under the correct security policy to control access and ensure a strong security and compliance posture.

In today’s “hybrid agency,” where IT infrastructures are massively distributed and constantly changing, is it possible to really know what’s on the network, maintain proper policy hygiene and attain continuous security that moves at the speed of innovation? The answer is yes, thanks in large part to cyber-situational awareness and intent-based network security.

Achieving cyber-situational awareness

Agencies must find a way to monitor network assets and activity across physical, virtual and cloud environments. This means achieving real-time visibility into all endpoints and resources across all computing infrastructures; understanding how those endpoints are connected to the agency, the internet and each other; and identifying suspicious traffic, potential leak paths to the internet, anomalous activity, unknown rogue devices and shadow IT infrastructure.

In other words, IT security teams must be able to identify threats and vulnerabilities to the infrastructure as they emerge and change, so they can develop effective incident response and risk mitigation strategies. Agencies that use cyber-situational awareness in this way have the real-time and accurate network visibility needed to properly protect their networks, critical data and our nation’s infrastructure. Following are five tips to achieve this ideal state:

  1. Validate the network IP address space. Understand the scope of IP address space in use and visualize the network topology. Instead of working from a set of known addresses that you think encompass the entire organization, verify that there are no unknowns (a short sketch of this check follows this list).
  2. Determine the network edge. Understand the boundary of the network under management.
  3. Conduct endpoint census. Understand the presence of all devices on the network infrastructure, including traditional IP-connected devices, such as routers, gateways, firewalls, printers, PCs, Macs, iPhones, etc., and non-traditional IP-connected devices, including medical equipment, security cameras, industrial control systems, etc.
  4. Conduct endpoint identification. Assess the nature of devices on the network, including type, operating system and model.
  5. Identify network vulnerabilities. Evaluate and comprehend network anomalies, such as unknown devices, unmanaged address space, leak paths, etc., for remediation.
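
As a rough sketch of what steps 1 and 3 can look like in practice, the Python below sweeps a claimed address space and records which endpoints respond. The address ranges, the reliance on the system ping command (Linux flags shown) and the serial sweep are all illustrative simplifications; production discovery tools add passive monitoring, parallelism and fingerprinting:

```python
import ipaddress
import subprocess

# Hypothetical: the ranges the security team *believes* it owns (step 1).
claimed_ranges = ["10.0.0.0/24", "192.168.10.0/24"]

def ping(host: str, timeout_s: int = 1) -> bool:
    """Return True if the host answers one ICMP echo (Linux 'ping' flags)."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

# Step 3 sketch: census of which endpoints in the claimed space are live.
live_hosts = []
for cidr in claimed_ranges:
    for host in ipaddress.ip_network(cidr).hosts():
        if ping(str(host)):
            live_hosts.append(str(host))

print(f"{len(live_hosts)} responsive endpoints in claimed ranges")
# Any address seen in traffic logs but absent from claimed_ranges is an
# "unknown" worth investigating: a rogue device, shadow IT or a leak path.
```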

Integrating intent-based network security

Once you have a real-time, holistic understanding of what’s on your network, you can then implement proper policies and rules. Until recently, IT security teams manually wrote rules for every enforcement point. In today’s complex, dynamic hybrid environments, manual policy management processes just aren’t sustainable — not to mention, they’re costly, burdensome and prone to error.

Intent-based network security provides a desperately needed shift in global security policy management — one that automates policy orchestration and allows agencies to take advantage of innovation without slowing down development processes or introducing enterprise risk.

At a high level, intent-based network security unites business, DevOps, security and compliance teams by enabling them to collaborate on a global security policy. Non-security personnel declare the business intent of applications, security personnel define the accompanying security and compliance intent, and the platform reconciles these intents so policy changes can be fully automated while meeting the needs of all parties. Manual rule-writing becomes a thing of the past, and all assets across the hybrid agency are brought under the proper security policies.
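
No vendor publishes a single canonical model for this, but the core idea can be sketched in a few lines of Python. The BusinessIntent and SecurityIntent classes, zone names and rule format below are hypothetical; the point is that enforcement rules are generated from declared intent rather than written by hand:

```python
from dataclasses import dataclass

@dataclass
class BusinessIntent:
    app: str           # what the application owner declares it needs
    source_zone: str
    dest_zone: str
    port: int

@dataclass
class SecurityIntent:
    allowed_zone_pairs: set  # zone pairs security/compliance permit

def compile_rules(requests, policy: SecurityIntent):
    """Turn declared intents into concrete allow rules, flagging any
    request that conflicts with the security/compliance intent."""
    rules, violations = [], []
    for r in requests:
        if (r.source_zone, r.dest_zone) in policy.allowed_zone_pairs:
            rules.append(f"ALLOW {r.source_zone} -> {r.dest_zone} tcp/{r.port}  # {r.app}")
        else:
            violations.append(r.app)
    return rules, violations

requests = [
    BusinessIntent("census-app", "field-tablets", "app-tier", 443),
    BusinessIntent("legacy-tool", "field-tablets", "database", 1433),
]
policy = SecurityIntent(allowed_zone_pairs={("field-tablets", "app-tier"),
                                            ("app-tier", "database")})

rules, violations = compile_rules(requests, policy)
print("\n".join(rules))                    # auto-generated enforcement rules
print("flagged for review:", violations)   # intent conflicts, never silently allowed
```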

The powerful combination of cyber-situational awareness and intent-based network security enables agencies to use next-gen technologies and processes, such as IoT and cloud computing, without introducing security and compliance risk. IT security teams can successfully protect their network assets regardless of how many there are, where they reside or how they change. And the size of the attack surface no longer matters because security is finally able to adapt at the speed of change — and in today’s world, that’s a blueprint for success.



April 8, 2019  10:55 AM

IoT for smarter cities: Lessons learned from around the globe

Tom Amburgey Profile: Tom Amburgey
Internet of Things, iot, IoT applications, IoT use cases, Smart cities, smart city, smart city applications

Often referred to by industry experts as the next Industrial Revolution, the internet of things is radically changing how businesses, consumers and governments operate. According to a recent IDC report, global spending on smart city initiatives is expected to reach $158 billion by 2022 as cities continue to invest in the hardware, software, services and connectivity that enable IoT capabilities.

Many private sector industries have already implemented and capitalized on the huge potential of IoT technologies, and the public sector has had its share of early adopters as well. Facing rising citizen expectations and demands for better public engagement, many cities and municipalities have recently introduced disruptive IoT projects to improve vital services and their citizens' quality of life.

Not just limited to back-office applications, IoT projects are enabling cities to better adapt and respond to changing conditions, improving critical services like infrastructure and public safety. These cities can now deploy resources more effectively, increase sustainability and conserve energy. Let's look around the globe at two cities that are successfully using IoT to improve back-office processes and engage people through citizen-centric applications for water and traffic management.

Smarter water management: More accurate tracking

According to a recent study by IoT Analytics, Europe tops the list of IoT smart city projects, capturing nearly 50% of the global project base. For many years, smart city initiatives have been a priority for European policy leaders and companies. In 2011, the European Smart Cities Initiative was created to support smart city projects with a special focus on reducing energy consumption in Europe.

One result of this initiative is found in Castellón, Spain, where IoT technology is being used to accurately track and control water management. From an initial pilot of 600 smart water meters to the current implementation of 30,000, the smart water platform provides the city with real-time data on its water resources.

This system combines long-range, low-power connectivity to collect and communicate household water consumption data. The initiative has also enabled Castellón to quickly detect leaks, head off breakdowns and manage the water supply network in real time, preventing loss of service and costly repairs.
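
Castellón's platform isn't documented at this level of detail, but one common leak heuristic is easy to sketch: household flow normally drops to near zero overnight, so sustained nighttime flow is suspicious. The meter readings, hours and threshold below are invented for illustration:

```python
# Minimal leak-detection sketch over hourly meter readings (liters).
NIGHT_HOURS = range(1, 5)      # 01:00-04:00, when usage should be ~0
LEAK_THRESHOLD_L = 2.0         # sustained flow above this is suspicious

def likely_leak(hourly_liters: dict[int, float]) -> bool:
    """Flag a meter whose flow never stops overnight."""
    night = [hourly_liters[h] for h in NIGHT_HOURS]
    return min(night) > LEAK_THRESHOLD_L

meter_a = {h: 0.0 for h in range(24)} | {7: 40.0, 19: 55.0}  # normal use
meter_b = {h: 6.5 for h in range(24)}                        # constant flow

print(likely_leak(meter_a))   # False
print(likely_leak(meter_b))   # True -> dispatch an inspection crew
```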

The results have ensured that the city continues to provide ample water to its citizens while reducing unnecessary waste.

Knowledgeable command centers: Better urban mobility

While Europe accounts for 50% of smart city initiatives, focus on the Asia-Pacific region continues to grow. Although the region accounts for just 15% of current smart city projects, studies suggest this balance will shift quickly, with more than half of the world's smart cities expected to be in China by 2025, creating an economic impact of over $300 billion.

In one example, Singapore is working to improve urban mobility by introducing smarter technology to make roads safer and keep traffic flowing smoothly. Because traffic is a growing concern for many metropolitan areas, including Singapore, the city-state plans to feed traffic data into a centralized operations control center, which will aggregate the data and provide real-time traffic information to the public. Equipped with live traffic information on mobile phones, web portals or navigation devices, motorists will have instant insights into traffic incidents and congested routes, allowing them to identify alternative routes.

This initiative is designed to reduce the number of motorists in congested areas and enhance safety on major roads and expressways. It also gives city officials the data needed to adjust traffic light timing as conditions shift.
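
A minimal sketch of the aggregation step might look like the Python below. The road-segment names, speed readings and 30 km/h congestion threshold are all invented for illustration, not taken from Singapore's actual system:

```python
from statistics import mean

# Control-center view: each road segment reports recent speeds (km/h).
readings = {
    "PIE-eastbound": [22, 18, 25],
    "CTE-north": [61, 58, 64],
}

CONGESTED_BELOW_KMH = 30

def congestion_report(speeds_by_segment: dict) -> dict:
    """Classify each segment by its mean recent speed."""
    return {seg: ("congested" if mean(speeds) < CONGESTED_BELOW_KMH else "flowing")
            for seg, speeds in speeds_by_segment.items()}

# Published to phones, web portals and navigation devices; the same
# feed could drive traffic-light retiming for congested segments.
print(congestion_report(readings))
```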

The U.S. is poised to learn from global successes

While governments around the globe are focused on increasing productivity, reducing costs and improving their citizens' quality of life, the U.S. has traditionally lagged in developing innovative and disruptive smart city technologies. This is poised to change, however, as IDC reports the U.S. is expected to account for one-third of global spending on smart city initiatives in 2019.

By carefully studying the current slate of global smart city projects, governments and municipalities in the U.S. are better prepared to bring these successes to their constituents. This lets us all look toward a future in which many U.S. cities are equipped with innovative technologies that change the way constituents interact with their cities. From pedestrian detection at intersections to automated dispatching systems that vastly reduce response times to integrated asset management tools that drive preventive maintenance, the city of tomorrow will soon be here.


