IoT Agenda


April 3, 2017  12:31 PM

Build IoT devices on a platform that accepts modules and standard interfaces

Kim Rowe
"IoT" "SOA" "ÉA", Adding intelligent, Cloud networking, Enterprise IoT, Intelligent Application Gateway, Internet of Things, iot, IoT sensors, product development, Sensors, Smart sensors

Modularity and standardization represent two major strategies for dealing with complexity. If a given function can be contained within a defined physical or abstract (read: software) space and be defined by its interfaces and their functions, it and others can be used to build very large systems, which remain manageable and understandable. The internet of things provides a perfect example. It is almost infinitely complex yet remains accessible to average users and can be expanded with innovative additions almost beyond limit.

While IoT can appear daunting in its variety and complexity, it also displays a structure and modularity that make it quite comprehensible. That structure and modularity also greatly facilitate the development of systems and the software that controls them, making the development process quite manageable if it is approached with the same ideas of modularity, structure, compatibility and flexibility.

IoT consists of four major components: the low-level sensors and actuators, the edge routers or gateway devices, the internet itself, and the cloud. The actual ways these components interact and the amount of software needed by each can vary greatly depending on the needs of the application (Figure 1).

Figure 1: The four major levels of the internet of things. An RTOS with a standard POSIX API can greatly enhance adaptability across the different levels.

For example, the first two levels, the edge devices and the gateway devices, are the points where data collection and control actually take place. These can be small sensors and actuators or private networks of intelligent control devices, all of which eventually link to the internet and the cloud. These are also typically the layers at which we find embedded devices linked in what were previously known as machine-to-machine configurations. Multiple sensors and actuators can therefore be located in a single machine that itself sits at the edge of the system; they can either be accessible from outside via IP addresses or be controlled by software internal to that machine, which is in turn connected to the edge router. Complexity is no stranger to connected embedded devices, so modularity and standardization are just as indispensable at this level, and they can be applied to the development process itself, based on the structure of the system under development.

It is important to think of the internet, the systems that compose it and their components as a modular structure, and also to approach development itself as a modular activity. Each component can be assigned to a specific team or individual, who works out a solution within the context of defined interfaces to other components, communicating with their own team members and with the members of other cooperating teams. Fortunately, there are development methodologies that can act as a guide for exactly this approach. This also applies to the very earliest stages of development, at what can be called the "platform."

A platform is nothing esoteric but consists of ready-made components that can be assembled into a focused functional environment for the addition of unique functionality that represents the actual value of the device to the customer. Typically, a platform for embedded devices would consist of a selected microcontroller or microprocessor along with an operating system that could supply the needed functions, communication protocols and driver support for both on- and off-chip peripherals. The hardware interfaces as well as the software interfaces of the operating system should all adhere to well-known standards. Developers can then start at that point to add value in terms of the desired application and its various functions along with adding needed OS modules, communication protocols, off-chip peripherals and their drivers (Figure 2).

Figure 2: This single-board development platform from Freescale includes an ARM Cortex-M4 processor with a set of on-chip peripherals and interfaces that allow connection of further peripherals. Supplied with a configurable Unison RTOS package, it is ready for the developer to set up for the target system's needs and begin developing a specific application almost immediately.

The characteristics of the selected operating system are important. It is a big advantage if it works with a well-known API such as POSIX, for two reasons. First, POSIX is essentially the API that Linux follows, so it is familiar to a large number of developers. Second, there is a great deal of POSIX-compatible third-party open-source software available that can be adapted to the needs of the system under development. A selected RTOS should also support a good selection of MCU and MPU architectures, so that developers who have already chosen an architecture for other projects can adopt the RTOS easily. And for each supported architecture, the RTOS should support the different members of a device family, which is important for future upgrades, product versions or families of products a customer may wish to build.
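
To make the portability argument concrete, here is a minimal sketch that uses only standard POSIX calls (pthreads and message queues) to pass readings from an acquisition task to a processing task. The queue name, the rates and the read_sensor() stub are hypothetical; the point is that the same source can be built for Linux during prototyping and for a POSIX-compliant RTOS on the target.

/* Minimal sketch: two tasks exchanging sensor readings over a POSIX
 * message queue.  Only POSIX APIs (pthreads, mqueue) are used, so the
 * same source can be compiled for Linux or for a POSIX-compliant RTOS.
 * The sensor is stubbed out; queue name and rates are arbitrary.
 * Build on Linux with:  cc sensor_mq.c -o sensor_mq -lpthread -lrt
 */
#include <fcntl.h>
#include <mqueue.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define QUEUE_NAME "/sensor_q"          /* hypothetical queue name */

static double read_sensor(void)
{
    /* Stand-in for a real driver call (ADC, I2C device and so on). */
    return 20.0 + (rand() % 100) / 10.0;
}

static void *acquisition_task(void *arg)
{
    mqd_t q = *(mqd_t *)arg;
    for (int i = 0; i < 10; i++) {
        double sample = read_sensor();
        mq_send(q, (const char *)&sample, sizeof sample, 0);
        usleep(100 * 1000);             /* 10 Hz sampling */
    }
    return NULL;
}

static void *processing_task(void *arg)
{
    mqd_t q = *(mqd_t *)arg;
    char buf[sizeof(double)];
    for (int i = 0; i < 10; i++) {
        if (mq_receive(q, buf, sizeof buf, NULL) == (ssize_t)sizeof(double)) {
            double sample;
            memcpy(&sample, buf, sizeof sample);
            printf("sample %d: %.1f\n", i, sample);
        }
    }
    return NULL;
}

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = sizeof(double) };
    mqd_t q = mq_open(QUEUE_NAME, O_CREAT | O_RDWR, 0644, &attr);
    if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

    pthread_t producer, consumer;
    pthread_create(&producer, NULL, acquisition_task, &q);
    pthread_create(&consumer, NULL, processing_task, &q);
    pthread_join(producer, NULL);
    pthread_join(consumer, NULL);

    mq_close(q);
    mq_unlink(QUEUE_NAME);
    return 0;
}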

Another issue is that while there are myriad software components for different application needs, only a few of them tend to be selected for a given project. They can include the various communication protocols as well as functional modules such as storage systems, floating-point math or video. Does this RTOS have all the components you'll need for your project? If not, will the ones you find elsewhere work with this kernel? You'd have to test them, and at what cost in time and cash? The same applies if you write your own. The more pretested and documented modules that come with the RTOS, the more confidence you have that you can use them without thorough testing and qualification, which can result in huge savings of time and money and means less time spent searching for and qualifying open-source components. Make sure the RTOS has all, or most of, the components you need before committing to it.

From components to lean process

All these issues of compatibility and qualification involve more than just the ability to put the pieces together. They form the basis of a perception of the project and a process that can expand to influence an entire organization. The approach of using a platform not only forms the foundation for a design, it also provides the basis for what is known as lean product design. Pioneered by Toyota, the lean development approach combines a standards-based platform of necessary software components with integrated work teams made up of people with complementary individual competences. These team members communicate constantly with one another, yet plan their own work and work their own plans. The same applies to the different work teams, each of which has its own assignment, or "module," to work on but also communicates with the other work teams. Since these modules share standard interfaces, they can be assembled into a complete system toward the end of the project, as in the automotive example in Figure 3.

Figure 3: In lean product development, different groups 1-3 work on progressively more advanced engines and brake systems. Technology insertion happens at the last possible time for a given model year, so that progressive improvements reach products as early as possible. Versions not used in a current model year may be used in subsequent years or abandoned; for example, the Engine2 group's engine could be used on the basic model the following year. The various components are standardized for mix-and-match options, making the chassis the platform.

The lean approach, with its platform-based modularity and compatibility, therefore lends itself easily to adaptability. Designs are almost never static, but must react to customer demands for enhancements and added features. As noted, a line of microcontrollers and microprocessors with common hardware interfaces should also offer varying features, which substantially enhances adaptability. With such an approach, core MCUs or MPUs can be swapped out and new layouts achieved quickly and easily, knowing that the application software will run without change and that device drivers are tested, proven and ready to go. Being able to reduce power, increase memory, increase performance or add new connectivity with a new part, while changing only the part of the system that needs to change rather than starting a complete new design, reduces time to market and total cost of ownership.

It becomes apparent that a key design decision lies not only in the choice of an RTOS and a processor, but more importantly in the selection of a platform that combines the two with scalability, adaptability and expandability for the long haul. That means a processor family (or families) with those characteristics, along with an RTOS that can adapt to the needs of the target system and expand smoothly to meet future requirements.

In addition to standard functions like a file system, communication protocols, wireless support, video support and more, there should be robust support for security. Security is an issue that spans from the lowest-level device driver all the way up through the coding standards for the application. Just as a solid hardware/software platform is the foundation for the device design, it is also the bedrock for security. It affects both the selection of add-on components and the basic design of the RTOS.

An RTOS supplied with a complement of prequalified security components that can be selectively attached and integrated into the OS image goes a great distance toward putting together a secure system, one that can also interact securely with the elements within its networked environment and with the outside systems it must communicate with. In terms of communication, this includes Transport Layer Security (TLS), which replaces Secure Sockets Layer (SSL), along with IPsec/VPN, secure wireless links and encryption/decryption, among others. Other inherent characteristics of a securable RTOS include a secure boot service combined with secure remote field service with encryption and automatic fallback to the original state if an unsuccessful attempt or an attack is detected.
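
As a rough illustration of the TLS piece, the sketch below shows a minimal TLS client written against OpenSSL on a POSIX host; a deeply embedded target would more likely pair a compact stack such as mbedTLS or wolfSSL with its own socket layer, but the flow is the same: create a context, require certificate verification, perform the handshake, then read and write over the encrypted channel. The host name and request are placeholders.

/* Minimal TLS client sketch (OpenSSL 1.1 or later on a POSIX host).
 * Connects to a server, verifies the certificate against the system
 * trust store and exchanges a little data.  Host and request are
 * placeholders for illustration.
 * Build with:  cc tls_client.c -o tls_client -lssl -lcrypto
 */
#include <netdb.h>
#include <openssl/err.h>
#include <openssl/ssl.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int tcp_connect(const char *host, const char *port)
{
    struct addrinfo hints = { .ai_socktype = SOCK_STREAM }, *res;
    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;
    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd >= 0 && connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
        close(fd);
        fd = -1;
    }
    freeaddrinfo(res);
    return fd;
}

int main(void)
{
    const char *host = "example.com";                  /* placeholder host */

    SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
    SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);    /* require a valid cert */
    SSL_CTX_set_default_verify_paths(ctx);             /* use system CA bundle */

    int fd = tcp_connect(host, "443");
    if (fd < 0) { fprintf(stderr, "connect failed\n"); return 1; }

    SSL *ssl = SSL_new(ctx);
    SSL_set_fd(ssl, fd);
    SSL_set_tlsext_host_name(ssl, host);               /* SNI */
    SSL_set1_host(ssl, host);                          /* hostname verification */

    if (SSL_connect(ssl) != 1) {                       /* TLS handshake */
        ERR_print_errors_fp(stderr);
        return 1;
    }

    const char *req = "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n";
    SSL_write(ssl, req, (int)strlen(req));

    char buf[256];
    int n = SSL_read(ssl, buf, sizeof buf - 1);
    if (n > 0) { buf[n] = '\0'; printf("%s\n", buf); }

    SSL_shutdown(ssl);
    SSL_free(ssl);
    close(fd);
    SSL_CTX_free(ctx);
    return 0;
}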

We have already noted a number of characteristics that come more easily with a platform approach, including a lean process, adaptability and security. Others include safety, which depends on security as well as functional safeguards. Two main features of real-time systems that assist with safety are determinism and instant-on with emergency stop (estop). Determinism provides predictability that can be tested and estop allows a system to be stopped and restarted quickly to deal with emergencies.

With IoT, we have a new set of criteria for connected systems. With a much broader set of sensors used in a much wider range of systems, connectivity takes on a far broader meaning. Wireless can mean any of a number of modulation schemes, and the protocols in use have also grown substantially; they now include Wi-Fi, Wi-Fi mesh, Bluetooth Classic and Bluetooth Smart/Smart Ready, 802.15.4 with 6LoWPAN, 3G, 4G and UHF. The wireline connectivity options are also extensive: serial I/O (both asynchronous and synchronous), SPI, SDIO, I2C, I2S, USB, CAN and internet connectivity are all expected to be tried, proven and tested.

To make systems that are complete, additional protocols should be available to cover control and connection of mechanical, memory, display, camera and sensor systems as well as advanced storage devices like flash, RAM and MMC interfaces. And, of course, these systems will all connect to the cloud, often passing large amounts of data.

The admonition to "start with a firm foundation" applies to the design of connected embedded systems as surely as it does to the construction trade. Starting with a modular, standards-based, yet adaptable and configurable combination of RTOS and processor, one that is focused on future expansion and enhancement, can be the key to success. This applies in terms of time to market, initial development expenses, maintainability and the ability to meet future customer needs.


March 31, 2017  3:31 PM

IoT attack trends — and how to mitigate them

Mordechai Guri
Antimalware, Botnet, Consumer IoT, DDOS, Enterprise IoT, Internet of Things, iot, iot security, Ransomware

Two years ago, IoT attacks were considered exotic, an aberration of interest mainly to those in the industry and to conspiracy theorists. No longer. The recent "teddy bear" data breach, which exposed more than 2 million children's and parents' voice recordings along with emails and passwords, forced IoT cybersecurity dangers to become a mainstream household concern. And the few who were still unaware certainly got the message earlier this month, with the WikiLeaks revelation of the CIA hacking tool that can turn Samsung TVs into eavesdropping devices.

The evolution of IoT malware mirrors that of PC-based malware, but at lightning speed. The first attacks were essentially pranks, tricksters seeing what they could do, like the 2012 "Internet Census" powered by a botnet of 400,000+ embedded devices. Bad actors were quick to see the possibilities, leading to the Mirai botnet-based DDoS attacks on Dyn, Deutsche Telekom and others. The latest transition is the monetization of IoT malware, with operators hiring out these botnets for DDoS, ransomware or ad-click fraud; the Linux/Moose botnet operators, for example, sell Instagram followers.

IoT attack trends

While these attacks may be minor compared to the mega-record, mega-expensive breaches we’ve seen, the potential is huge. Gartner predicts IoT devices will reach an installed base of 21 billion units by 2020. And we’re not just talking toasters, teddy bears and TVs — by 2020, there will be 250 million “connected” cars on the road. This brings the problem to an entirely new level.

Given the sheer variety of IoT devices and opportunities to exploit them, IoT attacks will develop in several directions.

DDoS attacks
As IoT expands, so will IoT botnets and their capacity to launch large-scale DDoS attacks. The Mirai DDoS attack on the Dyn network was the most massive in history, with a reported attack strength of 1.2 Tbps, and it took down more than 80 major websites. Dyn's preliminary analysis found that tens of millions of discrete IP addresses associated with the Mirai botnet were part of the attack.

The same botnet interfered with heating distribution in Finland, knocked nearly a million Deutsche Telekom users offline, was used in a DDoS attack on WikiLeaks and disrupted operations of five major Russian banks.

With the public release of the Mirai source code by its creator last October, hackers have already begun developing more virulent and broader-reaching strains. Mirai is not a simple attack tool but a development framework. Additional capabilities such as credential stealing, IP anonymization, persistence and traffic hiding will expand its attack potential. New Mirai strains will also likely include obfuscation techniques that make it difficult to track activity, along with expanded infection capabilities to target more types of devices.

IoT ransomware attacks
Until recently, IoT ransomware was all theory. At the 2016 DEF CON conference, researchers demonstrated they could infect smart thermostats with ransomware. And in a Bloomberg interview, Intel Security GM Chris Young sketched a future where hackers demand a ransom before allowing a car owner to drive to work. That future has come sooner than anticipated. In January, attackers locked the electronic key system and computers of a four-star Austrian hotel, demanding $1,800 in bitcoin to restore functionality. The hotel paid up. One can easily imagine cybercriminals making similar ransom demands to unlock hacked medical devices such as insulin pumps or pacemakers.

Ironically, one reason that IoT ransomware is not yet a bigger problem is what makes IoT so difficult to secure — the variety of IoT devices and operating systems means hackers can’t write ransomware that spreads superfast or easily.

IoT as an attack vector to enter an organization
As edge devices proliferate, so do the opportunities to gain entry into the wider network to which they are connected. Unfortunately, in the rush to get to market, many IoT device manufacturers neglect security aspects. Even manufacturers that are conscious of security issues might unknowingly embed insecure third-party components into their products. Many of the webcams enlisted by the Mirai botnet utilized electronic components from the same manufacturer.

IoT for spying and surveillance
One of the most concerning IoT security issues is the ability to invade and expose our most private moments. First reported in 2014, tens of thousands of home security cameras have been hacked and streamed live online. In most cases, changing the default password blocks the feed. However, Senrio researchers discovered a security flaw in D-Link cameras that lets attackers overwrite administrator passwords, exposing thousands of users to hacks not only of their cameras, but also of the networks those cameras connect to.

Even more disturbing are the types of attacks revealed this month by the WikiLeaks CIA dump. According to the documents, Britain's MI5 and the American CIA worked together to develop a smart TV app, Weeping Angel, that can turn televisions into spying tools. Targeting Samsung TVs specifically, the malware records audio from surrounding areas, including when the user has turned the set off. While it's unclear what stage of development this particular project is in, the potential for hacks of this type, in the hands of malicious hackers, is enormous.

Vendors need to step up

Vendors have been slow to respond to the push for better IoT security, particularly more advanced penetration testing. However, they soon may find the financial consequences persuade them. In 2015, Fiat Chrysler recalled 1.4 million vehicles to install a security patch to prevent hackers from gaining remote control of the engine, steering and other systems. And the FTC recently filed a lawsuit against D-Link for “failing to protect its customers against well-known and easily preventable software security flaws in its routers and IoT cameras.”

IoT antimalware

Nascent IoT antimalware holds some promise; however, approaches that work for PC-based attacks will not work in the IoT world. The high diversity of devices and operating system versions poses a barrier for security vendors. Currently, most IoT security products focus on the network side, trying to detect and block attacks by analyzing the traffic. However, these techniques become less relevant when encrypted traffic is involved.

IoT brings new opportunities but also new challenges. Awareness was the first hurdle. Now manufacturers, legislators, cybersecurity vendors and end users all need to do their part.



March 30, 2017  4:11 PM

Textile pressure sensor applications in healthcare scenarios

Davide Vigano
#eHealth #Healthcare IOT #Wearables #wireless medical devices, Consumer IoT, eHealth, Internet of Things, iot, Wearables

Wearable technology has primarily focused on the upper body via smart glasses, wristbands and chest straps. Yet there is also a tremendous amount of potential for wearable technology to collect data from the foot. The foot is a very complex part of our anatomy; it is constantly under pressure, yet it is under-served by technology and innovation. Each foot contains 26 bones (the two feet together hold about a quarter of the bones in the body), along with 33 joints, 107 ligaments and 19 muscles.

Leonardo da Vinci used to say, "The foot is a masterpiece of engineering and a work of art." It only makes sense to concentrate efforts on this critical part of our body and keep it healthy.

Think of foot-related complications of diabetes as an example. The problem is of epidemic proportions and the numbers are absolutely staggering. There are 347 million diabetics in the world today, with growth projected to reach a half billion by 2030, according to the Centers for Disease Control and Prevention. Approximately 70% of these individuals suffer from measurable peripheral neuropathy, a disorder that reduces sensation in the bottom of the feet. Unfortunately, on average 5% of diabetic patients get foot ulcers and 1% of those require amputation. Diabetic foot ulcers (DFUs) are the most common cause of amputations and are responsible for more hospitalizations than any other complication of diabetes.

The good news is that DFUs can be prevented by following simple guidelines: do not walk barefoot, wear clean, soft socks and inspect your feet daily. Unfortunately, many diabetic patients are also overweight and have real challenges inspecting their own feet. Once a patient suffers a DFU, there are proven treatments; the cornerstone is surgical debridement followed by "mechanical offloading" of the area, which attempts to reduce pressure on the plantar area of the foot. After surgery, it is important that the patient does not bear weight on that part of the foot, to improve blood circulation and increase the chances of healing. For this reason the patient may be asked to wear a cam boot or use a scooter.

However, it has been found that patients with diabetic foot ulcerations wear their offloading devices for only 28% of daily steps taken. Treatment failure is the norm, and it is our belief that inadequate use of offloading devices explains why. Offloading can be verified in the clinic; the issue is much more relevant outside those controlled conditions.

The solution? Textile pressure sensors

We believe the solution is a continuous foot monitoring system utilizing wearable e-textile pressure-sensor-enabled wireless devices for the diabetic foot. This product concept is designed to help three distinct groups of people: the primary care physician (PCP) or a podiatrist, the patient and the enterprise provider. Let’s take a closer look at each of these groups.

For the PCP, the product is designed to monitor the patient and guide adherence to offloading. The patient is provided with a tool to compensate for not being able to sense the bottom of their feet; with this continuous monitoring system, the PCP is able to measure and verify in real time that the patient is within a safe range of pressure under the foot. Lastly, for the enterprise provider, there are potentially huge cost savings from reducing readmissions and the recurrent procedures and surgeries needed to treat DFUs.

We are developing a wearable continuous monitoring product with a microelectronic module that connects to proprietary sensors, which could be placed on the patient's dressing after surgery. It will include a mobile patient application component to detect pressure levels in real time and alert the patient when those levels become excessive. If the patient is unable to reduce the pressure on their foot on their own, an alert mechanism informs the patient's care team, allowing the provider to reach out to the patient and help them achieve offloading.
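
A highly simplified sketch of that alerting flow might look like the following; the pressure limit, sampling period and escalation delay are illustrative placeholders, not clinical parameters.

/* Illustrative offloading-alert logic only; the threshold and timing
 * values are placeholders, not clinically validated parameters. */
#include <stdbool.h>
#include <stdio.h>

#define PRESSURE_LIMIT_KPA   30.0   /* hypothetical safe plantar pressure */
#define SAMPLE_PERIOD_S      1      /* one reading per second */
#define ESCALATE_AFTER_S     120    /* notify care team after 2 min over limit */

static void alert_patient(void)    { printf("ALERT: reduce pressure on the foot\n"); }
static void notify_care_team(void) { printf("ESCALATION: care team notified\n"); }

/* Call once per sample; tracks how long pressure has stayed above the limit. */
static void check_sample(double kpa, int *seconds_over, bool *escalated)
{
    if (kpa > PRESSURE_LIMIT_KPA) {
        *seconds_over += SAMPLE_PERIOD_S;
        alert_patient();
        if (!*escalated && *seconds_over >= ESCALATE_AFTER_S) {
            notify_care_team();
            *escalated = true;
        }
    } else {
        *seconds_over = 0;          /* pressure relieved, reset the timer */
        *escalated = false;
    }
}

int main(void)
{
    /* Simulated readings standing in for the textile sensor stream. */
    double samples[] = { 12.0, 35.5, 38.2, 36.0, 11.0, 40.1 };
    int over = 0;
    bool escalated = false;
    for (unsigned i = 0; i < sizeof samples / sizeof samples[0]; i++)
        check_sample(samples[i], &over, &escalated);
    return 0;
}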

There are potentially three scenarios in which this can be clinically useful: acute care, secondary prevention and primary prevention. In the acute care scenario, a patient is diagnosed with a DFU and admitted to the hospital, or comes to an orthopedic surgeon for surgery, at which point offloading is absolutely necessary. The patient cannot bear weight on that foot following the procedure or the wound will continue to worsen.

Now, imagine a secondary prevention scenario, which is predicated on the fact that once a patient has had a DFU, the likelihood of re-ulceration is extremely high. Once the ulcer heals after extensive treatment, the patient has the power to monitor a foot that was previously insensate. They can continue to wear the secondary prevention sock to monitor pressures in real time and stay within a safe threshold.

The largest market opportunity most likely exists within the primary prevention scenario. The goal is to prevent DFU entirely by having the patient placed on a continuous foot monitoring system as soon as the physician diagnoses a diabetic patient with early-stage peripheral neuropathy.

In addition to the system, we envision a clinician dashboard used to calibrate the device for the patient and establish a monitoring system for the provider. The dashboard could use a traffic light scenario where the patients who are at the highest risk are coded in red and elevated to the top of the list so that a medical assistant can reach out to those patients when they arrive at the clinic in the morning. The patients that may be outside of acceptable parameters for a short period of time are marked yellow and the ones in the safe range are marked green.
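
A minimal sketch of that triage logic, with made-up cutoffs for how many minutes per day a patient spends outside the safe pressure range, might look like this.

/* Sketch of the traffic-light triage described above.  The cutoffs,
 * expressed as minutes per day spent outside the safe pressure range,
 * are made-up illustrative values, not clinical guidance. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    const char *patient;
    int minutes_out_of_range;   /* over the past day */
} record_t;

static const char *triage(int minutes)
{
    if (minutes >= 60) return "red";     /* call first thing in the morning */
    if (minutes >= 10) return "yellow";  /* watch closely */
    return "green";                      /* within the safe range */
}

static int by_risk_desc(const void *a, const void *b)
{
    const record_t *ra = a, *rb = b;
    return rb->minutes_out_of_range - ra->minutes_out_of_range;
}

int main(void)
{
    record_t day[] = {
        { "patient A",   5 },
        { "patient B",  45 },
        { "patient C", 180 },
    };
    int n = sizeof day / sizeof day[0];

    /* Highest-risk patients float to the top of the dashboard list. */
    qsort(day, n, sizeof day[0], by_risk_desc);

    for (int i = 0; i < n; i++)
        printf("%-10s %3d min out of range -> %s\n",
               day[i].patient, day[i].minutes_out_of_range,
               triage(day[i].minutes_out_of_range));
    return 0;
}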

Primary and secondary prevention require a regular sock that the diabetic patient can wear. The DFU prevention sock is as soft and as comfortable as a normal sock; the thin textile sensors are woven into the sock itself, so there are no bulges causing additional pressure. The first iteration of the sock will focus only on monitoring pressure under the foot for neuropathic patients, who wouldn't otherwise know if something inside the shoe was causing ulceration or excessive pressure; with the prevention smart sock, the patient will be alerted and can act accordingly. In future iterations, we envision the ability to monitor activity, balance and gait, and to prevent falls and alert caregivers when one occurs, by combining the accelerometer in the anklet with a reading of no pressure under the foot.

So, in totality, the wearable device, the smartphone app and the cloud-based dashboard create a robust, all-inclusive solution for the patient.

A similar approach may be applied to garments focused on prevention of decubitus ulcers. Decubitus ulcers are pressure ulcers that form on patients who are immobilized in a wheelchair or in bed; the parts of the body that protrude the most receive the most pressure and thereby form ulcers that require a great deal of treatment and produce a great deal of morbidity. Sadly, most of these occur under the watch of a hospital or clinic. In terms of market size, there are 1.8 million nursing home beds in the U.S., 870,000 acute care beds and 70,000 ICU beds, all of which would be an appropriate fit for real-time monitoring of pressure levels to prevent ulceration. In addition, there are approximately 1 million pressure ulcers occurring annually, which equates to $6 billion per year in direct cost for their treatment.

Today hospitals use the "turning schedule clock" method to assure that bed-bound patients are turned every two hours, with a new clock initiated every twelve hours. Currently, this system requires a nurse to come in and reposition the patient who is at risk for ulceration due to immobility, which is a highly inefficient way of triaging human resources. In the future, a continuous bed monitor will hopefully be able to measure pressure on the bed sheet and determine whether the patient is at a level that puts them at risk for pressure ulceration. This would allow nurses to recognize when a patient hits a critical threshold, and for how long it has been exceeded, so that staff can be directed to turn the patients who need it and conserve resources for other purposes when ulceration is not a risk. The potential for that market is enormous based on the number of beds, the morbidity involved and the cost savings a hospital could realize by preventing ulceration.

These are just two of the healthcare scenarios where we see enormous potential in smart garments and textile pressure sensors, given the vast number of people impacted as well as the potential for substantial cost savings.



March 29, 2017  3:27 PM

Industry and academia need to work together to close the skills gap

Leah Jewell
Academia, Career Development, Certifications, degrees, Internet of Things, iot, Skills, Staffing, talent, Training

The technology skills gap is a thorny problem to solve. People might not have the technical skills for current jobs or those that are growing in demand, like those around the internet of things. On the other hand, personal and social skills are desperately needed across the board, but it’s challenging to train and assess people around these capabilities.

A couple of trends complicate the efforts to address the skills gap. First, the rapid evolution of technology is outpacing the speed at which traditional academic institutions are able to deliver new training to the market. Second, there’s a disconnect between industry and academia around what technology and “soft” skills are needed and how best to deliver them.

Industry and academic institutions need to work together. It’s not a new idea, but it’s one that’s at the center of more policy and funding conversations today. Creating deeper and more consistent engagement between academic institutions and employers will go a long way toward ending the skills gap.

Collaboration needed

Getting to that point, however, is going to take a lot of work. I recently heard an executive from a large employer ask a representative from a university, “I don’t understand why the people you’re training and delivering to us aren’t equipped to do the jobs we need.” The university representative replied, “I think the better question is ‘why don’t we know what you need?'”

That hits it on the head. It’s not a lack of motivation for schools or employers. The conversation just isn’t happening consistently across the board. There are challenges even when there is great collaboration. Sometimes an academic-industry partnership gets off the ground, and then the priorities of the business change or leaders leave. Like any initiative, the “project” gets reevaluated and reprioritized, sometimes leaving the academic institution holding the bag for something they can’t support on their own.

Another challenge is time. Many talent pipeline initiatives start in elementary school and focus on exposure and excitement around different fields in technology. Such initiatives take a long time to pay back for an employer and are hard to sustain year over year. This is a particularly tough position to be in for companies who need IoT talent, oh, yesterday.

Fortunately, there are examples of successful academic-industry collaboration and the conversations with employers are becoming more creative and formalized. Success often begins with companies identifying the specific skills they need for specific job roles they need to fill. Simply saying “we need better-trained people” is too vague. Even if the roles are likely to change in the future, companies can map out a list of skills needed to grow and evolve within a role or company.

Creative look at degrees

Getting the conversation started is one important step. Given the rapidly changing nature of technical skills, academia needs to be more agile and innovative around program creation and employers need to demonstrate flexibility in the type of skill validation they require for employment.

Even today, many employers use a bachelor's degree as the standard yardstick for entry-level employment. However, the technology changes faster than a school can get a course approved in the catalogue (which can take years). In order to address this challenge, academic institutions and employers have several options at their disposal:

  • Build more IoT-type programs at the associate degree level. This requires industry collaboration on the curriculum and the willingness of employers to hire people with an applied associate degree. The benefit is an expanded pool of qualified candidates with a faster initial turnaround of skilled talent.
  • Offer non-degree upskilling and reskilling courses for specific areas of need. These courses can be offered to people going through a degree program, who can use them to supplement their degree coursework, or to provide training to people already on the job. Schools are able to spin up non-degree courses faster than degree courses because they don't have to go through the catalogue/academic approval process, and with industry input these courses are more likely to be current and relevant to today's workforce needs. Non-degree courses could also lead to an industry certification.
  • With employer involvement, industry certifications can be developed that give employers confidence that particular skills have been acquired. A certification or badge enables a person to share and broadcast their specific skills to potential employers, and it validates that people have the skills they say they have, which is critical for employers. Employer involvement is needed in mapping job skills for the development of the certification, and employers would need to put their money where their mouth is and hire people who have the applicable certifications for their needs.

Looking ahead

Developing transparency around mapping the specific skills to specific job roles, and then designing and delivering innovative training programs, certifications and degrees around those topics will be critical for closing the IoT skills gap. Employers and academia can work together to provide learners with the skills they need to obtain or advance within a job and connect them to employers who are looking for those specific skills.



March 29, 2017  11:40 AM

Move your business in the right direction with geolocation mapping

Josh Marinacci
Beacon, geocode, geolocation, gps, Internet of Things, iot, IoT applications, map, Mapping, maps

The concept of mapping has undergone two great revolutions over the past 50 years.

First, maps evolved from paper to digital. We gained the ability to query for directions from point A to point B, zoom in and out, and view live traffic reports to calculate estimated times of arrival.

More recently, maps shifted from interactive objects to fully immersive environments. Just about every corner of the globe has been intricately recreated in mapping software, allowing us to track users and devices in real time and watch them dynamically traverse the planet.

Geolocation mapping might seem like a novelty at first, but it enables us to use maps in revolutionary new ways. A map is no longer something we reference — it’s something we occupy, update and organize based on our needs. In fact, 74% of smartphone users rely on geolocation mapping for directions.

Users have a lot to gain from dynamic mapping, but businesses are positioned to reap the real rewards. Geolocation mapping is an integral component of internet of things applications. Tracking user location allows IoT products to automate more of their functions and deliver seamless overall user experiences. Your furnace or garage door opener could monitor your location at all times, kicking on to welcome you back via geohashing functionality as you near your home.
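
A device does not need full mapping software to react to proximity; a short great-circle distance check against stored home coordinates is often enough. In the sketch below, the coordinates and the 500-meter radius are placeholders.

/* Proximity trigger sketch: great-circle (haversine) distance against a
 * stored home location.  Coordinates and the radius are placeholders.
 * Build with:  cc geofence.c -o geofence -lm
 */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define EARTH_RADIUS_M 6371000.0

static double haversine_m(double lat1, double lon1, double lat2, double lon2)
{
    double rad  = M_PI / 180.0;
    double dlat = (lat2 - lat1) * rad;
    double dlon = (lon2 - lon1) * rad;
    double a = sin(dlat / 2) * sin(dlat / 2) +
               cos(lat1 * rad) * cos(lat2 * rad) * sin(dlon / 2) * sin(dlon / 2);
    return 2.0 * EARTH_RADIUS_M * asin(sqrt(a));
}

int main(void)
{
    double home_lat = 45.5231, home_lon = -122.6765;   /* placeholder: home    */
    double user_lat = 45.5260, user_lon = -122.6800;   /* latest phone GPS fix */

    double d = haversine_m(home_lat, home_lon, user_lat, user_lon);
    if (d < 500.0)                                     /* 500 m "welcome home" radius */
        printf("%.0f m from home: start warming the house\n", d);
    else
        printf("%.0f m from home: stay in standby\n", d);
    return 0;
}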

Essentially, geolocation mapping makes it possible to leverage the insights and resources of a crowd to improve any physical environment. Whether it’s point-by-point driving directions or interactive layouts of shopping centers, geolocation mapping is powering numerous IoT technologies.

How geolocation mapping changes IoT

Consumers today demand instant gratification, and geolocation mapping helps businesses meet that expectation. As IoT applications have become less expensive and easier to implement, they have shifted into prominent roles in a growing number of commercial environments. The technology enables everyone — from retailers to tourist hubs to energy providers — to improve their customer relations.

Geolocation mapping delivers two crucial pieces of information: the location of the user and the state of the surrounding environment, based on data collected from a dynamic map. Those morsels of information put businesses in a position to connect users with the information or experiences they want most.

That could mean simply providing directions between locations. But it could also mean offering coupons when someone stands in front of a product, or it could access location data to inform shoppers of complementary products or services through hyperlocalized beacon functionality. The business itself can harness geolocation data to get a better understanding of customers and their needs, making data-driven changes as a result.

Geolocation mapping will become increasingly important to IoT. Take connected cars, for instance. While we currently only track the locations of vehicles, what if we connected cars with other items? What if a parking meter could ping your car to let you know a spot was available around the block? We’ll see connected devices exchanging data more in the future, enabling users to capitalize on the technology surrounding us.

In countless exciting ways, geolocation mapping empowers businesses to satisfy more needs and wants with less time, hassle and cost.

Charting your own geolocation mapping strategy

The technologies at the heart of geolocation mapping are strong and improving rapidly. The necessary bandwidth is shrinking, costs are dropping and advanced mapping is becoming simpler.

Geolocation mapping is a tool within reach of most businesses, regardless of technical fluency or budget. But simply introducing mapping capabilities isn’t a solution; businesses must leverage the right aspects in the right ways to truly benefit:

  1. Harness real-time data streams. The defining feature of immersive maps is the level of dynamism, not the detail. These maps are constantly evolving to reflect real-time conditions, like location of consumers, weather conditions or number of cars in a parking lot. This data creates tremendous potential for automation and analytics, highlighting the true value of geolocation mapping.
  2. Leverage the most valuable data. The amount of data constantly produced by geolocation mapping is a blessing and a curse. To leverage the value of real-time data streams, businesses must focus on the data that’s relevant to their operations and filter out everything else. Users are 75% more likely to take action after receiving location-specific messages. Delivering the right message at the right time requires a careful focus on specific information coming out of the data stream.
  3. Don’t reinvent the wheel. Geolocation mapping offers tremendous potential, but developing the maps is a massive undertaking. While you focus on extending the functionality of dynamic mapping, services like Mapbox or Esri can handle the heavy lifting of generating maps. You could certainly build your own maps, but why waste time and effort on that when you can benefit from someone else’s hard work? Many of these services also offer features to take your maps to the next level, including geohashing and geocoding.

Location might be everything in real estate, but geolocation mapping lets businesses capitalize on their addresses. Sixty-nine percent of Google searches include a specific location, underscoring the importance of location data to the effectiveness of marketing. The mapping capabilities of tomorrow will allow businesses to guide motivated customers from wherever they might be to the products or services they want to buy. A little upfront investment can fund your expedition into the exciting world of geolocation mapping, giving you access to one of the most revolutionary sales tools in years.



March 28, 2017  2:35 PM

How to be an IoT developer in the modern age

Jonathan Fries
Consumer IoT, Developers, Enterprise IoT, Internet of Things, iot, IoT applications, IoT devices, Software development

The proliferation of connected devices over the past several years has been astonishing. From everyday items like wearables and thermostats to grander devices like solar panels and street tiles, it seems as if there are few things that are not connected in some way. As we look toward the future to fully connected cities, hospitals and homes, the role of the IoT developer is becoming ever more crucial.

Developing all types of software and devices is important, but arguably none is more critical than developing connected devices. If your Fitbit glitches, that is one story, but it is an entirely different one if a connected medical device fails in the middle of surgery or a self-driving car goes haywire in the middle of rush hour. Because of the critical role that IoT devices are playing in our lives, poorly developed IoT devices can be life-threatening, making this type of development particularly demanding.

While a developer of any kind needs to have a certain level of skill and passion to be successful, being an IoT developer comes with its own set of challenges and requirements. In the coming years, leading-edge IoT projects will begin to surface as a primary driver of the industry. Below are five qualities and skills developers must have to be a successful IoT developer in this fast-paced age of “connected everything”:

  1. Curiosity about hardware. Chances are, as an IoT developer you’ll either be writing firmware, writing services that work with hardware, or perhaps testing your code with interesting boards (they may have strange wires soldered onto/hanging off the sides of them). If this makes you think “Yuck! I want to keep writing my clean code for the web/iOS/etc.,” then being an IoT developer is not for you.
  2. Willingness to consider new tools. Emerging service offerings from cloud companies (e.g., Amazon and Microsoft) may be based on existing tools, but they offer new features and out-of-the-box power. You can't rest on your laurels in a field that has this much going on.
  3. Ability to prototype. Do you know what a Raspberry Pi or Arduino is? As an IoT developer you will probably find out (see number 1). Being an IoT developer isn’t like programming for a server or off-the-shelf mobile device. If you are waiting for production hardware to be completed or designed, what do you do? Answer: prototype hardware with one of the commercially available prototyping platforms.
  4. General fearlessness about low-level computing concepts and tools. At some level, you're going to need to think about bytes of data, inspect logs from a command line, look at network traffic, or do something else that is conveniently tucked away in many "modern" programming languages and development platforms (a short byte-parsing example follows this list).
  5. Screwdriver ownership. Perhaps you’ve heard the old adage “Beware of programmers who carry screwdrivers.” If you are one of the people that we’ve all been warned about (you know who you are), and you say to yourself, “In spite of my years in software, I’m eminently qualified in the use of this screwdriver,” then IoT development just might be for you.
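
As a small illustration of the low-level mindset in point 4, the sketch below decodes a hypothetical five-byte sensor frame by hand, checking a sync byte and a checksum and assembling a little-endian temperature value. The frame layout is invented for the example.

/* Illustration for point 4: decoding a small binary sensor frame by hand.
 * The frame layout is hypothetical: [0xA5][id][temp lo][temp hi][checksum],
 * temperature in tenths of a degree, little-endian, checksum = XOR of bytes 0-3. */
#include <stdint.h>
#include <stdio.h>

static int parse_frame(const uint8_t *frame, unsigned len,
                       uint8_t *sensor_id, double *temp_c)
{
    if (len < 5 || frame[0] != 0xA5)
        return -1;                                       /* bad length or sync byte */

    uint8_t checksum = frame[0] ^ frame[1] ^ frame[2] ^ frame[3];
    if (checksum != frame[4])
        return -1;                                       /* corrupted frame */

    *sensor_id = frame[1];
    int16_t raw = (int16_t)(frame[2] | (frame[3] << 8)); /* little-endian */
    *temp_c = raw / 10.0;
    return 0;
}

int main(void)
{
    /* 0x00FB = 251 -> 25.1 C from sensor 7 */
    uint8_t frame[] = { 0xA5, 0x07, 0xFB, 0x00, 0xA5 ^ 0x07 ^ 0xFB ^ 0x00 };
    uint8_t id;
    double temp;

    if (parse_frame(frame, sizeof frame, &id, &temp) == 0)
        printf("sensor %d reports %.1f C\n", id, temp);
    else
        printf("frame rejected\n");
    return 0;
}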

Of course, these traits are somewhat generalized, but we are living in an age of unprecedented convergence between software and all kinds of hardware, and the developer has a crucial role in how the future of interconnected “things” plays out — thus, shaping the world we live in.



March 28, 2017  11:05 AM

Top five benefits of real-time business intelligence for utilities

Yatish Patil
BI, Business Intelligence, Data Analytics, Internet of Things, iot, IoT analytics, Real-time BI, Real-time data, utilities, Utility

Real-time business intelligence solutions are expected to be game changers for asset-intensive and field-force-driven enterprises such as utilities. These solutions are becoming pervasive as enterprises look for value in their data that had previously gone unseen.

Utilities deal with an enormous and increasingly diverse set of data coming in from various sources. However, somewhere in this data deluge lies the path to a more efficient tomorrow. Data analytics gives utilities the power to uncover significant events and identify trends in order to adapt quickly to ever-changing business dynamics.

From electricity distribution to demand response, water supply management to collecting meter readings, oil transmission to delivery, utility enterprises need flexible business processes to adopt intelligent capabilities in order to deliver vital services to their customers. Delivering the right information to the right people in the right format and at the right time is the significant aspect of real-time business intelligence (BI). It is the process of delivering information about business operations as they occur with minimum latency. Business intelligence solutions can help utilities make better decisions, support automation processes and help customers manage their utility lifestyles. This in turn can help optimize business decisions, improve operations and increase customer satisfaction.

Here is a breakdown of the five most obvious benefits of real-time business intelligence for utilities:

1. Optimizing asset performance

Identifying potential problems in field assets, grids, wells and utility distribution equipment in advance can help avoid unplanned service interruptions. Also, any delay in responding to equipment failures and maintenance issues can lead to operational inefficiency. This inefficiency can be reduced by monitoring the performance and health of assets to predict decay points or equipment failure. Hence, analyzing real-time machine data or operational data can help solve these challenges. Real-time insights into asset health, peak periods, supply and demand analysis, and abnormal conditions can help improve asset performance.
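
One simple building block behind this kind of monitoring is flagging readings that drift away from an asset's recent behavior. The sketch below uses a rolling mean and standard deviation over a short window; the readings, window size and threshold are illustrative only.

/* Illustrative anomaly flagging for an asset's sensor stream: a reading
 * is flagged when it falls more than K standard deviations away from the
 * mean of recent readings.  Window size and K are arbitrary choices.
 * Build with:  cc anomaly.c -o anomaly -lm
 */
#include <math.h>
#include <stdio.h>

#define WINDOW 8
#define K      3.0

int main(void)
{
    /* e.g. transformer temperature readings; the last one is abnormal */
    double readings[] = { 61, 62, 60, 63, 61, 62, 64, 62, 61, 63, 78 };
    int n = sizeof readings / sizeof readings[0];

    for (int i = WINDOW; i < n; i++) {
        double sum = 0.0, sq = 0.0;
        for (int j = i - WINDOW; j < i; j++) {   /* stats over the trailing window */
            sum += readings[j];
            sq  += readings[j] * readings[j];
        }
        double mean = sum / WINDOW;
        double var  = sq / WINDOW - mean * mean;
        double sd   = sqrt(var > 0 ? var : 0);

        if (sd > 0 && fabs(readings[i] - mean) > K * sd)
            printf("reading %d (%.1f) is abnormal vs. recent mean %.1f\n",
                   i, readings[i], mean);
    }
    return 0;
}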

Utilities need to adopt data analytics-driven business intelligence solutions that provide analyses via rich visuals and statistics. The capabilities provided by BI systems can help enhance the reliability, capacity and availability of the assets, which in turn improves asset performance. This can help ensure smoother operations at peak periods, eliminating downtime. It also allows utilities to schedule maintenance operations at regular intervals and receive notifications on asset health.

2. Enhancing customer experience

Intelligence from IoT assets, smart grids and SCADA systems, enriched with customer data, can provide critical insights into a customer's utility usage. Utilities can build systems of intelligence around consumption patterns to empower consumers with usage insights and influence their usage behavior. Segmentation at a micro level to develop a 360-degree view of each customer's usage behavior can help identify the best ways to optimize that usage. Utilities can thus provide cost-effective plans and a personalized experience, improving loyalty and customer lifetime value and reducing churn. They can now provide customers with self-service intelligence capabilities to view and manage their utility usage, consumption patterns, historical usage, billing, payments and abnormal usage conditions.

As a good instructive example, find out how a team of asset data analytics consultants implemented BI capabilities for a giant water utility company to open a channel for water conservation and to deliver differentiated customer service.

3. Improving operational efficiency

Real-time business intelligence systems give utility operators an interactive way to proactively view and analyze operational data. BI dashboards enable them to monitor everything from meter readings to uptime, minute by minute. Dashboards provide visibility into historical comparisons of asset performance, give access to real-time data and let operators react promptly to make better decisions in a timely fashion. They also help find opportunities to improve operational efficiency as and when an event occurs.

Dashboards also generate comprehensive performance reports to help utility operators develop an optimal course of action based on specific key performance indicators. They display the overall performance of the assets in real time and over periodic intervals: yearly, monthly, weekly, daily or hourly. These business intelligence capabilities help automate and optimize asset performance, reduce risk and operational costs, and enhance business responsiveness.

4. Dynamic forecasting and load management

Business intelligence acts as a modelling platform to determine patterns, spot trends and understand predictable behavior in order to ramp up overall efficiency. Real-time and accurate utility demand forecasting is crucial to successfully managing supply and demand operations in a timely fashion. Utilities have to leverage real-time or historical data of consumption, usage, supply and weather constraints to perform complex analysis. This would help them to identify key performance indicators and patterns for dependable forecasting of asset failures and performance. They can also determine correlation between grid performance and grid conditions for maintenance.
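
As a toy illustration of the forecasting building block, simple exponential smoothing over a short demand series looks like the following; the load values and smoothing factor are made up, and production forecasting would also model weather, calendar and seasonality.

/* Toy demand-forecasting sketch: simple exponential smoothing.
 * The hourly load values and the smoothing factor are illustrative only;
 * real forecasting would also account for weather, calendar and seasonality. */
#include <stdio.h>

int main(void)
{
    double load_mw[] = { 410, 405, 420, 450, 470, 465, 440, 430 };  /* past hours */
    int n = sizeof load_mw / sizeof load_mw[0];
    double alpha = 0.4;                 /* weight given to the newest observation */

    double level = load_mw[0];          /* initialize with the first observation */
    for (int i = 1; i < n; i++)
        level = alpha * load_mw[i] + (1.0 - alpha) * level;

    printf("forecast for the next hour: %.1f MW\n", level);
    return 0;
}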

Workflow improvements gained by integrating business intelligence solutions with existing automation processes can help avoid asset failures, thus resulting in asset longevity and higher uptime. Getting actionable insights, prescriptions and foresight is necessary for better decision making towards preventive maintenance, quality of service and outage management. The most efficient way to build systems of intelligence is to take advantage of an IoT and data analytics partner’s expertise to leverage Azure IoT and Cortana Intelligence Suite, which brings forth the powerful capabilities to transform data into intelligent actions.

5. Prevention of utility loss and fraud management

As enormous volumes of data streams from smart meters, grids and sensors become readily available, utilities can perform real-time and predictive analytics to gain critical insights. This can help operators continuously monitor the vital signs of utility distribution systems, reliably and quickly assess system integrity, and uncover hidden insights. It in turn helps them detect outages and faults and determine anomalies on supply-distribution lines, thereby optimizing utility distribution, minimizing utility loss and providing early detection of supply anomalies, theft, leaks and overconsumption.

The bottom line

The benefits of real-time business intelligence are numerous, tangible and evident. It allows utilities to make faster and smarter decisions, whether that means sending notifications to customers within minutes of an outage or responding to spikes in demand even before they happen, empowering utilities with facts and transforming the way they do business. With business intelligence solutions, utilities can optimize asset performance, create visibility and boost the operational efficiency that supports changing business processes. Utilities that infuse real-time intelligence into their business operations can secure a unique position in today's market.



March 27, 2017  2:49 PM

Preventing bot brawls in the connected home

David Moss
Bot, Bots, Internet of Things, iot, IoT applications, Protocols, smart home, standards

Recently we’ve seen an uptick in the number of bots online, which vary from chatbots for customer service to spambots on social media to content-editing bots in online communities. Though an ecosystem of bots is unfolding, our knowledge and understanding of how bots interact with one another is limited. Because bots don’t have emotions, you’d think their interactions would be relatively uneventful. However, they have the capacity to be quite social. This begs the question — what affects bot-to-bot interactions and how can developers design bots that have complex interactions without interference?

Wikipedia’s bots recently made news because instead of editing articles on the website, they were fighting silent, tiny battles where they contradicted one another — and nobody noticed. Researchers at the Oxford Internet Institute discovered that the bots spent years doing and undoing vandalism, flagging copyright violations and more. This presents a problem not just for Wikipedia, but for all software that uses bots. Understanding the impact of bot-to-bot interaction is crucial for providing dependable bot services.

What’s a bot?

So, let’s back up for a second. We know there are chatbots, spambots and editing bots that Wikipedia uses, but what really is a bot? Operating in the background of a user’s life, a bot enables microservices that integrate deep-learning algorithms and the benefits of artificial intelligence. Bots are like a small computer program that listens to the real-time data provided by your devices. By listening in, bots are trying to figure out how to effectively understand and use that data in order to learn, react and communicate with you.

The potential for bots extends far beyond simple messaging bots. They play a huge role in the next wave of consumer solutions because they eliminate the need for a screen. Intelligence is beginning to surround us in everyday objects that are connected to the cloud, like your coffee maker or thermostat, and it doesn't require a screen. We will know that we've reached peak bot potential when bots deliver services that dive deep into a person's life rather than mimic a simple screen conversation; the bot interface is a spoken conversation. While connected outcomes today are still driven by screen interactions, the success of Siri, Alexa and other voice services proves the future is closer than many can imagine.

Pushing IoT forward with bots

True innovation in IoT can’t happen until companies apply ambient computing to smart devices — this can happen through bots. Ambient computing transforms things into intelligent devices by proactively learning patterns and influencing outcomes for a specific set of people and devices. Bots take advantage of ambient computing by learning the habits of people and automatically adjusting devices to meet their needs.

Bots can enable services that manufacturers may once have thought impossible. Using ambient computing, bots assigned to home security can learn to automatically disarm your security system when you arrive home, or enable settings designed specifically for security while you are home. A bot designed for connected light bulbs can learn your behavior and automatically adjust lighting to give the impression you’re home when you’re away. Bots learn and understand your behavior and, in doing so, deliver the potential of ambient computing. Using bots, manufacturers can integrate additional useful features into their products, and by connecting their devices into a broader ecosystem of devices and services that other developers build on, they can make those products far more useful than they would be on their own.
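To make the ambient computing idea concrete, here is a deliberately simple sketch of a lighting bot that learns which hours the lights are usually on and replays that pattern while the home is in away mode. The LightingBot class and its threshold are hypothetical; a real product would use a richer model and an actual device API.

```typescript
// Hypothetical sketch of a lighting bot that "learns" household behavior:
// it records which hours the lights are usually on, then replays that
// pattern to simulate presence when the home is set to away mode.

class LightingBot {
  // Count of observed "lights on" events per hour of the day (0-23).
  private onCountsByHour: number[] = new Array(24).fill(0);
  private observations = 0;

  // Learn: record each time the user actually turns the lights on.
  recordLightsOn(date: Date): void {
    this.onCountsByHour[date.getHours()]++;
    this.observations++;
  }

  // Decide whether to turn the lights on right now while the home is in away mode.
  shouldSimulatePresence(now: Date): boolean {
    if (this.observations === 0) return false;
    // Light the house during hours that account for a meaningful share of usage.
    return this.onCountsByHour[now.getHours()] / this.observations > 0.05;
  }
}

// Usage: feed the bot historical events, then poll it while the user is away.
const lightingBot = new LightingBot();
lightingBot.recordLightsOn(new Date("2017-03-01T19:30:00"));
lightingBot.recordLightsOn(new Date("2017-03-02T20:15:00"));
console.log(lightingBot.shouldSimulatePresence(new Date("2017-03-10T19:45:00"))); // likely true
```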

While bots may seem very useful, they are not without flaws. Bots can disagree with one another, and those disagreements are difficult to measure and can result in unpredictable, inefficient behavior. Without a shared language, rules and coordination, there is no way to ensure that bots can do their jobs.

What needs to happen with bots now?

To prevent the kinds of problems we saw with Wikipedia, IoT companies need to create technology that enables bots to talk privately with each other, without exposing information about the user to the outside world. This would keep them from fighting and let them spend more time communicating, coordinating and reasoning on behalf of the user in order to learn how to run the user’s home.

It’s not enough for a developer to create a bot that simply acts on a user’s behalf; bots need to be able to reason on behalf of the user. We’re already seeing bots communicate with one another, for example Clara, a virtual assistant bot that schedules meetings by using natural language to talk directly with other people’s bots. Today, in the world of artificial intelligence and IoT, People Power’s technology enables multiple virtual assistants that learn how you want to run the home. So it’s crucial to determine how bots communicate: is it natural language or machine language? Just as many IoT players are trying to get devices to communicate with each other today, we need an AllJoyn or an Open Interconnect Consortium for bots to talk to each other. This area is ripe for research and exploration, and IoT companies must get ahead of it before bot communication protocols experience the same fragmentation as IoT device protocols.

Companies should use bots that are under the control of the end user, unlike Wikipedia’s bots, which were not under any kind of central control. In the IoT world, the end user can choose whether to have a bot in their account and, at a more granular level, whether to give a bot permission to access a particular device. In addition, bots should be able to communicate with each other and coordinate activities by flocking together. That will be a huge step forward: one bot may discover you have a TV connected to a smart plug, while a different bot recognizes that you’ve gone to bed, and the two can work together to make sure the TV is turned off.
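Here is a hedged sketch of that kind of coordination: two bots exchanging machine-readable messages over a private, user-scoped bus, so nothing about the user leaves the home. The LocalBotBus, SleepBot and PlugBot names and the message format are hypothetical, chosen only to illustrate the pattern.

```typescript
// Two bots coordinating privately on the user's behalf. The bots exchange
// machine-readable intents on a local bus rather than exposing user data
// to the outside world. All names here are illustrative placeholders.

type BotMessage = { topic: string; payload: Record<string, string> };
type Subscriber = (msg: BotMessage) => void;

class LocalBotBus {
  private subscribers: Map<string, Subscriber[]> = new Map();

  publish(msg: BotMessage): void {
    (this.subscribers.get(msg.topic) ?? []).forEach(fn => fn(msg));
  }

  subscribe(topic: string, fn: Subscriber): void {
    const list = this.subscribers.get(topic) ?? [];
    list.push(fn);
    this.subscribers.set(topic, list);
  }
}

// A bot that detects bedtime and announces it on the private bus.
class SleepBot {
  constructor(private bus: LocalBotBus) {}
  onBedtimeDetected(): void {
    this.bus.publish({ topic: "presence", payload: { state: "asleep" } });
  }
}

// A bot that controls the smart plug behind the TV and listens for presence changes.
class PlugBot {
  constructor(bus: LocalBotBus) {
    bus.subscribe("presence", msg => {
      if (msg.payload.state === "asleep") this.turnOffTv();
    });
  }
  private turnOffTv(): void {
    console.log("Turning off the TV's smart plug");
  }
}

// Wiring the two bots together for a single user's home.
const bus = new LocalBotBus();
new PlugBot(bus);
new SleepBot(bus).onBedtimeDetected();
```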

To avoid future bot brawls like the ones we saw with Wikipedia’s bots, companies need not only to understand the value of bots, but also to grasp how bots can communicate with one another to provide the most effective, predictable and personalized experience for all users.



March 24, 2017  2:23 PM

A new guide for the IIoT connectivity space

Stan Schneider Profile: Stan Schneider
AS/400 connectivity, Connectivity, IIoT, Industrial IoT, Internet of Things, iot, IoT applications, standards

The industrial internet of things will combine intelligence and interconnection to revolutionize nearly every industry, from healthcare to transportation to power to factories. IIoT will be a much bigger network with bigger value than today’s enterprise-focused internet. The analysts all agree it will have a multi-trillion-dollar economic impact, as billions of devices come online.

However, that impact is largely yet to be felt. Interoperability is widely acknowledged as the key issue holding back IIoT. Unlike today’s internet, IIoT spans an amazing diversity of applications across dozens of industries, and that diversity cannot be easily addressed by a single connectivity technology. IIoT must therefore combine multiple standards and approaches addressing very different use cases. Removing this blocker has the potential to ignite much faster adoption.

After nearly three years of analysis and debate, the largest IoT consortium published a landmark document last month. The Industrial Internet Consortium (IIC) Industrial Internet Connectivity Framework (IICF) is by far the most detailed and profound work yet published on IIoT connectivity. It rolls up insights from, and negotiations among, a wide range of industries, consortia and standards bodies, and its design offers deep insights into architecture, standards analysis and use case analysis.

The IICF architecture will create IIoT connectivity from a small number of “core connectivity standards.” These standards address different regions of the connectivity space. The industry must build standardized “core gateways” between standards. This architecture merges “best fit” industrial application technologies with a design for eventual internet-scale integration.
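To make the core-gateway idea concrete, the sketch below bridges two connectivity adapters that share a common reading format and forwards data in both directions. The Reading type, ConnectivityAdapter interface and InMemoryAdapter are hypothetical placeholders, not the IICF’s normative design and not the API of any real DDS or OPC UA stack.

```typescript
// Hypothetical sketch of a "core gateway" bridging two core connectivity
// standards. Each standard is represented by an adapter that speaks a common
// internal Reading type; the gateway simply forwards readings both ways.

interface Reading {
  sensorId: string;
  value: number;
  unit: string;
  timestamp: number;
}

interface ConnectivityAdapter {
  publish(reading: Reading): void;
  onReading(handler: (reading: Reading) => void): void;
}

// A trivial in-memory adapter standing in for a real protocol binding.
class InMemoryAdapter implements ConnectivityAdapter {
  private handlers: ((r: Reading) => void)[] = [];
  constructor(private name: string) {}

  publish(reading: Reading): void {
    console.log(`[${this.name}] delivering reading from ${reading.sensorId}`);
  }
  onReading(handler: (r: Reading) => void): void {
    this.handlers.push(handler);
  }
  // Simulates a reading arriving from this adapter's native network.
  inject(reading: Reading): void {
    this.handlers.forEach(h => h(reading));
  }
}

// The core gateway: forwards readings between two adapters in both directions.
class CoreGateway {
  constructor(a: ConnectivityAdapter, b: ConnectivityAdapter) {
    a.onReading(r => b.publish(r));
    b.onReading(r => a.publish(r));
  }
}

// Wiring two standards together; a reading injected on one side appears on the other.
const standardA = new InMemoryAdapter("standard-A");
const standardB = new InMemoryAdapter("standard-B");
new CoreGateway(standardA, standardB);
standardA.inject({ sensorId: "pump-7", value: 3.2, unit: "bar", timestamp: Date.now() });
```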

Figure: Assessment and placement of relevant IIoT connectivity standards on the IIoT connectivity stack.

The IICF also deeply analyzes the eight main technologies used in the industry and culminates in practical selection guidance.

This guide to the IIoT connectivity and interoperability challenge could unleash the value promised by IIoT. Its combination of immediate practical execution with long-term integrated vision offers a clear path to the future.



March 23, 2017  3:22 PM

IoT: Not as new as you think

Phil Quade Profile: Phil Quade
Enterprise IoT, ICS, IIoT, Industrial IoT, Internet of Things, iot, iot security, SCADA

IoT is a sexy topic these days. It’s hard to open a magazine or blog without seeing statistics projecting there will soon be more IoT devices online than there are teenagers on ClickChat. Like the growth of mobility and smartphones before it, IoT is a phenomenon that merits attention. But this time it’s different: IoT networks and devices play a crucial role in our global transition to a digital economy, and organizations that fail to adopt a digital business model may not survive. That is also why we need to give credit to those who pioneered the use of IoT-like technologies, not just over the past few years, but over the past couple of decades.

I’m talking about the technologists working in our critical infrastructures, who have successfully relied on lightweight sensors and analytics to measure the availability and resiliency of infrastructures underpinned by industrial control systems and supervisory control and data acquisition (ICS/SCADA) systems. As these technologies become more mainstream, it is important that we look at both the lessons learned from the use of “industrial internet of things” sensors and their security shortfalls (many ICS/SCADA systems were optimized for availability rather than other security services) as we expand the development and deployment of IoT solutions.

Sometimes it’s helpful to characterize IoT with greater precision. I like to place IoT devices in three categories. First, consumer IoT, such as smart TVs and watches and connected appliances or home security systems, is something that nearly everyone is familiar with and benefits from. The other two categories, commercial IoT and industrial IoT, are made up of things many of us never see. Yet we depend on them every day to provide essential resources and services. Commercial IoT includes things like inventory controls, device trackers and connected medical devices, and industrial IoT covers such things as connected electric meters, water flow gauges, pipeline monitors, manufacturing robots and other types of connected industrial controls.

Traditionally, commercial and industrial networks and their IoT devices ran in isolation. But with the mainstreaming of things like smart cities and connected homes, they now need to coexist within local, national and global infrastructures, creating hyperconnected environments of transportation systems, water, energy, emergency systems and communications. Medical devices, refineries, agriculture, manufacturing floors, government agencies and smart cities all use commercial and industrial IoT devices to automatically track, monitor, coordinate and respond to events and manage critical resources.

As a result, public-facing IT (information technology) networks and traditionally isolated OT (operations technology) networks are starting to be linked together. For example, data collected from IoT devices that is processed and analyzed in IT data centers is increasingly used to influence real-time changes on a manufacturing floor or deliver critical services, such as clearing traffic in a congested city in order to respond to a civil emergency.

The security implications are profound. Because of the hyperconnected nature of many systems, untrustworthy IoT behavior could be catastrophic. OT, ICS and SCADA systems control physical systems, not just bits and bytes, and even slight tampering can have far-reaching, potentially devastating effects. Compromising critical systems connected to individuals and communities, such as transportation systems, water treatment facilities or medical infusion pumps and monitors, could even lead to injury or death.

Unfortunately, many IoT devices were never designed with security in mind. Their challenges include weak authentication and authorization protocols, insecure software and firmware, poorly designed connectivity and communications, and little to no security configurability. Many are “headless,” which means that they cannot have security installed on them or even be easily patched or updated.

And because IoT devices are being deployed everywhere, securing them demands visibility and control across all ecosystems. This requires many organizations, for the first time, to tie together what is happening across their IT, OT and IoT networks, on remote devices and across their public and private cloud networks, with a unified set of security policies and protocols. Integrating distinct security tools into a coherent system enables organizations to collect and correlate threat intelligence in real time, identify abnormal behavior and automatically orchestrate a response anywhere along an attack path. But it isn’t easy: many of these systems were never designed to work together, and what may be an acceptable risk in one environment may be catastrophic in another.

To accomplish this, enterprises need to implement three strategic network security capabilities (a minimal policy sketch follows the list):

  1. Learn — Organizations need to understand the capabilities and limitations of each device and network ecosystem they are tying together. To do this, security solutions require complete network visibility to securely authenticate and classify IoT devices. OT and ICS/SCADA networks and devices demand particular care since, in some cases, even simply scanning them can have a negative effect. So it is essential that organizations enable safe, real-time discovery and classification of devices, allowing the network to build risk profiles and automatically assign IoT devices to device groups with appropriate policies.
  2. Segment — Once an organization has established complete visibility and centralized management, it can begin to establish controls to protect the expanding IoT attack surface. An essential component of those controls involves the intelligent and, where possible, automated segmenting of IoT devices and communications solutions into secured network zones protected by enforced policies. This allows the network to automatically grant and enforce baseline privileges for each IoT device risk profile, enabling the critical distribution and collection of data without compromising the integrity of critical systems.
  3. Protect — Combining policy-designated IoT groups with intelligent internal network segmentation enables multilayered monitoring, inspection and enforcement of device policies based on activity anywhere across the distributed enterprise infrastructure. But segmentation alone can lead to fractured visibility. Each group and network segment needs to be linked together into a holistic security framework. This integrated approach enables the centralized correlation of intelligence between different network and security devices and segments, followed by the automatic application of advanced security functions to IIoT devices and traffic located anywhere across the network — especially at access points, cross-segment network traffic locations and in the cloud.
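As a concrete illustration of the Learn and Segment steps above, here is a minimal sketch of device classification and group-based segmentation policy. The device attributes, group names and policy fields are hypothetical and not tied to any particular vendor’s product; a real deployment would derive them from its discovery and risk-profiling tools.

```typescript
// Hypothetical sketch: classify discovered IoT devices into groups, assign
// each group to a network segment, and attach a baseline policy.

interface DiscoveredDevice {
  mac: string;
  vendor: string;
  protocolHints: string[];       // e.g., ["modbus"], ["mqtt", "http"]
}

interface SegmentPolicy {
  segment: string;               // zone or VLAN the group is confined to
  allowedDestinations: string[];
  activeScanningAllowed: boolean; // OT/ICS devices often cannot be scanned safely
}

const groupPolicies: Record<string, SegmentPolicy> = {
  "ics-scada": {
    segment: "ot-zone",
    allowedDestinations: ["historian.internal"],
    activeScanningAllowed: false, // passive discovery only
  },
  "building-iot": {
    segment: "iot-zone",
    allowedDestinations: ["iot-gateway.internal"],
    activeScanningAllowed: true,
  },
};

// Learn: a simple classifier mapping a discovered device to a group.
function classify(device: DiscoveredDevice): string {
  return device.protocolHints.includes("modbus") ? "ics-scada" : "building-iot";
}

// Segment: look up the baseline policy the network should enforce for a device.
function policyFor(device: DiscoveredDevice): SegmentPolicy {
  return groupPolicies[classify(device)];
}

// Example: a PLC discovered speaking Modbus lands in the OT zone and is never actively scanned.
const plc: DiscoveredDevice = { mac: "00:11:22:33:44:55", vendor: "AcmePLC", protocolHints: ["modbus"] };
console.log(policyFor(plc));
```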

Finally, it is essential that IoT not be treated as an isolated or independent component of a business. IoT devices and data interact across and with the extended network, including endpoint devices, cloud, and traditional and virtual IT and OT. Isolated IoT security strategies simply increase overhead while reducing broad visibility. To adequately protect IoT, including IIoT, organizations require more than just security point products or platforms. They need an integrated and automated security architecture.

An integrated security framework is able to tie together and orchestrate the disparate security elements that span your networked ecosystems. Such an approach expands and ensures resilience, secures distributed compute resources, including routing and network optimization, and allows for the synchronization and correlation of intelligence for effective, automated threat response. It also ensures that you are securely connecting known IoT devices, along with their associated risk profiles, to appropriate network segments or cloud environments. This enables the effective monitoring of legitimate traffic and the checking of authentication and credentials, while imposing access management across the distributed environment.

But back to the pioneers of this work. We have a lot to learn from the ICS/SCADA professionals who, for decades, have been protecting our critical infrastructures. Using a variety of protocols, hardware, analytics and SIEMs, these OT technologists have accumulated wisdom that ought to be tapped, rather than relearned the hard way. It is essential that organizations grow and educate their workforce of IT and OT professionals, enabling them to be better prepared, not just to secure their IIoT domain but the increasingly interconnected critical infrastructure domain as well.


