IoT Agenda


November 21, 2017  10:02 AM

How the construction industry is using IoT and sensor technology

Zachary Rudzki Profile: Zachary Rudzki
AR, augmented reality, Construction, Internet of Things, iot, IoT analytics, IoT data, IoT devices, IoT sensors, Predictive maintenance, safety, Sensor data, Sensors, Virtual Reality

The internet of things and sensor-based technology can be used to create huge advantages on construction sites related to worker safety, cost reduction and predictive maintenance. This technology collects massive amounts of data and can provide different types of analysis: descriptive, relaying the current conditions of a specific piece of equipment or environment; predictive, to forecast the occurrence of potential malfunctions or safety risks; and prescriptive, to provide ways to optimize the workflow and avoid delays and errors.

Predictive maintenance

Equipment fitted with sensors can generate valuable data about elements of a construction project such as temperature, weight capacity, light and chemicals. This information can be used to influence decisions made regarding maintenance scheduling and overall safety of a construction site in terms of fire safety, worker capacity, energy use and general wear and tear.

[Image: IoT and sensor technology in construction. Source: Christopher Burns, Unsplash]

Managing the specialized machinery necessary for a project is often one of the most significant costs faced by firms in the construction industry, so maintaining these assets is vital to avoiding critical errors and expensive repairs or replacements. Timing maintenance to ensure that it can occur without affecting current projects and when it is necessary for the equipment to continue functioning optimally is a complicated balancing act.

Using sensor-based predictive and preventative maintenance technology enables operators to service a piece of equipment in the sweet spot: when necessary, but before it has broken down. This dramatically reduces the depth of the repair required and avoids delays in the project timeline.

This technology not only monitors the condition of the equipment, but can communicate its exact status to operators and alert them when maintenance is due. Sensors can also warn operators when conditions approach levels at which maintenance may become necessary, such as when an area is too hot for a specific type of equipment. Operators can then take steps to reduce the temperature or pause the activity until conditions cool down, avoiding any maintenance at all.
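
To make this concrete, here is a minimal sketch of threshold-based maintenance alerting. The sensor limits and service interval are illustrative assumptions, not values from any particular equipment vendor:

```python
# Minimal sketch of sensor-driven maintenance alerting. The limits
# below are hypothetical, not values from any equipment vendor.
MAX_SAFE_TEMP_C = 80.0     # above this, pause work until the area cools
WARN_TEMP_C = 70.0         # above this, maintenance risk is rising
MAX_RUNTIME_HOURS = 500.0  # assumed service interval

def evaluate(temp_c: float, runtime_hours: float) -> str:
    """Map current sensor readings to an operator-facing status."""
    if temp_c > MAX_SAFE_TEMP_C:
        return "PAUSE: area too hot for this equipment; resume when cooled"
    if runtime_hours >= MAX_RUNTIME_HOURS:
        return "MAINTENANCE DUE: service before the next shift"
    if temp_c > WARN_TEMP_C:
        return "WARNING: conditions approaching the maintenance threshold"
    return "OK"

print(evaluate(temp_c=72.5, runtime_hours=310.0))
# -> WARNING: conditions approaching the maintenance threshold
```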

Internet of things

Many of these sensors and maintenance mechanisms rely on IoT technology – “things” or equipment with network connectivity, enabling them to collect, exchange and communicate data. In addition to sensor-equipped appliances, jobsite employees can personally use IoT-enabled devices such as wearable accessories. Biometric wearables can monitor a worker’s heart rate, temperature and other vital signs, and alert safety managers if the worker is experiencing exhaustion or overheating.

Weight-bearing sensors can track workers in the field to ensure they are aware of jobsite hazards and injury risks. A team at MIT is working on a connected safety shoe with sensor-equipped soles designed to alert workers if they lift a load above the recommended weight, keeping the alert on until the weight has been reduced to a safe level.
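
The exact logic of the MIT shoe isn't public in this post, but the "stay alerted until the load is safe" behavior it describes can be sketched as a simple latched alert. The 23 kg weight limit and the 10% clearing margin below are assumptions for illustration:

```python
# Sketch of a latched overload alert. The 23 kg limit and the 10%
# clearing margin are assumptions, not figures from the MIT project.
class LoadAlert:
    def __init__(self, limit_kg: float = 23.0):
        self.limit_kg = limit_kg
        self.active = False

    def update(self, measured_kg: float) -> bool:
        if measured_kg > self.limit_kg:
            self.active = True                  # overload: raise the alert
        elif measured_kg <= 0.9 * self.limit_kg:
            self.active = False                 # safely under: clear it
        return self.active                      # near the limit: hold state

alert = LoadAlert()
for load_kg in [18.0, 26.5, 22.0, 19.5]:
    print(load_kg, "ALERT" if alert.update(load_kg) else "ok")
# 18.0 ok, 26.5 ALERT, 22.0 ALERT (still latched), 19.5 ok
```

The margin keeps the alert from flapping when a load hovers right at the threshold.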

Virtual and augmented reality

AR/VR technology gives project managers detailed insight into a project from end to end, letting them approach it knowing all the facts. Paired with applications like sensors, it allows the construction team to detect errors or necessary equipment maintenance before they can impact the project on a greater scale, which cuts repair and labor costs.

Using AR/VR in construction can drastically improve safety. In risky conditions, such as underwater or below ground, getting a full view of the field before entering and being made aware of potentially hazardous conditions and substances present on site is vital. In some cases, the technology also enables the ability to remotely operate robotic tools, allowing workers to achieve a high level of precision without risking their safety.

Drones are another increasingly accessible application of new technology, used to conduct real-time site surveys and track project progress. Aerial access and mapping capabilities, as well as 3D imaging, benefit worker safety and reduce survey time. The “cool factor” of drones also gives contractors a secondary benefit: a marketing tool.

The costs related to labor mistakes, accidents, equipment maintenance and fraud have significant effects on the bottom line of a project. Although construction is a notoriously slow-to-adopt industry in terms of widely applying cutting-edge technology, adopting these technologies can significantly reduce inefficiencies in all of these areas. More companies are responding to these innovations after understanding that shorter project timelines, lower maintenance costs and fewer accidents reduce costs, and in turn lower insurance premiums and overall liability.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

November 20, 2017  3:24 PM

The retail store of the future is here today

Ed Abrams Profile: Ed Abrams
consumer experience, Consumer IoT, Data Analytics, Enterprise IoT, Internet of Things, Inventory Management, iot, IoT analytics, retail, retailers

The adoption of enterprise IoT is accelerating rapidly. Companies across all sectors are moving from proofs of concept to full-scale deployments. Analysts project there will be 34 billion connected devices worldwide by 2020, and these devices are transforming every industry, from healthcare to hospitality to transportation — and especially retail.

IoT-enabled technologies, with their ability to bring digital practices and data collection into real-world physical scenarios, are starting to revolutionize traditional brick-and-mortar retailers. IoT-enabled sensors, for instance, which can collect data on consumer behavior within a store on the micro-level, are helping retail businesses of all sizes to enhance their operations, work more efficiently and better serve their customers. In the process, these products and services are creating the retail stores of the future, today.

So, what do these stores look like in practice?

When a customer walks through the door of his favorite IoT-enabled retail outlet, he’s entering a completely new kind of shopping experience. If a customer has the retail store’s app on his phone, the store’s sensors will recognize the device and enhance his shopping experience based on the information the customer has already shared with the retailer. For example, the store may recognize the customer’s wish list and then send a push notification telling the customer where he can find a particular item. And if he decides to purchase an item, the customer can receive personalized recommendations for additional products and services at checkout.

Meanwhile, the IoT-enabled sensors are also bringing the store to a new level of efficiency. When a customer walks into the store, an employee can receive that customer’s wish list on a tablet or wearable device. The employee can easily offer the customer directions, advice or other services. At the same time, IoT-enabled sensors in the back of the store are tracking inventory and providing employees with updates in real time. If a particular item sells out, a manager can receive a notification to resupply and meet customer demand.

As all of this happens, advanced data analytics are helping retailers get a sense of real-time consumer sentiment, which they can use to optimize the layout of the retail store. IoT-enabled cameras and real-time data on purchasing decisions can help individual stores better understand the behaviors of their customers. IoT-enabled sensors can create heat maps that show where customers are spending time while they browse. For example, if a particular table of sweaters is capturing the attention of customers, managers can move that table up toward the entrance to bring in new business.
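
As a rough sketch of how such a heat map could be built, the snippet below bins hypothetical position pings from in-store sensors into grid cells and surfaces the busiest ones. The coordinates and the 2-meter cell size are invented for illustration:

```python
from collections import Counter

# Hypothetical position pings (x, y) in meters from in-store sensors;
# in a real deployment these would stream from Wi-Fi/BLE positioning.
pings = [(1.2, 3.4), (1.4, 3.1), (6.8, 2.2), (6.5, 2.4), (6.7, 2.1), (3.3, 7.9)]

CELL_M = 2.0  # heat map resolution: 2 m x 2 m grid cells (an assumption)

heat = Counter((int(x // CELL_M), int(y // CELL_M)) for x, y in pings)

# The busiest cells, e.g., the sweater table worth moving to the entrance
for cell, count in heat.most_common(3):
    print(f"cell {cell}: {count} pings")
```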

IoT retail technologies let customers experience all the benefits of shopping online along with all of the benefits of shopping in-store. They can look at, compare and try on items — all while receiving personalized recommendations that make the shopping experience more convenient. The experience leaves the customer more satisfied and more delighted, which leads to higher levels of brand loyalty, more time spent in stores and more money spent on purchases.

While this may all sound like science fiction, this is happening now. From the moment customers walk into an IoT-enabled store to the time they walk out with a new purchase, customers can receive a more personalized and seamless retail experience without having to prompt an employee. At the same time, retail stores are managing their employees and their inventory more efficiently, while using advanced data analytics to understand consumer behavior and maximize their offerings. It all adds up to a retail environment that does more to help both the business and the customer.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


November 20, 2017  11:42 AM

The rise of IoT-based Hivenets

Derek Manky Profile: Derek Manky
Botnet, cybercriminals, Internet of Things, iot, iot security, security in IOT

Over the past few years, we have seen the development of predictive software using artificial intelligence techniques. The latest advances in these sorts of tools employ swarm technology to use massive databases of expert knowledge, comprised of billions of constantly updated bits of data, to make accurate predictions. Such systems can be used to offer advice, make medical diagnoses or increase trading profitability on the stock exchange. This sort of predictive analysis represents an entirely new paradigm for how computing resources will be used to transform our world.

So, what does this have to do with IoT? Over the past year we have seen the development and deployment of massive IoT-based botnets, such as Mirai or the currently emerging Reaper system, built around millions of compromised IoT devices. These weaponized botnets have been used as blunt force tools to knock out devices, networks or even huge segments of the internet.

Based on developments we are seeing in places like the dark web, we predict that cybercriminals will begin to upgrade IoT-based botnets with swarm-based technology to create more effective attacks. If you think about it, traditional botnets are mindless slaves — they wait for commands from the bot herder (master) in order to execute an attack. But what if these nodes were able to make autonomous decisions with minimal supervision, use their collective intelligence to solve problems, or simultaneously target multiple vulnerability points in a network using a variety of penetration and exploit techniques?

The result would be a Hivenet instead of a botnet. Such a tool can use peer-based self-learning to effectively target vulnerable systems at an unprecedented scale. Hivenets will be able to use swarms of compromised devices, or swarmbots, to simultaneously identify and tackle different attack vectors. Hivenets are especially dangerous because, unlike traditional botnet zombies, individual swarmbots are smart. They are able to talk to each other, take action based on shared local intelligence, use swarm intelligence to act on commands without the bot herder instructing them to do so, and recruit and train new members of the hive. As a result, as a Hivenet identifies and compromises more devices, it will be able to grow exponentially, widening its ability to simultaneously attack multiple victims.

While IoT-based attacks such as Mirai or Reaper are not using swarm technology yet, they already have the footprint necessary. Reaper is especially concerning because it uses a Lua engine with additional Lua scripts. Lua is an embeddable scripting language, which enables an attacker to switch from one attack to another fairly easily. Upgrading this sort of code to use emerging swarm behaviors and AI would have devastating consequences.

Responding to a swarm outbreak

There is currently very little that can be done to effectively fight off such an attack. Traditional security tools allow organizations to fend off a single attacker, or even several at once. But a swarm is a completely different sort of challenge. In many cases, especially sustained multiple distributed denial-of-service attacks, there’s simply not enough mitigation capacity. Even today, with all of our advances in technology, when a swarm of killer bees is headed your way the best solution is to simply run away.

Protecting networks and services, including critical infrastructure, from a swarm attack will require a systematic approach based on identifying potential attack vectors and engineering vulnerabilities out of a network. Simply building in things like redundancy, automated backups and distributed network segmentation can go a long way towards effectively mitigating the impact of such attacks.

Make no mistake. Cybercriminals are organized, well-funded, resourceful and highly motivated. They are developing and deploying advanced malware, using cloud-based computing resources and developing cutting-edge tools based on AI and machine learning to not only circumvent advanced security defenses, but to also widen the scope and scale of their attacks.

In part two of this article, I will explore the most effective responses to IoT-based Hivenets, including the development of “expert systems” as the next critical response to advancements in malware and cybercriminal technologies.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


November 20, 2017  10:24 AM

Smart building security: Cyber-resilience must be built in

Julian Weinberger Profile: Julian Weinberger
Automation, Cyberattacks, cybersecurity, Data Encryption, Enterprise IoT, Internet of Things, iot, iot security, Machine learning, security in IOT, Smart Building, Virtual Private Network, VPN

As IoT and machine learning revolutionize the ways in which we live, work and play, it comes as no surprise that the market for smart buildings is expected to increase fourfold in the next 10 years. Enterprises in particular are implementing IoT and machine learning to manage office spaces and utilities more efficiently and to automate mundane, repetitive tasks. Organizations are optimizing all aspects of their corporate buildings with sensors and digital controllers for HVAC, electricity, surveillance systems and even parking spaces.

Although projections vary, industry experts agree that the smart building market is about to undergo a period of exponential growth. For example, Markets and Markets’ “Smart Building Market” report estimates the market will grow from $7.42 billion today to $31.74 billion by 2022. Meanwhile, chip designer ARM estimates that one trillion smart units will be built between 2017 and 2035.

While the progress towards smarter building infrastructure is impressive, it is important to remember that it is not without risk. Unfortunately, many of the diverse IoT systems within smart buildings still run old, unpatched software and frequently communicate using nonstandard protocols, making malicious activity and potential security threats much harder to detect.

With a successful intrusion into the central control point or building automation system (BAS) within a smart building, the consequences could be dire. Upon access, hackers would have the ability to stop elevators from working, disconnect power supplies, hack into IP-connected cameras or create a botnet for launching distributed denial-of-service attacks on other systems. An intrusion into a government or financial institution’s BAS could even open up a gateway into its entire IT network, compromising personal information such as Social Security numbers or bank account information. Since smart buildings are such attractive targets for cyberattackers, cyber-resilience must be built in.

Built-in security

To enhance the security of IoT devices, U.S. Congress is considering passing the Cyber Shield Act of 2017 in order to eliminate the most common vulnerabilities in IoT design. Leading manufacturers like ARM are also working hard to strengthen security. ARM recently announced its Platform Security Architecture, a new systems architecture to help secure and protect connected devices by building in security at the design stage. According to ARM, one way to build in device protection is to prevent firmware tampering using strong, crypto-based boot architecture. Device management must be architected along similar trusted lines as well.
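
As a loose illustration of the "verify before boot" idea behind a crypto-based boot architecture, the sketch below checks a firmware image against a vendor signature. This is not ARM's Platform Security Architecture, and real secure boot runs in ROM or dedicated hardware rather than Python; only the underlying signature check is shown:

```python
# Illustrative only: real secure boot verifies the image in ROM or
# dedicated hardware before any firmware runs.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# At the factory: the vendor signs the firmware image.
vendor_key = Ed25519PrivateKey.generate()
firmware = b"\x7fELF...device firmware image..."
signature = vendor_key.sign(firmware)

# On the device: a burned-in public key verifies the image at boot.
trusted_key = vendor_key.public_key()

def boot(image: bytes, sig: bytes) -> None:
    try:
        trusted_key.verify(sig, image)
        print("signature valid: booting firmware")
    except InvalidSignature:
        print("tampered image: refusing to boot")

boot(firmware, signature)                       # boots
boot(firmware + b"malicious patch", signature)  # refuses
```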

The next stage is to ensure intelligent systems communicate with one another securely by default. Virtual private networks (VPNs) provide encrypted connections to allow proprietary data in smart buildings to be transferred privately across the public internet between remote locations all over the world. Their flexibility means they can be readily scaled and adapted to meet any data exchange security requirements. With VPNs in place, even if a third party were able to eavesdrop on the network communication, the information itself would be indecipherable.
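
The indecipherability property can be illustrated with any symmetric cipher. The sketch below uses Fernet from the Python cryptography library purely as a stand-in; an actual VPN encrypts at the network layer with protocols such as IPsec or TLS:

```python
from cryptography.fernet import Fernet

# Both VPN endpoints share key material; an eavesdropper does not.
key = Fernet.generate_key()
channel = Fernet(key)

reading = b'{"sensor": "hvac-7", "temp_c": 21.5}'
ciphertext = channel.encrypt(reading)

print(ciphertext[:24], b"...")      # what an eavesdropper captures
print(channel.decrypt(ciphertext))  # what the other endpoint recovers
```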

Moving forward, it is imperative that the building industry and developers strictly deploy smart systems that have security built in from the start. When it comes to connectivity, the implementation of VPNs is critical for protecting smart buildings and ensuring device data is kept private and secure.

In summary, as IoT and machine learning transform buildings into smart infrastructure, new security risks and vulnerabilities are bound to arise. While smart infrastructure offers substantial benefits, many IoT devices and management systems still run on legacy software and lack basic security measures. To decrease the risk of cyberattacks on smart buildings, infrastructure must have built-in cyber-resilience, with all connection points secured using VPNs.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


November 17, 2017  2:12 PM

Investing in the internet of things

Russell Redenbaugh Profile: Russell Redenbaugh
Internet of Things, investment, iot, return on investment, ROI, VC

While the internet of things is still relatively new in the broad sweep of history, it’s rapidly being adopted into mainstream society, which means it’s going to be a hot target for investors. But like any new technology or company, IoT products and businesses must be studied and handled with care by investors. Excitedly rushing into the next big thing to make a quick buck is never a good idea.

Here are three things successful investors must consider before investing in IoT.

1. Find the return on investment

Looking only at a traditional financial rate of return may mislead you. The great commercial successes of the past few decades all did one thing very well. They preserved the only non-renewable resource: time.

Although it’s not IoT, think of the iPhone. The extraordinary ROI of an iPhone is not that it’s a better phone. The ROI is in it being a tool that allows people to save time by having email, text, music, camera, video, maps, health, calendar, reminders, notes and much more all in their pocket 24/7.

Finding the ROI for IoT must include finding the return on time. Whose valuable time is being saved by this product or service? We see many businesses using IoT to manage inventory, streamline manufacturing and monitor shipping, to name a few applications, saving employees’ time and improving overall efficiency.

2. Find more than just the ‘what’

People get caught in the buzz of what is hot and forget to look at the how and the who. As the saying goes, investing requires both a jockey and a horse to win. Successful investors, and correspondingly successful management teams, apply processes, networks and resources. It’s great to have a wonderful idea for a product, but how do you plan to develop it, manufacture it, market it and so on?

It’s also important to find out who is running these companies and who their managers are. Who are the angels or venture capitalists backing a company? How have they generated a track record of success, and is it repeatable? A great idea can easily die without the proper management, networks and funding. Never invest in just the what.

3. Follow the economics

Great investments come from ideas that preserve the scarce and squander the abundant. Understanding what is scarce and abundant is the foundation of all economics — and successful investing. Always invest in ideas, technologies and companies that preserve what is scarce by wasting what is abundant. Importantly, what is scarce and abundant can change without you ever getting the memo. Those who bought Apple at under $2 per share, as we did for ourselves and investment clients, were pursuing rate of return. What we saw was that while the falling cost of technology was abundant, portable, on-demand music was scarce.

Investors following the three steps above will dramatically improve their odds of finding successful IoT investments.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


November 17, 2017  12:19 PM

IoT’s biggest threat: Systemic security flaws

David Rolfe Profile: David Rolfe
Business model, Internet of Things, iot, iot security, patching, security in IOT

Nearly every week, we read another story related to an IoT security breach or hack. For now, the vast majority of these are individual failures by vendors, but organizations are facing an existential risk as hackers focus on finding loopholes in IoT ecosystems — not just individual technical flaws, but systemic weaknesses in new and untested business models. As organizations rely more on IoT-powered products and services, the biggest threat to the industry’s ultimate success lies in security.

A perfect storm of security problems

IoT security is perceived as somebody else’s problem. Engineers are pushed to focus on maximizing the functionality of individual devices, leaving broader security issues unowned. Currently, the best we can hope for is that an individual IoT device will be secure against remote access for a few months after we’ve bought it. Developers are under huge pressure to release new features and products as rapidly as possible, and as a result make design assumptions that let hardware pass testing but set it up for failure in the real world.

Many IoT devices are hard to patch securely, if at all. A perfect example of the tradeoff between safety and reliability is patching IoT devices. If we make devices patchable, we create a massive, hard-wired security hole that allows a hacker to install whatever they choose. But if we don’t, we can’t fix known vulnerabilities. Add the minimal to nonexistent user interfaces associated with many IoT devices to the mix, throw in sporadic connectivity, and you’ve got a recipe for uncontrolled chaos.

IoT-based business models are untested. Thanks to the internet, I now find myself having to explain concepts such as the Yellow Pages, record shops, travel agents and even public libraries to my kids. All of these — along with many others — were massively disrupted by the original internet. IoT will be equally disruptive, and just as we can reasonably foresee cycles of hype and failure, we should also anticipate the belated discovery of hidden flaws, such as your location becoming public or your self-driving car being confused by pranksters with stop signs. While innovation of course requires risk, the high volume of unknowns for IoT business models means that businesses are learning and fixing at the same — or slower — pace as hackers seeking to exploit whatever weakness they find.

Each of the preceding factors alone would be a serious problem, but IoT faces them all — and more — simultaneously.

Despite the newness of IoT ecosystems, we can still look to older or traditional security models to mitigate risk and cope with these threats. I like to take a page out of the U.S. government’s procedures for safety devices used in nuclear weapons:

  • Security has to be built in from the very start, not layered on top afterwards.
  • Designers have to perform a balancing act between restricting functionality to prevent harm while still ensuring the device will work when needed.
  • Fail-safe is not a marketing term you toss around — it came about when tests showed that when exposed to heat, the solder on the circuit boards in a nuclear device would become liquid and dribble across the surface, creating new and very scary electrical paths. We’re already seeing a focus on doomsday scenarios appear with self-driving cars, where “who dies when a crash is inevitable?” is now a problem for software developers, not philosophers.

We need to take a step back and think very carefully about the potential systemic failure modes of future IoT ecosystems. We’re moving on from an era where technology either worked or failed into one where insidious behavior could rob consumers blind without them ever knowing. Unless the IoT industry starts looking at security and systemic failure modes seriously, it faces a real risk of a public fiasco that will destroy its credibility and potentially the entire industry.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


November 17, 2017  11:18 AM

Augmented vs. artificial intelligence: What’s the difference?

Roei Ganzarski Profile: Roei Ganzarski
ai, Artificial intelligence, augmented, ethics, Intelligence, logic

Movies like I, Robot, Chappie, Ex Machina and, of course, the classic Terminator all portray a future where artificial intelligence is a staple of day-to-day life. But the promise of this futuristic technology is only in its infancy in modern-day society … or is it?

We’re all aware of how powerful companies would become if they had such a technology in their hands, with its benefits allowing them to outpace the competition, win more business or raise more capital — it’s easy to see why everyone is claiming to have it. And with the average person (and sometimes even the more tech-savvy one) viewing this highly sophisticated software as the turning point to our intelligence-driven future, why wouldn’t it be called AI?

There’s a catch — true artificial intelligence does not exist and will not exist for at least a decade, even though the AI industry is predicted to be worth nearly $3 billion this year alone. This reality raises the question: If we aren’t currently experiencing true AI, how can we categorize the current state of advanced algorithms and technology? The answer is simple. Right now, we’re living in the era of augmented intelligence.

What exactly is augmented intelligence?

On the surface, augmented intelligence looks nearly identical to artificial intelligence, but there is one major difference: There’s a person, like a programmer, pulling the strings behind the scenes, scripting each and every scenario the program may need to act upon or telling the computer how it needs to learn. While machines using augmented intelligence can often act and react like humans, these actions are based solely on information supplied by humans. In other words, and in very simplistic terms, a software developer inputs several “if this, then that” scenarios and creates a near-real-world reaction that a machine is able to act upon. Even when using advanced machine learning, a developer is inputting the logic of learning and the reasoning behind it. The key factor that makes this intelligence augmented is the ongoing manual intervention that dictates how, if and when a machine reacts.
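
A toy sketch makes the distinction visible: every response below was enumerated in advance by a developer, which is exactly what keeps this augmented rather than artificial intelligence. The driving scenario and thresholds are invented for illustration:

```python
# Every (condition, action) pair below was written by a developer in
# advance; the machine only ever replays human-supplied logic.
RULES = [
    (lambda s: s["obstacle_m"] < 2.0,  "brake"),
    (lambda s: s["obstacle_m"] < 10.0, "slow down"),
    (lambda s: s["lane_clear"],        "continue"),
]

def decide(sensors: dict) -> str:
    for condition, action in RULES:
        if condition(sensors):
            return action
    return "stop"  # nothing matched: fall back to the safest action

print(decide({"obstacle_m": 8.0, "lane_clear": True}))  # -> slow down
```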

What qualifies as artificial intelligence?

Artificial intelligence can be defined as the creation of a machine that can replace and perform tasks that normally require human intelligence and reasoning. Robots today can replace many human tasks; however, the most critical pieces that define true artificial intelligence are the main traits that create human reasoning or logic (beyond knowledge): morals and ethics.

First, an artificially intelligent machine should have morals, meaning it must be able to define for itself right from wrong within the context of its situation, position, place in the world and more. For example, if an autonomous vehicle is tasked with a scenario where it must put either one or multiple lives at risk, it must be able to make a decision in the blink of an eye. This decision, which takes multiple real-time elements and considerations into account, includes “living” (or not) with the consequence of that very decision.

Next, this technology must have its own set of ethics that governs its behavior and learns from its experiences: principles that, while guided, are self-imposed and self-governed. For example, if a search-and-rescue drone can fulfill a non-life-threatening but critical job while imposing on someone else’s privacy or property, should it do so? And should there be consequences if the drone breaches privacy but no one knows?

While logic, in its most basic state, can be programmed into a machine (i.e., if this happens, then this is the appropriate response), true logic is deeply rooted in the morals and ethics that drive the machine — no pun intended — morals and ethics that are learned and instilled in each human throughout their lifetime, backed by the generations that came before them and thousands of years of culture. For authentic artificial intelligence to take form, machines or bots powered by this revolutionary technology will need to meet these qualifiers and start “thinking” for themselves, matching basic human instinct and reasoning. In fact, true artificial intelligence means the machine can change its morals and ethics with time and experience, and perhaps even act against them if the situation calls for it. And if this indeed becomes reality, where machines can act like humans and work against their own preprogrammed ethics, any attempt to limit their behavior or outcomes will be futile.

Now, when will we reach the era of true artificial intelligence?

Despite its newfound reputation as the industry golden child, there has still been no instance of true artificial intelligence. While augmented intelligence has masked itself as artificial intelligence in a few instances, like when it was reported that Facebook decided to shut down its experimental AI bots because they had created their own language, there was always a “Great and Powerful Oz” figure behind the machine. This speculation and vision of a futuristic world, where artificial bots think for themselves and live among us in everyday life, is likely quite some time away, so don’t expect to see Chappie walking down the street anytime soon. Moreover, when it happens, do we really think we will be able to control it?

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


November 16, 2017  2:34 PM

The rise of general-purpose over single-purpose IoT

Tom Maher Profile: Tom Maher
Enterprise IoT, IIoT, Industrial IoT, Infrastructure, Internet of Things, iot, IoT platform, IT, platforms

When we look at the industrial IoT landscape, much of the early industry attention has centered on single-purpose use cases. However, we’re now beginning to see the various components of the IoT ecosystem evolve to enable general-purpose use cases.

In reality, it’s the general-purpose use cases that underpin the potential for digital transformation, which is where the real long-term business value lies. If companies fail to adopt a general-purpose approach to IoT, they will fail to evolve the digital capability in their enterprise, which ultimately enables the innovation and business model optimization that digital transformation promises.

Single purpose for a single problem

Single-purpose (or at least limited-purpose) IoT yields highly vertically integrated platforms that solve very specific problems at scale. The purchasing decision in enterprises for such technologies has tended to be mainly the domain of the operational technology (OT) folks. Many segments start out single-purpose for obvious reasons: there’s a very specific return on investment argument (i.e., “if we can do this, it’s worth that”), and being specific reduces the scope to something achievable.

However, over time, enterprise IoT architects will encounter multiple single- or limited-purpose use cases and will need to assemble a robust architecture that addresses use cases where the specifics aren’t known, i.e., general-purpose or programmable ones. As this happens, the general-purpose architecture becomes more attractive and the “legacy” single-purpose use cases get ported to it. There is also a natural handover of responsibility to the IT teams within the enterprise. Increasingly, we are witnessing this shift of responsibility for IoT projects from OT to IT.

Keep it Software, Stupid

We can think of single purpose and general purpose in terms of Hardware and Software. Almost everything now is driven by software (small “s”); however, not everything is designed to be programmed for the applications we haven’t thought of yet. Software (big “S”) as an approach or mindset comes with a philosophy of being able to create the next thing or improve existing things over time. When software is “frozen,” it fundamentally loses that essential attribute and is hardware-ified (aka firmware).

With this Software mindset, we can therefore distinguish between what is programmable and what “was programmed” but is no longer essentially “Software.”

The majority of today’s end-to-end IoT platforms have a vertical focus and, as a result, are single or limited purpose. While they’re made mostly of software (programmed), they aren’t general purpose (programmable).

General-purpose IoT: Solving problems today and tomorrow

General-purpose industrial IoT is about using the advantages of Software over Hardware, i.e., programmability. With general-purpose industrial IoT, a business can invest not in a specific purpose, but instead in the ability to create and evolve a digital capability which enables the digital transformation that is unique to their business.

The key enablers of general-purpose industrial IoT are the general-purpose cloud platforms, like AWS IoT and Azure IoT and all the infrastructure behind that, including analytics, machine learning and so forth. These “cloud” elements are already in place and are proven as the “general components” of a range of specific use cases.

Other key enablers are sensors and the myriad local area networking technologies (wired and wireless, powered and low-power, etc.), coupled with intelligent IoT gateways that have at least the capability to fulfill the promise of “Software.”

What’s needed now is a general-purpose edge network. This edge network, to be general purpose, needs to be both programmable (software defined) and connected to the general-purpose cloud. It goes without saying that it needs to also be secure and trustworthy. Most importantly, it must be delivered as a service.

IoT needs a software-defined edge network

A general-purpose edge network needs a software-defined access network that securely connects physically remote assets to whatever compute resources the application requires.

In the same way that the cloud needed a software-defined network to underpin the connectivity requirement of the software which is its essence, general-purpose industrial IoT needs a software-defined access network. The edge network needs to be virtual in the sense that the edge network figures out how to overlay the software-defined topology onto the underlying physical network pieces. The edge network needs to be intelligent, which is to say, it can just work given the constraints.

When the access network is designed or purchased for a specific use case, the imposed constraints, for example, access technology or access costs (MB tariff plans), remove the ability to execute general-purpose use cases over the network.

General-purpose industrial IoT includes the ability to deploy new software, either new use cases or improving existing use cases.

For CIOs, this means looking beyond single purpose and instead investing in creating the platform for continuous digital transformation. Deploy intelligent gateways, for example, Dell Edge, which is a PC in a box, rather than “dumb” or “appliance” gateways. Assume the access network (first mile) is heterogeneous. Purchase data tariffs which are in line with general purpose, i.e., you don’t know how much data you’ll use, but over time it’s likely to increase, and factor in not just the application data usage, but also the usage to continuously deploy new software.

For enterprises to successfully embrace digital transformation, they will need to consider how they can decouple the access network from the cloud and enable the virtual intelligent edge. They will need to look at connectivity management platforms that support secure, intelligent over-the-top, edge-to-cloud and cloud-to-edge application connectivity. Ideally, these intelligent network connectivity services will be truly virtual, available on demand and able to scale with the business’s requirements.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


November 16, 2017  1:04 PM

View from the edge: The action is out on the edge

Dave Laurello Profile: Dave Laurello
Edge computing, IIoT, Industrial IoT, Internet of Things, iot, IoT data, IoT devices, IOT Network, Predictive maintenance

In all the excitement over IIoT, the domain of computing activity that may do the most to translate IIoT technology into lasting business value has been somewhat overlooked. Yes, we’re talking about edge computing.

Of course, computing on the edge — technology infrastructure that’s located on or near production operations for data collection, data analysis and data storage — has been going on for decades. Processes like keeping an assembly line running smoothly, delivering clean water continuously and making trains run on time have long depended on edge data being gathered efficiently, with only limited connectivity to data centers. But from a computing standpoint, the edge has often been seen as something of a sleepy backwater.

All that has changed recently thanks to secular (industry-agnostic) trends that have driven dramatically more investment in computing infrastructure at the edge, followed by increased reliance on edge-gathered data for cutting-edge applications. These trends include the criticality of data to business success; the demand for real-time analysis of data in order to make better business decisions; and the increasing interconnection of “things” of all kinds in order to gather ever more and higher-quality data.

As a result, analysts estimate that 5.6 billion IoT devices owned by enterprises and governments will utilize edge computing for data collection and processing in 2020 — up from 1.6 billion in 2017. And by 2019, 40% of all IoT-collected data is expected to be stored, processed, analyzed and acted upon close to or at the edge of the network.

Real benefits, real opportunities

These trends present the opportunity to reap significant benefits for those organizations that can take advantage of them.

Consider the case of a manufacturer looking to improve decision-making and overall productivity. Most manufacturers are already operating at the edge. While their plant operations may be centralized, the data gathered by unmanned machinery or unattended workstations may be only minimally connected to their data centers and business networks. As a result, the time it takes to gather, process and analyze data on machine performance makes it difficult to identify problems, diagnose them and respond to them promptly.

With today’s edge computing infrastructure, in contrast, manufacturers can now automate the collection of large volumes of machine data (available from IoT sensors), compare it to their own historic performance or industry-wide standards, and derive usable analysis right on the shop floor. This approach drives predictive maintenance to maximize machine uptime, streamline production processes and reduce costs.
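
One simple form of this comparison is flagging readings that drift from a known-good baseline right at the edge. The sketch below uses an invented vibration series and a three-sigma cutoff, a common starting point rather than anything mandated by the manufacturers discussed here:

```python
import statistics

# Hypothetical vibration readings (mm/s) from a machine's IoT sensor.
baseline = [2.1, 2.3, 2.0, 2.2, 2.4, 2.1, 2.3, 2.2]  # known-good history
latest = 3.4

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

# Flag readings more than three standard deviations from the baseline.
if abs(latest - mean) > 3 * stdev:
    print(f"anomaly: {latest} mm/s vs baseline {mean:.2f} +/- {stdev:.2f}")
else:
    print("within normal operating range")
```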

Keeping pace with the evolving edge

Meanwhile, still other business and technology trends are reshaping how computing happens at the edge and driving the need for a new edge infrastructure. Key requirements include:

  • To meet the demand for real-time analysis and support more rigorous decision-making, much more computing power at the edge is a necessity.
  • To cope with the exponential growth of data, smarter networking and data-storage processes are a must.
  • To guard against all the new intrusion points and attack vectors that are created by IIoT interconnection, highly secure connected-edge environments will be required.

IT and operations technology professionals — both of whose domains include some responsibility for computing on the edge — will have their hands full, evaluating a variety of “next-generation” technologies that are proposed as the edge infrastructure of the near future. Among these are micro-data centers, ruggedized edge servers, edge gateways and edge analytic devices. One useful screen to help triage them may be to ask: Can they — under the harshest of conditions, and often with life-or-death consequences — “self-manage”?

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


November 16, 2017  12:16 PM

The internet of things, database systems and data distribution, part two

Steven Graves Profile: Steven Graves
architecture, Data Management, Database, Database Systems, Databases, DDS, Distributed computing, Edge computing, Internet of Things, iot, IoT data

In part one of this two-part series, I covered where, in the internet of things, data needs to be collected: on edge devices, gateways and servers in public or private clouds. And I discussed the characteristics of these systems, as well as the implications for choosing appropriate database management system technology.

In this installment, I’ll talk about legacy data distribution patterns and how the internet of things introduces a new pattern that you, or your database system vendor of choice, need to accommodate.

Historically, there are some common patterns of data distribution in the context of database systems: high availability, database clusters and sharding.

I touched on sharding in part one. It is the distribution of a logical database’s contents across two or more physical databases. Logically, it is still one database, and it is the responsibility of the database system to maintain the integrity and consistency of the logical database as a unit. Exactly how the data is distributed varies widely between database systems. Some systems delegate the responsibility (or at least allow delegating it) to the application (for example, “put this data on shard three”). Other systems are at the other end of the spectrum, deploying intelligent agents that monitor how the data is queried and by which clients, then moving data between shards to colocate data that is queried together and/or to move data to a shard closer to the client(s) most frequently using it. The database system should isolate applications from the physical implementation. See Figure 2.

[Figure 2: One logical database distributed across multiple physical databases]
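
For illustration, here is one common delegation scheme in miniature: hash-based routing, where the application (or a routing layer) picks the shard from a record's key. Real database systems may instead use ranges or the query-aware agents described above; the shard names and keys are invented:

```python
import hashlib

SHARDS = ["shard0", "shard1", "shard2"]  # invented physical databases

def shard_for(key: str) -> str:
    """Pick a shard by hashing the record's key."""
    digest = hashlib.sha256(key.encode()).digest()
    return SHARDS[int.from_bytes(digest[:4], "big") % len(SHARDS)]

# The application sees one logical database; the router decides placement.
for device_id in ["sensor-17", "sensor-42", "gateway-3"]:
    print(device_id, "->", shard_for(device_id))
```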

The purpose of high availability is as its name implies: to create redundancy and provide resilience against the loss of a system that stores a database. In a high availability system, there is a master/primary database and one or more replica/standby databases to which the overall system can fail over in the event of a failure of the master system. A database system providing high availability as a service needs to replicate changes (for example, insert, update and delete operations) from the master to the replica(s). In the event of the master’s failure, a means is provided to promote a replica to master. The specific mechanisms by which these features are carried out are different for every database system. See Figures 3a and 3b.

[Figures 3a and 3b: High availability replication and failover mechanisms]

The purpose of database clusters is to facilitate distributed computing and/or scalability. Clusters don’t use the concept of master and standby; each instance of a database system in a cluster is a peer to every other instance and works cooperatively on the content of the database. There are two main architectures of database system clusters. See Figures 4a and 4b.

[Figures 4a and 4b: Database system cluster architectures]

As illustrated in Figure 4a, clusters differ from sharding: where each shard contains a fraction of the entire logical database, each database system instance in a cluster maintains a copy of the entire database. Local reads are extremely fast. Insert, update and delete operations must be replicated to every other node in the cluster, which is a drag on performance, but overall system performance (i.e., the performance of the cluster in the aggregate) is better because there are N nodes executing those operations. Nevertheless, this architecture is better suited to read-heavy usage patterns than to write-heavy ones.

High availability is sometimes combined with sharding, wherein each shard has a master and a standby database and database system. Because all of the shards represent a single logical database, the failure of a node hosting a shard would make the entire logical database unavailable. Adding high availability to a sharded database improves availability of the logical database, insulating it from the failure of nodes.

Replication of databases on edge devices adds a new wrinkle to data distribution. Recall that with high availability (and, depending on the architecture, clustering), the contents of the entire database are replicated between nodes and each database is a mirror image of the other. However, IoT cloud server database systems need to receive replicated data from multiple edge devices. A single logical cloud database needs to hold the content of multiple edge device databases. In other words, the cloud database server’s database is not a mirror image of any one edge device database; it is an aggregation of many. See Figure 5.

[Figure 5: A cloud database server aggregating the contents of many edge device databases]

Further, for replication in a high availability context, the sending node is always the master and the receiving node is always the replica. In the IoT context, the receiver is most definitely not a replica of the edge device.
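
A minimal sketch of this aggregation pattern, using SQLite purely as a stand-in for the cloud DBMS: rows arriving from each edge device are keyed by a device ID, so many edge databases coexist in one logical cloud table rather than mirroring any single source. Table and device names are invented for illustration:

```python
import sqlite3

# SQLite stands in for the cloud DBMS purely for illustration.
cloud = sqlite3.connect(":memory:")
cloud.execute("""CREATE TABLE readings (
    device_id TEXT, seq INTEGER, temp_c REAL,
    PRIMARY KEY (device_id, seq))""")

def replicate(device_id, batch):
    """Apply one edge device's replication stream. Keying rows by
    device ID lets many edge databases coexist in one logical table."""
    cloud.executemany(
        "INSERT OR REPLACE INTO readings VALUES (?, ?, ?)",
        [(device_id, seq, temp) for seq, temp in batch])

replicate("edge-A", [(1, 21.5), (2, 21.7)])  # from one edge device
replicate("edge-B", [(1, 48.0)])             # from another
print(cloud.execute("SELECT COUNT(*) FROM readings").fetchone()[0])  # -> 3
```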

Also, for replication in a cluster environment, as depicted in Figure 4a, the database system must ensure the consistency of every database in the cluster. This implies a two-phase commit and synchronous replication. In other words, a guarantee that a transaction succeeds on every node in the cluster or on none of the nodes in the cluster. However, synchronous replication is neither desirable nor necessary for replication of data from edge to cloud.

So, the relationship between the sender and receiver of replicated data in an IoT system is different from the relationship between primary and standby, or between node peers in a database cluster; the database system must support this unique relationship.

In part one of this series, I led with the assertion: “If you’re going to collect data, you need to collect it somewhere.” But, it’s not always necessary to actually collect data at the edge. Sometimes, an edge device is just a producer of data and there is no requirement for local storage of the data, so a database is unnecessary. In this case, you can consider another alternative for moving data around: data distribution service (DDS). This is a standard defined by the Object Management Group to “enable scalable, real-time, dependable, high-performance and interoperable data exchanges.” There are commercial and open source implementations of DDS available. In layman’s terms, DDS is publish-subscribe middleware that does the heavy lifting of transporting data from publishers (for example, an edge IoT device) to subscribers (for example, gateways and/or servers).
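
To show the shape of the pattern rather than any real DDS API, here is a toy in-process publish-subscribe bus. Actual DDS implementations add the discovery, QoS policies and network transport that this sketch omits:

```python
from collections import defaultdict
from typing import Callable

class Bus:
    """Toy topic-based publish-subscribe, the pattern DDS standardizes."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self.subscribers[topic].append(handler)

    def publish(self, topic: str, sample: dict) -> None:
        for handler in self.subscribers[topic]:
            handler(sample)

bus = Bus()
# A gateway subscribes to edge telemetry...
bus.subscribe("telemetry", lambda s: print("gateway got", s))
# ...and an edge device publishes without knowing who its subscribers are.
bus.publish("telemetry", {"device": "edge-A", "temp_c": 21.5})
```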

DDS isn’t limited only to use cases in which there is no local storage at an edge device. Another use case would be to replicate data between two dissimilar database systems: for example, between an embedded database used on edge devices and SAP HANA, or between a NoSQL database and a conventional relational database management system.

In conclusion, designing the architecture for an internet of things technology includes consideration of the characteristics and capabilities of the various hardware components and the implications for selecting appropriate database system technology. A designer also needs to decide where to collect data, where data needs to be processed (e.g., to provide command and control of an industrial environment or to conduct analytics to find actionable information and so on), and what data needs to be moved around and when. These considerations will, in turn, inform the choice of both database systems and replication/distribution solutions.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

