IoT Agenda


April 19, 2019  1:24 PM

Invest in a time series database when building your IoT application

Ajay Kulkarni
Database, device data, Internet of Things, iot, IoT analytics, IoT data, IoT database, IoT devices, Metadata, time series, time series data, time series database

Data is the lifeblood of any IoT application: IoT companies rely on their data to improve operations, provide better user experiences, make smarter business decisions and, ultimately, fuel growth.

However, none of this is possible without a reliable database that can handle the massive amounts of data generated by IoT devices. When building an IoT application, you will want a database that’s scalable, performant, easy to work with and able to grow with your business. For these reasons, developers often turn to time series databases (TSDBs), which are known for their performance and ability to scale for IoT workloads.

IoT data is time series data

IoT data is actually two different data sets collected together: time series data, or data from the things, and metadata, which describes those things.

Time series data, in particular, piles up very quickly, because time series workloads generally record each new data point as an insert, not an update. A single connected car, for example, will collect 4,000 GB of data per day, an abundance of information that needs to be collected, stored and analyzed.

As a result, “normal” databases are just not equipped to handle the volume of data IoT devices generate. That’s where time series databases come in.

Time series databases are built for scale and speed

Not only are time series data workloads high volume, but they are also complex in nature — for example, powering a real-time operational dashboard or alerting system. This means that your IoT database needs to both scale and answer complex queries efficiently.

TSDBs, which can be based on relational or NoSQL databases, handle scale by introducing efficiencies that are only possible when you treat time as a first-class citizen. These efficiencies result in performance improvements, including higher ingest rates, faster queries at scale and better data compression. TSDBs also introduce features that aid time series data manipulation, such as data retention policies, automatic aggregations and interpolation. The right TSDB will also scale with your organization, from the cloud to the edge, which significantly simplifies your data infrastructure and operational stack.
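
To make these features concrete, here is a hedged sketch (my illustration, not something the post prescribes) of time-partitioned storage and a time-bucketed aggregation, using TimescaleDB-flavored SQL driven from Python. The schema, table names and connection details are all hypothetical.

    import psycopg2

    # Hypothetical connection; "conditions" is a stand-in table of sensor readings.
    conn = psycopg2.connect("dbname=iot user=iot")
    cur = conn.cursor()

    cur.execute("""
        CREATE TABLE IF NOT EXISTS conditions (
            time        TIMESTAMPTZ NOT NULL,
            device_id   TEXT,
            temperature DOUBLE PRECISION
        );
    """)
    # Treating time as a first-class citizen: partition the table by time so
    # inserts stay fast and time-range queries stay cheap as data piles up.
    cur.execute("SELECT create_hypertable('conditions', 'time', if_not_exists => TRUE);")

    # An aggregation of the kind TSDBs accelerate: per-device 15-minute averages.
    cur.execute("""
        SELECT time_bucket('15 minutes', time) AS bucket, device_id, avg(temperature)
        FROM conditions
        GROUP BY bucket, device_id
        ORDER BY bucket;
    """)
    conn.commit()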

Optimize for time series data

If you are working with a specific type of data, it only makes sense to use a database that’s optimized for that workload. And TSDBs are designed precisely for the kind of time series workloads IoT generates.

Additionally, if you want to avoid creating data silos, you should opt for a TSDB that will allow you to query time series, metadata, geospatial data and external data together at the database layer, which can greatly simplify your application layer. In general, this makes it easier to get proofs of concept and prototypes off the ground.
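
Continuing the sketch above, a single query can join time series readings against device metadata at the database layer, so the application never stitches the silos together itself. Again, the tables and columns are hypothetical.

    import psycopg2

    conn = psycopg2.connect("dbname=iot user=iot")
    cur = conn.cursor()
    # "devices" holds metadata (location, model, install date); "conditions"
    # holds the time series. One query answers a cross-silo question.
    cur.execute("""
        SELECT d.location, avg(c.temperature) AS avg_temp
        FROM conditions c
        JOIN devices d ON d.device_id = c.device_id
        WHERE c.time > now() - INTERVAL '1 day'
        GROUP BY d.location
        ORDER BY avg_temp DESC;
    """)
    for location, avg_temp in cur.fetchall():
        print(location, avg_temp)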

By using a time series database, IoT organizations can use the insights hidden in machine-generated data to build new features, automate processes and drive efficiency. So, if you are building a new IoT application or modernizing your existing data infrastructure, you should seriously consider using a time series database. Your future self will thank you!


April 18, 2019  1:43 PM

Europe is beating the U.S. in the industrial internet of things

Michael Schallehn
IIoT, IIoT strategy, Industrial IoT, Internet of Things, iot, IoT management, IoT pilot, IoT project, IoT proof of concept, IoT strategy, poc, proof of concept

When Bain surveyed executives in Europe and the U.S. in 2016 about their plans for deploying industrial use cases for IoT, we found that European companies had pulled ahead of their U.S. counterparts. More European executives (27%) planned to deploy IoT compared with their colleagues in the U.S. (18%). We also found companies in some European industries devoting more of their IT budgets to IoT, particularly in automotive, buildings and industrial.

The results of our 2018 survey showed Europeans pressing their IoT advantage. While the long-term ambitions of executives in both regions appear closely matched, Europeans remain further along in implementation than their U.S. counterparts, particularly in the industrial sector. Europeans also appear further along in cracking the code to unlock value from IoT systems.

Europe’s lead in implementation is a direct result of companies allotting a higher percentage of their technology spending to IoT deployments compared with their U.S. counterparts (see Figure). Funding levels have remained roughly constant in Europe over the past two years, suggesting that it can take longer than expected to overcome implementation barriers, especially security. Discrete manufacturing represents the greatest percentage of allocation in Europe and the U.S., with process industries following closely behind.

Europe’s stronger engagement in proofs of concept (POCs) and higher investment levels in 2016 helped European companies move to scale faster, with three times more extensive implementations in 2018. U.S. companies are ramping up investment in POCs and plan to invest primarily in these through 2022.

Europeans also remain more concerned about security. U.S. executives share these concerns, but they also struggle with implementation — an indicator that they have not climbed as high on the learning curve as their European competitors.

Industrial customers in the U.S. say that they are more concerned with issues that would bring IoT systems into the mainstream of their businesses — namely, integration with other operating technology, interoperability, technical expertise and transition risk. U.S. executives were much more likely to cite these concerns in our 2018 survey than in 2016. These challenges limit the degree to which companies scale their POCs into daily, operational implementations of IoT technology.

Europeans’ more extensive experience with implementing IoT contributes to their increased awareness of cybersecurity risks. Bain research across industries finds that executives at companies with greater cybersecurity sophistication see more risks than those at companies with less sophisticated cybersecurity capabilities (for more, see the Bain Brief “Cybersecurity Is the Key to Unlocking Demand in the Internet of Things”).

Figure: European industrial customers allot more of their IT budgets to IoT and analytics, and they plan to continue doing so. Source: Bain & Co.

Looking ahead, the greatest challenge for Europe’s IoT providers is to become leaders in cybersecurity, and to meet the needs of their commercial and industrial customers that remain wary due to their security concerns. European companies must also continue to address the complex privacy and regulatory environment of the EU. Mastering these challenges could position Europeans as global leaders with a competitive advantage over their peers from other regions.

IoT providers in both regions could also speed their progress in reducing barriers by focusing their investments on fewer industries, which would allow them to develop greater expertise and deliver more comprehensive end-to-end offerings to their customers. The learning curve effects from POCs will allow vendors to overcome the implementation concerns of their customers by offering more packaged products and services that can scale more easily.

Leading industrial companies across regions are moving quickly past the proof stage, and now, it’s all about scaling. Over the next two to three years, clear winners will emerge as the benefits of early investment kick in and their POCs scale up to operational levels. Companies that have put off investment will lose ground to competitors that are learning how to derive value from the internet of things and becoming more data driven every day — skills that will form the basis of competition in a world of extreme automation and artificial intelligence.

This article was co-written by Ann Bosche, Christopher Schorling and Oliver Straehle, partners with Bain’s Global Technology practice in Silicon Valley, Frankfurt and Zurich, respectively.


April 18, 2019  11:45 AM

Are time series databases the key to handling the IoT data deluge?

David Simmons
Data Management, Database, Databases, Internet of Things, iot, IoT data, IoT database, time series data, time series database

It’s pretty obvious that data is being collected at an astonishing — and rapidly increasing — rate. We’re collecting more data, on more systems and across more industries than ever before in human history. Keeping up with that flow of data is one of the major challenges in the IT industry today.

Unfortunately, I believe the growth of data collection is only just beginning, and the amount and velocity of data collection is going to not only grow, but grow at a faster pace than ever before. We are in for a deluge of data.

Why so much data?

The answer to this question is, of course, a long one, but it boils down to the fact that we are instrumenting more systems and more “things” than ever. From the increasing instrumentation of applications and systems — what we now call DevOps — to the exploding growth of IoT, everything around us is beginning to emit data. For now, I’m going to focus on the growth of IoT data to illustrate what we’re in for.

Every analyst has a prediction for how many IoT devices will come online by year X. Back in 2017, Gartner reported that IoT devices grew by 31% over the previous year to 8.3 billion devices, and predicted that more than 20 billion would be online by 2020 (that’s only next year!). For simplicity’s sake, let’s use that 20 billion number as a baseline example.

How much data is that?

I’ve built many IoT devices — in fact, I have a dozen sitting on my desk right now. Some of these devices produce only a single data stream, meaning they produce just one data point per reading. Others produce upward of a dozen data streams. Consumer and industrial sensors, for example, can monitor much more and produce dozens of data streams per device.

To give a more concrete example of how this data adds up, let’s say each device produces an average of 10 data streams and writes data out once per second — which is very low for many industrial sensors, for the record. Now, my single-stream sensor reads the CO2 level and writes it out to a database every second. That reading, between 0 and 10,000 parts per million of CO2, runs anywhere from one to five bytes long. So, to keep the math simple, let’s assume each data stream is a 5-byte reading, once per second. A single device producing 5 bytes per second across 10 data streams comes to only 50 bytes per second!

While this doesn’t seem like much, if you were to multiply this number by 20 billion devices, you’d get about 1 trillion bytes per second — or one terabyte of IoT data. Every second. Of every day. Forever.

My laptop has a 1 TB drive in it, so a stream like that would fill it in a single second, and it adds up to roughly 86 PB per day, or more than 30 EB in a single year.
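
For anyone who wants to check the math, here is the same back-of-the-envelope calculation as a short Python sketch; every input comes straight from the numbers above.

    BYTES_PER_READING = 5
    STREAMS_PER_DEVICE = 10
    DEVICES = 20_000_000_000          # Gartner's ~20 billion forecast for 2020
    SECONDS_PER_YEAR = 60 * 60 * 24 * 365

    per_device = BYTES_PER_READING * STREAMS_PER_DEVICE   # 50 bytes/sec
    fleet = per_device * DEVICES                          # 1e12 bytes/sec = 1 TB/sec
    per_day_pb = fleet * 86_400 / 1e15                    # petabytes per day
    per_year_eb = fleet * SECONDS_PER_YEAR / 1e18         # exabytes per year

    print(f"{fleet / 1e12:.0f} TB/sec, {per_day_pb:.1f} PB/day, {per_year_eb:.1f} EB/year")
    # -> 1 TB/sec, 86.4 PB/day, 31.5 EB/year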

What are we going to do with all that data?

Now, this is the real question.

All of that data must be ingested into some sort of searchable database in real time. It must be stored, manipulated, queried and acted upon by businesses and organizations every hour of every day to get the most out of the business insights that rich data holds. Mind you, it’s not all going into the same database, but that’s still a lot of data to manage for any organization.

When talking about ingesting and storing data, we also need to take a look at what kind of data it is because not all data is created equally. We can break down IoT data into several buckets. The first is the metadata about the sensors and devices we’re using to collect the data. This can consist of everything from sensor model numbers to date placed in service, physical location and any other data about the sensor itself. This data is typically not updated often and probably doesn’t change much over time.

The really valuable data is the sensor data itself. Sensor data is typically time-stamped readings from a sensor, sent in a constant stream from device to storage platform. It could be a CO2 reading, environmental data or data from heart rate monitors, industrial equipment and so forth. No matter where this data comes from, it almost always follows the basic formula of <data reading>@time-stamp. This, some of you may recognize, is time series data — data for which time is a critical component.
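
Rendered as a minimal structure (the field names are my own illustration), a time series point is just a reading bound to a timestamp:

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class Point:
        metric: str          # e.g., "co2_ppm"
        value: float         # the <data reading>
        timestamp: datetime  # the @time-stamp that makes it time series

    p = Point("co2_ppm", 412.0, datetime.now(timezone.utc))
    print(p)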

How do we store time series data?

There are as many possibilities for storing time series data as there are databases in the world. You could store it in a traditional relational database management system (RDBMS), as unstructured data in a NoSQL database or even in a spreadsheet or CSV file. But just because you can do something doesn’t mean that you should.

Traditional RDBMSes are designed to store, access and update relational tables of data, while unstructured NoSQL databases are suited to storing and retrieving, well, unstructured data. IoT data, as we’ve seen, is neither of those things. It is highly specific time series data, and for that, you need a time series database.

Time series databases are designed specifically to ingest, store and query time series data because it’s different from other types of data. It demands very high ingestion rates and the ability to query data across time to surface trends and business insights.

The growth of time series data as a category

As time series data has grown, so has awareness of the need for systems built specifically for it. This growing data problem has created a whole new category of database vendors, and over the past 24 months, time series databases have been the fastest-growing segment of the database market.

With the growth of IoT data, it’s easy to see why.



April 17, 2019  1:45 PM

Get smart: Campus environments demonstrate the power of IoT

Himanshu Khurana
campus, edtech, Internet of Things, iot, IoT analytics, IoT data, Smart Building, smart buildings, smart campus

IoT connectivity has helped transform buildings from largely static structures into “living vessels” rife with opportunities to help meet organizational needs, as we have previously discussed. These sorts of building transformations are happening today in a wide variety of sectors, including higher education. The very concept of a university involves the notion of an institution designed for advanced learning and instruction, so why should the buildings themselves be exempt from a university’s box of teaching tools?

Universities are often all-encompassing institutions, not just providing educational programs, but also serving as homes for students and workplaces for a vast array of roles, from instructors and maintenance staff to researchers. University buildings are often critical to supporting the needs of these roles, and to enabling the very collaboration required between those who work, learn and reside within them.

Those who frequent campus environments seek optimal learning conditions to advance their short-term and long-term goals, and they do not typically expect significant interruption due to building-related problems. However, many may be unaware of some of the core and emerging issues that campus buildings are often expected to address today, including:

  • Rising energy and operating costs;
  • Space efficiency and the ability to expand and grow;
  • Managing daily occupant and outside visitor experiences;
  • Improving student experience and educational outcomes;
  • Connecting large facility teams and campuses that can be widely dispersed across geographies;
  • Ensuring safety concerns are minimized; and
  • Protecting against potential security risks.

Add the issues associated with aging infrastructure and tightening budgets, and the resulting picture can be one of stress, but also of opportunity. With the right connectivity in place, thanks to the promise of IoT, educational institutions can not only maintain more effective learning environments despite these challenges, but can also use buildings to their advantage, putting their infrastructure to work.

Lesson in opportunity

At a foundational level, the basic ingredients for “bringing campus buildings to life” include integrated building management technologies that incorporate data analytics and IoT sensors to monitor the performance of equipment and services and keep systems running optimally to help promote factors like comfort and security. IoT connectivity can also provide a runway for deriving additional value from buildings — it’s often simply a matter of knowing where and how to apply connectivity.

The use of connected technologies to address critical building systems can help serve as a roadmap for universities around the globe and influence how different types of building users may be impacted:

  • Students, teaching staff and campus visitors. Educational institutions should seek to provide building users with the ability to easily shape their experiences, so buildings operate at all times as comfortable and productive learning environments.
  • Service providers, including building operators and service technicians. Universities should seek to enable service providers to foster more intelligent campus operations through integrated systems and interfaces to help make management and maintenance more efficient and streamlined.
  • Contract managers, including interior designers and IT managers. Through a connected upgrade process, universities can better give contractors access to real-time, actionable information, so buildings can maintain high performance while contract projects are underway.
  • Real estate managers. Connected campus buildings should seek to provide insightful data that will allow real estate managers to better understand and manage facility space constraints or potential expansion opportunities.

One example is Monash University, based in Melbourne, Australia, which sought to enable its campus buildings to more intelligently and automatically alter their internal environments based on the needs of students, staff and other key stakeholders. With the right systems in place, Monash is now using connected technology to help more efficiently manage the buildings on its Clayton, Victoria campus, and as a result turn them into veritable “living laboratories” and teaching tools for students and staff.

This transformation has largely centered on establishing “cognitive buildings” — essentially, making the buildings smarter by adding the right connected technology to capture desired insights that help enhance learning environments for optimal educational outcomes. Applying this connectivity is also expected to help the university better align building operations with its goal of net zero emissions — or carbon neutrality — by 2030.

Data-driven gains

To better address university stakeholder needs and help realize a vision of turning campus buildings into “intelligent contributors,” a building-blocks approach often helps to establish the right technological foundation, followed by adding layers of applications that can help fully bring buildings to life.

People typically do not like to operate in vacuums, so why should a building? Implementing a scalable, integrated building management platform is helping Monash and other universities tie together building systems and operations, capturing data to provide a better read on how students and staff are using and navigating campus buildings. With these insights, universities can create spaces that better match expectations and usage patterns — essentially helping to make them more comfortable for those who use them, and often more energy efficient to help save on operating costs. Data-driven displays for building operations teams also often further empower more informed decision-making in areas such as security management and building equipment performance.

Taking a building-blocks approach also includes implementing app-based technology, so students and staff can have enhanced control over how they move through campus buildings and experience their surroundings, directly from their smartphones. Such apps include the ability to rate spaces and report issues, providing a path for users to express their pleasure or displeasure, so the university can see where they may need to make improvements or where problem areas might exist. Bringing students and staff into the connectivity improvement processes provides enhanced transparency and gives stakeholders more ownership in making their learning environments optimal spaces for productivity and success. No more just complaining to facility management. Knowledge is power.

Future planning

Universities are seemingly under constant pressure to make capital improvements to attract top students and faculty, increase performance/national rankings, build research capabilities and drive revenue. These pressures often come with the reality of reductions in government spending, tighter compliance regulations, a lack of capital funding, aging buildings and other pressing factors.

To address many of these challenges simultaneously, more universities are starting to look to IoT connectivity. With these “brains” in place, a building can directly contribute to an organizational outcome by helping to better serve students and faculty on a university campus, effectively becoming a proactive experience enabler and problem-solver that helps get ahead of the future before it happens.



April 17, 2019  12:16 PM

What you need to know about robotic process automation security

Pat Geary
connected RPA, connected workers, Digital transformation, digital workers, Internet of Things, IoT applications, IoT software, RDA, robotic desktop automation, robotic process automation, RPA

Organizations now face a choice of more than 45 products claiming to offer robotic process automation (RPA) — all varying significantly in quality, design and approach. So, picking the right option is critical to achieving long-term success. However, with approximately 30% of all data breaches occurring as a result of vulnerabilities at the application layer, purchasers clearly require greater insight to correctly gauge the security credentials they require from various RPA vendors.

Gaining clarity on RPA security is a major issue, especially as the majority of newer offerings, such as robotic desktop automation (RDA), or desktop robots, don’t offer the same security capabilities as connected-RPA. These RDA tools promise quick wins that may sound compelling, but as organizations attempt to scale them toward greater business goals, their design limitations become increasingly apparent. For example, organizations get little business benefit if there is an inherent lack of central process design control, security, audit and governance.

Security problems with desktop automation

Unfortunately, the majority of newer RPA-labeled offerings, such as RDA, involve multiple short, record-and-replay tactical automations for navigating systems on desktops, and these can create security risks. With desktop recording, a single human user is given autonomy over a part of the technology estate, which removes central control. This obscures a robot’s transparency and hides process steps, which, when duplicated over time, become a potential security and compliance threat while limiting scale.

If a software robot and a human share a desktop login, no one knows who’s responsible for the process. This creates a massive security and audit hole and introduces shadow IT into a business. Restricting automation to a multi-desktop environment outside of the IT department or any central control means that RDA vendors are effectively sanctioning shadow IT as part of their deployment methodology. That is potentially very damaging for an organization, because shadow IT, in the context of RDA, means unstructured, undocumented and uncontrolled systems becoming part of the process flows of a business.

For example, say the creator of a desktop automated process leaves the company or the organization changes. This can lead to audit failure due to an unknown fulfilment activity taking place, as well as security holes, such as passwords embedded in these lost processes, fraud and denial of service. If your business allows departments to build these recorded RDA scripts, then over time you not only create a shadow IT nightmare without realizing it, but you also create a massive technical debt that your business will have to resolve.

Why connected-RPA is more secure

Connected-RPA is different as it was designed from the start to carry out tasks securely, in the same way humans do: via an easy-to-control, automated “digital worker.” These digital workers are trusted to operate within the most demanding enterprise environments, as although they are run by business users through a collaborative platform, they still operate within the full governance and security of the IT department.

With connected-RPA, business users train digital workers without coding, so the system infrastructure remains intact. That’s not to say that APIs, web services and other traditional components can’t be used on the platform, but they are gated — controlled and provisioned by technologists for the business to consume, but not change.

For connected-RPA to deliver security, longevity and resilience at scale, automations should be carefully planned, modeled and designed. This means that business users can create automated processes by drawing and designing process flow charts that are intuitive and then used by the digital worker to automate a task. Documentation of a task becomes the actual task — change the documentation and the task is instantly changed.

The process models run by the digital worker are made explicit in the process flow chart for each process automated. The process flow chart is subject to audit and change control, as well as security with dual-key authentication. This approach is highly secure and compliant, as all documentation is securely managed within a connected-RPA platform, and protects the business from rogue employees, rogue robots and rogue shadow IT.

Connected-RPA also enables business users to collaborate by adding their automations to a central pool of capability that is managed and reused by the whole business. Digital workers’ decisions and actions are centrally captured and audited, as is the training they receive from humans. Crucially, this gives a comprehensive, cast-iron audit of all activity across the entire connected-RPA platform.

Organizations should also only consider RPA vendors that can demonstrate the highest level of Veracode Verified, a program that validates a company’s secure software development processes. This certification not only demonstrates an RPA vendor’s focus on providing an authentically built, enterprise-grade, secure system, but is also part of the company’s intrinsic product development methodology.

By completing and passing rigorous testing, the Veracode Verified program moves an RPA vendor beyond point-in-time security testing into a mature application security program that enforces secure development practice across the entire software development lifecycle.

Ultimately, RDA tools limit the scale and potential of RPA solely to the confines of the desktop, and they introduce a variety of risks too. Connected-RPA, however, provides a platform for collaboration — securely and at scale — where, across many large organizations, human workers, systems and applications are already creating a powerful, intelligent, safe ecosystem of partners that enables real digital transformation.



April 15, 2019  12:48 PM

Securing the IoT edge

John Maddison
Authentication, credentials, Edge computing, edge security, firewall, intent-based segmentation, Internet of Things, iot, IoT communications, IoT devices, IoT edge, IoT encryption, iot security, network access control, securing IoT, Segmentation

The network has undergone a remarkable amount of change over a remarkably short period of time. The clearly defined perimeters of traditional networks have been eroded away by BYOD, mobile computing, migration to the cloud, IoT adoption and the new WAN edge. Of course, this sort of evolution is normal, but the process has been accelerated by digital transformation designed to enable organizations to compete more effectively in today’s digital economy. 5G and the advent of edge computing and networking promise to change things even further and faster.

No element has played a larger role in this transformation than IoT. These devices are smarter, faster and increasingly mobile. They are also present in nearly every new networking environment being adopted by organizations, from branch offices and retail stores to the core network, and from manufacturing floors to the extreme edge of the network where they mingle with user endpoint devices in collecting, generating and sharing information.

Even though these devices are woven into our larger, distributed network environments, in many ways, IoT has become its own network edge. Devices have their own communications channels and protocols, interact to accomplish complex tasks, and generate massive amounts of data while performing critical functions — from monitoring systems to managing inventory to collecting and distributing data.

They have also become highly specialized. Medical IoT and industrial IoT are just the first of a variety of IoT devices designed for specific purposes that we have now come to rely on. Going forward, they will also play a critical role in things like enabling the ecosystem that supports autonomous vehicles, making smart buildings and cities possible, and reinventing critical infrastructures to be more responsive to the demands of the communities they serve.

They are also beginning to bridge the gaps between traditionally separate networks, such as IT and OT, and between personal, public and business networks. Smart appliances, alarm systems and even entertainment systems connect back to a corporate network to deliver data and receive instructions. And they are integrated into personal devices that blend private, social and business profiles and data into a single component.

Which is why the persistent challenge of IoT security requires redoubled efforts to resolve. An alarming majority of these devices remain inherently insecure — many can’t even be updated or patched — which is why they have become a preferred target of cybercriminals for things like ransomware, cryptomining, distributed denial-of-service attacks and the delivery of malicious payloads.

Given the pervasive nature of these devices, the unprecedented rate at which they are being adopted and integrated into our networks, and how quickly we have come to rely on them, security has to be a top priority.

IoT security strategies

Because IoT devices can be placed anywhere across the distributed network, operate in different environments and connect from a variety of locations, consistent IoT security requires a consistent and comprehensive security strategy:

Assessment
Before an IoT device is even selected, an administrator should evaluate its inherent security settings. Devices that can be secured and patched should be appropriately hardened. Devices that cannot be hardened need to be secured using proximity controls: placed behind a firewall, with all traffic inspected and behaviors monitored.

Once they are in place, two additional things need to be considered before they begin communicating. The first is to determine what sort of data a device will generate and the relative value of that data. The second is to understand clearly what other devices this IoT device will be able to connect to and, as a result, what resources and data it can see, access and potentially exploit.

Encryption
The next step is to secure communications. The kind and amount of traffic generated by IoT devices can vary greatly. Not only can they use different communications protocols, but the devices themselves can range from only sharing essential information to being very chatty. Encrypting traffic needs to be applied on or as close to an IoT device as possible.
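
As one concrete illustration (not something the article prescribes), a device-side sender can refuse to communicate except over validated TLS. This minimal Python sketch assumes a TCP gateway; the host, port, CA file and payload format are all hypothetical.

    import json, socket, ssl, time

    HOST, PORT = "gateway.example.com", 8883   # hypothetical collection endpoint

    # Validate the gateway's certificate against a pinned private CA and
    # refuse anything older than TLS 1.2.
    ctx = ssl.create_default_context(cafile="ca.pem")
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2

    with socket.create_connection((HOST, PORT)) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            reading = {"device": "co2-sensor-01", "ppm": 412, "ts": time.time()}
            tls.sendall(json.dumps(reading).encode() + b"\n")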

Inspection
However, because encrypted tunnels provide an excellent way to securely transmit malware, they also need to be inspected. This requires implementing a firewall that can handle the volume of traffic IoT devices create, has the CPU headroom required to inspect encrypted traffic at network speeds — a weakness even the most popular firewalls are notorious for — and can implement additional advanced inspection, such as sandboxing, to detect unknown or elusive threats.

Network access control
Once IoT devices begin communicating, it is essential that they be accurately identified at the moment of network access. Network access control enables an organization to identify IoT devices to maintain an inventory of connected devices and ensure that policies meet device requirements. It can classify devices, assess them for risks and tag them with appropriate policies.

Intent-based segmentation
The best way to manage IoT traffic after access has been granted is by using intent-based network segmentation. This advanced segmentation strategy can automatically translate the business requirements for an IoT device into a security policy that determines the sort of protection an IoT transaction stream requires. IoT devices might be placed in a segment dedicated to a class of devices or functions, a segment based on the level of security required, or even a separate segment for a specific device, application or workflow. When properly applied, these segments should be able to seamlessly protect any traffic generated by that device, even if it traverses multiple network environments or cloud ecosystems.
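
A toy sketch of that translation step, with business intent going in and a segment plus policy coming out, might look like the following. The device classes, segment names and port lists are all hypothetical, not a vendor API.

    # Map each device class to the segment and policy its business role implies.
    SEGMENT_POLICIES = {
        "medical":  {"segment": "clinical-vlan", "inspect_tls": True, "allowed_ports": [8883]},
        "building": {"segment": "facilities",    "inspect_tls": True, "allowed_ports": [1883, 8883]},
        "unknown":  {"segment": "quarantine",    "inspect_tls": True, "allowed_ports": []},
    }

    def assign_segment(device_class: str) -> dict:
        """Unrecognized device classes land in quarantine by default."""
        return SEGMENT_POLICIES.get(device_class, SEGMENT_POLICIES["unknown"])

    print(assign_segment("medical"))    # clinical-vlan, TLS-inspected, one port
    print(assign_segment("camera"))     # unknown class -> quarantine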

Conclusion

The most essential foundation for securing the IoT edge is building a flexible and integrated security fabric that is able to tie together and orchestrate the disparate security elements that span your networked ecosystems into a unified, interconnected and responsive system. This enables the effective monitoring of legitimate traffic and the checking of authentication and credentials, while enforcing access management across the distributed environment.

Such an approach expands and ensures resilience, secures and isolates distributed IoT resources, and enables the synchronization and correlation of intelligence for effective, automated threat response.



April 12, 2019  12:42 PM

How accurate is accurate enough when it comes to location data?

Fabio Belloni
accuracy, Enterprise IoT, gps, Internet of Things, iot, IoT applications, IoT data, IoT use cases, location data, real-time location services, real-time location system, RTLS

Location has been a huge enabler for a wealth of applications impacting consumers and businesses, from mobile marketing to asset tracking. GPS is an early example of how combining a smartphone with positioning would change the way both consumer and commercial vehicles navigate the roadways. Drivers kept their expectations low and assumed that during a trip of any length, or even surrounded by the concrete jungle of a parking garage, the GPS would recalculate a few times as the signal from the positioning satellites deteriorated and the smartphone or other terminal, like their dash-mounted Garmin or TomTom device, lost track. GPS was accurate, and for the most part was “accurate enough” for general civilian location use.

Through the years, new technologies have entered the market as location-based services, machine-to-machine communications and IoT began to require location capabilities, starting primarily outdoors and then extending to indoor asset-tracking use cases in a wide range of industries. Wi-Fi, active RFID, Bluetooth beacons and other technologies emerged with rudimentary capabilities to meet this need, in essence by analyzing received signal strength (RSSI). The problem? These technologies weren’t built specifically for positioning, never mind the real-time performance that emerging applications required, limiting their effectiveness and their accuracy. Still, for the most part, they were “accurate enough,” with a “tolerable latency,” for the requirements of the applications for which they were being used.

Over the last few years, however, the growth of IoT and its emergence in nonindustrial B2B markets has changed the mindset about what is required in terms of location accuracy. Spurred by the efficiencies they began to see across their businesses, organizations began to envision a wide range of applications for which they could use IoT, such as tracking smaller items and even people via embedded sensors on ID badges, with the aim of interacting with the environment. Several use cases began to emerge for virtually every type of environment, often with different stakeholders sharing the same area. At the same time, huge technological advances across a wealth of technologies have arrived in the form of real-time location systems (RTLS) that deliver sub-meter location capabilities. And even more recently, the industry is abuzz over centimeter-level — and in some cases even finer — positioning for emerging cutting-edge applications.

But is centimeter-level positioning necessary for IoT and other applications? First, it makes sense to take a look at what location accuracy really means for applications.

Understanding accuracy

Accuracy in the RTLS sense can be defined as a combination of precision and delay, or latency. High accuracy refers to the ability of an RTLS to achieve sub-meter (less than 1 meter) to centimeter-level precision while still performing in real time, with latency down to a fraction of a second when tracking moving targets. However, achieving accuracy with low latency comes at a cost — regardless of technology. In general, high-accuracy real-time tracking is solved by covering the area of interest with equipment and creating data redundancy, which increases the initial system cost and, in some cases, the total cost of ownership as well.

Delay is another factor in RTLS accuracy. Not every application requires real-time location capabilities. For example, slow-moving heavy equipment may require location data with interval requirements of minutes — a 10-ton object does not move without a crane — whereas when tracking sports athletes, a delay of longer than 300 milliseconds is inadequate for augmented reality applications.

In most IoT applications today, neither centimeter-level precision nor real-time tracking is a key requirement. For example:

  • Locating a forklift in a warehouse: Accuracy within a few meters is acceptable, as is receiving the location within a few seconds rather than in real time.
  • Locating a container in a shipyard: Accuracy within a few meters is acceptable, as is receiving the location within a minute.
  • Moving large equipment around an oil field: This application may require location data with intervals of minutes, and understanding location within a few meters is generally acceptable.

However, there are emerging applications where high-accuracy tracking is a requirement. These may or may not include a requirement for real-time capabilities. Some examples of applications where a high level of accuracy is required include:

  • Deriving game analytics: Tracking the movements of athletes or objects, like pucks as they zip around an arena. This requires real-time tracking down to a few centimeters as players and equipment are always in motion and the relative position to each other is essential for characterizing the game dynamics and isolating specific events.
  • Smart buildings: This could be related to optimizing workflow in hospitals while digitizing the ambient environment with a rules engine mimicking real-world logic; interacting with domotics for home automation; or deriving metrics from contextual information. Examples include turning on the light when someone enters a meeting room or analyzing a shopper’s path through a supermarket to derive dwell-time metrics and product interactions.
  • Employee safety in an industrial environment: In an application used in warehouses, where workers and autonomous equipment move rapidly from place to place, determining location may require higher accuracy tracking in real time; for example, in the case of collision avoidance between forklifts and workers.
  • Security and monitoring: This applies to any mission-critical scenario where high-reliability data and consistency are required; i.e., for surveillance and access control.

Finally, consistency — typically expressed as a percentile of the error distribution, or characterized by the cumulative distribution function — is another key aspect of location accuracy. To say that a location system is highly accurate in real time means it must meet those criteria of high-accuracy positioning with low latency consistently — for example, staying within 1 meter 90% of the time.
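
As a quick illustration of that last point, this small Python sketch (with made-up error samples) checks a hypothetical “sub-meter, 90% of the time” requirement.

    import numpy as np

    # Hypothetical positioning-error samples, in meters, from an RTLS test walk.
    errors_m = np.array([0.21, 0.35, 0.48, 0.52, 0.60, 0.75, 0.88, 0.93, 1.10, 1.45])

    # The spec is met if the 90th percentile of the error distribution is < 1 m.
    p90 = np.percentile(errors_m, 90)
    verdict = "meets" if p90 < 1.0 else "misses"
    print(f"90th percentile error: {p90:.2f} m -> {verdict} the sub-meter spec")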

The bottom line

The accuracy needed to locate a person or object depends on the specific requirements of the application itself and the business needs it supports. Looking at the examples above, it’s clear that in some cases, certain applications will require more accurate, lower-latency location capabilities than others. Organizations will determine what their requirements are for real-time location based on the specific applications they are developing, and advancements in precision will continue to open the door to a wealth of new applications.

It’s important to note that while organizations are determining their needs today, they also need to consider future applications, what level of accuracy these applications will require as they emerge and at what scale. This is a critical aspect for minimizing costs, improving profitability and ensuring a healthy long-term investment. Utilizing an RTLS that can easily scale and incorporate these new requirements as business needs dictate is paramount. That calls for the implementation of very flexible RTLS technologies where the system can be configured to operate across borders and from low to high accuracy. This makes it suitable for a wide range of applications, including security, safety and reliable workflow management, as well as pushing toward augmented reality or virtual reality applications as those needs emerge.

Determining the precise location of a person or object, consistently and in real time, is complex. It is often more difficult to track static objects than moving ones. There is no silver bullet that optimally solves all use cases. Organizations must weigh their specific requirements against system costs — considering both initial investments and total cost of ownership — required to achieve the location capabilities they target to deliver a return on investment that satisfies their business goals.



April 11, 2019  4:42 PM

How will enterprises consistently adopt IoT and edge computing technologies?

Rick Vanover
Edge computing, edge vendors, Internet of Things, iot, IoT edge, IoT edge computing, IoT partners, IoT vendors, Microsoft, VMware

The IoT and edge computing space will continue to evolve, but organizations may be struggling right now to find the right way to address it. What the edge means to most organizations doesn’t always translate to an individual organization. The same goes for IoT systems — I’ve always maintained that each organization will find its own best way in and have an “aha” moment.

The good news is that the how isn’t as daunting as it may seem. Some familiar faces are in place to make this much more approachable than it appears at face value. VMware and Microsoft, among others, are making significant investments in this space that will pave the way for specific business systems to be easy to implement. This will allow each organization to find its way with brands it already has an established relationship with.

VMware Pulse IoT Center and Azure IoT Edge are two technologies that can make this transition start to make sense. Consider that as IoT devices grow in popularity, storage and networking will naturally need to be rethought. Businesses will see IoT capabilities become integrated, modern options in the systems they use — for example, machinery, autonomous vehicles, smart buildings, appliances and more — and that poses an IoT conundrum: What used to be a forklift is now a system generating multiple terabytes of data a day. And the location has 25 of them. And your organization has 12 distribution centers. You can quickly see how storage, bandwidth and compute needs turn this into a tricky tradeoff between smarter devices and unwanted network and storage problems. That example was a forklift. The next could be a tractor-trailer, an air-conditioning unit or scores of other things.

Looking at consistent adoption for edge and IoT, organizations need to use key technologies to do the business-benefit-inducing work at the edge. Take the forklift example. If analytics can be performed on the device, wouldn’t it make sense to interpret the data there, at the source? Let the analytics be defined in the cloud or central management, but do the hard work close to the data. Aggregating the results is the most important part, and organizations can manage that relatively small amount of data in a much more scalable fashion.
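
To sketch what that looks like in practice, purely illustratively and with a stand-in sensor function, the edge device might summarize locally and ship only the digest upstream:

    import statistics
    import time

    def read_vibration_hz():
        """Stand-in for a real sensor driver on the forklift."""
        return 42.0

    def summarize(window):
        return {
            "ts": time.time(),
            "count": len(window),
            "mean": statistics.mean(window),
            "max": max(window),
        }

    window = [read_vibration_hz() for _ in range(600)]   # e.g., 10 minutes at 1 Hz
    summary = summarize(window)
    # send_to_cloud(summary)  # a few hundred bytes instead of 600 raw readings
    print(summary)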


Many IoT and edge use cases today are around photo and video surveillance, but these are just a start. Organizations will have plenty of options for more complete systems, and when multiple systems are in place, management and scale will become more important than ever.

A safe bet from IT practices of the past is to use key platform partners for the technology services to drive your business; IoT and edge are no different. Just finding the right place to start is the most important step.

Still not sure? Azure IoT can even walk organizations through a few questions to help them get started in the right direction.

What do you look for in an IoT and edge deployment today? Are platforms from established brands part of your requirements? Share your comments below.



April 11, 2019  4:31 PM

Industrial IoT design insights: Five factors to consider

Mitch Maimam
IIoT, IIoT design, IIoT platform, IIoT security, IIoT software, industrial internet of things, Industrial IoT, Internet of Things, iot, IoT communications, IoT design, iot security, IoT software, IoT strategy, product development

IoT is becoming ubiquitous in all types of product categories, from consumer goods and medical products to commercial and industrial systems. Industrial IoT applications bring unique challenges. Issues that are minor annoyances in consumer products can cause abject system failures in the industrial space. For industrial systems to succeed, designs need to be dependable and highly secure. With IIoT, downtime in mission-critical applications can’t be tolerated, and security breaches can cost millions of dollars and customers’ confidence. And the technology is expanding rapidly: By 2020, global manufacturers are expected to invest $70 billion in IIoT, up from $29 billion in 2015. Here are a few critical, make-or-break considerations in the industrial IoT space.

1. Connect or not?

Technology adds a cost layer to traditional non-tech-oriented products. In particular, adding sensing and communication technology can introduce both nonrecurring and monthly recurring costs. While it is “de rigueur” these days to want to create new IoT products or add an IoT technology layer to existing products, it is important to understand the business case and value. Adding this layer means embedding cost into the product, possibly with monthly subscription costs, as well as an initial and continuing stream of expenditures on product development and lifecycle support. While adopters in the consumer space may be willing to experiment with IoT technology of unclear long-term value, clear economic impact needs to be demonstrable in the industrial space. The costs of deployment are simply too high to allow for large-scale deployments of dubious utility.

2. Pick the right platform

When adding intelligence to a product that wasn’t connected before, many startups select hobbyist-grade boards. The trouble is that these developer platforms are not suitable for large-scale industrial-grade deployments. If the device proves successful and starts generating serious demand, production can’t scale because you can’t source thousands of that type of hobbyist board. Off-the-shelf platforms are useful for proofs of concept and as platforms for software developers, but do not confuse these POC systems with those that are production-ready. Any experienced hardware developer who has been creating industrial systems will know a development system lacks the reliability, security and durability required for mission-critical applications. You should only source components and modules for your product that will be available and appropriately costed now and in the future.

3. Pick the right communication platform

Today, developers can choose from a plethora of communication technologies for industrial IoT applications, with a wide variety of wireless platforms in the cellular, Wi-Fi, Bluetooth and other major arenas, and subcategory options for each. The selection does not start with the radio. Rather, it ends with the radio. It starts with understanding the amount of data being acquired, the frequency of acquisition and communication, and where data is processed. These factors can then be balanced against things like communications bandwidth, cost of storage and transmission, range and hardware cost. In developing a communication strategy, a bottom-up approach is required to avoid implementing the wrong wireless technology.
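
As an illustration of that bottom-up sizing, a quick Python estimate of per-device and per-gateway bandwidth can anchor the radio decision. The numbers are entirely hypothetical.

    SAMPLE_BYTES = 12           # payload per reading, including framing overhead
    STREAMS = 8                 # data streams per device
    INTERVAL_S = 5              # seconds between transmissions
    DEVICES_PER_GATEWAY = 200

    bytes_per_device_s = SAMPLE_BYTES * STREAMS / INTERVAL_S
    uplink_kbps = DEVICES_PER_GATEWAY * bytes_per_device_s * 8 / 1000
    print(f"{bytes_per_device_s:.1f} B/s per device, {uplink_kbps:.1f} kbps per gateway")
    # ~19.2 B/s and ~30.7 kbps: modest, low-power-WAN-class traffic, but worth
    # rechecking whenever sampling rates or device counts grow.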

4. Security is job one

Security needs to be baked into your IoT product design process, not added on as an afterthought, particularly in the industrial space. The stories of hackers breaking into commercial systems through insecure connected devices are legendary. Security is a must-have, not simply a nice-to-have. The potential for a breach is enormous, and the results could be devastating. Bad guys often scan for poor or misconfigured security. Consider end-to-end security mechanisms, end-to-end data encryption, access and authorization control, and activity auditing. A security chain is only as strong as the weakest link. Low-end and poorly protected IoT endpoints are a frequent point of entry for attacks when they are not carefully and intentionally secured.

5. Get a top product development team

Oftentimes, engineering organizations in mature industrial spaces do not have the particular skills in-house to add an IoT layer to their product, even though their internal expertise may be more than up to the task of developing and sustaining the core product technology. Certainly, a company can embark on a campaign to recruit the talent for an internal team. However, in the current job market, the competition for such talent is fierce, and it could take months or years to find and onboard an internal team. Many companies instead seek the assistance of an outside product development organization. By doing so, the company can get the best of both worlds: combining the internal team’s core product and market knowledge with an external team’s expertise in RF communications, cloud architectures, mobile applications, and sensors and sensor integration can be extremely powerful.

While the prospects for industrial IoT deployments can be exciting, it is important to remember the basics as well. You need a sound business case, as with any investment. Solid project management is just as important as avoiding the above mistakes when shepherding a leading-edge technology device from inception to the manufacturing floor. Selecting the right engineers for the design team, who have technical as well as communications skills, is also critical to success. Finally, staying within budget parameters and meeting deadlines ensures the plan will be completed successfully, increasing the chances of the business’s success.



April 10, 2019  4:11 PM

Why both cloud and edge computing are essential to IoT

Dave McCarthy
Distributed computing, Edge computing, Industrial IoT, Internet of Things, iot, IoT analytics, IoT cloud, IoT edge, IoT edge computing, IOT Network

Many of today’s industrial businesses are weighing the option between cloud and edge computing for their IoT deployments and finding it difficult to decide what best suits their data architecture and business goals. For every cloud benefit, there’s an equally tempting advantage to processing data at the edge. So, why not choose both? While most companies today view the cloud and edge as two separate entities, there is a great advantage to layering edge computing into cloud workflows. Because cloud and edge computing offer different systems for different types of environments, a distributed computing framework is often the best approach for IoT.

What is a distributed computing framework?

A distributed computing framework is a data processing approach that forgoes the practice of processing all of a business’ data in one place — e.g., all in the cloud or all at the edge — in favor of distributing the load across multiple locations. Here, it’s important to dispel the commonly held view that edge computing is singular — meaning a business only has one edge. In reality, companies can and often do have more than one. The edge is simply the point of data generation, so anywhere that happens is effectively an edge.

The simplest distributed computing framework involves three layers: the cloud, the site and the individual equipment at that site. It can be subdivided into more layers depending on the environment. These separate layers allow industrial businesses to process and manage their data wherever it makes the most sense for their operation and objectives, whether that’s in the cloud or at one of the edges.

A distributed computing framework in a factory

A large deployment is a great environment in which to examine this framework in action, such as a factory with hundreds of pieces of equipment. Each piece of equipment in the factory is considered an edge endpoint because each generates data. The factory itself constitutes an edge aggregation point, as it consolidates data from all enclosed equipment. The business would then reserve processing in the cloud for instances when it has something specific to report. Cloud computing also becomes especially helpful when a business has multiple factory sites.

In this example, the business could first compile data generated from the equipment on the factory floor before sending it to the cloud. Adding this step helps prevent a cluttered data repository, which often results from sending information from hundreds of pieces of equipment straight to the cloud. Incorporating edge and cloud computing into a factory can offer several benefits, but what would happen if the factory were to use only edge or cloud computing?

  • Cloud computing only: In traditional IoT architectures, all collected data is transported, combined and processed in a central data store. This has worked well in instances where only data collection is necessary, but for businesses that need to analyze information from each individual piece of equipment, this approach is no longer viable. Relying solely on cloud computing for some of these larger deployments — such as the factory example — would make it very difficult to react to the data generated on the equipment quickly enough to have a positive business impact. In fact, these kinds of delays can make a huge difference in scenarios that involve safety and quality. Including edge computing in a distributed framework allows businesses to move faster than they would if the data had to travel to the cloud and back, which opens the door for real-time analysis right on the equipment itself.
  • Edge computing only: Alternatively, edge devices only process data that is locally collected and on a short-term basis, meaning a factory relying exclusively on edge computing would lack the ability to get a full view of their operation and easily store data for identifying trends over time. Locally collected data provides a great picture of what is happening at the site and with the equipment, but not as they relate to each other. To get this higher level of analytics without the cloud, the business would have to manually combine all factory data, which would be inefficient and time-consuming.
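
Purely as an illustration of the three-layer flow in the factory example (endpoint readings, site-level aggregation, cloud-bound summaries), with hypothetical names and values:

    import random
    import statistics
    import time

    def machine_reading(machine_id: int) -> dict:
        """Edge endpoint: each machine generates its own data."""
        return {"machine": machine_id, "temp_c": random.gauss(70, 5), "ts": time.time()}

    def site_aggregate(readings: list) -> dict:
        """Edge aggregation point: the factory consolidates its equipment data."""
        temps = [r["temp_c"] for r in readings]
        return {"site": "factory-01", "machines": len(readings),
                "mean_temp_c": statistics.mean(temps), "max_temp_c": max(temps)}

    readings = [machine_reading(m) for m in range(200)]
    summary = site_aggregate(readings)
    # cloud_store.append(summary)  # cloud layer receives one record, not 200
    print(summary)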

Why both the cloud and the edge are essential for IoT

An industrial business that uses both cloud and edge computing for its IoT deployments will not only be able to take advantage of the low-latency and device responsiveness that comes with edge computing, but it will also benefit from the scalability, cost effectiveness and low maintenance of cloud computing. A multi-tiered approach fuses the strengths of both types of computing instead of picking one over the other. For example, a manufacturer of heavy-duty trucks might use edge computing to predict when individual trucks need maintenance, but can turn to cloud computing to make decisions about the fleet as a whole. Data about the types of repairs implemented and exact time spent on repairs can be stored in the cloud to help mechanics eliminate unnecessary diagnostics or steps for future repair situations.

The IoT landscape continues to change, introducing more devices every day, and with them, more data that businesses need to process and manage. No matter where an organization falls on the edge-to-cloud scale, it’s important that they choose a computing approach that best fits their business needs in order to gain a competitive advantage. Businesses that perform analytics both at the edge and in the cloud can use real-time data to make faster, more accurate decisions that create real operational value, such as minimizing costs and maximizing performance.


