IoT Agenda


May 2, 2017  11:24 AM

Bricker bot: A silver lining to force accountability for IoT security?

Douglas Santos Profile: Douglas Santos
Bot, Botnet, Brute force attack, DDOS, Denial of Service, iot security, security in IOT

The Bricker bot made the news a couple of weeks ago for knocking unsecured IoT devices offline rather than hijacking them into other botnets and using them for a DDoS attack like the massive event we saw last year against Dyn. It is the third botnet to target insecure IoT devices, but the only destructive one. The second, dubbed Hajime, breaks into IoT devices, but instead of bricking them it makes them more secure by disabling remote access to the device from the internet. Mirai was, of course, the first, and it has the same purpose as other botnets: to enslave IoT devices and put the computing power of its collection of bots to work for the threat actor behind it.

While the Bricker bot may not yet be a worm with mass adoption, it could be a precursor of things to come. It shows all the early signs of becoming far more dangerous than it is today as it gains greater appeal.

There are millions of unsecured devices just waiting for someone to hijack them, with hundreds of thousands more coming online every single day. Because so many of these devices have little to no security, they pose a serious risk to the digital economy. As we have seen, because of their pervasive deployment, marshaling them into something like last fall's massive DDoS attack would almost certainly bring a considerable segment of the internet to a grinding halt, disrupting business, affecting services and potentially impacting critical infrastructure.

The Bricker bot is different, as it simply disables the internet connectivity of IoT devices. The alleged reason for the Bricker bot, according to its author, is to highlight the vulnerability of IoT devices. The argument goes that if vendors are not keen on shipping devices that are secure by default, and if the owners aren't concerned about security either, then it is just a matter of time before these devices are breached and become part of a botnet. So, to warn the market about this problem, the Bricker bot author chose to simply knock them offline.

More info about Bricker

The Bricker bot is fairly straightforward. It operates through a couple of Tor exit nodes, continuously scanning the internet for devices with open Telnet and SSH services — more specifically, the Dropbear SSH server, which usually ships with "tiny" Linux distros built around BusyBox (a single binary that bundles common Unix-like utilities for embedded Linux distributions).

When it finds such a service, it attempts a brute force attack, sweeping through the known passwords that usually serve as the defaults on these devices — oftentimes hardcoded directly into the firmware. Once the worm gains access, it runs a few crippling commands on the box that render it inoperable — in some cases, permanently. Even though it is very simple, it can currently identify and destroy more than 80 types of devices. This new form of attack even has its own name: "permanent denial of service," or PDoS.
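As a purely defensive illustration (not the bot's code), here is a minimal Python sketch of how an administrator might check whether a device they own still exposes the Telnet and SSH ports these bots scan for. The device address is a hypothetical placeholder.

# Minimal sketch: check whether a device on your own network still exposes
# Telnet (23) or SSH (22) -- the two services Bricker-style bots scan for.
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    device = "192.168.1.50"  # hypothetical camera/DVR on the local network
    for name, port in (("telnet", 23), ("ssh", 22)):
        if port_open(device, port):
            print(f"{name} ({port}): OPEN -- change default credentials or disable the service")
        else:
            print(f"{name} ({port}): closed")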


Figure 1: In March and April of 2017 we saw a significant increase in the number of attempted brute force attacks against Telnet/SSH

Other applications of such an attack are very likely. Imagine receiving a message from an attacker that says, "Pay us or we will permanently kill your TV (gaming system, printer, router, smart appliance, internet service or other connected device)." This sort of ransomware attack could easily be performed instead of simply bricking the device.

Context


Figure 2: Yearly data on attempted breaches per country

The purported reason behind Bricker, per the author's own words (taken from a hacker forum — identity to be confirmed), is to force a change in the security of IoT devices, which security experts have long known are almost as insecure as leaving your front door open.

This lapse in security encompasses everything, from weak hard-coded passwords to lack of testing for security issues to poor implementation of the networking stack to constant reuse of insecure code across and between vendors of completely different devices. This goes back to Fortinet’s predictions for 2017 that vendors need to accept responsibility for the security of their products, especially as they are becoming more ubiquitous in our lives and the infrastructure that we rely on.

Right now, the ones who are suffering are the individuals and carriers that use these insecure products. But soon enough, the pain is likely to shift to the vendors selling devices built around junk code. And maybe that will be enough to force them to start thinking a little more about security and not just profiteering.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

May 2, 2017  11:08 AM

Get more meaningful data back from your IoT devices, and pay less for it

CJ Boguszewski Profile: CJ Boguszewski
Cellular, cloud, Data Analytics, Data Management, Edge computing, FOG, fog computing, Internet of Things, iot, IoT analytics, IoT applications, IoT data, Opex

The internet of things depends on data. It seems like something that needn't be said any longer, but it bears repeating, because data is one of the biggest barriers to IoT use cases reaching scale deployment. The things sense and act, the cloud stores and computes, and the intelligence applies insight and logic to drive action.

We've had machine-to-machine communications for a long time, and much of the prevailing IoT mindset in the early going has been a SCADA-type mentality of command and control from the central system, with plenty of round-trip data as the server controls and the client obeys. Very little of the cloud and intelligence end of IoT has been fully leveraged thus far.

It’s time to revise that approach

With more compute, storage and processing horsepower at the edge of the network in today's IoT devices, use cases have started to incorporate processing out there (under many paradigm names, inevitably: mist, fog, edge and more). It's becoming more common, for sure. However, while that does bring benefits to certain use cases, the central tenet of IoT remains: plenty of data making its way upstream from devices, sensors and actuators will be the foundation of reaping ongoing benefits.

After all, with the compute, storage and economics of the cloud, why wouldn't you want to get as much meaningful data as possible upon which the intelligence end of IoT can act?

One big barrier to getting more data is paying the price of carrying it

However, there’s still a big barrier in the way: the cellular Opex meters running on the telecommunications core networks which, to date, still carry the vast majority of IoT data.

When the MB meter runs, the toll on data collection remains in place. But is some of the data transmitted by each IoT device, sensor or actuator more meaningful than the rest?

Clearly, we know that IoT generates a lot of data. Just to give you a mental refresher, here’s a quick back-of-the-envelope:

  • Data packet size: 1 KB
  • Number of sensors: 1,000,000
  • Signal frequency: 1x/minute
  • Events per sensor per day: 1,440 (minutes in a day)
  • Total events per day: 1.44 billion (a million sensors)
  • Events per second: ~16,667 (86,400 seconds in a day)
  • Total data size per day: 1.44 TB per day … for the 1 KB packet.

What that means is if you’re paying $1/MB, you’re handing $1,440,000 per day to the cell networks. That’s a daunting number when you hit scale! And most IoT product lines are still in their early stages, so they haven’t really had to pencil out the economics of their solution at scale yet.
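For anyone who wants to sanity-check that arithmetic, here is the same back-of-the-envelope as a short Python script. The inputs (1 KB payload, one million sensors, one report per minute, $1/MB) come straight from the list above, and the decimal conversion of 1,000 KB per MB matches the totals quoted.

# Back-of-the-envelope check of the numbers above.
sensors = 1_000_000
packet_kb = 1.0                        # payload per report, in kilobytes
reports_per_day = 24 * 60              # one report per minute -> 1,440
events_per_day = sensors * reports_per_day             # 1,440,000,000
events_per_second = events_per_day / 86_400            # ~16,667
data_mb_per_day = events_per_day * packet_kb / 1_000   # 1,440,000 MB (~1.44 TB)
cost_per_day = data_mb_per_day * 1.0                   # at $1 per MB

print(f"Events per day:    {events_per_day:,}")
print(f"Events per second: {events_per_second:,.0f}")
print(f"Data per day:      {data_mb_per_day:,.0f} MB (~{data_mb_per_day / 1_000_000:.2f} TB)")
print(f"Cost per day:      ${cost_per_day:,.0f}")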

Big for the IoT developer, tiny for the carrier

Believe it or not, that 1 million SIM deployment generating $1.44 million/day is very small for the MNOs, whose focus remains users of expensive smartphones with high ARPUs. And so they frequently look at IoT as a secondary source of traffic and revenue on their core networks.

Also, as it turns out, transmitting that data over the cellular network requires, because of the security requirements, surrounding the "useful" data with plenty of extraneous bits that serve no later purpose for the things we'd like to do with the data in the cloud. While it's necessary for the sensors to report at the signal frequency that the use case demands, it isn't necessary to transmit all of that 1 KB of data in each packet.

Why not?

Well, if the sensors were secured on a virtual private connection, for example, where the device is unreachable from the "internet," it'd be possible to eliminate a fair portion of that 1 KB. Extraneous packet header and security information can be expunged from the transmission, as those duties can be handled by cloud-side adapters tasked with managing those attributes. Every bit eliminated means every dollar spent getting data into the IoT application's data store buys data that is valuable for further processing in analytics, machine learning, business intelligence and other upstream applications.

Strip out the repetitive, extraneous bits — but also leverage edge compute

Building on the point that removing communication overhead is key when collecting lots of small, minimized transmissions, it's also worth adding some capability to filter data at the edge/in the fog/in the mist before sending it to the cloud. For some use cases it is more important to reduce the data itself at the edge — for example, a surveillance camera sending an alert when a person passes in front of it, while cats are recognized, filtered and subsequently ignored. Combining those two approaches — reducing events through edge processing and sending only the important data with minimal overhead — is the way to go.
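Here is a minimal sketch of that camera example, assuming a Python-capable edge device; the detector and publish hooks are placeholders for whatever the hardware actually runs. Only person events produce an upstream message, and even then only a compact alert rather than a raw frame.

# Sketch of edge-side filtering: only events that matter (a person, not a cat)
# are forwarded upstream, and only as a compact summary.
import json
import time

def detect_objects(frame):
    """Placeholder for on-device inference; return labels seen in the frame."""
    return []  # e.g., ["cat"] or ["person"] from a real model

def process_frame(frame, publish):
    labels = detect_objects(frame)
    if "person" in labels:
        # Forward a small alert record instead of the raw frame.
        publish(json.dumps({"event": "person_detected", "ts": time.time()}))
    # Cats and empty frames are filtered out here -- nothing is transmitted.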

In summary, let the cellular Opex meter collect its (reduced) toll on meaningful data transmissions and wring the repetitive data out of the stream. Architecting with an eye on virtual privacy for the network of devices, sensors and actuators that form the foundation of IoT is crucial to achieving precisely that.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


May 1, 2017  4:40 PM

Incorporating location into the IoT experience

Sudheer Matta Profile: Sudheer Matta
BLE, Bluetooth, devices, Internet of Things, iot, IoT applications, Wireless Access Points

The internet of things is all about connecting people and things and enabling objects to be sensed and/or controlled remotely across any network infrastructure. In a business environment, that often means smarter control of heating, ventilation and air conditioning systems, lights, security cameras, etc. — resulting in huge efficiencies and cost savings.

Bluetooth Low Energy (BLE) is a relatively new technology used in enterprises, hospitals, stores, hotels, etc. for indoor location services. BLE enables contextual engagement with mobile users, such as turn-by-turn directions across a campus and proximity-based messages. It can also be used to track strategic assets (for example, wheelchairs, pallets and forklifts) and people (e.g., children and patients).

BLE is becoming very prevalent in IoT to create some amazing experiences. Below are three real-world examples:

  • Motion sensors can be used in hospitals with BLE-enabled infusion pumps to determine with a high degree of certainty whether those devices are located inside (or outside) clean rooms, indicating whether they are clean or dirty.
  • BLE-enabled defibrillators can be tracked throughout a mall, triggering local security cameras to monitor the situation and dispatch emergency responders as needed.
  • Thermostats in a conference room can be adjusted based on the temperature preferences of the attendees in the room.

How does it work?

Modern wireless access points (APs) are equipped with BLE antenna arrays that can receive signals (i.e., beacons) from BLE-enabled devices, such as smartwatches, Fitbits, headsets, badges or tags. On the client side, Google and Apple introduced the Eddystone and iBeacon protocols, respectively, which use BLE for engagement and enable Android and iOS devices to scan for BLE signals. APs then leverage machine learning in the cloud to determine the location of those devices based on path loss formulas and probability surfaces, and can then deliver contextual services and information to those client devices.
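As a rough illustration of the path loss idea, here is a common log-distance estimate for turning a single BLE RSSI reading into an approximate range. The reference transmit power and path-loss exponent are assumptions that real deployments calibrate per site, and production systems combine many such readings with the probability surfaces and cloud machine learning described above rather than trusting one sample.

# Log-distance path-loss estimate from a single BLE RSSI reading.
# tx_power_dbm is the expected RSSI at 1 m; both parameters are illustrative.
def estimate_distance_m(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
    """Estimate distance in metres from one RSSI sample."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

print(round(estimate_distance_m(-70), 1))  # roughly 3.5 m with these assumptions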

These APs also have a port that can be used to send/receive inputs to/from IoT devices. This creates a common point of convergence for both mobile devices and IoT objects, enabling BLE location data to be used for smarter IoT event handling. This, of course, assumes that intelligent workflows are built on top of the infrastructure to apply the right actions to the information received.

(It is worthwhile to point out that modern access points also support Wi-Fi. In other words, Wi-Fi connectivity, BLE location and IoT can all be integrated together into a common network infrastructure for even greater functionality and cost savings. But for the purpose of this article, we are focusing just on the advantages of using BLE and IoT together.)

As devices become smarter and more connected, it is only natural that location enters into the equation for better contextual experiences. With new wireless networks that support BLE and IoT, this is finally possible today.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


April 28, 2017  3:20 PM

Avoiding industrial IoT digital exhaust with machine learning

Dean Hamilton Profile: Dean Hamilton
Artificial intelligence, Data Management, Data storage, fog computing, IIoT, Industrial IoT, IoT analytics, IoT data, Machine learning, Predictive Analytics

The internet of things is speeding from concept to reality, with sensors and smart connected devices feeding us a deluge of data 24/7. A study conducted by Cisco estimates that IoT devices will generate 600 zettabytes of data per year by 2020. Most of that data is likely to be generated by the automotive, manufacturing, heavy industrial and energy sectors. Such massive growth in industrial IoT data suggests we're about to enter a new industrial revolution in which industries undergo as radical a transformation as they did in the first. With the Industry 4.0 factory automation trend catching on, data-driven artificial intelligence promises to create cyber-physical systems that learn as they grow, predict failures before they impact performance, and connect factories and supply chains more efficiently than we could ever have imagined. In this brave new world, precise and timely data provided by low-cost, connected IoT sensors is the coin of the realm, potentially reshaping entire industries and upsetting the balance of power between large incumbents and ambitious, nimble startups.

But as valuable as this sensor data is, the challenges of achieving this utopian vision are often underestimated. A torrent of even the most precise and timely data will likely create more problems than it solves if a company is unprepared to handle it. The result is what I refer to as “digital exhaust.” The term digital exhaust can either refer to undesirable leakage of valuable (often personal) data that can be abused by bad actors using the internet or simply to data that goes to waste. This article discusses the latter use, data that never generates any value.

A 2015 report by the McKinsey Global Institute estimated that on an oil rig with 30,000 sensors “only 1% of the data are examined.” Another industry study suggests that only one-third of companies collecting IoT data were actually using it. So where does this unused data go? A large volume of IIoT data simply disappears milliseconds after it is created. It is created by sensors, examined locally (either by an IoT device or a gateway) and discarded because it is not considered valuable enough for retention. The majority of the rest goes into what I often think of as digital landfills — vast storage repositories where data is buried and quickly forgotten. Often the decisions about which data to discard, store and/or examine closely are driven by a short-term perspective of the value propositions for which the IIoT application was created. But that short-term perspective can place a company at a competitive disadvantage over a longer period. Large, archival data sets can make enormous contributions to developing effective analytic models that can be used for anomaly detection and predictive analytics.

To avoid IIoT digital exhaust and preserve the potential latent value of IIoT data, enterprises need to develop long-term IIoT data retention and governance policies that will ensure they can evolve and enrich their IoT value proposition over time and harness IIoT data as a strategic asset. While it is helpful for a business to have a clear strategic roadmap for the evolution of its IIoT applications, most organizations simply do not have the foresight to properly assess the full range of potential business opportunities for their IIoT data. These opportunities will eventually emerge over time. But by taking the time to carefully evaluate strategies for IIoT data retention, an enterprise can lay a good foundation upon which future value can be built.

So how can I avoid discarding data that might provide valuable insight or be monetizable, while still not storing everything? If storage cost and network bandwidth were unlimited, the answer would be easy: simply sample sensor data at the highest resolution possible and send every sample over the network for archival storage. However, for many IIoT applications with large numbers of sensors and high-frequency sampling, this approach is impractical. A balance must be struck between sampling data at a high enough rate to enable the responsiveness of real-time automated logic and preserving data at a low enough rate to be economically sustainable.
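One hedged sketch of what striking that balance can look like: summarize each window of high-frequency samples into a handful of statistics, but retain the raw samples whenever a reading crosses an anomaly threshold. The window contents and threshold below are illustrative only.

# Window summarization that keeps full resolution only around anomalies.
from statistics import mean

def summarize_window(samples, anomaly_threshold=80.0):
    summary = {
        "n": len(samples),
        "min": min(samples),
        "max": max(samples),
        "mean": round(mean(samples), 2),
    }
    if any(s > anomaly_threshold for s in samples):
        summary["raw"] = samples  # retain raw readings around the anomaly
    return summary

window = [71.2, 70.8, 71.5, 92.3, 71.1]   # one window of sensor readings
print(summarize_window(window))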

Is artificial intelligence the answer?

AI software algorithms that can emulate certain aspects of human cognition are becoming increasingly commonplace and accessible to open source communities. In recent years, the capabilities of these algorithms have improved to the point where they can approximate human performance in certain narrow tasks, such as image and speech recognition and language translation. They often exceed human performance in their ability to identify anomalies, patterns and correlation in certain data sets that are too large to be meaningfully evaluated using traditional analytic dashboards. And their ability to learn continually, while using that knowledge to make accurate and valuable predictions as data sets grow, increasingly impacts our daily lives. Whether it’s Amazon recommending books or movies, your bank’s fraud detection department giving you a call, or even self-driving cars now being tested — machine learning AI algorithms are transforming our world.

Many consider artificial intelligence critical to quickly obtaining valuable insight from IIoT data that might otherwise go to waste by surfacing critical operational anomalies, patterns and correlations. AI can also play a valuable role in identifying important data for retention. But employing AI in IIoT settings is not as simple as it sounds. Sure, AI cloud services (such as IBM’s Watson IoT and Microsoft’s Cortana) can be fed data and generate insight in a growing number of areas. However, IIoT poses some special challenges that make using AI to decide which (and how much) data to retain a non-trivial exercise.

The role of AI with fog computing

Companies facing the challenge of choosing between storing all raw IIoT sensor data generated or first blindly summarizing data in-flight (perhaps on an edge gateway) before transmission for long-term storage are often forced to choose the summarization approach. However, choosing the wrong summarization methodology can result in a loss of fidelity and missing meaningful events that could help improve your business. While consolidating data from many machines can allow the development of sophisticated analytic models, the ability to analyze and process time-critical data closest to where the data is generated can enhance the responsiveness and scalability of IIoT applications.

A major improvement over blind summarization would be to have time-critical complex analytic processing (such as algorithms for predictive failure analysis) operationalized on far-edge devices or gateways where they can process entire IoT data streams and respond as needed in real time. Less time-sensitive analytic intelligence and business logic can be centralized in the cloud and can use a summarized subset of the data. The AI algorithm can help to determine how much summarization is appropriate, based on a real-time view of the raw sensor data. This is where fog computing can play a role. Fog computing is a distributed computing architecture that emphasizes the use of far-edge intelligence for complex event processing. When AI is leveraged in a fog computing model, smarter real-time decisions (including decisions about when to preserve data) based on predictive intelligence can be made closest to where the data originates. However, this is not as easy to accomplish as it sounds. Edge devices often do not have sufficient computational and memory resources to accommodate high-performance execution of predictive models. While new fog computing devices capable of efficiently employing pre-trained AI models are now emerging, they may not have visibility into sufficiently large or diverse data sets to train sophisticated AI models. Obtaining large and diverse data sets still requires consolidation of data generated across many edge devices.

A practical compromise IoT architecture must first employ some centralized (cloud) aggregation and processing of raw IoT sensor data for training useful machine learning models, followed by far-edge execution and refinement of those models. In many industrial environments, this centralization will have to be on-premises (for both cost and security reasons), making private IoT data ingestion, processing and storage an important part of any IIoT architecture worth considering. But public IoT clouds can also play a role to enable sharing of insight across geographic and organizational boundaries (for example with distribution and supply chain partners). A multi-tiered architecture (involving far-edge, private cloud and public cloud) can provide an excellent balance between local responsiveness and consolidated machine learning, while maintaining privacy for proprietary data sets. Key to realizing such a multi-tiered architecture is the ability to employ ML at each tier and to dynamically adapt data retention and summarization policies in real time.

Adaptive federated ML

Federated ML (FML) is a technique where machine learning models can be operationalized and refined at the far edge of the network, while still contributing to the development of richer centralized machine learning models. Local refinements to far-edge models can be summarized and sent up one level for contribution to refinement of a consolidated model at the next tier. Far-edge devices across production lines within a factory can contribute to the development of a sophisticated factory-level model that consolidates learning from all related devices and production lines within that factory. Refinements to the factory-level model can be pushed up to an enterprise-wide model that incorporates learning across all factories.
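A minimal sketch of that federated step, with plain weight vectors standing in for real model parameters: each edge site reports how many samples it trained on along with its local weights, and the next tier forms a sample-weighted average as its consolidated model.

# Sample-weighted averaging of locally trained parameter vectors.
def aggregate(site_updates):
    """site_updates: list of (num_samples, weights) tuples from edge sites."""
    total = sum(n for n, _ in site_updates)
    dim = len(site_updates[0][1])
    merged = [0.0] * dim
    for n, weights in site_updates:
        for i, w in enumerate(weights):
            merged[i] += (n / total) * w
    return merged

# Two production lines contribute updates; the factory-level model is their
# sample-weighted average, which can in turn be pushed up to the enterprise tier.
factory_model = aggregate([(500, [0.12, 0.80]), (1500, [0.10, 0.76])])
print(factory_model)  # [0.105, 0.77]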

Adaptive federated ML (AFML) takes FML a few steps further. First, as the centralized models evolve, they are pushed out to replace the models at lower tiers, allowing the entire system to learn. Second, when an uncharacterized anomaly is detected by the AI at the far-edge, the system adapts by uploading a cache of high-resolution raw data around the anomaly for archival and to allow for detailed analysis and characterization of the anomaly. Finally, these systems may also temporarily increase the sampling rate and/or reduce the summarization interval to provide a higher resolution view of the next occurrence of the anomaly.

Here’s an example of how an AFML approach works:

Adaptive federated machine learning training mode

AFML: INITIAL TRAINING MODE:

  1. All raw sensor data is sent from the IoT devices to centralized, on-premises private IoT storage for the time required to aggregate a sufficiently large data set for training of effective AI models.
  2. An on-premises big-data, cluster-computing environment is then used to train AI models for anomaly and predictive analysis.
  3. Once the models have been trained, they are pushed down to fog computing devices and up to the enterprise level for consolidation. Centralized aggregation of raw data ceases and the system switches to production mode.

Adaptive federated production mode

AFML: PRODUCTION MODE:

  1. Complex event processors on the fog computing devices use the AI model to analyze all data in real-time and provide their insight to local supervisory logic and the private cloud.
  2. When everything is nominal, only summarized data are forwarded to the centralized IoT cloud for archival.
  3. Whenever deviations from the model are flagged by the fog device AI, the supervisory logic does three things:
    1. Executes any local rules in place for that predicted failure
    2. Sends a summary of the model deviations to the on-premises IoT cloud (for updating of the consolidated model)
    3. If the exception is an uncharacterized anomaly, sends to the IoT cloud a cache of raw data that surrounds the anomaly
  4. A batch process at each IoT cloud tier routinely retrains the machine learning models (using model deviation data and raw data) and periodically pushes down the upgraded model to the lower tier.
  5. Finally, selected data subsets that can be used by partners are sent to the public cloud for further exposure.

So instead of simply choosing to blindly summarize IIoT data at the edge (generating massive data exhaust), complex predictive analytic models can be employed directly on the far-edge devices where all sensor data can be examined. These analytic models can not only inform timely local supervisory decisions, they can learn from data generated beyond their reach and can also dictate how much of the raw data is worth preserving centrally at any moment in time. With this approach, the entire system will react quickly and get smarter over time.
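To make that production-mode flow concrete, here is a hedged sketch of the edge-side supervisory loop: summaries by default, local rules plus a deviation summary when the model flags something, and an upload of the surrounding raw-data cache for uncharacterized anomalies. The model, thresholds and transport hooks are all placeholders.

# Hedged sketch of the production-mode loop described above. `model`,
# `send_summary`, `send_raw_cache` and `local_rules` stand in for the fog
# device's trained model, transport and rule engine.
from collections import deque

RAW_CACHE = deque(maxlen=600)  # rolling window of recent raw samples

def on_sample(sample, model, send_summary, send_raw_cache, local_rules):
    RAW_CACHE.append(sample)
    deviation = model.score(sample)        # how far the sample strays from the model
    if deviation < 0.9:
        return                             # nominal: only periodic summaries go upstream
    local_rules(sample, deviation)         # 1. execute local rules for the predicted failure
    send_summary({"deviation": deviation}) # 2. send a model-deviation summary to the on-prem cloud
    if deviation > 0.99:                   # 3. uncharacterized anomaly:
        send_raw_cache(list(RAW_CACHE))    #    upload the surrounding raw data for retraining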

Choosing the right technologies for AFML

The key to making an AFML architecture work is selecting IoT and analytics tools that are designed with this model in mind. You will need an on-premises IoT and analytics cloud infrastructure that is efficient, flexible, scalable and modular. You will need to be able to easily orchestrate the interactions of your key architectural building blocks. The tools you select should allow you to easily adapt to the shifting needs of your business, allowing you to experiment and innovate.

Although we are starting to see a proliferation of IoT tools that can be used for developing various IIoT solutions, these tools are seldom well-integrated. Industrial enterprises typically look to systems integrators to bridge the gaps with custom software development. The result is often a confusing, inflexible, costly and unsupportable mishmash of technologies that have been loosely cobbled together.

Thankfully, a few IoT vendors are now beginning to build more fully-integrated IoT service creation and enrichment platforms (SCEPs), designed to support an AFML IIoT architecture. SCEPs allow complex IoT architectures, applications and orchestrations to be efficiently created and evolved with minimal programming and administrative effort. These next-generation IoT platforms will help companies eliminate IoT data exhaust and harness IIoT data for use as a strategic business asset.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


April 28, 2017  3:18 PM

Match your real-time operating system to a cutting-edge platform

Kim Rowe Profile: Kim Rowe
Internet of Things, iot, IoT platform, iot security, operating system, OS

Today's competitive world of embedded systems and the internet of things places ever greater demands on developers. They need to produce products that optimize size, weight and power and that have focused yet flexible feature sets, so they can be quickly adapted to meet changing customer requirements. In terms of processors, this means a family built around a common, high-performance, low-power core that, as a series, offers a range of on-chip functions and peripherals while presenting a compatible platform for software. This newer generation tends to be organized into families that share a common instruction set and scale from small to large applications, with I/O, on-chip flash and RAM in market-leading sizes.

Matching a real-time operating system (RTOS) to such a processor family requires that it be able to scale smoothly along the lines of power and performance offered by the processor line. Ideally it should have a familiar, standard API such as POSIX, the portable operating system interface standard that Linux also follows, so existing code and skills carry over to embedded systems. It must have a set of core characteristics in a compact package along with a wide selection of tested and proven functional software modules that can be quickly integrated to match the application, the selected processor and the mix of on-chip and off-chip peripherals needed.

Many vendors now offer integrated development kits that include a single-board CPU with an integrated RTOS along with peripherals and tools to help the developer get started immediately adding innovative value. Such an integrated platform supports eight principles required for fast and efficient IoT development:

  1. Lean
  2. Adaptable
  3. Secure
  4. Safe
  5. Connected
  6. Complete
  7. Cloud-compatible
  8. Cutting edge

These features not only help guide development toward a focused design and meeting time-to-market demands, they also cover the functional considerations needed to work effectively in the internet of things.

The platform’s rich modularity supports the lean development model by providing standardization, interchangeability of drivers, protocols and service modules, and portability of applications. This lets developers quickly adapt to changing customer requirements in the midst of a project. The integrated platform optimizes both hardware and software design on the go.

Those same features make it adaptable — able to meet new market demands for features and functionality. Once a product is in place with a customer, the OEM must be able to quickly react to calls for additional features and expanded functionality — or even a smaller, lower-cost version of a product. Existing code can be moved to a higher performance processor and new features quickly added without serious revision of existing code.

Security and safety go hand-in-hand and must be designed in from the ground up. If it can be hacked, it isn't safe. Security begins with a secure initial design and extends through communication protocols, authentication strategies such as passwords, electronic keys and physical recognition, secure booting, encryption and more. However, the judicious selection of the basic system architecture, hardware and software is also a key requirement.

Two main features help ensure safety in systems. Determinism guarantees quick response to threatening conditions and makes the operation of the system predictable so that it can be reliably tested to meet strict timing requirements. Emergency stop with zero boot time means that a device can be halted instantly and restarted immediately if required. Thus an unsafe condition can be halted at once and the device brought back to a safe state, or diverted to an action that deals with the emergency.

The internet of things is connected and must accommodate a very broad range of sensors, both wired and wireless, which means supporting the full range of wired and wireless connectivity. An RTOS that can deliver virtually any connectivity solution off the shelf, selected and integrated into the design as needed, is complete in the sense that it contains everything you are likely to need as a project evolves.

The protocol offering should include those used on the cloud side to easily and securely connect, transfer data and process commands. This requires up-to-date components and tools for the cloud and its newer applications, along with the ability to work with leading-edge services like Microsoft Azure. It also means the best development tools: a wide selection of IDE offerings, plus advanced support tools for tracking memory and objects (such as dials, gauges and charts for variable displays), and timing tools and displays for understanding scheduling, interrupt processing and more.

The incorporation of these latest tools, protocols and technologies, and their availability across a matched RTOS and processor family, makes this a truly cutting-edge platform.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


April 27, 2017  3:21 PM

GDPR: It’s more than a regulation, it’s an opportunity

Eve Maler Profile: Eve Maler
Data privacy, GDPR, Internet of Things, iot, IoT data, iot security, privacy, regulatory compliance, trust

Industries of all kinds face governmental regulation. It's the price of doing business in civilized societies. But how businesses approach regulations has enormous bearing on organizational success. Earlier this month, we got a stark reminder of the huge gulf between acting "by the book" and, well, something more, when United Airlines forcibly removed a paying passenger from a flight leaving Chicago. The airline's choices in this case earned it a PR drubbing and a serious erosion of customer trust, with more pain likely to come.

Air travel is a highly regulated consumer service industry. A range of options is always available when considering how to handle the challenges of trying to fit crew members onto an already full flight. Speaking of “having to re-accommodate” customers reflects a by-the-book, compliance-oriented mindset. If that’s one end of the range, what’s the other end? Seeking to build trusted relationships with customers and end users of your product by offering them transparency and control.

What does dragging and dropping passengers have to do with privacy?

What does all this have to do with the EU General Data Protection Regulation — or the internet of things? Good question. It’s clear that IoT is leading to an explosion of business opportunities. But companies attempting to develop their own IoT solutions from soup to nuts are quickly learning that providing safe, secure, privacy-sensitive interactions is extraordinarily difficult. At the same time, as May 25, 2018, the date of GDPR implementation, approaches, this far-reaching regulation looks likely to be the central governing framework for consumer-oriented companies pursuing IoT business models across the globe. If your organization works with the personal information of anyone in the EU, whether you’re based there or not, GDPR applies to you.

What are the regulatory objectives of GDPR? Data privacy with choice and control: strengthening the exercise of fundamental privacy rights of individuals and putting users back in control of their personal data.

The lesson here for any organization looking toward realigning its privacy practices for May 2018 couldn’t be clearer: Never miss out on an opportunity to create a trusted digital relationship. Too often organizations fall into process-oriented thinking: “Technically speaking we’re in compliance here, so it’s all good.” You fall into this trap at your peril!

So what to do? Take these steps to make progress in your IoT data privacy journey and get ready for your GDPR close-up with actual users.

Step 1: Identify where digital transformation opportunities and user trust risks intersect

We know IoT is driving all the interesting business opportunities, from connected clothing and athletic gear to breakthrough smart health devices. But where have data flows lain fallow because it’s impossible to build them securely and in a compliant fashion? These unacceptable trust risks can potentially be made acceptable if the right stakeholders from the privacy and business professional sides of the house can work together. You can work to bring these dark data relationships into the light once you know what they are.

Step 2: Conceive of personal data as a joint asset

Often businesses — or at least their marketing departments — become quite proprietary about the personal data they collect from consumers. However, in the GDPR era, that’s simply not a useful mindset. Thinking of users’ personal data as something you both have a stake in sets you up for success. It puts you into your users’ shoes, which is always useful because on another day, for another product or service, you yourself are “just another user.” It’s also good for compliance, since regulations do tend to change and grow (new GDPR guidance documents are coming out at a rapid clip lately).

Step 3: Lean in to consent

GDPR defines six legal bases for processing personal data. One of them is consent, which, if used, gives various information management freedoms and responsibilities to an organization. Crucially, it also comes with user trust implications. Some of the other bases, like "in the exercise of official authority vested in the controller," essentially tie the organization's hands. Others, like "necessary for the purposes of legitimate interests pursued by the controller or a third party," could be used for trust-destroying mischief.

Step 4: Take advantage of identity and access management for building trust

The IoT world seems to be quickly picking up on a lesson that the web and API worlds took a lot longer to learn: Adding security and privacy features is a lot harder if you don’t have a means of checking for authenticated identities and then authorizing their access. Identity and access management infrastructure has a great deal to offer toward building trusted digital relationships.

Think of building trust in layers of identity support. The data protection layer lays the groundwork for your organization’s trustworthiness against security breaches; it includes identity data governance and building a single view of the customer across what may be many individual smart devices and applications. The data transparency layer ensures all these devices and apps have proper terms of service, privacy notices, federated connections to other systems and provable consent; this is about giving users a single view of their consents. The data control layer ensures users can proactively start, monitor and stop sharing as they see fit at a fine grain — for example, deciding to give access to insulin pump data, or even to the pump’s control functions, to a doctor or caregiver.

Trusted digital relationships with users are yours to lose

The GDPR is intended to be one of the most contemporary regulations in a long time. That's lucky for all of us, since the internet of things is one of the fastest-moving technology and business spaces in a long time.

To prepare for the GDPR, organizations need to go beyond data protection and embrace data transparency and data control. The choices you make about your customers’ data increasingly reflect on not just your data protection officer’s actions but your entire business model. Addressing user trust risks is certainly something you can do something about; the more important question might be whether you can afford not to.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


April 27, 2017  3:19 PM

How to design IoT products and services strategically

Ken Figueredo Profile: Ken Figueredo
Data Management, Development, Enterprise IoT, Internet of Things, iot, IoT hardware, IoT platform, IoT sensors

The internet of things has reached mainstream status on the evidence that corporate managers and business unit heads from established businesses (as distinct from startups) are responding to the strategic implications of IoT.

All too often, however, their approach is technology led. It is reminiscent of the way that organizations reacted to the commercial internet some 20 years ago. Today’s preoccupation is with connectivity and selecting the “right” technology from different wireless alternatives. To some extent, technology suppliers are inflating the merits of different approaches. First mover, proprietary offerings (LoRa, Ingenu, SigFox) are trying to advance their solutions relative to the lumbering response of the mobile industry (LTE-M, NB-IoT family of technologies).

The focus on technology is a missed opportunity for businesses to leapfrog to new sources of innovation and business strategy. How much more might businesses achieve by working back from commercialization and innovation ideas to define their technology needs?

Consider how many businesses develop their IoT solutions at present. The process begins by finding some way to “test the waters.” This involves adding connectivity to an existing product or sensor and using the data to enable an IoT application. Many organizations find value simply from visualizing the data that their products report back. Often, this triggers some interesting insights (e.g., product usage patterns) which can lead to a follow-up action (e.g., to introduce product design enhancements, to adjust maintenance procedures, to interact with the end user etc.).

With a successful proof of concept, an organization would then roll out its solution operationally, taking the first steps to "build IoT into the business." This involves modifying operational procedures and introducing new technologies, such as an IoT platform, to manage and support a population of connected devices. An example of a self-contained IoT solution in the smart city context is a "smart" streetlight system that dynamically adjusts lighting while reducing energy consumption and maintenance costs.

Figure: Planning IoT product management strategically

Over time and with greater experience, many organizations wonder, “What else can IoT do?” They may decide to support multiple IoT applications. Or they may develop new applications and improve the performance of existing ones by integrating third-party data. Interoperating across multiple (siloed) applications exposes yet another set of possibilities.

Eventually, businesses start to think about ecosystems (closed and open variants) which are the basis for collaboration with partners to make their connected devices accessible to external IoT applications. Ecosystems, which are also known as multisided business models, also expose new prospects to monetize data through innovative application opportunities.

This bottom-up experimentation, from device to multisided applications, tends to be highly tactical. In going through the process of “testing the waters,” an organization is likely to tie itself to a single-technology, proprietary approach through a narrowly scoped in-house development project or by partnering with an off-the-shelf vendor solution. Investment outlay would be an overriding consideration. At the point of operational deployment, its IoT platform would contain the bare minimum of elements needed to deploy and manage a population of connected devices. Typically, these would include software-based tools to manage service activation, connectivity service quality, remote-device profiles, application development and the user interface. And finally, any expansion of the initial application would rely on systems integration investments to graft on new features.

Organizations that recognize the full potential of IoT take a different approach. They build on a long-term vision, deploying investment resources intelligently to anticipate future needs. By prioritizing value creation, to the right-hand side of the illustration, organizations can explore new ways of doing business based on the many different commercial concepts that IoT can enable. Examples include better use of data along the supply chain and data sharing with non-traditional business partners, including those from other business sectors, to enable new services and revenue streams. In practical terms, this implies the need for common data models and technologies to enforce commercial and legal policies around data ownership and monetization.

Working back, the strategic approach to “building IoT into the business” favors an IoT platform whose capabilities are designed to be extended over time. This means having the ability to:

  • Accommodate a growing number of sensor types;
  • Function as a data exchange that can handle multiple data feeds (own and third-party sources); and
  • Support new business models (e.g., alternative pricing schemes, sharing and monetization of IoT data with multiple partners via configurable commercial and privacy policies).

Applying this perspective means that the choice of technologies to create an early application is influenced less by pure cost considerations and more by total cost of ownership, technology-neutral and open standard factors.

With IoT becoming a fixture on the competitive landscape, very few organizations will escape its influence or the resulting impact on business operations, product roadmaps and technology investment plans.

Businesses can choose to invest for the long term, applying lessons from the commercial internet era. They can focus less on connectivity and more on preparing for new service concepts to capitalize on their connected products. The risk of acting tactically is to under-leverage investments and overlook chances to capitalize on innovative business opportunities.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


April 26, 2017  4:02 PM

How the internet of things is opening possibilities beyond the home

Rob Martens Profile: Rob Martens
Consumer IoT, consumers, End-user, Enterprise IoT, Internet of Things, iot, IoT applications, Smart Building, smart home, User experience

Most people have heard of the internet of things by now, but few are familiar with the possibilities this platform opens for businesses and consumers alike. The platform, which has long been associated with the smart home, offers real and tangible value beyond the cool factor. Consumers and businesses fail to recognize that the internet of things is already a key component of the daily operations of companies in several industries, such as fitness, agriculture and fashion.

A sensor alone does not make a product or technology part of the "internet of things"; it's all about creating a holistic experience for the end user. It's time consumers and businesses drop that notion and realize the positive impacts this technology has or could have. From the clothes we wear to the food we eat, IoT is creating new conveniences and helping us maximize our resources every day.

Beyond fitness trackers, IoT taps into equipment

Most people are familiar with fitness-tracking wearables like Fitbit. IoT is also taking exercise machines a step further into the future by creating realistic and tangible user experiences. The Peloton Cycle Exercise Bike, for example, comes with a screen that allows you to connect to real-life classes hosted by fitness trainers, giving users access to world-class instructors and allowing them to compete with other users from the comfort of home. With the busy schedules many of us keep, it's easy for self-care to fall to the bottom of the priority list. Connected fitness machines make it easier to prioritize fitness by saving time spent traveling to the gym and encouraging healthy competition between users all over the world.

Hotels are even catching on, providing Peloton bikes within the room so guests can get their exercise without heading down to the gym. This is a smart idea in that it differentiates those hotels and draws guests away from the competition by offering a premium experience. The bike’s ability to import data from your wearables means you have access to the latest health metrics whether you’re using a hotel’s bike or your own. And perhaps best of all, when Peloton added tech, it didn’t sacrifice good design. The bike is one of the most attractive fitness machines on the market, and sturdily made so it can handle intense exercise sessions.

IoT provides solutions to farmers and harvesters

IoT is even helping to maximize our agricultural resources. Farmers face immense challenges in creating a steady crop yield year over year — unpredictable weather, water shortages, and limited availability of land can impact the yield significantly. Brilliant IoT innovators have been focusing their expertise in this industry for years to create some pretty amazing resources to help solve those problems, increasing the quality, quantity and sustainability of agricultural production.

Large farms can now use remote sensors to monitor the amount of moisture in the soil and even the amount of growth the crop has achieved. Harvesters can now be controlled from anywhere in the world, provided there’s an internet connection available. Inventors have even begun using artificial intelligence to analyze metrics about the crop and even weather patterns to make predictions about the future, taking some of the guesswork out of a traditionally unpredictable industry. This is not just great news for farmers — it’s great news for everyone because it increases crop yields to make food more widely available.

Fashion meets technology

The fashion world is even beginning to catch onto the many benefits of IoT. Most people today appreciate the ability to customize; we want our clothes, our homes, our cars, everything we own to not only behave in ways that make the day go smoothly and conveniently, but to reflect who we are as people. We live in an increasingly connected world in which we interact with new people more frequently. First impressions are more important than ever, so the ability to express who you are through the clothes you wear has become equally important.

Services like NikeID allow you to customize your product (in this case, shoes) from the ground up. You start with a blank canvas, choose the basic structure of the shoe based on the functionality you want, and then you get to customize the fun stuff — materials used, colors and graphic prints. You can even choose how the bottom of the shoe looks. Another service called Trunk Club behaves as a personal fashion assistant, a luxury typically reserved for the rich and famous that's becoming available to all thanks to IoT. The service shows you several style combinations and asks for your opinion, building a profile of your tastes that allows it to make suggestions based on what it knows you like. It even seeks out the clothes that are best for your body type, as well as those that fit within your budget, so you aren't pressed into spending more than you normally would.

The reason these innovations work is that they create a holistic experience for the end user. It’s not about the latest gadget or simply adding sensors to everyday items — it’s about integrating technology into your lifestyle in a way that makes sense and creates convenience and efficiency. It’s about creating a beautiful user interface, in terms of both usability and design. It’s about providing you with the essential knowledge you need in order to be as successful as you can. In its simplest terms, IoT is about making life better, and the opportunities to do that are boundless. Every industry on the planet, and even the planet itself, stands to benefit from technology used in this way.

To learn more about Schlage and Rob Martens’ stance on the tech landscape, please click here.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


April 26, 2017  1:55 PM

Five things to know about the future of microservices and IoT

Nishant Patel Profile: Nishant Patel
Enterprise IoT, Internet of Things, iot, IoT applications, Microservices, Software development

Microservices are emerging as a preferred way to create enterprise applications. Just as with mobile app development adoption five years ago, a lack of expertise can slow some companies in their pursuit. However, with IoT development on the rise, it's inevitable that microservices will become the architecture of choice for developers — today and tomorrow.

Although the approach has received criticism for not fitting into certain DevOps cultures, microservices are increasingly being adopted and gaining fans across numerous industries. Large-scale online services like Amazon, Netflix and Twitter have all evolved from monolithic technology stacks to a microservices-driven architecture, which has allowed them to scale to their size today. Microservices are ideal for supporting a range of platforms and devices spanning web, mobile and IoT (including wearables). When developing for IoT specifically, here are five considerations for why the future is bright with microservices:

  1. Lower cost: IoT sensors and devices are fairly affordable today. That said, it is almost always more cost effective to roll out hundreds of small sensors that each do one thing really well instead of opting for fewer, but more powerful and more pricey options. One big reason for this is that no matter the device, in just a few years, most will become “obsolete” or superseded by more sophisticated, more cost-effective alternatives. The beauty of going with the simpler hardware is that you can rely on microservices to add value and fill functional gaps. You can also gradually roll out the network and continue to upgrade and maintain it in a cost-effective manner as individual components get replaced. Done right, this means you’re never in a position where you have to replace an entire monolithic system in one go.
  2. Faster innovation: The world of IoT deployments is generally still very much in beta. Although there are already billions of cool and useful devices deployed, we’re still only scratching the surface when it comes to unlocking their full potential. A microservices development approach allows you to unlock innovation (and thus value) faster by making it easy to test new combinations of “things” and “services.” No need to build an entire technology stack or invest in big infrastructure over many months. With microservices, you can tinker and test to your heart’s content and quickly reap the benefits of innovative solutions to your problems.
  3. Isolated risk: Assembling your solution via microservices allows you to adjust and iterate quickly, thus avoiding the danger of missing the mark. You can do this without having to re-architect your entire system or IT environment. Most mobile and web application developers have already found great success in applying agile development. Developing for IoT, it’s unlikely you’ll be able to build a full feature on top of a device in just a week. However, by focusing on building microservices in one to two week sprints, you can keep moving towards the finish line and connect all the APIs you need, one by one, with dramatically lowered risk.
  4. Flexibility and agility: Another major benefit of leveraging microservices is that if, after testing, you determine that a particular service isn’t working out, you can replace it with something better or more tailored to your needs. A microservices approach to development and integration allows you to build a feature quickly and improve on it over time. When it’s ready to be replaced, you’re just updating one piece of the puzzle without having to worry about impacting the rest of the picture.
  5. Unlimited value-add: The device you deploy is never going to transcend its physical capabilities until you upgrade or replace its hardware. The digital upgrades you can deliver via constantly evolving microservices, however, are unlimited in both scope and frequency. A camera may be designed to capture only 2D images, but depending on the third-party service it's linked to, it might provide you with statistical traffic information, queue sizes or weather information. A minimal sketch of such a single-purpose service follows this list.
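To make points 4 and 5 concrete, here is a minimal sketch of a single-purpose microservice, assuming a hypothetical Flask-based HTTP endpoint and a stubbed analysis function; the route and function names are inventions for this example, not any vendor's API. The camera keeps doing what its hardware allows, while the service behind this endpoint can be replaced or upgraded independently.

    # Minimal sketch of a single-purpose IoT microservice (hypothetical names).
    # It wraps one device capability -- turning a raw 2D camera frame into a
    # queue-length estimate -- so the analysis can be swapped out later without
    # touching the camera or the rest of the system.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    def estimate_queue_length(image_bytes: bytes) -> int:
        # Stubbed for illustration; a real service would call an image-analysis
        # library or a third-party vision API here.
        return len(image_bytes) % 20

    @app.route("/queue-length", methods=["POST"])
    def queue_length():
        frame = request.get_data()  # raw image bytes posted by the camera gateway
        return jsonify({"queue_length": estimate_queue_length(frame)})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)

Swapping in a better estimator, or a weather lookup as described in point 5, would mean redeploying only this one small service rather than the whole stack.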

Soon enough, it will be hard to remember a time when enterprises did not turn to microservices by default. With the rise of IoT, there's a perfect storm brewing that will push microservices into both new and traditional industries. The benefits are high, the risk is low, and the savings in cost and resources make this one a no-brainer.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


April 26, 2017  12:08 PM

Skating to where the puck is heading: Delphi’s recent moves in connected car

Terry Hughes Profile: Terry Hughes
"Automotive industry", automotive, car, Cars, Connected car, Delphi, Internet of Things, iot

Whether you’re a hockey fan or just a business person, you have probably heard the Wayne Gretzky quote, “Skate to where the puck is going, not where it has been.” In the automotive sector of IoT, there is no better example of that advice than Delphi’s recent investments in high-tech companies. My sources in the auto sector tell me that Delphi has been playing catch-up in areas like cloud, SaaS, apps and big data, and until recently it trailed rivals such as Harman, Bosch and Visteon in many of these growth areas. If we consider where the puck is today, I would argue that the connected car is focused on telephony, telematics, apps and infotainment, with a healthy dose of SOTA/FOTA (software and firmware over-the-air updating). On that front, Delphi’s recent acquisition of Movimento, a leading SOTA/FOTA provider, was a smart move to solidify its offering around today’s automotive challenges, of which updating cars remotely rather than in the dealership is definitely one. However, Delphi then did what all good hockey players do: it lifted its head, stopped chasing where the puck is in 2017, predicted where it will be in 2018, 2019 and beyond, and invested to meet it there. So where exactly is the puck heading?

In automotive, and in IoT in general, it will all be about the data. Data will be the new currency, vehicles will be a means to an end for collecting it, and everyone in the value chain will be scrambling to monetize what vehicles generate. Vehicles will become what today’s Android smartphone has long since become: a data generator for the likes of Google to benefit from. Those of us who use Android are prepared to give up some of our personal privacy in order to get terrific free services like Google Traffic. Today, automakers and their tier 1 vendors are fighting Google and Apple for control of the dashboard with initiatives like SDL (SmartDeviceLink) for app mirroring, and in the future they will need to fight the same battle to stop Google and Apple from owning the car’s data. Today’s car does generate data such as the outside temperature, whether the car is skidding on ice, and driver behavior, but very little of it is even stored locally, let alone sent anywhere for processing.

That will all change when each car sends millions of packets of data every month, giving terrific insight into everything from macro factors such as traffic conditions down to individual driver habits and traits. In 2017, the likes of Delphi are figuring out how to pull that data from the vehicle and where to send it, but in 2018 and 2019 the puck will be somewhere else entirely, so Delphi is skating towards a world where the data is crunched, analyzed, brokered and monetized. We can imagine a world where a single vehicle and driver contribute data to traffic services, transit operators, smart cities, insurance companies, retailers, weather forecasters, media companies, dealerships and more. The value of that data depends on the recipient; traffic patterns or parking availability, for example, are only valuable in aggregate, whereas a driver’s routes and associated buying habits are extremely valuable to retailers hoping to attract that driver to their stores.
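To make that distinction concrete, here is a minimal sketch of what one per-vehicle telemetry "packet" could look like. It is purely illustrative: the field names and the Python representation are assumptions for this post, not Delphi's, Otonomo's or any automaker's actual schema.

    # Hypothetical vehicle telemetry record -- illustrative only, not any
    # vendor's real format. Some fields are valuable in aggregate (traffic,
    # weather, road conditions); others are valuable per driver (insurance,
    # retail targeting).
    import json

    record = {
        "vehicle_id": "veh-0042",              # pseudonymized before leaving the car
        "timestamp": "2017-04-26T12:08:00Z",
        "outside_temp_c": 3.5,                 # aggregate value: weather services
        "traction_loss": True,                 # aggregate value: road conditions
        "speed_kph": 47.0,                     # per-driver value: insurance, behavior
        "lat": 42.33,
        "lon": -83.05,
    }
    print(json.dumps(record))                  # one packet ready to send upstream

Whoever brokers millions of records like this per vehicle per month gets to decide which slices go to traffic services in aggregate and which driver-level slices are sold on to insurers and retailers.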

iot_broll-60

Connected car dashboard, as seen at CES 2017.

From all of that, we think we can see where the puck will end up. So what exactly did Delphi do in April 2017? It invested in Otonomo and Valens, two companies that will help Delphi skate faster. Otonomo’s technology aggregates data generated as cars are driven and sends it to automakers, which can then use that data to sell services to the car’s owner; the company cites examples such as emergency assistance and location-based advertising. Valens specializes in moving high volumes of data around the vehicle, delivering chip-to-chip technology that allows connectivity speeds six times faster than today’s industry standard.

In an article in the Detroit Free Press, Delphi Chief Technical Officer Glen De Vos was quoted as saying that one of the biggest emerging challenges for automakers developing connected cars is the ability to move and transmit the data required for autonomous cars to operate. “Delphi’s vision is to enable the brain and nerve center of the vehicle through our leadership in computing and complete electrical architecture solutions,” De Vos said. “We have selected world leaders in three areas: high-speed data transmission, connectivity and data management.”

iot_broll-30

A connected car on display at CES 2017.

In the same article, Mike Ramsey, a research director at Gartner, gave Delphi credit for investing in companies that will help automakers solve some of the toughest issues facing self-driving vehicles. “All these companies have a role that helps to tame the tyranny of data,” Ramsey said. “What I like about what Delphi is doing is, more than any other company I have seen, they are putting together the right pieces.”

The automotive sector is undergoing a massive wave of digital transformation. If the car of the future is merely a big IoT sensor on wheels, and if the connected car sector is a microcosm of the wider IoT space, then Delphi’s strategy is a good indication of how many other “old school” companies will have to figure out where the puck is heading and skate to it before the opposing team gets there, in order to capitalize on the massive opportunity presented by the collection, dissemination and monetization of the data their devices generate.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

