IoT Agenda


July 17, 2018  2:27 PM

Smart cities: The proof is in the pudding

Daniele Loffreda Profile: Daniele Loffreda
5G, Collaboration, Internet of Things, iot, IoT applications, IOT Network, Partnerships, Smart cities, smart city, Startups, Wireless

There is a lot of anticipation around what the future of the smart city will look like, and rightfully so — advanced technologies such as artificial intelligence, machine learning and robotics are poised to have a dramatic impact on society. While the term smart city gets thrown around quite a bit, in many cases today, these new initiatives show mixed results in solving citizen problems. The good news is that as an industry we are working closely with citizens and taking a more strategic and comprehensive approach, which in turn means investments are being made where they will have the biggest impact.

This raises the question: If the smart cities that are up and running are not living up to the hype just yet, why would other cities follow suit? Like many new, highly advanced concepts, widespread adoption can move at a snail’s pace. Remember when email was dismissed as a passing fad? The key to getting buy-in for these projects is proving ROI first. How can it be done? From looking at what has worked for other cities to creating partnerships, there are ways to show the benefits to both citizens and the community and accelerate smart city development.

Seeing is believing

We are in the beginning phase of the next iteration of smart cities, or what is being called “Smart City 3.0.” But there are some great examples of cities from the previous two generations making real progress. A recent study by Juniper Research outlines global leaders in multiple categories, including mobility, public safety, healthcare and productivity. Singapore swept the board when it scored the top spot in every category, with San Francisco, Seoul, New York and London each grabbing silver in one of the four categories.

So, what is possible with a true smart city? The aforementioned report found that “smart cities have the potential to ‘give back’ city dwellers three working weeks’ worth of time every year.” That means more efficient public transportation or traffic patterns that can make it possible to get home in time to have dinner with your family or give you the extra time to make it to that workout class you always seem to just miss. There are many ways smart city living can enhance the everyday lives of citizens and in order to drive more adoption around the world, we have to consistently show the proof.

Incubators and alliances foster smart city innovation

To prove usefulness and validate ROI, smart cities are creating formal incubators and establishing technology alliances. The Dallas Innovation Alliance (DIA), whose “mission is to develop and test a scalable smart cities model,” is a prime example of a public-private partnership taking a multi-phased approach. According to its Q3 results, “the DIA is working with over 20 city departments, 30 partner organizations and has built relationships with more than 50 cities around the world” and has experienced impressive results from the various partnerships. The “strongest example of operational savings and return on investment from the pilot has been the Intelligent Streetlight Project, which saved 755.41 kilowatt-hours in Q3 based upon installation of 23 lights in the Living Lab, representing a 30% decrease from the existing lighting.”

While most cities don’t have the luxury of starting from scratch, Union Point — a futuristic smart community — is being created from the ground up by LStar Ventures and General Electric, and is an example of why incubator projects are so important. As part of the initiative, the developers are planning an innovation center, which will include an interesting public-private partnership. David Rose, a researcher and lecturer at MIT’s Media Lab, told The New York Times that “the Center has expressed an interest in featuring an interactive planning tool developed by the Group called CityScope, which would allow users to visualize and explore tradeoffs around factors like density, transportation and walkability.” While Union Point is an incredible undertaking and requires resources that most cities don’t possess, other cities can learn from what these planned communities are doing — particularly the partnerships that are bringing together educational and industry groups.

Getting up to speed with 5G

The next example — ENCQOR — shows both what public-private partnerships can accomplish and why the backbone (the network) that will support smart city development must not be forgotten. ENCQOR, announced in March to accelerate the transition to 5G technology in Quebec and Ontario, is a $400 million partnership bringing together five global digital technology leaders (Ciena included) and government agencies. ENCQOR is “establishing the first Canadian pre-commercial corridor of 5G wireless communication technologies — the next generation of digital communication and the key to unlocking the massive potential of smart cities, smart grid, e-health, e-education, connected and autonomous vehicles, on-demand entertainment and media, and the internet of things, among others.”

When thinking about smart cities, the transition to 5G must be front and center. The new technologies smart cities require will bring latency, bandwidth and reliability demands that 4G simply won’t be able to handle. One of the most important capabilities 5G will bring is network slicing, which ensures that the varying requirements of different services can all be met on the same physical network. Slicing allows a network to be broken up into numerous portions that can be managed independently, customized and, most importantly, isolated, so that an overloaded or downed portion doesn’t affect the others. It also creates the flexibility for network operators to meet the needs of future services, including those not even invented yet.
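To make the slicing idea concrete, here is a toy sketch in Python. It is purely illustrative, not an actual 5G implementation, and the slice names and capacities are invented; it shows how independently managed portions of one physical network keep a saturated slice from affecting its neighbors.

    # Toy model of network slicing: independent, isolated portions of
    # one physical network. All names and capacities are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Slice:
        name: str
        max_latency_ms: float    # the slice's service-level latency target
        bandwidth_mbps: float    # capacity reserved for this slice
        load_mbps: float = 0.0

        def admit(self, traffic_mbps: float) -> bool:
            # Traffic beyond this slice's reservation is refused here,
            # so a saturated slice cannot starve its neighbors.
            if self.load_mbps + traffic_mbps > self.bandwidth_mbps:
                return False
            self.load_mbps += traffic_mbps
            return True

    vehicles = Slice("autonomous-vehicles", max_latency_ms=5, bandwidth_mbps=100)
    video = Slice("video-streaming", max_latency_ms=100, bandwidth_mbps=500)

    print(video.admit(450))     # True: within the streaming slice's share
    print(video.admit(100))     # False: that slice is now saturated...
    print(vehicles.admit(20))   # True: ...but the vehicle slice is unaffected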

Life transformation

From one-off conversations to industry events such as the Smart Cities Connect Spring Conference, it is exciting to see how much interest there is in developing highly connected communities. What is even more exciting is that cities are taking action. According to an October 2017 report from the National League of Cities, “66% of cities have invested in some type of smart city technology.” We still have a long way to go until the majority of cities have moved away from ad-hoc implementations, but with the right programs and partnerships in place, as well as a focus on the network backbone, true smart cities will become the norm rather than the exception. Imagine living in an environment where you can easily avoid traffic, find open street parking spaces in seconds and save more money on your monthly electric bill. Smart city technology can make all of this happen and more. Transitioning to a smart city can transform the lives of citizens, communities and the environment.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

July 17, 2018  10:17 AM

How can anomalous IoT device activity be detected?

Louis Creager Profile: Louis Creager
Botnet, DDOS, Internet of Things, iot, IoT devices, iot security, Network security, Profiling, security in IOT, Threat detection

An estimated 25 billion IoT devices are expected to be among us by 2025, contributing to $1.1 trillion in industry revenue, according to a recent GSMA study. Given the continually increasing prominence of these devices in our lives and their ever-wider capabilities, ensuring their security and proper behavior should be a critical concern for IoT users and manufacturers alike — should being the operative word at the moment.

Web-connected devices in our homes and workplaces help us control more aspects of our lives by the day, with many including microphones and cameras that see and hear everything we do. Considering the risks these features could represent, it’s an immense vulnerability that most IoT devices on the market today — especially lower-end ones — include little or no security at all. The IoT market has been much more interested in delivering user-desired features at low price points.

But as headlines continue to remind us, IoT devices can and do get hacked regularly, and the consequences are severe. The news is rife with creepy stories of IoT-based spying in home and corporate environments, and seemingly endless instances of the damage inflicted by massive botnets built from large numbers of compromised IoT devices, which can take companies and even major internet infrastructure offline through powerful distributed denial-of-service attacks.

The good news is that even where IoT devices ship with no embedded security from the manufacturer, it’s still possible to ensure that they’re secure and operating in an uncompromised network environment. Through a dynamic approach to IoT device discovery, profiling and anomalous activity detection, devices can be monitored for proper behavior, and those showing signs of tampering can have that behavior mitigated and the underlying security issues addressed.

The first stage in this strategy is to recognize what devices are connecting to routers at the local network level. Given the scope and nature of the IoT ecosystem — where the countless devices we all carry may attempt to connect to local networks as we happen to pass by them — it’s now impossible to know or keep track of all the devices connected to our networks by manual means. The task of vetting the behavior of all devices is more challenging still.

Dynamic IoT device discovery and profiling, which can be situated to view all inbound and outbound network traffic by including a module within routers, gateways, UTMs and other network devices, is the process of identifying any device that connects to the network down to its specific make and model. It’s a strategy that can work for nearly any connected device within a household or business: all seven layers of the OSI model are studied to build a unique fingerprint for each device, and that fingerprint is then matched against a database of recognized IoT device profiles. Where devices use organizationally unique identifiers (OUIs) — more often the case with brands that have a narrow product line — profiling usually takes less than a minute once the device joins the network. Where a device’s OUI covers several device types, a profiling strategy uses port scanning, protocol analysis and other higher-level detection techniques to complete identification within minutes.
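As a rough sketch of how the OUI fast path and the deeper fallback fit together (the profile database entries and the fallback probe below are stand-ins, not any real product’s logic):

    # Illustrative OUI-based profiling with a fallback to active probing.
    KNOWN_OUIS = {
        "B8:27:EB": "Raspberry Pi Foundation board",  # a real, public OUI
        "AA:BB:CC": "ExampleCam smart camera",        # invented for the demo
    }

    def profile_device(mac: str) -> str:
        oui = mac.upper()[:8]            # first three octets identify the maker
        if oui in KNOWN_OUIS:
            return KNOWN_OUIS[oui]       # narrow product line: near-instant match
        # Ambiguous OUI: fall back to port scanning, protocol analysis and
        # other higher-level fingerprinting (stubbed out here).
        return deep_profile(mac)

    def deep_profile(mac: str) -> str:
        return f"unknown device {mac} (queued for active fingerprinting)"

    print(profile_device("b8:27:eb:12:34:56"))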

With each connected device on the network accurately identified, anomalous activity detection can monitor behavior to continuously verify that devices are carrying out their duties as expected. If a device begins making suspicious connections or behaving outside its normal operations, you have a strong indicator that it may be compromised and engaged in malicious activity. It may be that the device simply requires a software update or, worse, that it’s under the direct control of an active attacker. But by recognizing these anomalies in real time, the worst behavior can be neutralized before crippling harm is done. Anomalous activity detection can also identify vulnerable (but as-yet-unhacked) devices, such as IoT devices with weak security measures that need a closer look.
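In miniature, the baseline-and-flag logic might look like the following sketch. Real products model far richer features (traffic volume, timing, protocol behavior) than the connection endpoints used here, and the device and host names are invented.

    # Simplified behavioral baselining: learn each device's normal
    # connections, then flag anything outside that profile.
    from collections import defaultdict

    baseline = defaultdict(set)          # device -> set of (host, port) pairs

    def learn(device, host, port):
        baseline[device].add((host, port))

    def check(device, host, port):
        if (host, port) not in baseline[device]:
            return f"ALERT: {device} contacted {host}:{port} unexpectedly"
        return "ok"

    learn("thermostat-01", "api.vendor.example", 443)
    print(check("thermostat-01", "api.vendor.example", 443))  # ok
    print(check("thermostat-01", "203.0.113.9", 23))          # telnet probe -> ALERT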

If you’re considering this strategy for network safety, ensure that the criteria informing anomalous activity detection are updated continually to keep pace with natural shifts in device usage and behavior. Likewise, these detection technologies are designed to scale to the growing number of devices on networks, especially as IoT adoption swells. With dynamic IoT device discovery, profiling and anomalous activity detection in place, the challenge of protecting networks can be much more effectively overcome (even if device manufacturers aren’t helping out much on their end).

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


July 16, 2018  1:59 PM

Edge computing will be vital for even simple IoT devices

Svein-Egil Nielsen Profile: Svein-Egil Nielsen
BLE, Bluetooth, Bluetooth Devices, Cloud Computing, Edge computing, Internet of Things, iot, IoT analytics, IoT applications, IoT data, IoT devices, IOT Network, iot security, LPWAN

Edge computing risked becoming an IoT cliché. The phrase kept popping up everywhere with no clear consensus on what it meant or if it was important.

But as a former Chair of the Bluetooth SIG and now a CTO for wireless semiconductors, I’ve seen enough of the evolution of wireless technology to conclude that edge computing is anything but a fad. In fact, it’s a necessary progression for connected devices and applications.

For example, early chips for Bluetooth Low Energy — the low-power variant of Bluetooth introduced in 2010 — had sufficient onboard processing power to maintain the Bluetooth wireless link and run simple applications, but could do no more. Yet, these chips formed the foundation of a whole generation of brand-new Bluetooth-enabled devices — such as wireless heart rate monitors — that could link to a smartphone for the very first time.

Better chips mean better features

Bluetooth Low Energy chips evolved and gained more processing power — and associated memory — to power products such as activity monitors that could measure not just heart rate, but also steps taken, distance walked and calories burned. To keep consumers happy and sales of these “wearables” booming, developers had to add ever more enhanced features. Examples included sleep quality measurement — time to get to sleep, hours slept, number of times woken — and analysis, including the quality and quantity of deep sleep cycles.

Later, these devices formed the basis of the latest generation of medical wearables. These include diabetes monitors that track blood sugar levels and fall monitors for the elderly that allow concerned relatives and caregivers to know someone is OK based on their routine daily activity.

Bluetooth solved many IoT problems

The relevance of this trip down Bluetooth memory lane is that it can be seen, in some ways, as a precursor of the internet of things. The main difference for the emerging IoT market is where edge computing fits in when it seems we are likely to have unlimited processing power enabled by cloud computing and analytics.

Using the example of wearables, you’ll see the kind of edge computing now required for IoT reveal itself like invisible ink under an ultraviolet light.

The evolution of wearables required each generation to monitor and collate a greater number of measurements (raw data). Developers found optimal ways of doing this by processing raw data locally (on the edge of the application using the Bluetooth chips’ increasingly powerful onboard processors) and then forwarding to a smartphone app and the cloud (for data sharing and tracking) only the essential information (desired data).

The technology enabled continuous (low-latency) monitoring, and the modest Bluetooth wireless throughput was sufficient to update apps and cloud servers of the key tracking information without requiring extended on-air duration that would otherwise be needed to stream raw data. (Short on-air duration also minimizes power consumption — vital for any battery-driven device such as an IoT enterprise sensor.) Sending only the key information also minimized the impact on the user’s cellphone data allowance (data cost).
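A minimal sketch of that raw-data-to-desired-data reduction (the sample values and the transport call are invented):

    # On-device reduction: raw 1 Hz heart-rate samples become a tiny
    # summary, so only the "desired data" crosses the radio link.
    raw_samples = [72, 74, 71, 90, 88, 73]       # raw data, collected locally

    def summarize(samples):
        return {
            "min_bpm": min(samples),
            "max_bpm": max(samples),
            "avg_bpm": round(sum(samples) / len(samples), 1),
        }

    payload = summarize(raw_samples)             # a few bytes, not a stream
    print(payload)
    # send_over_ble(payload)  # hypothetical call: short on-air time, low power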

Things go wrong, hackers never quit

Because users didn’t always carry their smartphones, wearables had to operate autonomously when not connected. Resiliency was built into the systems. They didn’t depend on a continuous network or internet connection for successful operation (redundancy). If the network or internet link failed, the wearable (edge device) waited patiently until it could reestablish the connection. It then transferred any new key information.
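That store-and-forward behavior is easy to picture in code. A sketch, with the transport and connectivity checks as stand-ins:

    # Edge-side resiliency: buffer locally, drain only over a live link,
    # and drop an event only after confirmed delivery.
    from collections import deque

    outbox = deque()

    def record(event):
        outbox.append(event)                 # always buffer on the device

    def try_sync(send, connected):
        while outbox and connected():
            if send(outbox[0]):              # confirmed delivery?
                outbox.popleft()
            else:
                break                        # link dropped mid-sync; retry later

    record({"steps": 4231, "fall_detected": False})
    try_sync(send=lambda e: True, connected=lambda: True)   # demo transport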

Once personal health or safety data (for example, fall detection) was transmitted across the wireless link, data protection became vital. It’s unfortunate, but an aggressive army of hackers was only too keen to find and exploit any security weakness. Security and encryption mechanisms suddenly became key elements of any edge computing capability.

The cloud won’t do the heavy lifting

I suggest that today’s Bluetooth Low Energy wearable devices look a lot like the forerunner of an enterprise IoT device. Their developers solved many of the same challenges facing today’s IoT, albeit on a much smaller scale. The scaling will come from low-power wide area network (LPWAN) wireless technologies. Some of the most promising examples globally are those using the latest cellular LTE Cat-M1 and NB-IoT standards. These will allow IoT devices to connect to the cloud over long distances through existing telecoms infrastructure.

But even with straight-to-cloud connectivity, edge computing may be even more important for IoT than for Bluetooth. As we scale to millions and billions of sensors, costs like power consumption and network data can quickly ramp. This means the ability to transfer only the most relevant data could be vital for any commercially viable application. And the sheer volume of raw data generated could pose a challenge for cloud services anyway. Most of the raw computing will have to be done on the edge of the application.

Edge computing is here to stay

As such, I think cloud and edge computing in combination will open up great opportunities. Finding the right balance between edge and cloud, however, requires knowledge of the full end-to-end application. This includes the cost of power, data transfer and cloud services. Developing end-to-end prototypes may be the easiest way to find the optimal balance. And thanks to simplified development tools, kits, software and cloud technologies, this task will become easier and easier.

But one thing is for sure: Edge computing is here to stay. And it will be a key enabler of a commercially viable IoT.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


July 13, 2018  10:13 AM

In-memory HTAP computing: Create smart IIoT applications with continuous learning

Nikita Ivanov Profile: Nikita Ivanov
Data Grid, Data Management, Deep learning, IIoT, In-memory computing, Industrial IoT, Internet of Things, iot, IoT analytics, IoT applications, IoT data, Machine learning

To achieve maximum ROI, many IIoT use cases involve decision models that must update in real time. For example, a fleet of package-delivery drones must be able to respond immediately to major, previously unencountered changes in weather conditions that require new flight patterns, or to a new and unexpected increase in the failure rate of a motor component. Creating a continuous learning system that can learn from changes in the system and adjust in real time requires unprecedented levels of computing performance.

Gartner has defined an architecture for achieving this: in-process hybrid transactional/analytical processing, or in-process HTAP. For a continuous learning system, in-process HTAP requires a framework capable of continuously updating machine learning or deep learning models as new data enters the system. In-memory computing is the most cost-effective way to power this continuous learning framework.
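To illustrate the “continuously updating models” idea in the simplest possible terms, here is a sketch using scikit-learn’s incremental partial_fit API. It is a generic stand-in, not tied to any particular in-memory platform, and the synthetic data is invented for the demo.

    # Continuous learning in miniature: the model absorbs each new batch
    # as it arrives, with no retrain-from-scratch step.
    import numpy as np
    from sklearn.linear_model import SGDRegressor

    model = SGDRegressor()
    rng = np.random.default_rng(0)
    true_weights = np.array([1.0, -2.0, 0.5, 3.0])   # the "world" to learn

    for _ in range(200):                     # an endless stream in production
        X = rng.normal(size=(32, 4))         # a fresh batch of live data
        y = X @ true_weights + rng.normal(scale=0.1, size=32)
        model.partial_fit(X, y)              # incremental, near-real-time update

    print(model.coef_)                       # tracks the live relationship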

Consider the following IIoT continuous learning use cases that could be enabled by an affordable continuous learning system:

  • Long-haul trucks — A fleet of long-haul trucks might be frequently impacted by major, previously unencountered changes in weather, or by changes to the road network due to new road openings, construction closures or major new changes to road conditions. A continuous learning system could incorporate the impact these changes have on current trucking results and then create an updated model that suggests better routes in real time based on required arrival times, fuel costs, truck availability and more.
  • Mobile payments — 24-hour mobile access and payment systems have increased the potential for payment fraud. Fraud detection requires a machine learning model that can incorporate new fraud vectors into the model in real time. A continuous learning system could update this model in real time with the very latest payments and fraud data to detect emerging fraud strategies and prevent them from spreading.
  • Smart cities — One of the foundations of what we think of as a smart city is the ability of self-driving cars to dramatically reduce traffic congestion. To accomplish this, a machine learning or deep learning model must analyze data from multiple sources — including traffic cams, weather stations, police reports, event calendars and more — to provide guidance to self-driving cars. When major changes occur in the system, such as new roads opening, major new construction projects launching or major shifts in traffic patterns, a continuous learning system can quickly incorporate the impact of these changes into its model and begin immediately providing better guidance to self-driving vehicles.

In-process HTAP is essential for these continuous learning systems because it eliminates the inherent delay architected into the traditional model. Most organizations today have deployed separate transactional databases (OLTP) and analytical databases (OLAP). An extract, transform, load (ETL) process is used to periodically move the transactional data into the analytical database, introducing a time delay that prevents real-time model training. In-process HTAP combines the OLTP and OLAP capabilities into a single data store, eliminating the ETL delay.

Of course, the OLTP-ETL-OLAP model was originally established for a good reason. Attempting to analyze live transactional data could seriously impact operational performance. So the question becomes: How do you implement in-process HTAP without potentially impacting performance?

The answer is in-memory computing.

Achieving in-process HTAP with in-memory computing

Several in-memory computing technologies are required to achieve in-process HTAP and continuous learning systems for IIoT use cases:

  • In-memory data grid or in-memory database — An in-memory data grid, deployed on a cluster of servers which can be on-premises, in the cloud or a hybrid environment, can use the cluster’s entire available memory and CPU power for in-memory processing. An in-memory data grid is easily deployed between the data and application layers of existing applications — there’s no need to rip-and-replace the existing database — and the cluster can be scaled out simply by adding new nodes to the cluster. By contrast, an in-memory database stores data in memory and provides RDBMS capabilities, including support for SQL and ACID transactions. However, an in-memory database requires ripping and replacing the entire data layer for existing applications, typically making it the appropriate option primarily when building new applications or undertaking a major re-architecting of an existing application.
  • Streaming analytics — A streaming analytics engine takes advantage of in-memory computing speed to manage the complexity around dataflow and event processing. This is critical to enabling users to query active data without impacting transactional performance. Support for machine learning tools, such as Apache Spark, may also be included.
  • Continuous learning framework — The continuous learning framework is based on machine learning and deep learning libraries that have been optimized for massively parallel processing. These optimized libraries — fully distributed and residing in memory — enable the system to parallel process machine learning or deep learning algorithms against the data residing in memory on each node of the in-memory computing cluster. This enables the machine learning or deep learning model to continuously incorporate new data, even at petabyte scale, without degrading performance. (A toy sketch of this partitioned processing pattern follows this list.)
  • Memory-centric architecture — A memory-centric architecture is vital for the scale of IIoT use cases described above. A memory-centric architecture supports keeping the entire, fully operational data set on disk with only a user-defined subset of the data maintained in memory. This design provides organizations with the flexibility to determine their own optimal tradeoff between infrastructure costs and application performance. In production environments, the memory-centric architecture supports fast recovery following a restart. Since the data on disk can be accessed immediately, there is no need to wait for all data to be loaded into memory before processing begins. Memory-centric architectures can be built using a distributed ACID and ANSI-99 SQL-compliant disk store deployed on spinning disks, solid-state drives, Flash, 3D XPoint or other storage-class memory technologies.
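Here is that toy sketch of the partitioned, process-where-the-data-lives pattern the continuous learning framework relies on. It is deliberately schematic: Python threads over lists stand in for cluster nodes holding data in memory, and platforms such as Apache Ignite manage the real distribution.

    # Schematic of parallel processing against partitioned in-memory data:
    # each "node" computes a partial result locally; only tiny partials
    # cross the network for the final reduce.
    from concurrent.futures import ThreadPoolExecutor

    partitions = [list(range(i, 1000, 4)) for i in range(4)]   # 4 mock nodes

    def partial_stats(partition):
        # Stand-in for a per-node model update (e.g., one gradient step).
        return sum(partition), len(partition)

    with ThreadPoolExecutor(max_workers=4) as pool:
        partials = list(pool.map(partial_stats, partitions))

    total, count = (sum(col) for col in zip(*partials))
    print(total / count)     # combined statistic from all nodes: 499.5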

Successful IIoT applications across a range of industries will depend on being able to achieve in-process HTAP — that is, real-time learning based on live data — at petabyte scale. Thanks to the decline of memory costs, mature open source in-memory computing platforms will likely remain the most cost-effective path to developing this continuous learning capability for the foreseeable future.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


July 12, 2018  1:37 PM

Offloading smart car data: 5G can only stem the tide

Greg Najjar Profile: Greg Najjar
5G, 5G technology, Automobile, Connected car, Internet of Things, iot, IoT data, IOT Network, LTE, P2P, Wireless carriers, Wireless network, Wireless networking

The proliferation of smart autonomous cars in the coming years will require a new approach to wireless data. According to Intel, these vehicles will process 4,000 gigabytes of data per day, the equivalent of 2,666 typical daily internet users. Each component of a car, from its sonar, radar, GPS and cameras to its Pandora account, adds to the overall data requirements. While autonomous cars are the future of transportation, improving public safety and mitigating traffic, these vehicles will create the equivalent of a year’s worth of typical internet user data every few hours. Today’s wireless connections won’t be able to keep up.

5G can only stem the tide

Today, it could take 230 days to transfer a week’s worth of data from a self-driving car. That is far from optimal for important but non-urgent data, such as preventive maintenance records, and hopeless for immediate data, like the location of a car that was just in an accident. 5G connectivity has the potential to solve some of these issues, offering low-latency transmission for a robust data set, which means urgent data should largely be handled by 5G.
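A quick back-of-envelope check of what that 230-day figure implies, taking Intel’s roughly 4,000 GB/day estimate from above at face value (the 230-day number itself is an industry estimate, not mine):

    # If a week of car data (7 x 4,000 GB) takes 230 days to move,
    # what sustained uplink does that imply?
    week_of_data_bits = 4_000e9 * 8 * 7          # bytes -> bits, 7 days
    transfer_seconds = 230 * 24 * 3600
    print(week_of_data_bits / transfer_seconds / 1e6)   # ~11.3 Mbps

That works out to roughly 11 Mbps sustained, in the neighborhood of a typical LTE uplink, which is exactly the point: today’s connections fall orders of magnitude short of moving this data in real time.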

A recent Ericsson report projected that nearly half of all U.S. mobile traffic will be handled by 5G by 2023. The company obviously has its finger on the pulse of the market, but that also means half of mobile data generated won’t be handled via 5G. This latter half will still need to overcome many of the issues that exist in today’s networks, such as the last mile and limitations on network density. Today, a car driving from Los Angeles to Las Vegas will not have connectivity for the entire trip through the desert — a popular drive that has almost always had a coverage gap. That gap must be closed before smart cars can be fully realized in the market.

There are three things carriers can do to mitigate the risk of lack of connectivity:

  • Maintain/increase network diversity. While each of the mobile carriers is aggressively rolling out 5G, it is important to realize that important data will still be sent via alternative networks. This might mean LTE, or even 3G, will continue to be used. Every band of spectrum has specific qualities that make it stronger for certain use cases. Lower-frequency bands can carry signals a great distance but support less data, making them perfect for lightweight messages, such as “all clear” notices sent to a smart car manufacturer.
  • Collaborate with automakers. Telecom companies will need to work in lockstep with auto manufacturers to identify which data will be mission critical, from emergency situations to over-the-air upgrades. Collaboration up front, and throughout the vehicle lifecycle, can be critical. For instance, automotive companies may identify multiple types of OS upgrades. This might be one type of update that has improved safety functionality and another that includes a new light dimmer. With this information, carriers will be able to prioritize which data is sent via 5G and which can be sent via a slower network connection.
  • Use P2P data models. In enterprise settings, the LAA and CBRS bands are gaining usage as a way for companies to improve connectivity for their employees. However, very little driving takes place indoors, meaning these bands are generally available for specific outdoor use cases. As part of their collaboration with automakers, telecom providers can share and apply best practices for these innovation bands, creating a seamless network infrastructure for the proper relay of information between vehicles, carriers and manufacturers in the vehicle-to-everything (V2X) model.

By 2025, automated cars will create a $25 billion industry for those involved. Companies such as Waymo, Uber and Lyft are rolling out testing throughout the country, but will ultimately only be successful if mobile carriers are able to meet the data capacity demands these vehicles create. As such, it is critical that carriers maintain a diverse network and work with auto manufacturers to see this market reach its destination.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


July 12, 2018  12:45 PM

From words to action: Implementing AI

Mark Troester Profile: Mark Troester
ai, Artificial intelligence, Data Science, Data scientist, Digital transformation, IIoT, Industrial IoT, Internet of Things, iot, IoT applications, Machine data, Predictive maintenance

Artificial intelligence is a theme that has dominated the world of technology for some time now, and it looks like it will continue to do so through the near future. From conversational chatbots to predictive maintenance for machines, AI has quickly gone from being a mere competitive advantage to being a business necessity. It doesn’t come as a massive surprise, then, that Gartner has predicted that AI technologies will be in almost every software product by as soon as 2020.

Despite the market buzz, many companies are still stumped by the prospect of deriving actual business value from the use of AI. Its introduction to products and systems remains one of the leading sticking points for businesses. So, how do you actually implement an AI technology?

It’s complicated

As AI goes from a “nice to have” to a “need to have,” it’s also evolving in terms of complexity. Simple, standardized AI services that do image or text recognition are no longer enough. Companies now want to see complex predictive scenarios that are specific to their operations and customized for their business needs.

Take a scenario that uses time-series data to generate business insights, such as predictive maintenance for industrial IoT or customer churn analysis for a customer-experience company. Getting accurate and actionable results in these predictive scenarios requires a lot of data science work, with data being used over time to iteratively train the models and improve the accuracy and quality of the output. Additionally, businesses are being challenged to engineer new features, run and test many different models and determine the right mix of models to provide the most accurate result — and that’s just to determine what needs to be implemented in a production environment.

Similar to how digital transformation has branched out from being an IT-driven initiative to a company-wide effort, AI is no longer the exclusive domain of the data scientists and engineers that help prepare the data. Organizations must move beyond a siloed AI approach that divides the analytics and app development teams. To be successful, app developers need to become more knowledgeable about the data science lifecycle, while app designers have to start thinking more about how predictive insights can drive the application experience.

Identifying an approach that enables them to easily put models into production in a language that is appropriate for runtime — without rewriting the analytical model — is key here. Companies need to not only optimize their initial models, but also feed data and events back to the production model so that it can be continuously improved upon.

This can seem like a complicated process, but it’s a prerequisite for successful implementation of AI.

Go comprehensive or go home

So the next question is: How can companies effectively implement AI in a way that enables them to address complex predictive scenarios with limited data science resources? And how do they achieve success without retraining their entire development team?

The truth is that it can’t be done by simply creating a narrowly defined, one-size-fits-all approach that will get you results with only a few parameters. It requires a more comprehensive strategy to be insightful, actionable and valuable to the business.

Consider an IIoT predictive maintenance application that analyzes three months of time-series data from sensors on hundreds of machines and returns the results automatically. This isn’t a simple predictive result set that is returned, but a complete set of detected anomalies that occurred over that time, with prioritized results to eliminate the alert storms that previously made it impossible to operationalize the results. These prioritized results are served up via a work order on a mobile app to the appropriate regional field service team, which is then able to perform the necessary maintenance to maximize machine performance. It’s a complex process where the machine learning is automated and feature engineering is done in an unsupervised fashion. The provided results analyze individual sensor data, machine-level data and machine population data, and are packaged up in a format that enables the business to take action.
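As a toy version of those two steps (detect anomalies per sensor series, then prioritize to avoid alert storms), consider the sketch below; the thresholds, machine names and readings are all invented:

    # Step 1: flag readings that deviate sharply from a sensor's norm.
    # Step 2: rank machines by anomaly count so only the worst offenders
    # become work orders, instead of an alert storm.
    import statistics

    def anomalies(readings, z=2.5):
        mu = statistics.mean(readings)
        sd = statistics.stdev(readings)
        return [(i, x) for i, x in enumerate(readings)
                if sd and abs(x - mu) / sd > z]

    def prioritize(machine_alerts, top_n=5):
        ranked = sorted(machine_alerts.items(),
                        key=lambda kv: len(kv[1]), reverse=True)
        return [(m, a) for m, a in ranked[:top_n] if a]

    fleet = {
        "press-17": anomalies([5.1, 5.0, 5.2, 9.8, 5.1, 5.0, 5.1, 5.2, 5.0, 5.1]),
        "press-18": anomalies([5.0, 5.1, 5.0, 5.1, 5.2, 5.0, 5.1, 5.0, 5.1, 5.2]),
    }
    print(prioritize(fleet))   # press-17's spike becomes the work order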

Toward AI 2.0

Currently, the best available term to describe the processes behind these technologies is “anomaly detection.” But not all technologies take the same approach and not all systems deliver predictions that lead to better business outcomes. What you’re about to see is a fundamental shift in how machine learning capabilities are delivered — and we aren’t just talking deployment in the cloud versus on-premises. We’re talking about a shift from delivering data science tools that make data scientists more effective toward data science results that eliminate the need for the data scientist to have these tools in the first place. This way, data scientists will soon be able to spend their time analyzing and improving the results, instead of dedicating huge chunks of their time to non-mission-critical tasks.

The only thing that is required is that the data is provided in a time-series format. Otherwise, you simply upload the data to the cloud (but on-premises options will exist too) and the automated AI does the rest, with accurate results returned within days.

Welcome to the new world of AI implementation.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


July 12, 2018  10:24 AM

IoT: Explosive growth now and for the foreseeable future

Rick Harlow Profile: Rick Harlow
Consumer IoT, Enterprise IoT, Industrial IoT, Internet of Things, iot, IoT applications, Market Growth, market research, market segment, Market strategy, Market trends, smart city, smart home, statistics

The global IoT market is in the midst of a growth spurt that shows no signs of stopping anytime soon. The IoT market is expected to mature from $157 billion in 2016 to $457 billion by 2020, attaining a compound annual growth rate of 28.5%, according to Forbes. Think about that: $300 billion in growth over roughly the next 18 months. What other markets are doing that? Discrete manufacturing, transportation and logistics, and utilities will lead all industries in IoT spending by 2020, averaging about $40 billion each.

The IoT market continues to be a valuable opportunity and is expected to reach $267 billion by 2020, according to Boston Consulting Group. Bain Insights weighed in with predictions that B2B IoT segments will generate more than $300 billion annually by 2020, including about $85 billion in the industrial sector, while consumer applications will generate $150 billion. Bain also predicted the most competitive areas of IoT will be the enterprise and industrial segments.

The internet of things has also fueled more than $80 billion in merger and acquisition investments by major vendors and more than $30 billion in venture capital, according to estimates by Bain. IoT in healthcare and life sciences is projected to increase from $520 billion in 2014 to $1.335 trillion in 2020. Where IoT is concerned, healthcare will more than double in size. Think about that. Sensor companies, cybersecurity companies, wireless network companies and software companies are in a scramble to capture these markets. By 2025, the installed base of IoT devices will be over 75 billion, up from roughly 24 billion in 2018.

Here’s another stat. According to a Growth Enabler market analysis, the global IoT market share will be dominated by three subsectors: smart cities at 26%, industrial IoT at 24% and connected health at 20%. They will be followed by smart homes at 14%, connected cars at 7%, smart utilities at 4% and wearables at 3%. “If you are not in the IoT market, then get in it” is what most technologists and investors are saying right now, judging by nearly any report in the market.

How do you capture this market? Do your homework first. Understand what types of problems customers are trying to solve and how IoT can help them. It would also be a good idea to determine if companies are looking for ways to drive new revenue streams. Help these companies examine how they drive revenue streams today and how IoT can help them create new ones. How does the product or service you or your company provide today add value to these key areas that are expected to have explosive growth over the coming years? Let me know your thoughts.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


July 10, 2018  4:55 PM

Living on the edge: Why IoT demands a new approach to data

Kirit Basu Profile: Kirit Basu
Data Analytics, Data Management, DataOps, Edge analytics, Edge computing, Internet of Things, iot, IoT analytics, IoT data, IOT Network, iot security, Real time analytics

Thanks to increasingly powerful devices and our hunger for more and more data, the internet of things is the latest and greatest innovation to capture our collective attention. Citing the incredible value presented by IoT data, organizations across a variety of industries — manufacturing, retail and medical, to name a few — are or will soon be investing in IoT initiatives. In fact, a recent 451 Research Voice of the Enterprise poll found that 65.6% of respondents plan to increase their IoT spending this year.

These IoT investments will have a tremendous impact on data architecture. According to Gartner principal research analyst Santhosh Rao, “Currently, around 10% of enterprise-generated data is created and processed outside a traditional centralized data center or cloud. By 2022, Gartner predicts this figure will reach 50%.” Where in the past IoT has been an “edge case” (pardon the pun), it is quickly going mainstream, with significant IT and business implications.

One of the byproducts of IoT is the need for “edge analytics,” or computation occurring on the IoT device itself rather than within a data center or cloud computing platform.

Making the case for edge analytics

There are three factors driving edge analytics as opposed to traditional data center processing:

  1. IoT apps are often bidirectional;
  2. For some things, latency means death; and
  3. Edge analytics removes the latency.

Self-driving cars are a great example of these factors. Edge data collection occurs when sensors in cars transmit data that is aggregated and processed in the data center or cloud to determine how well the navigation system is working. After analyzing the data, the system manufacturer can adjust the navigation rule set to improve the safety, reliability or performance of the system, and push that new rule set out to the entire population of cars.

But sometimes immediate action must be taken on collected IoT data, and the time lag of routing data to the cloud or a data center can be disastrous. Take, for example, a mining company that works with expensive underground equipment. With IoT sensors on drilling equipment, data can be transmitted back to the data center or centralized cloud to monitor vibration patterns and engine hiccups that don’t need to be addressed immediately. But what about sensor data that demands immediate action, such as signs of a potential explosion or mine collapse? Waiting for the cloud or data center to determine there is a safety risk may lead to disaster. Edge analytics can reach a conclusion about the safety issue and initiate the necessary action immediately.
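In code, the split between “act locally now” and “batch for the cloud” is a simple branch. A sketch with invented thresholds and stubbed-out actions:

    # Edge rule: safety-critical readings trigger local action immediately;
    # routine telemetry is batched for the data center or cloud.
    GAS_PPM_CRITICAL = 1000        # hypothetical explosive-gas threshold

    def trigger_local_alarm():  print("ALARM: evacuate section")
    def halt_drilling():        print("drilling halted")
    def upload_to_cloud(batch): pass          # normal telemetry path (stub)

    def on_sensor_reading(gas_ppm, vibration_g, batch):
        if gas_ppm >= GAS_PPM_CRITICAL:
            trigger_local_alarm()             # no round trip, no waiting
            halt_drilling()
            return
        batch.append({"gas": gas_ppm, "vib": vibration_g})
        if len(batch) >= 1000:                # vibration trends can wait
            upload_to_cloud(batch)
            batch.clear()

    telemetry = []
    on_sensor_reading(250, 0.3, telemetry)    # batched for later upload
    on_sensor_reading(1400, 0.3, telemetry)   # immediate local action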

Faced with scenarios like this — as well as security and compliance issues in which decisions must be made in real time — organizations are beginning to look for ways to analyze and/or act on data at the edge.

Architecting for edge data

Historically, to move data from where it is created to where it is stored and analyzed, organizations have typically opted for some sort of do-it-yourself coding or data integration software. But unlike other types of data, such as transactions in a database, IoT edge data analysis is a complex process requiring substantial orchestration and live operational visibility into performance, which is difficult to implement with traditional approaches.

Furthermore, hand-coding requires scarce specific skills, and you don’t want those skilled people tied down to projects indefinitely to deal with maintenance to fix or update data pipelines when data drift (unexpected changes to data structure or semantics) occurs or requirements change. In short, the hard and soft costs of hand-coding become prohibitive in such a dynamic and real-time IoT environment.

Fortunately, new software tools offer a simpler and future-proof alternative to hand-coding IoT pipelines, and they hold many advantages. First, these tools combine design, testing, deployment and operation into a single process, which some vendors are calling DataOps or DevOps for data. This shortens initial project time and streamlines the pipeline maintenance process. In addition, once hand-coding is eliminated, time and money are saved, as programming errors become much less likely.

Second, tooling that provides instrumentation enables an organization to gain greater control over operational management of the IoT pipelines. Throughput, latency and error rates can be monitored live to ensure analytics is based on complete and current data sets. Hand-coded data ingestion pipelines are usually opaque, lending very little, if any, visibility into a pipeline’s performance.

Third, security and data privacy must be baked into data movement — it can’t be an afterthought. Tools often come with integrated security that makes it easier for organizations to comply with regulations like HIPAA and GDPR, which punish misuse of sensitive data with heavy penalties. While mining equipment probably doesn’t emit personal data, medical equipment, video cameras and smartphones do. It is best to detect sensitive data at the point it is gathered so it can be properly obfuscated or quarantined.
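Detection at the point of collection can be as simple as pattern matching before a record ever enters the pipeline. A sketch follows; the two patterns below are illustrative, and production detectors are far richer.

    # Scrub obviously sensitive fields where the data is gathered, before
    # it moves downstream.
    import re

    SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")          # US SSN shape
    EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

    def scrub(record: str) -> str:
        record = SSN_RE.sub("[REDACTED-SSN]", record)
        return EMAIL_RE.sub("[REDACTED-EMAIL]", record)

    print(scrub("patient 123-45-6789 wrote to nurse@clinic.example"))
    # -> patient [REDACTED-SSN] wrote to [REDACTED-EMAIL]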

Lastly, since edge analytics is part and parcel of many IoT projects, data pipelines should be able to launch native or third-party processing on the edge device when needed. An example of an edge analysis would be to monitor a camera or sensor for changes locally rather than send back fairly static data to the data center. Another common case is to aggregate data into summary statistics rather than send every single reading over the network. A third is to analyze for signs of transaction fraud at a point-of-sale terminal or ATM so that it can be blocked before the transaction completes.

IoT presents an incredible opportunity for data generation that can be used to great benefit for organizations today. As the volume of data grows and use cases for edge analytics become more diverse and varied, organizations’ technologies for managing this data must evolve as DIY systems give way to more sophisticated DataOps tooling.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


July 10, 2018  2:27 PM

Unlocking the world of seamless, secure connectivity with USP

Jason Walls Profile: Jason Walls
Internet of Things, iot, IoT devices, IOT Network, iot security, Protocols, security in IOT, Standardization, standards

Even though the term internet of things has been around for several years, the IoT space — specifically, the smart home — remains a Wild-West-like landscape of disparate platforms, products and applications. This represents a clash of three different worlds: the constantly changing, continuous support lifecycle world of software; the standardized and security-conscious world of networking, IT and web applications; and the heavily commoditized and competitive consumer electronics world.

This clash has split the ownership of IoT unification between these worlds and has been the source of innumerable security, interoperability and support issues, leaving a bad taste in consumers’ mouths. This has prevented broadband, network and IT service providers, application developers and consumer electronics manufacturers from effectively monetizing the connected user — despite constant market predictions about the billions of devices that will be deployed throughout the 2020s.

To resolve this, the market needs a managed ecosystem for IoT that is secure, flexible and standardized to enable organic relationships between all the stakeholders in these separate worlds.

Security is a multifaceted problem

It is no surprise that IoT security doesn’t get the best press when a vulnerable device as simple as a fish tank monitor can compromise a network or be turned into a distributed denial-of-service bot. It is also perceived as expensive, forcing a tradeoff between implementing security on systems and lowering costs and decreasing time to market.

Unfortunately, security isn’t simple — it’s a multifaceted issue that requires attention at all layers. If we want the benefits of an open system, authorization and access to devices and resources need to be managed and controlled. Data in transport needs to be encrypted from end to end to avoid single points of compromise. Connected devices also require longer-term support lifecycles with the ability to remotely manage secure, validated upgrades.

User Services Platform

As a standards development organization, the Broadband Forum saw many of these same issues 15 years ago with another tricky device ecosystem: broadband home gateways. Managing, monitoring and upgrading these devices was extremely difficult and caused big headaches for internet service providers.

To solve this, the Forum developed the CPE WAN Management Protocol, or CWMP, more commonly known as TR-069. This included a communication protocol plus a rigorously standardized data model for everything related to broadband and consumer networking, including interfaces, services, software and firmware management. TR-069 is now one of the most successful device management systems of all time, with more than 800 million deployed devices supporting the protocol.

Fast-forward to today, where we see that such a system is required to support the IoT space. Learning from TR-069, the Broadband Forum’s User Services Platform was built to meet this demand, providing a standardized architecture, protocol and set of resource definitions (data models) to enable upgradability, lifecycle management, bootstrapping, configuration, monitoring and secure end-user control of all manner of connected devices and applications.

USP and IoT security

USP provides immediate benefits to developers looking to deploy secure IoT systems, including a standardized upgradability mechanism, robust and manageable access control mechanism, and an end-to-end message exchange system that provides authentication, authorization and encryption at the application layer.

It defines an extensible set of operations that allows a management system — controlled by device manufacturers or IoT service providers — to remotely plan, deploy and activate IoT firmware. This is one of USP’s primary use cases, as the upgradability of IoT devices is arguably the biggest factor affecting security. The consumer electronics industry must get used to much longer product lifecycles for connected devices, and managed upgrades are a key part of that.

Secondly, multiple stakeholders will need to access and control different resources of an IoT system. USP provides a mechanism for endpoints to establish trust with control points and provide or deny detailed and specific access levels to resources.

Finally, USP contains an end-to-end message security layer above what is done with TLS/DTLS and TCP/UDP. It is flexible and designed so that different transports can be used for different deployment use cases, with version 1.0 including support for USP over CoAP, STOMP and WebSockets. All of these protocols can use TLS between two endpoints, but if messages are transferred through a proxy or message broker, it creates an attractive target for attacks. USP uses its own records to provide end-to-end message integrity, security and privacy for end users.
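The idea of message-level protection that survives an untrusted broker is worth seeing in miniature. The sketch below is not USP’s record syntax; it is a generic authenticated-encryption example using the third-party Python cryptography package, but it shows why a relay can route a message without being able to read or silently alter it.

    # End-to-end protection above the transport: only the two endpoints
    # hold the key; any broker in between sees ciphertext.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)   # shared by the endpoints only
    aead = AESGCM(key)

    nonce = os.urandom(12)
    sealed = aead.encrypt(nonce, b'{"cmd": "Set", "path": "Device.WiFi."}', None)

    # 'sealed' can traverse an untrusted STOMP broker or proxy unchanged;
    # tampering makes decryption fail rather than succeed silently.
    print(aead.decrypt(nonce, sealed, None))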

Today’s standard for tomorrow’s world

The need for a standardized platform that enables these features is clear. The sheer number of devices has increased by an order of magnitude, and IoT will only increase this. That also means the threat level is much higher — there’s more at risk, and security in IoT is currently embryonic.

Managed IoT deployments are the way to do it, and USP has an advantage for acceptance given the legacy and penetration of TR-069, as well as the ability to take IoT to the next level and unlock the full potential of the smart home.

USP version 1.0 was released by the Broadband Forum in April 2018. To find out more about implementation, specifications and being a part of the Broadband Forum community visit: https://www.broadband-forum.org/user-services-platform. The specification is also available directly at https://usp.technology.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


July 10, 2018  12:43 PM

Three ways open source software makes IoT stronger

Josh Garrett Profile: Josh Garrett
ai, Artificial intelligence, Data, Github, Google, Internet of Things, iot, IoT analytics, Microsoft, Open source, Open source security, Open source software, Software

Open source software has never been more valuable than it is today. But, don’t take my word for it — just look at Microsoft’s most recent move. Why else would this tech titan spend $7.5 billion to acquire GitHub, the world’s most popular software development platform?

MOBI, the company I work for, has also hopped on the open source bandwagon to create technology-driven tools. Recently, our development team launched Numberjack, global phone number validation and normalization software that helps users validate, standardize and display phone numbers in a variety of international formats.
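Numberjack’s own code isn’t reproduced here, but for a flavor of the validate-standardize-display task it tackles, here is the same exercise using phonenumbers, the open source Python port of Google’s libphonenumber (the sample number is invented):

    # Parse raw input, validate it against regional rules, then render it
    # in normalized formats.
    import phonenumbers

    n = phonenumbers.parse("(415) 555-2671", "US")          # messy user input
    print(phonenumbers.is_valid_number(n))                  # regional validation
    print(phonenumbers.format_number(
        n, phonenumbers.PhoneNumberFormat.E164))            # canonical storage form
    print(phonenumbers.format_number(
        n, phonenumbers.PhoneNumberFormat.INTERNATIONAL))   # display form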

Modern open source technologies are even expanding to include IoT endpoints. Because few organizations can currently ensure integration and operability with other mobile technologies, global IoT initiatives are increasingly using open source software to create more efficient connectivity and data communication paths.

If implemented safely and strategically, the competitive advantages open source IoT creates can be tremendous. In fact, forward-thinking organizations are already seeing these advanced initiative investments pay off in three disruptive ways: more meaningful data, advanced artificial intelligence capabilities and unlimited IoT upgrades.

Driving data insights

Organizing IoT’s constant communication is a job that’s much easier said than done — especially when managing thousands of global endpoint devices. For enterprise mobility managers, however, that’s just step one. Only after feedback is consistently collected and stored can a company start to make sense of its data insights, much less use them to make impactful business decisions.

By introducing an open source IoT software product, organizations give themselves an always-updated tool that helps accelerate internal data analysis efforts and more accurately maps a mobile technology program’s most serious information needs. This not only eliminates the time-consuming, non-essential tasks IoT management often creates, but refocuses enterprise IT resources on capturing only the most relevant, desirable data insights possible.

Some companies are even using open source IoT to streamline data processing tasks, whether it works with one or 100 different cloud storage structures. After all, it’s much easier to migrate analytics and data sets if the exact same source code can separately reside on each enterprise domain.

Accelerating AI’s impact

Another advantage open source IoT creates is the ability to take advantage of publicly available AI tools like TensorFlow, Google’s deep learning AI framework. Since 2015 (when Google handed over TensorFlow to the open source software community), global IoT initiatives have relied on tools like these to create training models that prepare enterprise technologies for a virtually unlimited number of business applications and use cases.

Without these publicly available AI innovations, connected IoT systems simply can’t include the features many global initiatives expect today, like native image recognition or automated predictive maintenance. Moving forward, the ability to use these insights will only become more valuable — if not essential — to ensuring seamless operability between all enterprise endpoints.

Endless IoT improvements

While a lack of proprietary IoT software does pose risks that can be concerning to IoT asset management efforts, organizations are often willing to overlook those dangers for the promise of a continuously updated code base only open source software can deliver. Unlike licensed options that are only upgraded a few times every year, open source IoT technologies feature up-to-the-minute input from thousands of subject matter experts all over the world. This means that base IoT platforms are not only always compatible with the latest enterprise technology available, but that specialized software versions and instances can be created faster and more easily than ever before. Now, even the most niche industry or application can have its own custom IoT configuration to satisfy business demands.

Open source IoT development can even help organizations overcome one of the industry’s most dangerous potential downfalls: a lack of IoT manufacturer security standards. These publicly available IoT tools set a minimum standard for global endpoint governance and processes, allowing for more collaborative software security rules that can be structured to improve security in proven, measurable ways.

While enterprise IoT software still has a long way to go before it can guarantee data security and privacy, open source development projects like Numberjack could very well be the path to a successfully managed future. It will be interesting to see whether organizations ultimately trust these new-age innovations, or play it safe and choose a more traditional approach to technology instead.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


