IoT Agenda


November 3, 2017  1:57 PM

IoT’s impact on the digital divide

Greg Najjar
3G, 4G, 5G, Connectivity, digital divide, Disaster Recovery, Internet of Things, iot, IOT Network, Smart cities, smart city, Wi-Fi, Wireless

Advances in connectivity have widened the digital divide, leaving many middle- and low-income Americans at risk of falling even further behind. According to Pew Research Center, three out of 10 adults in households with incomes under $30,000 do not own smartphones, and almost half of these households also lack a broadband internet connection. Conversely, nearly all higher-earning households ($100,000+ annual income) have access to multiple smart devices.

This kind of disparity will have long-lasting effects on American socioeconomic culture. While many Americans take for granted their ability to peruse social media and connect with friends and family, a whole segment of our nation lacks these abilities. That segment is hurt by its lack of access to immediate information. This is particularly true of younger children and teenagers, who are prevented from competing academically with their well-connected peers. The divide will have drastic implications for generations to come.

Many companies are looking for ways to address these widespread connectivity issues. Sprint is among them, having launched the “1Million Project,” an initiative that will provide better cellular and data coverage to close the homework gap that lack of internet access creates for millions of students across the United States. Connectivity is graduating from “luxury” to “necessity” and will only become more important as the internet of things gains traction.

Most analyst firms anticipate somewhere between 20 billion and 50 billion devices being connected by 2020. By some estimates, affluent homes will have as many as 500 connected devices. With smart cities becoming a reality, the digital divide between rural and urban markets is potentially poised to grow even wider in terms of connected living.

However, with careful consideration, the IoT market can mature in ways that benefit all demographics. It is possible to mitigate the damage exacerbated by today’s connectivity gap.

The road to a negative outcome

IoT requires adaptable yet consistent connectivity, leaving the potential for smart devices to create benefits that not everyone can use. For instance, smart sensors within the home can automatically turn off a stove that was mistakenly left on. Even insurance companies are getting in on the smart home trend; people can lower their insurance premiums by using certain connected devices.

Moreover, the current business model for adding IoT devices to a cellular plan may not scale in the consumer’s favor. If lower-income individuals lack a smartphone or have limited money to spend on a cellular plan, they’re unlikely to afford additional monthly fees for connected devices (consider the $10 monthly fee for the new cellular Apple Watch).

Perhaps the most worrisome threat IoT poses to the digital divide is the rise of smart cities. What’s great for city dwellers will be absent for rural citizens. This includes early projects like LinkNYC, where New York City transformed existing telecom infrastructure into wireless convenience (i.e., replaced payphones with Wi-Fi hubs). Meanwhile, rural towns are struggling to get broadband infrastructure in place. Smart cities don’t just appear overnight; they become “smarter” thanks to years of previous telecom investment.

Toronto is projected to be one of the first true smart cities. As one of the test subjects for Sidewalk Labs, Toronto will soon have sensors and cameras to capture constant, real-time information as a result of the company’s $50 million contribution to the project. The updated technology will monitor traffic flow, noise levels, air quality, energy usage, travel patterns and more. If executed as planned, cities such as Toronto could be more proactive in solving citywide issues that were never before monitored and identified.

Taking public safety into consideration, the advantages that smart cities may have over rural areas could be profound. IoT technologies such as mounted cameras and other connectivity devices allow first responders to provide extended monitoring capabilities to make cities safer and reaction times to events even quicker. Utility companies can use IoT to monitor different types of infrastructure (like pipes, wiring and so forth) in order to oversee aging equipment and detect issues before they happen.

The path to a positive outcome

Project Loon, a helium-balloon initiative from Alphabet (Google’s parent company), offers a hopeful example of how IoT can bridge the digital divide. It was deployed to help restore connectivity in Puerto Rico after a series of natural disasters decimated nearly all connectivity on the island.

Project Loon is ongoing, and it is not limited to disaster recovery; it is an example of how this kind of project could develop into a lasting way to solve rural connectivity. In early tests, a Loon balloon was able to hold its intended position for up to 90 days, guided by Google’s algorithms anticipating wind speed and direction. While questions remain about how a Loon-like initiative could be funded to supply rural areas with much-needed connectivity, the project offers a glimpse of how to democratize access to the internet.

Consumerization of the small cell also has potential to help alleviate the widening digital divide. Sprint recently unveiled its “Magic Box,” an affordable all-wireless small cell that uses existing cell towers for backhaul, allowing households to boost cellular signal throughout individual homes. The product isn’t an IoT device, but it highlights the direction technology is headed, fueled by consumers’ growing comfort with connected products. Its success could create a DIY approach to reducing connectivity issues in rural areas; because small cells like the Magic Box have substantial range, neighbors will be able to help neighbors overcome the digital divide.

Within telecom, significant advances are being made to support the growth of IoT. Connectivity providers of all sorts are working to fill in dead zones and create blanket coverage and redundancy across multiple frequencies. Contrary to a common belief, 5G will not be used to handle all IoT connections; devices will still need to leverage valuable 4G and 3G connections. Carriers and telecom vendors are making strides toward better ongoing connections in rural areas through a combination of coverage solutions. Providers of distributed antenna systems (DAS) and small cells have evolved from offering only boosted in-building cellular signals and network densification to building mobile and outdoor technologies that can help solve the last mile.

While innovation widens the digital divide between rural communities and metropolitan smart cities each year, 5G and telecom companies are working to supply increased connectivity to rural areas and close the gap. If the industry works together, that feat seems more achievable than ever before.


November 3, 2017  12:05 PM

What does the future of mobility look like?

Josh Garrett
ai, Artificial intelligence, Enterprise IoT, Internet of Things, iot, IoT devices, Laptops, M2M, Mobile devices, Smartphones, Tablets

Predicting the technology of the future is difficult, if not impossible. As we look toward the next five years and what mobile technology might look like, we decided to ask 300 IT decision-makers at multinational enterprises what technology they think will be commonplace in their organizations in the year 2022. Below is a sneak peek of our findings.

Smartphones, laptops and tablets will remain staples within the enterprise — no surprises there. What is surprising is the variety of new devices mobility experts predict their companies will use moving forward. Machine-to-machine (M2M) technology, virtual assistants, wearables, virtual reality headsets and automation are all expected to become popular workplace tools.

According to 57% of the surveyed decision-makers, each employee will also be aided by his or her own virtual assistant within the next five years. When asked about the most useful impact artificial intelligence will have on their organizations, respondents said the most compelling reason for implementing it is to help workers perform their jobs better — not to eliminate their jobs. In fact, decreasing the number of human workers required ranked as the least impactful reason for implementing AI, behind benefits like improving accuracy and eliminating errors or reducing the need for employees to do mundane tasks.

Just because 95% of IT decision-makers recognize the benefits of AI doesn’t mean that they’re rushing to implement AI technology. After all, there are often many obstacles to tackle when implementing any type of new technology. So, we asked survey respondents what barriers were holding them back from implementing AI and automated technology. Cost and security unknowns were at the top of the list, but not far behind were employee morale (knowing that employees would be concerned about job security) and training needs.

When it comes to implementing M2M and IoT, no single concern stood out — instead, the barriers seem to be program-specific. Some companies find this technology’s implementation process too complicated, while others find it too expensive. The number of devices seems to be an issue for other organizations — some find it difficult to track and secure so many devices. Some organizations are still skeptical of the return on investment they’ll see from M2M or IoT, and others are simply overwhelmed at the prospects and don’t even know where to begin.

If you’d like to learn more about how today’s IT decision-makers think about the future of IoT and mobile technology, download our full report (note: registration required).



November 2, 2017  4:51 PM

Without fog, autonomous cars are going nowhere

Lynne Canavan
Cars, Communications, FOG, fog computing, Internet of Things, iot, IoT hardware, IOT Network, self-driving car, Sensors

According to a prediction Forbes made back in March, 20 million self-driving cars will hit the road by 2020. By 2030, it estimates, one in four cars will be self-driving.

Given this inevitability, the obvious question is: “Are we ready?”

As you might imagine, the safe and secure operation of millions of 2-ton projectiles simultaneously speeding freely along our roadways presents some imposing technological obstacles. Autonomous driving (AD) vehicles require an incredibly intricate yet reliable infrastructure. There is no room for error. Precise operation during every millisecond of drive time is critical to everything.

The compute requirements of an AD vehicle are mind-blowing. According to Intel, by 2020 each autonomous vehicle will generate a tsunami of data — more than 4,000 GB per day. That’s equal to the amount of data generated by 3,000 people, assuming the average internet user produces 1.5 GB per day. Following that math, a million AD vehicles will generate as much data as 3 billion people. And every bit and byte is dedicated to precision operations and safety — there can be no waste or excess.
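Those figures are easy to sanity-check. A quick back-of-the-envelope calculation in Python, using only the numbers quoted above:

```python
# Back-of-the-envelope check of Intel's estimate, using the figures above.
gb_per_vehicle_per_day = 4000   # Intel: more than 4,000 GB/day per AD vehicle
gb_per_user_per_day = 1.5       # assumed average internet user

users_per_vehicle = gb_per_vehicle_per_day / gb_per_user_per_day
print(f"One AD vehicle = {users_per_vehicle:,.0f} internet users")  # ~2,667, rounded up to 3,000

vehicles = 1_000_000
print(f"{vehicles:,} vehicles = {vehicles * 3000 / 1e9:.0f} billion users' worth of data")
```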

(Figure: The breakdown behind Intel’s calculation.)

Of course, the huge amount of data generated by AD vehicles — coupled with the real-time service requirements — comes at the expense of network resources. Indeed, the workload of AD vehicles will require nothing less than the world’s most advanced network architecture.

A new communications ecosystem

AD’s inter- and intra-system communications necessitate the development of an entirely new technological ecosystem that brings together the cloud, the AD network infrastructure and the road infrastructure. Since seamless, real-time communication between these hierarchies is required, cloud transactions alone are nowhere near adequate for AD. That’s where fog computing and networking come in.

The OpenFog Reference Architecture, published in February, provides a medium- to high-level view of system architectures for fog nodes and networks in IoT, 5G and AI — each of which is critical to AD. As shown in the diagram, the multi-tiered OpenFog architecture fills in the gaps along the cloud-to-thing continuum by supporting vehicle-to-cloud (V2C) and vehicle-to-everything (V2X) connectivity.

(Figure: An example fog computing architecture for autonomous vehicles.)

Sensor data and video/imaging content for AD vehicles create huge amounts of upstream traffic (from vehicles to cloud), as well as huge amounts of downstream traffic (from cloud to vehicles). This requires a distributed vehicle-to-cloud model that enables safety services. With fog, each vehicle has a distributed, onboard compute architecture with significant connectivity between multiple onboard compute nodes that support analytics, storage and other applications.

The fog network typically takes the form of fog nodes arranged in a hierarchy between the low-level control computers in the vehicle and the remote servers in the cloud. This hierarchy enables multiple AD services and helps determine which services need to use the cloud and which services are more efficiently conducted without the cloud, at the node and network level.
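To make that placement decision concrete, here is a minimal sketch in Python. The tier names and latency figures are illustrative assumptions, not values taken from the OpenFog specification:

```python
# Hypothetical sketch: picking the fog tier for an AD service by latency budget.
# Tier names and latency figures are illustrative, not from the OpenFog spec.

TIERS = [                        # ordered from most to least centralized
    ("cloud",           0.500),  # assumed typical round trip, in seconds
    ("regional node",   0.100),
    ("roadside node",   0.020),
    ("in-vehicle node", 0.002),
]

def place_service(latency_budget_s):
    """Pick the most centralized tier whose typical latency fits the budget."""
    for tier, typical_latency in TIERS:
        if typical_latency <= latency_budget_s:
            return tier
    return "in-vehicle node"     # the tightest budgets stay on board

print(place_service(2.0))        # fleet analytics      -> cloud
print(place_service(0.050))      # local hazard sharing -> roadside node
print(place_service(0.005))      # collision avoidance  -> in-vehicle node
```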

V2C technologies adhering to the OpenFog architecture enable AD processes while providing various services that assist the driving process (such as real-time, high-definition maps). This can help vehicles drive cooperatively with one another and be aware of road hazards. Fog also includes higher-level mobile fog nodes in the vehicles that coordinate the functions of the lower-level processors managing things such as powertrain control, sensing, collision avoidance, navigation, entertainment and so on.

Fog nodes are also prevalent in roadside units, so high-performance computation and large, reliable, secure storage is located within a short network hop of all vehicle positions. Above that, regional fog nodes coordinate the operation of roadside fog nodes, optimizing the smart highway for all drivers. And when the fog infrastructure communicates with the cloud, it is to ensure that everything is safe and efficient across the entire smart transportation system.

The fog architecture was designed to easily enable V2X communication and services. Each vehicle is a mobile fog node that communicates with the infrastructure, other vehicles, the cloud and outside entities, such as pedestrians and bikers.

Fog node functionality

Let’s look at four types of fog node implementations in the AD ecosystem:

  1. In-vehicle fog nodes provide distributed, onboard compute infrastructure for AD vehicles. The nodes process data from cameras, Lidar and other sensors. They communicate over a high-speed databus for onboard advanced analytics, path-planning and the rapid response needed for AD. Real-time visualization of hazards can be fed to the navigation system.
    Additionally, in-vehicle fog nodes provide communications with other in-vehicle and infrastructure fog nodes for early access to travel safety-related information, such as road or lane closures, obstructions on the road, traffic incidents such as accidents, hazardous objects on the road, icy conditions and so forth. This enables AD vehicles to plan alternative routes dynamically.
  2. Fog nodes in vehicles, along the road and at the network edge provide extended sensing capabilities, such as smart cameras with computer vision in vehicles, smart cameras with computer vision in roads, and roadside units with sensors for detecting metrics such as vehicle density and ramp length.
  3. Fog nodes co-located with roadside units, such as traffic lights, streetlights or charging station kiosks, provide local and regional services (for example, traffic information updates, accident alerts, tourist guide information and shopping promotions).
  4. Fog nodes co-located with street cameras provide edge analytics capabilities closer to the source of the video, whether vehicle cameras or street cameras.

Fog provides an entirely new set of network resources, located between the traditional resources in the car and those in the cloud, upon which AD applications can run their computation, networking and storage functions. The OpenFog architecture can greatly improve the performance, efficiency, bandwidth, reliability and feature richness of AD networks.



November 2, 2017  2:31 PM

Putting the M and the C into 5G cellular IoT

Vicki Livingston
3G, 3GPP, 4G, 5G, Cellular, Communications, Connectivity, Internet of Things, iot, IOT Network, LPWAN, LTE, M2M

A majority of industry participants recognize that M2M, and now IoT, represents one of the key growth opportunities for telecommunications service providers and enterprises of all sizes over the next decade. Whereas 4G has been driven by device proliferation, bandwidth-hungry mobile services and dynamic information access, 5G will also be driven by IoT applications. There will be a wide range of IoT use cases in the future, and the market is now expanding toward both massive IoT (MIoT) deployments and more advanced technologies that may be categorized as critical IoT.

Massive IoT and critical IoT requirements

(Figure: Differing requirements for massive and critical IoT applications [1].)

Massive IoT refers to the tens of billions of devices, objects and machines that require ubiquitous connectivity, even in the most remote locations (like sensors buried deep in the ground), and that report their sensing data to the cloud on a regular basis. To reach massive scale, defined by global standards body 3GPP as at least 1 million devices per square kilometer [2], mobile networks must more efficiently support the simplest devices, ones that communicate infrequently and are so energy-efficient they can deliver an extremely long, ten-year battery life. The requirement is for low-cost devices with low energy consumption and good coverage.

Massive use cases, smart metering for example, have different network requirements and strongly cost-driven commercial needs:

  • Reliability and availability requirements vary by application, but extended coverage is often a must and long latency is not problematic
  • Infrequent transmission of small data packets, mainly uplink
  • Devices need to be inexpensive (i.e., simple modules)
  • Device battery must last many years

Alternatively, critical IoT applications will have very high demands for reliability, availability and low latency, which could be enabled by LTE or 5G capabilities. The volumes would be much smaller, but the business value is significantly higher.

Critical use cases, for example public safety applications, have very demanding network requirements:

  • Low latency
  • High reliability and availability

There are also many other use cases between the two extremes, which today rely on 2G, 3G or 4G cellular connectivity. For example, a third group might be defined as “enterprise applications”:

  • They might need moderate bitrate, support for mobility or even potentially include voice (Voice over LTE)
  • They use smarter devices than just connected sensors (for example, personal digital assistants and insurance telematics), but these must be fairly inexpensive and have a long battery life.

The main questions industry participants ask in the early stages of considering entering the field of M2M and IoT are these: What are the key market drivers for IoT? How will we realize the 5G vision of connecting massive IoT and critical IoT applications?

Globally, M2M connections will grow from 780 million in 2016 to 3.3 billion by 2021, a 34% compound annual growth rate and roughly fourfold growth, as connections evolve from 2G to 3G and 4G [3].
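That forecast is internally consistent, as a quick check shows:

```python
# Verify the CAGR implied by Cisco's M2M connection forecast.
connections_2016 = 0.78e9   # 780 million
connections_2021 = 3.3e9    # 3.3 billion
years = 5

cagr = (connections_2021 / connections_2016) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")                                     # -> 33.4%
print(f"Growth: {connections_2021 / connections_2016:.1f}x")   # -> 4.2x, roughly fourfold
```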

(Figure: Global M2M growth and migration from 2G to 3G and 4G+ [4].)

The underlying fundamental enabler that makes this growth happen is the technology evolution. We are now at the point in time where viable technologies are available at the same time as concrete needs from different stakeholders are taking shape. This is the key reason for the emerging 3GPP Release 13 standards — NB-IoT, LTE for Machines (LTE-M), Extended Coverage-GSM (EC-GSM) — in low-power wide area networks (LPWAN).

Different IoT devices need different connectivity services — some only need to send a few bytes at long intervals, while others require continuous high-bandwidth connectivity with low latency. Operators, therefore, need to define a strategy describing which segments they want to compete in; that strategy will then guide how they tailor services for those segments and which technologies they deploy.

Frequency licenses are a key operator asset. If the operator has available FD-LTE capacity in sub-GHz frequencies, implementing NB-IoT functionality is a natural choice. If TDD-LTE capacity is available, then both NB-IoT and LTE-M are feasible. If GSM is available, EC-GSM IoT can be deployed, and if 3G is available, spectrum can be refarmed to support NB-IoT. All these options are possible due to the cellular standards development for IoT in 3GPP Release 13.
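A minimal sketch of that spectrum-to-technology logic (simplified, of course; a real deployment decision weighs capacity, device ecosystems and much more):

```python
# Simplified mapping from an operator's spectrum assets to the
# Release 13 IoT options described above.

def iot_technology_options(spectrum_assets):
    """Map an operator's spectrum holdings to Release 13 cellular IoT options."""
    options = set()
    if "FD-LTE sub-GHz" in spectrum_assets:
        options.add("NB-IoT")
    if "TDD-LTE" in spectrum_assets:
        options.update({"NB-IoT", "LTE-M"})
    if "GSM" in spectrum_assets:
        options.add("EC-GSM-IoT")
    if "3G" in spectrum_assets:
        options.add("NB-IoT (refarmed 3G spectrum)")
    return sorted(options)

print(iot_technology_options({"FD-LTE sub-GHz", "GSM"}))
# -> ['EC-GSM-IoT', 'NB-IoT']
```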

Correct pricing and price differentiation are essential. As the low-power wide area (LPWA) IoT market expands, lower prices for connecting devices to the internet will support the emergence of new applications with new types of sensors.

For mobile operators, IoT connectivity is a clear differentiator from companies like Amazon, IBM and Microsoft, which all compete in the IoT arena but offer no IoT connectivity.

Beyond connectivity, operators can offer additional services such as devices and device management, as well as applications and services related to those devices.

The first cellular IoT networks supporting massive IoT applications, based on Cat-M1 and NB-IoT technologies, launched in early 2017; the Cat-M1 networks are marketed as LTE-M, or LTE for Machines. LTE-M is the commercial term for the LTE-MTC LPWA technology standard published by 3GPP in the Release 13 specification, and it refers specifically to LTE Cat-M1, suitable for IoT. LTE-M is a low-power wide area technology that supports IoT through lower device complexity and extended coverage, while allowing reuse of the installed LTE base. This allows battery life as long as 10 years (or more) for a wide range of use cases, with modem costs reduced to 20-25% of those of current EGPRS modems. LTE-M IoT devices connect directly to a 4G network, without a gateway, and run on batteries.

The LTE-M network is deployed with global, 3GPP standardized technology using licensed spectrum for carrier-grade security. This differentiates LTE-M from the technologies offered by other companies with non-cellular IoT offerings. LTE-M supports large-scale IoT deployments, such as smart city services, smart metering, asset tracking, supply chain management, security and alarm monitoring, and personal wearables.

Advantages of LTE-M over traditional IoT connectivity options include:

  • Longer battery life (expected up to 10 years)
  • Better coverage for IoT devices underground and deep inside buildings
  • Reduced module size (as small as 1/6 the size of current modules)

LTE-M has been embraced by leading global carriers to build a broad base of ecosystem partners, devices and applications for global markets. Supported by all major mobile equipment, chipset and module manufacturers, LTE-M networks will coexist with 2G, 3G and 4G mobile networks and benefit from all the security and privacy features of mobile networks, such as support for user identity confidentiality, entity authentication, confidentiality, data integrity and mobile equipment identification.

Commercial launches of LTE-M networks are taking place globally in 2017-2018. As of September 2017, there were 13 commercial NB-IoT networks worldwide, including six in Western Europe and four in Asia-Pacific, and six commercial LTE-M networks, including two in the U.S. and three in Asia-Pacific.

The U.S. is one of the largest and most advanced IoT markets in the world, with both AT&T and Verizon launching nationwide LTE-M networks.

For example, AT&T launched LTE-M services in the U.S. at mid-year 2017, and will launch in Mexico by the end of the year to create an LTE-M footprint covering 400 million people. AT&T’s LTE-M deployment — the result of software upgrades — marks another step forward on the path to 5G and massive IoT.

Sprint launched a nationwide LTE Cat-1 IoT network by the end of July 2017, anticipates a further upgrade to LTE-M in mid-2018, and plans to follow with the rollout of LTE Cat-NB1 (NB-IoT).

T-Mobile has tackled some of the biggest obstacles slowing IoT innovation with offerings for IoT and M2M. T-Mobile U.S. CTO Neville Ray has said the operator will deploy narrowband LTE (NB-LTE) and LTE-M, although no timeline had been announced as of mid-2017.

(Figure: Benefits of LTE-M [5].)

Analyst firm ABI has predicted that Cat-M technology will see strong growth beginning in 2018 as network operators become more aggressive in their deployments. The initial cost advantage of non-cellular networks is likely to dissipate as cellular operators move on their deployments.

“Size and speed matter in the burgeoning LPWAN market,” said Steve Hilton, analyst at MachNation. “The more devices ordered for a technology like Cat 1, the lower the per unit price per device. And most assuredly the success of this market is going to depend on extremely inexpensive devices. In addition, the sooner that LPWAN solutions are available on licensed spectrum from carriers like Sprint, AT&T and Verizon, the less market opportunity there is for non-dedicated spectrum solutions like Sigfox and Ingenu.” [6]

With 3GPP Release 13 eMTC and NB-IoT as a basis, Release 14 added enhancements to existing features, such as mobility and Voice over LTE support. Two key features, broadcast and positioning, were introduced. Latency and battery consumption reductions were also planned. New user equipment (UE) categories were added to enable UEs with expanded capabilities and support for higher data rates. The 5G Americas white paper, “LTE and 5G Technologies Enabling the Internet of Things,” provides expanded information on many features for both massive IoT and critical IoT.

Looking ahead, 3GPP Release 15 introduces 5G Phase 1: While the requirements address MIoT along with ultra-reliable low-latency communications, enhanced mobile broadband (eMBB) and other new cellular capabilities, 5G Phase 1 will focus on eMBB as an early deployment option of the 5G new radio and evolved core network. 3GPP SA2 has initiated a study on the architectural impacts of MIoT in the Release 15 timeframe, which will enable the architecture and protocol enhancements for MIoT to be realized in Release 16. These enhancements will further expand IoT capabilities to take advantage of increased coverage with wide-area cells and small cells, improved resource efficiency for IoT devices with limited or no mobility, and support for multiple access technologies, from LTE and 5G new radio to WLAN and even fixed access. As the market for IoT continues to expand, 3GPP will continue to support the growing demands for new and improved communications for IoT devices.

Future white papers from 5G Americas will address the advancements for CIoT in Releases 14, 15 and 16 as the market continues to grow and expand through many different use cases and services. Massive IoT and critical IoT will be key parts of 5G.

[1] “Cellular Networks for Massive IoT,” Ericsson white paper, January 2016.
[2] “Leading the LTE IoT evolution to connect the massive Internet of Things,” Qualcomm white paper, June 2017.
[3] Visual Networking Index, Cisco, March 2017.
[4] Visual Networking Index, Cisco, March 2017.
[5] IEEE ComSoc Technology Blog.
[6] IEEE ComSoc Technology Blog.



November 2, 2017  1:23 PM

Data context vs. content in an IIoT environment

Christopher Bergey
Artificial intelligence, Cloud storage, Data Management, Data storage, Edge computing, IIoT, Industrial IoT, Internet of Things, iot, IoT data, M2M, Machine data, Machine learning

There is a tremendous amount of raw data being generated from machines, machine sensors and robots on the factory floor as part of the new industrial internet of things revolution. Some of this data will carry intelligence and value that can improve operational efficiencies, foresee maintenance requirements and deliver smarter and faster business decisions. But most of it will be somewhat wasteful, not that interesting and certainly not worth saving for any extended period of time. The challenge that many industrial companies face, or will soon face, with their IIoT systems is how to extract value and intelligence from machine data.

Data content vs. data context

To answer this question, it is important to understand the difference between content and context in data relating to an IIoT environment. Automated production machinery and built-in sensors record endless hours of unfiltered operational machine data. That body of recordings is content. There is a lot of information within the recordings, but nothing by itself that extracts value or intelligence, or enables process improvement decisions.

From the data that a machine collects, wouldn’t it be valuable to know more about the machine’s operation and whether it has deficiencies, is running off-center or requires maintenance or immediate attention? How about information that includes peak hours of machine use, associated yields, product rejects, operator effectiveness, congestion areas, material use estimates and so forth? These entities are context: interesting or valuable data that needs to be stored for further use, analysis or mining for the unexpected at some point in the future.

As this raw content resides in machines, sensors or robots at the edge of the network, compute processing is required there to add intelligence to the data. As data changes faster, more real-time processing at the edge of the network will be required. Historically, humans wrote algorithms in an attempt to transform the data from content to context; these simpler, fixed algorithms were processed over a small data stream, in near real time, local to the raw data in the devices. Nowadays, sensor-created data streams are enormous and high fidelity.

Artificial intelligence is an example of massive real-time processing at the edge, enabling machines to perform human-like tasks. In-machine sensors read, compare and physically map machine or robotic data to the environment, while analysis and intelligent algorithms look for patterns in the data and alert operators to anomalies and opportunities for process improvements that can save a manufacturing operation significant time and money.

Machine learning is the “teacher” of localized AI. Developed by learning patterns within very large data sets, it analyzes behavioral machine patterns when applied to machines or robots and interprets real-life operational scenarios that the machine can learn from. The more it learns, the more accurate and effective it can make the localized AI algorithms.
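To make the content-to-context transformation concrete, here is a minimal sketch of the simpler, fixed kind of edge algorithm described earlier: it reduces a raw sensor stream (content) to occasional maintenance alerts (context). The window size and threshold are illustrative assumptions:

```python
from collections import deque
from statistics import mean, stdev

WINDOW = deque(maxlen=100)   # recent raw readings kept at the edge
THRESHOLD = 3.0              # alert when >3 standard deviations from the mean

def ingest(reading):
    """Turn raw content into context: return an alert string, or None."""
    alert = None
    if len(WINDOW) >= 10:    # wait for a minimal baseline
        mu, sigma = mean(WINDOW), stdev(WINDOW)
        if sigma > 0 and abs(reading - mu) > THRESHOLD * sigma:
            alert = f"ALERT: reading {reading:.2f} deviates from mean {mu:.2f}"
    WINDOW.append(reading)
    return alert
```

An AI- or machine learning-based version replaces the fixed threshold with a model learned from historical data, but the shape of the pipeline stays the same: raw stream in, compact context out.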

IIoT edge-to-cloud storage strategies

The abundance of data generated by IIoT systems and AI-supported machine applications is creating new challenges for industrial operations. These systems must either respond to developing situations in real time or review and analyze historical data, looking for areas where the process can be improved so that artificial intelligence agents can be trained to monitor the system. Responding to real-time data changes requires that the data be immediately accessible and locally available (edge storage), while data worth saving for future use, analysis or process training will be moved to the cloud.

Data analytics at the edge has also become more of a requirement, as the growth of “smart video” in today’s surveillance systems creates business value and intelligence. Once placed in a commercial or retail setting, smart surveillance cameras can perform real-time analytics that recognize customer facial expressions and identify the number of people in the store at a given time, where they go, how long they stay, the effect of sign placements and a host of other possible options. The analysis is performed locally at the edge, in real time, while the results reside in the cloud for archiving or are deleted and re-recorded.

To generate content locally that will lead to valuable context, either the compute and storage elements reside directly within a sensor (and sometimes within the machine itself), or the data is sent over a local wired or wireless network to an edge gateway located on the production floor. The IIoT storage strategy is not to funnel all of the data to the cloud. Instead, it uses a combination: data is stored locally at the machine level as well as at a factory-level edge gateway, so it can be aggregated locally without being exposed to the outside world, analyzed at the edge for intelligence, and translated into a common cloud format for long-term storage.
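A rough sketch of that tiered strategy, assuming raw readings stay at the machine, the factory gateway aggregates them, and only summarized context is shipped to the cloud (field names and formats here are hypothetical):

```python
import json
import statistics

# Hypothetical tiered IIoT storage: raw content stays local, the factory
# gateway reduces it to context, and only the summary goes to the cloud.

def machine_store(readings):
    # Tier 1: raw content retained at the machine for real-time response.
    return list(readings)

def gateway_aggregate(machine_id, readings):
    # Tier 2: the edge gateway reduces content to context on the factory floor.
    return {
        "machine": machine_id,
        "count": len(readings),
        "mean": statistics.mean(readings),
        "max": max(readings),
    }

def to_cloud_format(summary):
    # Tier 3: translate to a common format for long-term cloud storage.
    return json.dumps(summary)

raw = machine_store([0.8, 0.9, 4.2, 0.7])
print(to_cloud_format(gateway_aggregate("press-07", raw)))
```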

Forward-looking statements: This article may contain forward-looking statements, including statements relating to expectations for Western Digital’s embedded products, the market for these products, product development efforts, and the capacities, capabilities and applications of its products. These forward-looking statements are subject to risks and uncertainties that could cause actual results to differ materially from those expressed in the forward-looking statements, including development challenges or delays, supply chain and logistics issues, changes in markets, demand, global economic conditions and other risks and uncertainties listed in Western Digital Corporation’s most recent quarterly and annual reports filed with the Securities and Exchange Commission, to which your attention is directed. Readers are cautioned not to place undue reliance on these forward-looking statements and we undertake no obligation to update these forward-looking statements to reflect subsequent events or circumstances.



November 1, 2017  3:46 PM

Girding the grid: The missing piece of the smart city revolution

Steven Martin
Electric grid, energy, Internet of Things, iot, IoT applications, Service providers, Smart cities, smart city, Smart grid, utilities

Three and a half billion people live in cities around the world today, and this number is expected to nearly double by 2050. The rapid pace of urbanization, coupled with advancements in cloud computing, artificial intelligence and sensors, is driving a smart city revolution. From San Diego to Barcelona, the latest IoT technologies are tackling pollution, traffic congestion and crime — just to name a few.

A fundamental key to scaling and advancing these IoT technologies comes down to a city’s ability to transport reliable, affordable and clean power. But many developed nations — particularly the United States — are still using electrical grids constructed in the 1950s and ’60s with 50-year life expectancies. Our grids are not only out of date, they were built to manage a uniform flow of electricity primarily from coal, petroleum and natural gas. The introduction of wind and solar energy, which ebbs and flows with weather patterns, creates new stresses the grid was not designed to handle.

Combined with an uptick in extreme weather, the U.S. is experiencing a steady rise in blackouts. If this continues, the American Society of Civil Engineers estimates that U.S. gross domestic product will fall by a total of $819 billion by 2025 and $1.9 trillion by 2040. Furthermore, it predicts that the U.S. economy will end up with an average of 102,000 fewer jobs than it would otherwise have by 2025 and 242,000 fewer jobs in 2040.

While significant updates to our electrical grid infrastructure are needed, there are several emerging technologies that cities can start integrating right away to help power a more cost-effective, reliable and secure grid.

Digital twin

The heart of a smart city is its ability to anticipate a problem before it happens. Through digital twin technology, we can create digital replicas of physical assets that continuously learn from multiple sources and share what they learn with other people and machines. By combining sensor data, advanced analytics and artificial intelligence, digital twins build a bridge between the physical and digital worlds, providing a holistic view of the entire asset network.

Through simulation capabilities, digital twins can uncover historical behavioral patterns that show when, how and why a grid breaks — in turn allowing cities to prevent power outages and recover from them more quickly. Digital twins also allow cities to test multiple scenarios to better prepare for growing urban populations, extreme weather and more renewable energy sources coming online.

Energy storage

While we’ve made huge technological strides over the last decade in increasing the effectiveness and efficiency of renewable energy production, the challenge has been how to reliably transport and distribute renewables given our current infrastructure.

To ensure a continual and reliable flow of energy, the grid must carefully manage electrons so that the right amount of electricity is on the grid at all times. Too much or too little and there’s a blackout. But what happens when the wind stops blowing, or when a wind turbine produces more electricity than consumers need at one point in time? This is where energy storage comes in.

When production outpaces consumption, there is excess energy. Rather than allowing that excess to go to waste, battery storage can capture it for later use and protect the grid from blackouts. Conversely, when the wind is not blowing or the sun isn’t shining, energy storage can serve as a backup to ensure a consistent flow of energy from renewables.
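In control terms, storage simply absorbs the difference between generation and demand in each interval. A toy sketch, with made-up capacity and load figures:

```python
# Toy battery dispatch: charge on surplus, discharge on deficit.
# The capacity and the generation/demand series are made-up illustrations.

CAPACITY_MWH = 50.0
charge = 0.0

generation = [30, 45, 60, 20, 10]   # MWh per interval (e.g., wind output)
demand     = [35, 40, 40, 35, 30]

for gen, load in zip(generation, demand):
    surplus = gen - load
    if surplus > 0:                        # store excess renewable output
        charge = min(CAPACITY_MWH, charge + surplus)
    else:                                  # back-fill from storage
        discharge = min(charge, -surplus)
        charge -= discharge
        unserved = -surplus - discharge    # would need peaker or grid support
        print(f"deficit {-surplus} MWh: battery covers {discharge}, grid covers {unserved}")
```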

The challenge: Current battery technology can only output electricity for a few hours. A peaker power plant, in contrast, can run as long as fuel is being fed to it. Much as in the automotive industry, utility companies now have access to transitional hybrid technology whose digital control system blends electric and gas power outputs into a continuous energy flow and minimizes battery degradation, which is key to making battery technology cost-effective for the long haul. So far, this has enabled gas peaker plants to cut greenhouse gas emissions and air pollution by 60%.

Smart city technologies are already changing the way we live, work and play in many parts of the world. We have cities filled with lights that fight crime, thermostats that adapt to human behavior and cars that talk to each other. This would have sounded like a utopian pipe dream to my dad growing up in the 1950s. Fast forward more than 60 years and a lot has changed. However, in order to fully realize the benefits of a smart city revolution, we need a strong foundation and it all starts with a modern electrical grid.



November 1, 2017  2:16 PM

Why startup-corporate collaboration is a hot topic and why you should care

Tanya Suarez
Collaboration, companies, corporations, Internet of Things, iot, iot security, Security, security in IOT, Start up, start-ups, Startup, Startups

I don’t think anyone doubts that we are in an era of disruptive innovation or that truly disruptive innovations are unlikely to come from corporates. They tend to have “innovation antibodies” that are born from the very factors that have made them successful in the first place:

  • Internal processes that mean results are repeatable and reliable across the whole organization, but can lead to inflexibility and a culture of #notinthebook
  • Legacy systems that underpin operational success, a byproduct of having been around a long time; the cost of retrofitting legacy systems in a retail bank has been estimated at $400 million
  • Regulatory compliance requirements which are born to a certain extent from mass adoption and therefore part of the price paid for success

So, the perennial question corporates ask themselves is: Do we buy or do we build? The right answer (in so far as there is one) is that you do both. You build in areas in which you have core competencies and you buy in frontier or adjacent markets that can significantly add value to your existing portfolio.

For a Cisco or an ARM, it may or may not make sense to work with IoT startups to extend their product offerings, although it does make sense to work with them as potentially scalable customers.

For industry, however, the opposite may be true. Although many firms, from travel to betting, across a wide array of industries are at their core technology firms (for example, Goldman Sachs has 30,000 employees, 10,000 of whom are developers), their expertise is narrow and directed at a specific application. Partnering with startups can help revive old business models by providing market differentiation from competitors. For example, Aviva is basing its marketing campaign in the U.K. on an app that tracks driving habits and rewards good or risk-reducing behavior. Manufacturers can move from product-based models to recurring-revenue models that exploit cyber-physical opportunities. This is the model being pursued, rather controversially, by John Deere: in any real sense, ownership of the tractor is being transferred from the farmer to the company.

But there is another pressing reason why corporates need to work with IoT startups: security.

As Bruce Schneier has said, “We no longer have things with computers embedded in them. We have computers with things attached to them.” Corporates need to understand the possible ways in which their products or processes could be hacked. The IoT Reaper botnet is infecting over 10,000 devices a day, with millions more IoT devices queued for infection, according to Qihoo 360.

One of the best ways corporates can begin to understand what can affect their reputation and their business is by working with IoT startups. It’s an ecosystem approach that will not only help startups better understand how to build in security and resilience, but also help corporates better understand the chain of vulnerabilities.

Context: The above comments are based on our experience of working with 132 IoT startups with founders from countries ranging from Ukraine to the U.K. Our equity-free program, part of Startup Europe, aimed to get them market-ready and to establish solid foundations for growth. Overall, 78% were B2B or B2B2C startups, and part of the acceleration program involved supporting them in getting to their first pilots and/or contracts with corporates.



November 1, 2017  1:28 PM

Privacy: IoT’s big break or big wall?

Don DeLoach
Data Analytics, Data Management, Data privacy, Data protection, Data-security, GDPR, Internet of Things, iot, IoT applications, iot security, privacy, security in IOT

Security and privacy are two prevailing concerns most people have about technology. At any given lunch meeting or even standing around with drinks at a cocktail party, it is not uncommon to hear people talking about the Equifax breach or the looming concerns about what Facebook or Google are “doing with my data.”

Along with the excitement about what technology can do for us is a not-so-latent fear of being hacked and an equally present concern about compromised privacy. It is no surprise that growing concerns about privacy are leading people to call for regulations to curtail the move toward unfettered use of private data.

In fact, in the European Union, the move toward regulation has been solidifying for some time, resulting in the General Data Protection Regulation (GDPR), which goes into effect in May 2018. These regulations are seen as the first and perhaps most comprehensive approach to ensuring personal information is protected, and they are largely viewed as a clarion call for organizations to anonymize data or face astoundingly large fines; the fines can reach 4% of a company’s annual revenue. That means a company like GlaxoSmithKline, with 2016 annual revenues of $37.09 billion, could potentially be shelling out $1.48 billion in fines. If that doesn’t get your attention, what will? So companies based in Europe, as well as those doing business in Europe, understand they cannot drive analytics using personal information for which they don’t have express permission. Among other things, that means potentially mountains of data collected in the past may become illegal to use. And therein lies the dilemma.

But let’s set that aside for the moment.

In the world of the internet of things, more and more people and organizations are beginning to understand that the real value in IoT is a function of the underlying data and the insight that can be gained from analyzing that data. By definition, this suggests that the richer and cleaner the data set, the greater the opportunity for insight and the better the resulting actions. So, instead of letting the data from an IoT-enabled device be used primarily by the device provider, enterprises and their CIOs and chief data officers want in on the action. They are beginning to demand ownership and control of the IoT data from all devices so they can look at data from device A in the context of devices B, C, D and so on.

Furthermore, companies are combining this information with data from their enterprise systems, such as ERP, point of sale and crew scheduling, and then augmenting it further with external data, like weather and demographic data, as well as more and more data coming from public IoT devices and from increasingly IoT-instrumented partnerships. This data is cleansed and enriched, then propagated to a variety of constituents in what is being referred to as a first receiver architecture, designed to get the right data to the right constituent in the right way at the right time. In essence, this separates the creation of the data from its consumption, increasing the utility value of the data and thus maximizing its use. Everybody wins, right?
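Before returning to GDPR, here is a highly simplified sketch of that separation of creation from consumption, as I read the first receiver idea (the function names and constituent views are illustrative only):

```python
# Illustrative-only sketch of a "first receiver" pattern: one ingestion
# point cleanses and enriches device data, then propagates a tailored
# view to each constituent, separating data creation from consumption.

def cleanse(event):
    event = dict(event)
    event["value"] = round(float(event["value"]), 2)
    return event

def enrich(event, weather):
    return {**event, "weather": weather}

CONSTITUENT_VIEWS = {
    "operations": lambda e: e,                                       # full view
    "partner":    lambda e: {k: e[k] for k in ("device", "value")},  # restricted view
}

def first_receiver(event, weather):
    prepared = enrich(cleanse(event), weather)
    return {name: view(prepared) for name, view in CONSTITUENT_VIEWS.items()}

print(first_receiver({"device": "A", "value": "3.14159"}, weather="rain"))
```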

Now go back to the GDPR regulations.

What happens when 50% of that data is contextually delinked and minimized? This poses a serious limitation. Then, when you consider the combination of data sets described above, the minimization is compounded and the data’s usefulness is reduced substantially.

But the idea that you have to either delink and minimize your data or circumvent compliance at the risk of extreme penalties is a false choice. As with almost any large market opportunity, challenges that surface drive innovation, in some cases through radical breakthroughs by a specific technology and in others via the combination of existing technologies applied in new and innovative ways.

There is a team in Poland, headed by University of Warsaw Professor Dominik Slezak, doing some incredible work using statistical metadata for high-value approximation over massive amounts of underlying data. It has great potential both for investigative analytics and for machine learning against those huge data sets, in a fraction of the time. Looking forward, when coupled with other technologies that protect privacy, this capability could yield insights equivalent to those from otherwise “illegal” data that does not comply with the emerging regulations.

Another, perhaps even more specific, example is a new company called Anonos. (Full disclosure: I am an occasional advisor to Anonos.) Most of the privacy companies aimed at GDPR accomplish anonymization — and therefore compliance — by permanently delinking significant context. But the GDPR regulations have certain safe-haven exceptions deemed acceptable that can, in fact, allow for greater flexibility, though they require accommodations that many of the pure-play anonymization offerings lack. Anonos is an example of a market response to a big problem, and there certainly will be others. Either the GDPR regulations will fail to go into effect (which won’t happen) or the market will respond with more innovation to fill the gaps. And other countries around the world are sure to follow.

Now back to IoT.

The wall is created when organizations don’t take the time to understand the goals, in this case maximizing the use of data, and simply rely on a path of least resistance. But whether it is taking on the issue of data primacy or data privacy, the wall, and its implied limitations, can become an opportunity with a little innovation and thoughtful planning. And since so many organizations will indeed take the path of least resistance, the magnitude of the opportunity for those getting it right is all the greater.

Many believe IoT will ultimately dwarf all other technology waves in terms of importance, and especially in terms of scale. Most also agree that the real value of IoT lies in the use of the underlying data. And while data scientists are becoming among the most important people in the equation, it really doesn’t take a data scientist to figure out that an enterprise taking a thoughtful approach to data, be it for security, privacy or primacy, makes all the sense in the world. It is the opportunity.



October 31, 2017  2:49 PM

How mobile operators can bring the smart city to life

Aman Brar
GATEWAY, Gateways, Internet of Things, iot, IoT applications, IOT Network, MNO, Mobile connectivity, Mobility, Smart cities, smart city

Can you relate to this? You find that one elusive parking spot — and as you are backing up, another car sneaks in. A study conducted last month in the U.K. found that more than half of British drivers suffer from stress when they cannot find a parking space, and drivers in major towns and cities everywhere can probably relate. You would think this modern-day scourge could be addressed with IoT.

In theory, it should be simple. Smart parking sensors would be able to flag vacant parking spots to motorists on an application, such as Google Maps, Waze or Apple Maps, as they enter a town or city. Parking would be a breeze! Not so. As things stand, motorists are still far from finding a remedy for their parking headaches. The smart parking use case is a prime example of a problem holding back IoT’s full potential.

The rise of NB-IoT

The reason for this conundrum is twofold: a lack of standards, and a gap in efficiently connecting parking sensors with multiple cloud vendors and central application servers. Currently, NB-IoT, especially its non-IP data delivery, provides the cellular functionality required to efficiently connect devices across large distances with prolonged battery life. A number of mobile carriers, including Vodafone, Three, China Mobile and Zain, have either deployed NB-IoT networks or are conducting trials. According to analysts, the NB-IoT chipset market could grow from $16 million in 2017 to $181 million by 2022, a compound annual growth rate of just over 60%. Yet connecting non-IP devices, such as smart parking sensors, over NB-IoT to platforms such as Azure, CloudPlatform or AWS, via central application servers, is complex.

Currently, IoT gateways form the bridge between smart sensors or devices and internet connectivity via NB-IoT on a cellular network. 3GPP and the Service Capability Exposure Function (SCEF) set the standards for connecting the device or sensor to the IoT gateway. Things get murky when you want the device or sensor to connect to multiple application servers and cloud platforms over NB-IoT gateways, due to the absence of agreed standards and protocols. For example, the developer of a smart parking sensor would have to send its data to a central application server, transmit it via Amazon’s cloud or Microsoft Azure, and then on to the likes of Waze and Google Maps, which would pick up that data from multiple clouds. Because the sensor data is not federated and easily available, the process is cumbersome and complex. This long-winded process is stifling innovation.

A bridge too far?

So, which players in the IoT ecosystem are best placed to find a solution for this? The organizations responsible for carrying the data on NB-IoT or, in other words, the mobile operators. Rather than just being the workhorse for transporting IoT data, mobile operators can play a central role by using gateways and building an open application ecosystem to foster interoperability between applications, devices and enterprise backend systems. Operators need to be the bridge into IoT systems such as AWS IoT and Azure IoT platforms. Importantly, operators also have the technology to secure the network and the IoT devices from attacks and malware, as well as provide network abstraction and enhanced connectivity.

To enable mobile operators to do this, gateways should be able to handle telco-grade distributed databases with the scale to manage millions, if not billions, of devices. Put simply, gateways should allow operators to consolidate the functionality required by standards and protocols such as SCEF/SCS, extend it to other APIs like REST-JSON and MQTT, and federate data from multiple sources.
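For example, once an operator gateway exposes an MQTT endpoint, a parking application could publish occupancy events in a few lines. This sketch uses the paho-mqtt Python package; the broker address and topic scheme are hypothetical, not a real operator API:

```python
import json
import paho.mqtt.client as mqtt  # pip install paho-mqtt

# Hypothetical: publish a parking sensor event to an operator-hosted
# MQTT endpoint. Broker and topic layout are illustrative only.

BROKER = "iot-gateway.example-operator.com"
TOPIC = "city/parking/zone-12/spot-034"

client = mqtt.Client()
client.connect(BROKER, 1883)

event = {"spot": "034", "occupied": False, "ts": "2017-11-01T09:30:00Z"}
client.publish(TOPIC, json.dumps(event), qos=1)  # QoS 1: at-least-once delivery
client.disconnect()
```

The hard part the article describes is everything behind that endpoint: federating such events across operators and clouds so an app like Waze can consume them from one place.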

If, as Gartner predicts, there will be over 20 billion connected IoT devices by 2020, mobile operators could secure revenues of around $48 billion by capitalizing on the IoT/M2M opportunity. To do that, they need to seize the initiative and foster innovation. Parking might sound trivial, but it is a use case within smart cities that highlights a growing problem for application developers working with NB-IoT connectivity — and an opportunity for mobile operators.

Smart cities are touted as the future of urban living where everything from waste bin collections to streetlights and transportation will be connected and intelligent. Of course, smart cities and IoT are still in their relative infancy and this provides an opportunity to iron out problems and fine-tune networks. We need to address the issues that are stifling innovation — namely the gap in efficient connectivity and a lack of standards. Mobile operators have the power to add the “smart” into cities and bring that bright future to life.



October 31, 2017  12:34 PM

Four stages of securing the super-connected world

Srinivasan CR
cybersecurity, Data-security, DDOS, Internet of Things, iot, IoT data, iot security, security in IOT

October is National Cybersecurity Awareness Month. The initiative by the Department of Homeland Security and the National Cyber Security Alliance is a huge collaborative effort spanning both public and private sectors, and a good demonstration of how the industry is coming together to safeguard the digital world.

While businesses in the U.S. and globally are still reeling from the WannaCry and NotPetya ransomware attacks and the massive Equifax data breach, scrambling to update their systems to protect themselves, there is another kind of threat looming on the horizon.

The internet is today in the hands of around 3.5 billion people. And there are around 6.5 billion connected devices in use worldwide — a figure projected to hit 27.1 billion by 2021. What’s more, as consumers, we’re more connected than ever: the average internet user owns 3.64 connected devices, uses 26.7 apps and has an online presence across seven different platforms.

The ubiquitous global connectivity enabled by mobile applications and the internet of things opens up great possibilities for personal and organizational growth, from smart city advancements to transforming how industries produce goods. The industrial IoT has seen significant advancement in recent years. For example, by connecting assets in a factory, organizations can gain better insight into the health of their machinery and predict major hardware problems before they happen, allowing them to stay one step ahead of their systems and keep costly outages to a minimum.

Yet IoT also exposes us to more security vulnerabilities that can cause financial loss, endanger personal and public safety, and do varying degrees of damage to business and reputation. After all, anything connected to the internet is a potential attack surface for cybercriminals. For example, distributed denial-of-service (DDoS) attacks are getting better at exposing vulnerabilities in networks and infecting IP-enabled devices to rapidly form a botnet army that grinds the network to a standstill. Simply put, the more devices connected to a network, the bigger the potential botnet army for DDoS attacks.

Furthermore, without adequate security, innocuous items that generally pose no threat can be transformed into something far more sinister. For example, traffic lights that tell cars and pedestrians to cross at the same time, or railway tracks that change to put a commuter train on a collision course.

As the number of connected devices continues to grow and both public and private sector organizations embrace IoT, IT decision-makers must pause and think about how they can work together to create an end-to-end infrastructure that can deal with the influx of new devices and the inevitably rapid spread of cyberattacks in our increasingly connected world.

First, security must be built within IoT systems and the rest of the IT estate from the ground up, instead of retrofitting piecemeal security products as new threats emerge. Second, organizations need to adopt an adaptive security model, continuously monitoring their ecosystem of IoT applications to spot threats before attacks happen. Adaptive security means shifting from an “incident response” mindset to a “continuous response” mindset. Typically, there are four stages in an adaptive security lifecycle: preventative, detective, retrospective and predictive.

  1. Preventative security is the first layer of defense. This includes things like firewalls, which are designed to block attackers and their attacks before they affect the business. Most organizations have this in place already, but there is definitely a need for a mindset change. Rather than seeing preventative security as a way to block attackers completely, organizations should see it as a barrier that makes it more difficult for them to get through, giving the IT team more time to disable an attack in progress.
  2. Detective security detects the attacks that have already made it through the system. The goal of this layer is to reduce the amount of time that attackers spend within the system, limiting the subsequent damage. This layer is critical, as many organizations have accepted that attackers will, at some point, encounter a gap in their defenses.
  3. Retrospective security is an intelligent layer that turns past attacks into future protection, similar to how a vaccine protects us against diseases. By analyzing the vulnerabilities exposed in a previous breach and using forensic and root cause analysis, it recommends new preventative measures for any similar incidents in the future.
  4. Predictive security plugs into the external network of threats, periodically monitoring external hacker activity underground to proactively anticipate new attack types. This is fed back to the preventative layer, putting new protections in place against evolving threats as they’re discovered.

For organizations to protect themselves, they need to get this mix right; all four of the elements improve security individually, but together they form a comprehensive, constant protection for organizations at every stage in the lifecycle of a security threat. With billions of consumer and business IoT applications exchanging billions of data points every second, IT decision-makers need to map the end-to-end journey of their data, and the threats lurking behind every corner.
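As a schematic illustration of how the four layers can hand off to one another, consider this sketch (the rules and events are made up):

```python
# Schematic sketch of an adaptive security loop using the four layers
# described above. Rules and events are made-up illustrations.

block_rules = {"known-bad-ip"}    # preventative layer: current blocklist
threat_feed = {"new-iot-worm"}    # predictive layer: external intelligence

def preventative(event):
    return event["source"] not in block_rules    # False = blocked at the edge

def detective(event):
    return event.get("behavior") == "beaconing"  # flag attacks already inside

def retrospective(incident):
    block_rules.add(incident["source"])          # turn past attacks into protection

def predictive():
    block_rules.update(threat_feed)              # anticipate evolving threats

for event in [{"source": "known-bad-ip"},
              {"source": "10.0.0.7", "behavior": "beaconing"}]:
    if not preventative(event):
        continue                                 # stopped before any impact
    if detective(event):
        retrospective(event)                     # learn from the breach

predictive()
print(block_rules)   # now blocks the original attacker, the beaconing host and feed items
```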

At the start of this year’s National Cybersecurity Awareness Month, Assistant Director Scott Smith of the FBI’s Cyber Division said, “The FBI and our partners are working hard to stop these threats at the source, but everyone has to play a role.” Organizations that work with their peers and security specialists to secure their IoT ecosystem and network will be rewarded in the long run. There’s no one-size-fits-all approach to securing IoT worldwide; it will take a considered, collaborative effort to safeguard the super-connected world today and tomorrow.



