IoT Agenda


May 14, 2019  3:41 PM

Analytics: Can IoT manage what it can’t measure?

Wojciech Martyniak
Data Analytics, Internet of Things, iot, IoT analytics, IoT cloud, IoT data, IoT devices, IoT edge, IoT infrastructure, IOT Network, iot security

A few years ago, disruptive technologies expert and author Geoffrey Moore described the importance of data analytics in fairly dramatic terms: “Without big data analytics,” he wrote, “companies are blind and deaf, wandering out onto the web like deer on a freeway.”

The same concept applies to IoT data analytics. In most cases, organizations need real-time insights from the data their connected assets collect, as IoT data has a short shelf life. When data flow is slow, real-time analytics becomes impossible, and decisions are made without the critical insights the data is meant to provide. As a result, time-sensitive functions, like security monitoring, predictive repair and process optimization, suffer.

It’s important to understand the challenges that create such issues. Like IoT itself, the factors contributing to data flow delay and the resulting detrimental impact on analytics are complex and have grown over time. They are driven, in large part, by the sheer volume and complexities of the data that’s generated, infrastructure limitations and the latencies associated with cloud processing.

Data deluge

As IoT grows, the data it produces increases at staggering rates. A recent IoT Analytics report estimated that IoT will comprise 19.4 billion devices this year. With 10% growth in devices expected each year, it’s further estimated that there will be more than 34 billion IoT devices by 2025. Furthermore, a report from IDC predicted that IoT devices will create 90 zettabytes of data a year by 2025.
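Those two device estimates are consistent with each other: compounding 10% annual growth from 19.4 billion devices lands just above 34 billion by 2025, as this back-of-the-envelope sketch shows (the starting figure is from the IoT Analytics report cited above):

```python
devices_billion = 19.4          # estimated IoT devices in 2019 (IoT Analytics)
annual_growth = 0.10            # expected year-over-year device growth

for year in range(2019, 2025):  # compound six years of growth
    devices_billion *= 1 + annual_growth

print(f"{devices_billion:.1f} billion devices by 2025")  # ~34.4 billion
```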

Moreover, the data that's generated in the field doesn't necessarily come in nice, easy-to-process packages. It can be intermittent, unstructured and dynamic. Challenges will only increase as machine learning gains more widespread application in the field: these more complex devices require additional memory and CPU, and without it, processing slows even further.

Infrastructure and security issues

Amplifying the challenges of continuously rising volumes of data are limitations in the technology being used to collect, transfer, cleanse, process, store and deliver data. Many of these challenges are rooted in the fact that the technology was not necessarily intended for those purposes. When limitations exist in functionality and scalability, it increases the likelihood that data processing and delivery will be delayed.

As such, it’s increasingly critical for organizations to invest in new data management technologies and platforms. One option is to “try on” potential new IoT technologies through proofs of concept or pilot studies before investing in a full-scale launch. Regardless, the technology used needs to be scalable and capable of handling inevitable increases in data, storage and computing demands.

Security is another important consideration. A streaming-first architecture can be valuable in this regard as it allows organizations to analyze multiple endpoint security system logs. Additionally, infrastructure component logs that are created in real time can catch breaches that wouldn’t necessarily be detected through an individual security technology.

Addressing these issues, however, is only part of a long-term management solution.

The cloud, the edge and standardization

While cloud computing is integral to IoT, sending data to the cloud — and waiting for it to be sent back — can bog down data delivery speeds. This is particularly true when large amounts of data are involved. Moving one terabyte of data over a 10 Mbps broadband network, for example, can take as much as nine days to complete.
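The arithmetic behind that nine-day figure is straightforward. Here is a quick sketch, assuming a decimal terabyte and a fully saturated link:

```python
terabyte_bits = 1e12 * 8   # one terabyte, expressed in bits
link_bps = 10e6            # a 10 Mbps broadband link

seconds = terabyte_bits / link_bps
print(f"{seconds / 86400:.1f} days")  # ~9.3 days at full line rate
```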

This is where the benefits of IoT edge computing are most evident. IoT edge computing allows data processing to take place directly on IoT devices or gateways near the edge of the network, meaning the information doesn’t have to make a round trip before data is delivered. Removing the step of sending all data to a centralized repository minimizes the latency issues that come with cloud computing and resolves device complexity and bandwidth constraints.
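To make the idea concrete, here is a minimal sketch of edge-side aggregation: the gateway reduces a minute of raw samples to a summary plus any out-of-range readings before anything crosses the network. The function name and thresholds are invented for illustration:

```python
import statistics

def summarize_at_edge(readings, alert_threshold):
    """Aggregate raw samples locally; forward only a summary plus alerts."""
    summary = {
        "count": len(readings),
        "mean": round(statistics.mean(readings), 2),
        "max": max(readings),
    }
    alerts = [r for r in readings if r > alert_threshold]
    return summary, alerts

# Sixty 1 Hz temperature samples stay on the gateway; the cloud receives
# three numbers plus any anomalous readings instead of the raw stream.
samples = [71.2, 71.4, 71.3, 98.6] * 15
summary, alerts = summarize_at_edge(samples, alert_threshold=90.0)
print(summary, len(alerts))
```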

However, this is not a one-size-fits-all solution. Servers and devices don’t necessarily have the computing power to accommodate enterprise data. They may have battery limitations and lack the storage necessary for analytics. This means that when greater storage and computing power are necessary, analytics have to be distributed across devices, edge servers, edge gateways and central processing environments.

Another key factor is the need to standardize the communications between devices and network elements. When multiple approaches and implementations are used, devices struggle to communicate and the flow of data is slowed.

It’s encouraging to note that work is already underway to create standards in IoT. The ITU already has a global standards initiative in place, and oneM2M is working to create architecture and standards that can be applied across many different industries, including healthcare, industrial automation and home or building automation.

Despite the range of challenges to be addressed, the consistent, timely delivery of data for analysis is doable. With a multipronged strategy and a willingness to invest in infrastructure when needed, organizations can realize the full potential of IoT, including its capacity to generate revenue.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

May 14, 2019  1:54 PM

Massive IoT and 5G: New technologies, new possibilities

Helena Lisachuk
RANGE, 5G, 5G and IoT, 5G network, Bandwidth, cellular IoT, cellular network, Internet of Things, iot, LPWAN, massive IoT

We’ve all heard the mind-stretching IoT statistics by now: 73 billion IoT-connected devices installed by 2025! $1.6 trillion to be spent on purchasing those devices — in 2020 alone! These numbers can be disorienting, but as you might expect, they’re best consumed with a few grains of salt.

Every technology has its strengths and weaknesses, and IoT is no exception. An IoT deployment isn’t for everyone, and for those for whom it is appropriate, adopting the right technology mix is less about trying to surf the huge wave of hype and more about balancing tradeoffs between different options. (I’ve written about making such tradeoffs before).

But even with all these caveats, I’m going to put a small toe in the hype waters and predict that the fifth generation of wireless communication, aka 5G — which brings together a wide range of new wireless technologies into a network of networks — will render a lot of the usual IoT communication tradeoffs obsolete. In fact, I believe 5G, along with other coming cellular technologies, will bring us an entirely new generation of IoT: truly massive IoT. Here’s how.

New technology, new possibilities

Time was that building an IoT system meant balancing performance factors like bandwidth, range and latency against support requirements like power, size and cost: Better performance also meant greater power consumption, size and cost, while low-power communications with small footprints came with severe performance limitations.

But recently introduced cellular technologies, such as LTE-M and Narrowband-IoT (NB-IoT), are offering greater power efficiency at longer ranges than anyone thought possible with cellular, with the added benefits of greater mobility, lower latency and better performance.

Together, these technologies can give IoT users lower costs and more options at the low end of performance, and entirely new capabilities at the high end:

Lower performance = Lower costs + more options
Cellular can now offer a competing option to the handful of communication protocols that previously met the needs of low-power, small-footprint systems, while at the same time driving down costs. Some areas of the world are already seeing 2G components sell for under $2 and NB-IoT components for $3, whereas as recently as 2017, prices in the UK were in the $12 to $17 range.

New options = New capabilities
Integrating existing technologies into 5G will bring new capabilities for managing networks at the higher end of the performance spectrum, creating entirely new options for IoT users. For example, many 5G networks can support the formation of mesh networks of smart devices, so individual IoT endpoints don’t need to communicate directly with wireless towers, instead working through other devices that connect to the tower. So now IoT devices in austere environments like oil rigs, or in connectivity-challenged places like basements, can tap into high-speed cellular networks, opening up data-hungry use cases like predictive maintenance or AI tools.

More of a good thing

These cellular technologies paving the way to 5G won’t replace existing communications technologies but will augment them, giving IoT devices access to the whole RF spectrum. The result: an IoT that can do more things in more places with more devices.

Up to now, there’ve been few options for users at the low end of the performance spectrum, with low-power wide area networks one of the most common. The addition of a low-power, inexpensive cellular option will likely spur IoT adoption in industries where penetration had been slow. Infrastructure like pipelines or wind turbines in austere environments don’t need to transmit much data, so paying to connect them via cellular coverage — normally expensive in remote locations — hasn’t historically been a great option. And it’s been hard to create wide area networks over the vast landscapes this infrastructure inhabits. But new cellular technologies can break that tradeoff, allowing for cheaper communication with less equipment.

At the high end of the performance spectrum, entirely new use cases are opened up by 5G’s low latency and high-bandwidth connections. Augmented and virtual reality applications — previously dependent on Wi-Fi or wired internet connections — can go mobile, unlocking tremendous new value for IoT users. Consider the construction industry: Every crane you see on the skyline needs a trained operator in the cab, but it can be hard to find qualified candidates in boom times. The low-latency and high-bandwidth aspect of 5G has been used to create “connected cranes” with drivers precisely controlling huge loads from remote sites hundreds of kilometers away.

Connected vehicles provide another use case. Most autonomous vehicle technology today involves sensors feeding data from the environment into the vehicle. This arrangement helps the vehicle navigate its environment, but it’s a one-way street (pun intended) because the car can’t communicate with or influence that environment. But high-speed 5G communication can allow just that, making possible cars that not only drive themselves, but do so in collaboration with their environments — communicating in real time with streetlights, parking spaces, even other cars! — fundamentally reshaping how we move around.

Start thinking of use cases today

More devices popping up in existing industries, entirely new devices appearing for the first time — this is how new cellular technologies can help drive massive IoT and make real the jaw-dropping predictions for the future. Here’s one: Some analysts predict that 5G-enabled IoT will funnel 1,000 times more data to mobile networks than before.

That means the time to start thinking about what massive IoT could mean for you is now. What does a massive IoT world — a world with IoT devices in nearly every conceivable location — look like? What might it mean for your business? Will it help you increase operational efficiency? Better connect with customers? Or offer fundamentally new services?

Thinking through these questions today is the key to being ready for massive IoT tomorrow.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


May 14, 2019  11:24 AM

Voice assistants, AI and the cloud: An illusion of performance and privacy

Joseph Dureau
ai, AI privacy, Consumer IoT, enterprise ai, Enterprise IoT, Internet of Things, iot, iot privacy, iot security, neuromorphic chips, voice assistants

With AI, the choice between cloud performance and respect for consumer privacy is a false one. Not only can AI perform well locally, but through embedded technologies, it can also personalize understanding and thereby refine it.

The voice-activated personal assistants developed by U.S. tech giants currently flooding the market work mainly in the cloud. The big players want to shape the world according to this 100% cloud model, but it deprives users of the ability to control their own data. The only truly private solution is the use of embedded technologies, which do not require user interactions to be accessible to any third-party server.

The superiority of the cloud: A myth

A misconception, knowingly maintained by the Big Four tech companies, is that technologies hosted in the cloud perform better than those embedded locally. Although the cloud, in principle, offers near-infinite computing capacity, cloud resources do not categorically make a significant impact on performance. For the everyday use of voice technologies, for example, having enormous computing power is simply unimportant.

The most advanced AI technologies, especially machine learning technologies, often have no need of the cloud. Embedded machine learning has become commonplace. Major smartphones and computers on the market use it for common tasks, like face identification, and in a range of other applications.

The arrival of neuromorphic chips: An impending technological leap

While it is often possible to have the same performance locally as in the cloud, the arrival of neuromorphic chips further eliminates any perceived advantages for relying on the cloud. These new chips are designed specifically for implementation of neural networks, brain-inspired models that have revolutionized artificial intelligence. Simply put, this new generation of chips makes it possible to embed fairly complicated AI without the need for the cloud’s compute power. The imminent arrival of these chips on the market will mean a technological leap in everyday terms. High-end phones are already equipped with neuromorphic chips, and the coming year will see their foray into everyday objects, including speakers, televisions and home appliances.

Continuing improvement in AI on the cloud: Another myth

A fantasy for major voice-activated assistants, such as Google Assistant or Amazon Alexa, is the ability to boil all their users’ data in the same cauldron. According to their makers, passing data through the cloud should improve the AI continuously and thereby perfect it without limits. This mindset is the basis for users sharing their data without necessarily understanding the reasons or the implications.

For example, when granting Alexa this kind of access, few users imagined that recordings made in their homes might be shared with thousands of Amazon employees or subcontractors in the United States, India, Costa Rica, Canada or Romania to be manually categorized with the goal of enhancing the voice assistant’s performance. The need for such collection and manual labeling efforts is disputed, as competing technologies reach the same levels of embedded performance, without the need of user data for training.

Embedded technology: Toward customizable AIs

Besides the total lack of respect for user privacy, this mixing of data in the cloud has the further disadvantage of giving rise to a very generic intelligence, lacking any sort of precision or specificity. In a generic speech comprehension model used by everyone regardless of the query, words are weighted according to their general probability of occurrence. For example, “Barack Obama” will carry more weight than a less popular name. So, when you say something phonetically similar like “Barbara” and the voice assistant does not hear you clearly, it is likely to assume you meant the more popular phrase. This generic approach is therefore limited in that it does not take into account the context of the query itself.

By comparison, embedded speech recognition is inherently contextualized: if you are talking to a smart speaker, it knows you are referencing the musical domain and will not search for “Barack Obama” as an artist when queried with “Barbara.” Thanks to embedded machine learning, this tool can even learn users’ personal tastes, creating a customizable AI tailored to a person’s specific needs.
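A toy example makes the difference concrete. Below, the same two acoustic candidates are ranked with and without a domain prior; all names and scores are invented for illustration:

```python
hypotheses = {"barack obama": 0.7, "barbara": 0.3}     # generic acoustic/LM scores

generic_prior = {"barack obama": 1.0, "barbara": 1.0}  # no context at all
music_prior = {"barack obama": 0.1, "barbara": 0.9}    # smart-speaker music domain

def best_guess(scores, prior):
    """Pick the hypothesis with the highest context-weighted score."""
    return max(scores, key=lambda word: scores[word] * prior[word])

print(best_guess(hypotheses, generic_prior))  # 'barack obama' -> generic cloud answer
print(best_guess(hypotheses, music_prior))    # 'barbara' -> contextualized answer
```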

As technology moves toward an “everything connected” approach, including mail, contacts, files, chat history and so forth, the risks of going through the cloud only become more concerning. Data breaches and privacy mishaps no longer deal with a specific application or tool, but now encompass a user’s entire life. As embedded machine learning continues to make strides, it is likely only a matter of time before users no longer see the reason to risk their privacy when using their AI voice assistants.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


May 13, 2019  1:37 PM

Protecting the modern infotainment system

David Barzilai
Buffer overflow, car security, Connected car, connected car security, Cyberattack, embedded security, infotainment, Internet of Things, iot, iot security, remote hacking, securing IoT

Whether you are glancing at GPS prompts, connecting to your favorite streaming satellite music station or using Bluetooth in conjunction with a dizzying number of apps available on your smartphone, the infotainment system in your car is largely innocuous and inconspicuous — at least in the grand scheme of the car driving experience.

Most passengers simply don’t think of the infotainment system as a critical part of the car. After all, it provides you with information, such as weather, and it entertains you to the best of its ability with multimedia. However, the infotainment system doesn’t make the vehicle go. It doesn’t steer the vehicle. It doesn’t stop or accelerate the vehicle. It doesn’t make the car crash.

Or does it?

As a potential gateway to a car’s advanced driver assistance systems, the infotainment system can be linked to data that can affect a car’s sensors, electronic stability control, anti-lock brakes, lane departure warning, adaptive cruise control, traction control and more. It therefore creates an attractive target for cybersecurity hackers. That should worry passengers and auto manufacturers.

Auto manufacturers also worry because the infotainment system is a key component of monetizing tomorrow’s automated and connected vehicle experience. The challenges in securing infotainment systems, however, are complex and considerable. Externally connected web applications employing Wi-Fi, Bluetooth and USB technologies can be exploited by any computer hacker on the internet, sometimes quite easily.

A massive amount of code — over 1 GB, including over 50,000 executable files — is typical of a modern infotainment system. The sheer amount of code involved alone presents opportunities for cybersecurity exploits, using coding errors such as buffer overflow, heap overflow and other memory corruption vulnerabilities. Exposure of code and design vulnerabilities to a cybersecurity attacker can threaten the safety of the passenger.

The security of such systems — and understanding how cyberattackers exploit them — is critical. Unfortunately, system vulnerabilities are not always given the attention they deserve until it is too late. Thankfully, we can be proactive by learning from hackers.

A necessary part of learning how to prevent a cyberattack is sometimes counterintuitive. We need to encourage hackers to hack cars to expose code and design vulnerabilities — we just don’t want them to hack a real car that is moving. In recent years, several high-profile cases have served to help us learn from vulnerabilities exposed in the infotainment system and they are worth our attention.

One prominent example is the now-legendary 2015 Jeep Cherokee hack demonstration. In it, researchers Charlie Miller and Chris Valasek famously used a reporter as “a digital crash-test dummy” to underscore vulnerabilities in connected entertainment and navigation systems. In this case, the two hackers were able to disable the brakes and drive the vehicle into a ditch. Moreover, they demonstrated how easy it was to overcome password protections with brute-force attacks and sometimes simple guesswork based on the system’s boot time.

In another well-known incident, Chinese security researchers were able to hack a Tesla Model X, turning on the brakes remotely and getting the doors and trunk to open and close while blinking the lights in time. Using memory corruption vulnerabilities, they performed this demonstration to music streamed from the car’s radio — which they dubbed “the unauthorized Xmas show.” The complex hack involved sending malicious software through the infotainment system in a series of circuitous computer exploits. The researchers were able to remotely control the car via both Wi-Fi and a cellular connection.

If that isn’t scary enough, this should be: Computest, a Dutch firm, revealed that the infotainment systems inside some Audi and Volkswagen cars were vulnerable to remote hacking. The researchers, Daan Keuper and Thijs Alkemade, confirmed these exploits using a Volkswagen Golf GTE and an Audi A3 Sportback e-tron model. The researchers used a car’s Wi-Fi connection to the infotainment system to exploit an exposed port and gain access to the car’s in-vehicle infotainment. They also gained access to the system’s root account, which they say allowed them access to other car data.

Despite the challenges that cybersecurity hackers pose, there is hope. The modern vehicle can be made secure via software integrity validations that are now available for embedded systems as well. Control flow integrity (CFI) is one of the only proven techniques for blocking exploits of this dangerous vulnerability family.

Control flow integrity describes computer security techniques that prevent a wide variety of malware attacks from redirecting a program’s flow of execution. Associated techniques include code-pointer separation, code-pointer integrity, stack canaries, shadow stacks and vtable pointer verification.
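One of those techniques, the shadow stack, is simple enough to model conceptually. This Python toy illustrates the principle (it is not automotive code): a protected copy of each return address is kept, and a return that an overflow has redirected is blocked:

```python
class ControlFlowViolation(Exception):
    pass

class ShadowStackDemo:
    def __init__(self):
        self.stack = []    # ordinary call stack, corruptible by an overflow
        self.shadow = []   # protected copy the attacker cannot reach

    def call(self, return_addr):
        self.stack.append(return_addr)
        self.shadow.append(return_addr)

    def ret(self):
        addr, expected = self.stack.pop(), self.shadow.pop()
        if addr != expected:            # control flow was redirected
            raise ControlFlowViolation(hex(addr))
        return addr

vm = ShadowStackDemo()
vm.call(0x4010)          # legitimate call records the return address twice
vm.stack[-1] = 0xBAD     # simulated buffer overflow rewrites the return address
vm.ret()                 # raises ControlFlowViolation instead of jumping to 0xBAD
```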

The CFI lock against remote code execution has traditionally been difficult to apply on resource-constrained embedded systems, but that is changing. Google, for example, introduced CFI into its Android kernel at the end of last year, and while its implementation is partial in scope — both code-wise (kernel only) and security-wise (only validating forward edges) — it’s solid proof that using CFI to combat buffer exploits is not only possible, but practical.

It is imperative, then, that infotainment system developers that use the Linux kernel or Linux-like Android kernels insist on state-of-the-art cyberdefense technology.

Securing a car’s infotainment system is about stopping cybersecurity attacks at the gate. Understanding the limitations of existing techniques and applying a built-in active defense mechanism, such as CFI, is critical in today’s advanced infotainment systems.

It’s time to start viewing the cybersecurity defense of that little innocuous infotainment system as the key to passenger safety, branded user experience and the automotive revenue growth engine.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


May 13, 2019  11:47 AM

The edge of trust relies on an open source, interoperable IIoT framework

Richard Beeson
ai, edge, Edge computing, IIoT, Industrial IoT, Internet of Things, iot, IoT edge computing, IoT framework, IoT interoperability, IoT standards

Go back 20 years: You visit a remote mining site a good distance from any major city. It’s humming with human activity. Operators work within a closed control network, manually monitoring and adjusting set points to keep things running smoothly. Physical dangers abound.

You return to the remote mining site today and it has few, if any, human workers. Autonomous trucks and other assets move, directed by operators from a remote operating center hundreds of miles away. In some cases, machine learning or AI suggest the optimal operating conditions that direct these autonomous operations. In fact, cement manufacturer Cemex unveiled at a recent conference its success with a pilot for AI-controlled operation of its clinker coolers. AI analyzes data, forecasts optimal operating conditions and then adjusts critical set points. (Disclaimer: The conference PI World is hosted by OSIsoft, where I am currently the CTO.)

For both AI-directed autonomous and operator-controlled remote operations, trust is based on three criteria:

  1. That the entity giving the directions, whether human or algorithm, will get the operating parameters correct;
  2. That those parameters can be communicated to assets at the edge of operations; and
  3. That those assets can adjust behavior based on that communication.

Autonomous and remote operations require edge-to-edge communication, as well as cloud-to-edge communication. Unlike the cloud, which has consolidated around a handful of vendors, including Microsoft Azure, AWS and Google Cloud, the edge is specialized and more diverse by nature, encompassing everything from IoT gateways to large mining trucks to local transmission lines. As a result, the edge is a fragmented and distributed landscape. The variety of edge devices and the jobs they perform make it unlikely that a single vendor or handful of vendors will gain the critical mass to win the market.

Edge devices and assets will continue to come from different vendors, complicating coordination of communication efforts. For example, how does a truck from one vendor talk to other assets that need to know it is coming down the road at 37 kilometers per hour, while continually optimizing its overall cost of operation? How can the cloud tell different manufacturing assets that while they can stamp out 10,000 widgets per hour, the optimal rate for the whole process and supply chain is 7,200?

One answer is the cloud: a centralized IoT platform can coordinate edge devices with the cloud as an intermediary. But latency, security, regulatory compliance and the need for self-sufficient, local operations make cloud-based coordination suboptimal. That mining truck traveling 37 kilometers per hour in a harsh environment simply cannot rely on a server hundreds or thousands of miles away to tell it to adjust course for road conditions. It needs to make those physical adjustments locally.

This suggests the need for a common framework and language, so that edge devices and assets can talk directly to one another, no matter their maker or origin. The internet provides a useful example: Decentralized and distributed by nature, the internet came of age only when TCP/IP emerged as the dominant protocol, allowing seamless data transfer. Similarly, the edge needs standards that allow devices from different vendors to “plug and play” so information can flow freely across operations.
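What such a shared language might look like can be sketched in a few lines. The schema below is purely hypothetical (the schema identifier and field names are invented, not drawn from any published standard), but it illustrates the plug-and-play goal: any vendor’s asset can emit and parse the same self-describing message:

```python
import json

# Hypothetical vendor-neutral status message for the truck example above;
# the schema identifier and field names are invented, not from any standard.
truck_status = {
    "schema": "example.org/edge-asset/1.0",
    "asset_id": "truck-042",
    "asset_type": "haul-truck",
    "timestamp": "2019-05-13T11:47:00Z",
    "telemetry": {"speed_kmh": 37.0, "heading_deg": 128},
}

payload = json.dumps(truck_status)       # what goes over the wire
nearby_asset_view = json.loads(payload)  # any conforming device can decode it
print(nearby_asset_view["telemetry"]["speed_kmh"])  # 37.0
```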

Today, some of the most promising developments on the edge aren’t coming from individual vendors, but from consortiums that are slowly converging on the necessary standards. The OpenFog Consortium merged with the Industrial Internet Consortium (IIC), which has a joint agreement with Plattform Industrie 4.0. In that capacity, these groups are aligning to accelerate and drive change by laying common ground in language, architecture and concerns. (Full disclosure: I am the CTO for OSIsoft, which is one of 60 founding members of LF Edge and a member of OpenFog and the IIC.)

Add to this groups like the Linux Foundation, which recently aligned a number of efforts through LF Edge. Using the mindset and dynamics of open source, these varied interests are working together in the belief that finding common ground through an open source framework is needed to accelerate time to value for edge systems.

While standards may not be the most exciting of technological topics, they are critical to realizing the full value of industrial IoT and edge computing, and to establishing the hardware and software infrastructure for the development of applications and analytics. According to McKinsey, “Interoperability between IoT systems is critically important to capturing maximum value; on average, interoperability is required for 40% of potential value across IoT applications and by nearly 60% in some settings.”

Interoperability will only be achieved when we reach a critical mass of participation across the IIoT and edge landscape. Unlike the critical mass phenomenon that dominates the “network” effect, where the first mover has the opportunity to dominate the segment, IIoT and edge are governed by the “internet” effect, which also depends on the participation of anyone or anything that cooperates, yet stays decentralized, fluid and open.

The work of consortiums and foundations to bring together vendors, academia and consumers to establish shared semantics for communication and an ontology for representing what devices do and how they interact will speed the adoption of IoT and edge technologies. Today, we have lights-out factories operating with semi-autonomy, but these are just a micro version of what is possible. With a more complete framework and seamless interoperability, the edge and IoT can reach the critical mass of participation needed to continue revolutionizing manufacturing, entire supply chains and whole industries.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


May 10, 2019  2:07 PM

As Phoenix emerges from recession, smart city initiatives move front and center

Lauren Horwitz
Internet of Things, iot, IoT applications, IoT use cases, smart city, smart city applications, smart city network

The recession of 2008 took place more than a decade ago. Still, for many cities, the impact of that economic freefall has only just moved into the rearview mirror.

The downturn dealt a blow to cities such as Phoenix, and the state of Arizona lost hundreds of thousands of jobs by May 2009. A decade later, this city continues to rebuild from the decline.

“We are just coming out of the recession that hit us hard out here,” said Matthew Arvay, CIO at the Phoenix city government. “This is the first year where we have had a small surplus of monies where we can invest in additional areas,” he said.

According to projections from the Arizona Office of Economic Opportunity, the number of jobs in the state is expected to grow 2.7% from the third quarter of 2018 to the second quarter of 2020. While construction will bring some job growth, other sectors, including manufacturing, education and health services, will also boost the economic expansion and bring diversity of industry along the way.

“Overall, the state is firing on all cylinders,” wrote George Hammond, director of the Economic and Business Research Center at the University of Arizona, in a brief on the economic growth in Arizona in 2018.

Now that the city is emerging from darker financial times and can claim a $55 million surplus, several technology projects have hit the docket of the office of the CIO in Phoenix. Among other projects, Arvay’s office is turning to smart city initiatives — which use technology to make cities more operationally efficient, safer and even innovative — to usher in the next chapter of growth for Phoenix.

Smart city initiatives take center stage in Phoenix

Over the past few years, Phoenix has already made initial forays into smart city initiatives. The city’s streetlighting program, which began in 2016, will, all told, replace 100,000 streetlights with LED lighting to promote cost savings and exploit small-cell technology.

New smart city initiatives include, first, the Hance Park Master Plan, which will transform a downtown park into an urban destination spot, enlisting smart city technologies.

Second, the Eastlake neighborhood will be refurbished through the Eastlake Revitalization Project, which is funded through a $30 million grant from the Choice Neighborhoods grant program at the U.S. Department of Housing and Urban Development. Arvay will spearhead a workshop to devise smart technology systems — such as Wi-Fi inside park or community center boundaries or smart irrigation systems — to incorporate into these projects for better results. Arvay expects that these projects will pave the way for further work.

Phoenix is also about to start establishing a roadmap for the next three to five years concerning smart city projects.

“We’re trying to create a more formal strategy and roadmap for the next three to five years,” Arvay said. “This is a prime time to understand who we are, what are our challenges, what our gaps are and how we can use technology to address them.”


In advance of this year’s IoT World in Santa Clara, Calif., May 13-16, Arvay spoke about how Phoenix is approaching new smart city initiatives and how they will take shape. Ajay Joshi, Arvay’s deputy CIO, will appear as a session speaker at IoT World.

What technology do you see on the horizon that will have a major impact on smart city initiatives?

Matthew Arvay: The deployment of small-cell technology will have a major impact on smart city initiatives. As various providers continue to implement 5G, it will lay the foundation and provide the bandwidth that companies and cities have needed to expand their data-gathering efforts, to analyze that information and ultimately lead to better decision-making. So it will be key.

In Phoenix, there is a heavy investment in 5G. We’re estimating some 12,000 to 15,000 small cells [to be installed] throughout Phoenix by 2025. It’s about 5G providing the capacity — the highway, if you will — so we can do more predictive analytics and reduce some of the manual tasks today, so we can automate. City staff can focus on higher-level functions.

Local challenges often define the smart city landscape. What are they in Phoenix?

Arvay: Cities have slightly varying community challenges, so it’s important to understand what they are so we can invest in the right technology to ease those challenges.

Phoenix faces challenges that include traffic congestion, pedestrian and public safety, homelessness. One challenge somewhat unique to us in the Valley is urban heat islands [geographic areas that develop with higher temperatures than their rural surroundings], as well as other issues like the digital divide. Phoenix has started to address them through projects such as a beacon light signal system for crosswalks, expansion of the light rail system and expansion of the police department’s body-worn cameras program. We anticipate creating a three-to-five-year roadmap and identifying goals, milestones to address these issues.

How do you see smart city projects as helping to provide services for lower-income residents?

Arvay: With autonomous vehicles, a vehicle could pull up to a house and give those without transport access to it. This could reduce the number of vehicles on the street and also provide [ease of access]. Today there are those who have a distance to walk to get to public transport. Arizona is already becoming a hub for self-driving cars.

Another project that falls under the smart city umbrella, if you will, is Census 2020. The federal government has gone to more of an online process with fewer enumerators. It’s very important that we get an accurate count, especially in areas without access to computers or with few communication technologies, and that we get as many residents as possible to participate in the census. We’re partnering with vendors to retrofit Dial-a-Ride buses with wireless communications and computer devices, with staff who will go to these communities and explain how important it is for them to be counted in Census 2020, not least for legislative representation. This brings in concerns about the digital divide.

Surveillance cameras and sensors installed in urban areas can seem like a big-brother move. How do you educate residents about the benefits of smart city technology?

Arvay: You have to work with the community so they understand. If you put video surveillance at an intersection, residents need to understand why the city is doing so. Being transparent about the intent of the technology is very important.

We also hold numerous community outreach programs or meetings, depending on what the city project is — whether it’s for light rail, parks, even our budget process. Community outreach and engagement are critical to have that two-way community input.

Other cities with smart city projects in flight have emphasized the importance of public-private partnerships and lessons learned from other cities. Have these been important for you as you embark on your journey?

Arvay: Phoenix is at the beginning stages of understanding what others are doing. Some cities are further along, more mature than Phoenix. San Diego and its partnerships [with the private sector] and how it has built out infrastructure are notable. Of course, we are looking at Kansas City in terms of its best practices [for smart city projects]. We’re also looking at Atlanta. It will be interesting to see how they move forward.

We have had regional meetings with other CIOs and with Arizona State [University] to create a smart regional consortium. Arizona Institute for Digital Progress is heading the smart region initiative. We have various challenges across geographic boundaries to see if there are collaborative opportunities [to launch smart city initiatives in] multiple cities, with Arizona State and other higher-educational institutions.

There is a lot of energy concerning smart city technology in Phoenix right now. This is just the tip of the iceberg.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


May 10, 2019  12:49 PM

Use cases of RFID in retail

Morris Qiu
Internet of Things, iot, IoT applications, IoT in retail, IoT wireless, retail IoT, RFID, RFID tags, RFID use cases, smart retail

Social and economic development has changed the way people shop. Consumers not only have high requirements for quality, but also expect a better shopping experience. Retailers need to rationally coordinate all aspects of retail, based on market conditions, in order to adapt to new consumption patterns.

Rising labor costs and the demand to reduce logistics’ operational costs have created strong demand for the rapid adoption of RFID in modern retail. With the rise of smart retail and high competition among companies, RFID technology plays an increasingly important role in improving the competitiveness of modern retail enterprises.

Retailers have always been pioneers in RFID use, and many have already benefited from RFID implementation for both internal management and shared supply chains. The entire supply chain is interlocked through the processes of procurement, storage, packaging, loading, transportation, distribution and sales to service, so retailers must grasp the entire business flow accurately, including logistics, information and capital flow, in real time. RFID effectively provides the retail industry with input and output of business operational data, control and tracking of business processes, and reduction of error rates, which help retailers improve work efficiency, enhance visibility and reduce operational costs.

With the in-depth development of RFID in retail, new industrial use cases are gradually developing. Thanks to RFID, retailers can obtain more accurate user behavior information, intelligently analyze customer shopping behaviors to further optimize product allocation, improve marketing strategies and achieve precise marketing insight.

In recent years, RFID technology has brought new opportunities to footwear and apparel chains. Increasingly, well-known international and domestic apparel companies, such as Uniqlo, Zara, H&M, Prada and La Chapelle, have been using RFID in their operations.

Data collection of user behaviors

How can RFID benefit retailers? Taking footwear and clothing retailers as an example, RFID can be added to shoe cabinets, clothes panels, fitting rooms, fitting mirrors, island shelves and warehouse entrances and exits to record customer shopping behaviors. Customers’ product preferences can be analyzed based on data, including the time, frequency and transaction records of customers picking up and trying on products, so that product allocation can be optimized to the desired configuration, resulting in higher sales.

In specific use cases, short-range near-field antennas can be installed under each shoe bracket or in a concealed position on a clothes panel, and a 16-port collector with wireless functionality can be placed in a hidden position. The number of collectors depends on the area of the store. Customer behaviors can be judged based on the distance between the collecting antenna and the UHF tag of the product. It is considered a “try-on” when the product is picked up or taken away for a certain period of time, often more than one minute. Thus, this behavioral customer data provides accurate and comprehensive support for business management decisions.
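The classification rule itself is simple to express. The sketch below applies the one-minute heuristic described above to a stream of pick-up events; the event data is invented for illustration:

```python
from datetime import datetime, timedelta

TRY_ON_THRESHOLD = timedelta(minutes=1)  # the one-minute heuristic from the text

def try_ons(tag_events):
    """Return tags held away from their shelf antenna long enough to count as try-ons."""
    return [tag for tag, picked_up, returned in tag_events
            if returned - picked_up > TRY_ON_THRESHOLD]

events = [  # (tag, picked up, returned) -- invented sample reads
    ("shoe-001", datetime(2019, 5, 10, 12, 0), datetime(2019, 5, 10, 12, 3)),
    ("shoe-002", datetime(2019, 5, 10, 12, 1), datetime(2019, 5, 10, 12, 1, 20)),
]
print(try_ons(events))  # ['shoe-001'] -> a likely try-on worth logging
```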

Quick product positioning and search

Retail shoe stores are often filled with all kinds of shoes in different specifications, models and colors. However, due to limited display area, there is often only one sample of each shoe on the floor. Staff often need to search manually in storefront warehouses, and it is often difficult to find a specific product in a short time. As a result, many customers are unwilling to wait and leave without purchasing anything.

RFID assists in fast and accurate product searches. Using an RFID terminal, staff can detect goods quickly in the warehouse and lock onto targeted goods accurately, which greatly shortens search time and helps workers respond to customer requirements with high efficiency.

Jewelry management with RFID

RFID can also help monitor sales, transfer, delivery or safe replenishment processes at jewelry stores and counters. An RFID reader can read the electronic tag data of every piece of jewelry on a counter via the antenna. Once an unexpected circumstance occurs, for example, if the reader cannot collect information from a certain tag within a certain period of time, the system will alert staff to a potential issue via an alarm. Jewelry is monitored by RFID in the tray, and an alarm is triggered once it leaves the specified safe range. When jewelry is presented to a customer, the related details of the jewelry can also be displayed on the company’s system screen. A settlement can be completed by the integrated RFID reader. At the close of business, the RFID device can automatically confirm the specific quantity of each type of jewelry. The contactless, multi-tag reading and strong penetrating features of RFID technology will help enterprises quickly complete an inventory of goods, significantly reducing the manpower and time required while improving operational efficiency.
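The watchdog at the heart of that counter monitoring can be sketched in a few lines. The 10-second timeout and tag names below are illustrative assumptions, not vendor specifications:

```python
import time

READ_TIMEOUT = 10.0  # seconds without a read before alarming (illustrative value)

def missing_tags(expected, last_seen, now=None):
    """Return tags on the tray that the reader has not reported recently."""
    now = time.time() if now is None else now
    return [tag for tag in sorted(expected)
            if now - last_seen.get(tag, 0.0) > READ_TIMEOUT]

last_seen = {"ring-17": time.time()}  # ring-18 has gone quiet
print(missing_tags({"ring-17", "ring-18"}, last_seen))  # ['ring-18'] -> raise the alarm
```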

RFID has become increasingly popular in the retail industry. The segmentation and expansion of industrial applications will inevitably meet the need for more efficient and comprehensive use cases in the future, a testament to the fact that usage rates of RFID technology are still increasing.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


May 10, 2019  12:30 PM

Is IoT creating a more lucrative oil and gas industry?

Sanjeev Verma
asset tracking, end-to-end IoT, Enterprise IoT, IIoT, iot, IoT applications, IoT data, IoT use cases, IoT verticals, Predictive maintenance

Every industry has its own set of challenges to overcome. In 2014, the oil and gas market hit its lowest point. Since 2016, however, the industry has successfully recovered from the challenges that were bogging it down. Much of the credit goes to the concrete potential of state-of-the-art technologies and strategies.

One of the main contributing factors to this improvement is IoT. Through the successful implementation of automation, machine learning, AI and more, IoT technologies for oil and gas have successfully bridged the gap between the industry’s low point and its current status.

By improving safety measures and profit, along with enabling asset tracking and predictive maintenance, IoT has lifted underperforming areas of the oil and gas industry. As we move forward, smart IoT strategies will help the industry hone its edge while withstanding increasing market stress from geopolitical and climate changes.

IoT has pushed the oil and gas industry to be more accurate, efficient, influential and competitive. The industry lost billions of dollars due to underperformance and nonproductivity — prominent errors that slowly drained its strength. But now, by leaning on IoT, the oil and gas industry has managed to unlock its potential and achieve improved profit.

Let’s explore the IoT technologies behind the current success in the oil and gas industry.

Predictive and preventive maintenance

Oil and gas activities require regular monitoring; one wrong move can cause irreversible damage within a matter of milliseconds. Facilities equipped with remote services can react to challenges through predictive maintenance. Machines require regular supervision to assess wear and tear, and predictive maintenance offers insight into the current status of a specific equipment part, letting operators know in advance what needs to be fixed, replaced or shut down. IoT sensors on the machines accumulate data that can alert organizations to impending equipment failure, wasted money, lost working hours and more. With predictive maintenance data, required activities can be separated from unnecessary ones far more easily. This reflects well on profit margins — a sure indicator of improved and optimized execution of plans.
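At its simplest, the alerting layer is a statistical rule over recent sensor history. The sketch below, with invented readings and thresholds rather than field data, flags a pump vibration reading that drifts well outside its baseline:

```python
import statistics

def vibration_alert(history, latest, sigma=3.0):
    """Flag a reading more than `sigma` standard deviations from the baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(latest - mean) > sigma * stdev

baseline = [4.1, 4.3, 4.0, 4.2, 4.4, 4.1, 4.3]  # mm/s vibration velocity, illustrative
print(vibration_alert(baseline, 4.2))  # False -> within normal wear
print(vibration_alert(baseline, 6.9))  # True -> schedule an inspection early
```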

Asset tracking and monitoring

The oil and gas industry experiences constant pressure to improve safety and operational processes, and faces prominent price volatility. In order to stay prepared, organizations are spending more time analyzing their investments and internal operations to get a clear idea of where practical reductions or changes can be made without hampering the overall business, while also encouraging better asset utilization.

Asset management can significantly influence operational performance through its ability to improve productivity levels. With asset management, optimization makes production more predictable. When organizations want to transform and digitize their operations, the smart integration of assets into one unit creates a strong base. This can help, for example, monitor multiple wells or sites simultaneously.

Improved data management

Some oil and gas technologies are outdated, which has motivated the industry to redesign with digital in mind. Redesigning will help the industry match the pace of newly connected technologies and improved data collection.

True transformation happens when efficient, safe, effective and appropriate practices take place. With connected devices, a great deal of data can be collected and accessed. Integrating this data into new technologies and processes provides more insights and further improves processes.

Health and safety

The oil and gas industry has witnessed several accidents due to unpredictable and hazardous events. Hazardous working environments on oil rigs and at gas plants will function better after adopting IoT systems that enable automated remote monitoring. With less manual and more automated operation, fewer health and safety risks occur. IoT can heighten these possibilities and provide a superior level of safety to workers and people in the community.

Conclusion

Taking proper action in real time with IoT applications enables improved operational efficiency in oil and gas. The industry is segmented and has various streams that can be further supported by IoT applications and deployments. All these can be monitored across an ecosystem that is a combination of upstream, midstream and downstream players.

One factor that cannot be overlooked is that the oil and gas industry is extremely asset-intensive, making it a prime candidate for IoT asset tracking and predictive maintenance.

With IoT, the global oil and gas industry can achieve a competitive edge while unlocking several avenues of growth and expansion within a noticeably short span of time.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


May 9, 2019  3:02 PM

IoT and blockchain bring transparency to logistics, supply chain

Serge Koba
Blockchain, Blockchain Development, Internet of Things, IoT and blockchain, IoT logistics, Logistics, smart supply chain, Supply chain, supply chain visibility

Despite its sheer scale and worldwide coverage, modern logistics still faces a number of problems that have yet to be solved, such as tracking components along supply chains.

Part of the problem is outdated technology that leads to a lack of traceability and transparency, which make it difficult for businesses to make informed decisions. Without real-time updates, various entities within the same supply chain could generate significantly different data. The end result is typically mass confusion that often ends up costing large sums of money.

Craig Fuller, the co-founder of Blockchain in Transport Alliance (BiTA), said that there’s nearly $140 billion tied up daily in payment disputes regarding transportation. To make matters worse, it takes roughly 42 days for both parties to reach a settlement on an invoice. This is a lot of money for a company to have tied up in something that could be mitigated or, in some instances, solved completely.

The answer to these problems lies in the introduction of new technology-based systems, which aim to help reduce costs, increase accountability and ensure accurate data is being reported at every point in the supply chain.

Benefits of IoT-based technologies for logistics and supply chain

Of all the potential uses for IoT, the ability to track goods across a supply chain is among the most promising. Digital sensors can keep track of products in the supply chain from start to finish.

Consumer package delivery and several other industries have already widely adopted this means of tracking. It speeds delivery and offers clarity in logistics. In fact, it’s this advanced type of tracking that makes services such as two-day and same-day shipping possible.

The potential of blockchain for logistics and supply chain

A number of big companies are keen to dive into blockchain technology. It has the potential to revolutionize the way businesses exchange information within the world of logistics.

In fact, BiTA already has roughly 500 members in more than 25 countries around the world. There’s $130 billion currently stored in public blockchain networks, and that value continues to grow. In order to fully understand the benefits of blockchain, though, businesses need to know what it has to offer the logistics industry:

1. Transparency of transactions
Lack of transparency prevents businesses from building trust and naturally leads to everyone trying to protect themselves. A solution can be found in a private blockchain, where each involved party retains a personal copy of all the information, which cannot be accessed by outsiders or changed (see the hash-chain sketch after this list). As a result, the transaction history is kept transparent at all times and trust can flourish. For example, using blockchain for food traceability gives consumers more visibility into where their food came from, and helps pharmaceutical wholesalers and manufacturers combat counterfeit drugs.

2. Permission-based access for security
Traditional ledger systems open the door to malicious attacks because the information stored in a ledger can potentially be accessed and changed. Thankfully, blockchain technologies go well beyond offering accuracy. They also provide cryptography tools that keep data secure.

Blockchain-based systems have means of controlling and limiting access to information at various levels of a block. For example, companies can secure contact data so that only certain parties are able to access the information. Likewise, less sensitive data, such as weight or shipment size, could be left open.

3. Smart contracts
Blockchain technologies enable the adoption of smart contracts, which automate legal negotiations throughout the logistics process. Perhaps the best-known systems to achieve this are developed by Hyperledger. These systems use smart contracts to monitor each step of the process. They can check for certain rules that are laid out within contracts to ensure that all of the parties fulfill their ends of the deal.

The great thing about smart contracts is that they give reliability to businesses on both sides of the chain. As a result, even smaller parties can participate in the overall process. This is important because it’s extremely difficult for smaller or startup companies to get into a supply chain. Traditionally, they would need backing from stronger, more reputable companies. A smart contract, however, ensures that all of the parties involved will complete the contract in its entirety.

4. Clarity in assets and order management
The digital ledger system of blockchain helps easily track and pair assets with claimants without agonizing over ownership. This record allows companies to deal with the verification and transfer of assets in a much simpler way than other modern methods can offer.

Blockchain technologies are nearly limitless when it comes to scalability. This means that even massive deliveries won’t suffer a bottleneck effect. Ensuring clarity is more important than ever thanks to the growing popularity of same-day shipment services. Meanwhile, companies that still use conventional methods risk getting left behind.

5. Real-time delivery tracking
Systems for real-time location tracking can be created by integrating IoT, blockchain and mobile technologies. You may check out this demo for an example. Based on Hyperledger, it retrieves GPS data and updates delivery statuses with new geolocations. Such systems are open to potential integrations with other IoT devices.
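As the first item in this list promised, here is a minimal sketch of the tamper-evidence idea underlying these benefits: each ledger entry embeds the hash of its predecessor, so altering any past record breaks the chain for every party holding a copy. This is a toy illustration, not the format used by Hyperledger or any production blockchain:

```python
import hashlib
import json
import time

def make_block(payload, prev_hash):
    """Append-only ledger entry whose hash covers the previous entry's hash."""
    block = {"time": time.time(), "payload": payload, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

genesis = make_block({"event": "shipment created", "weight_kg": 120}, "0" * 64)
update = make_block({"event": "customs cleared"}, genesis["hash"])
# Tampering with the genesis payload changes its hash, so update["prev"]
# no longer matches, and every copy-holder can detect the alteration.
```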

Conclusion

Overall, blockchain is capable of providing supply chain businesses with an edge. However, without a proper understanding of how it works and how to make it viable, companies might run into problems along the way that could hinder further growth in the ever-changing market.

Additionally, a lot of technology, infrastructure, governance and integration are required before blockchain and more sophisticated IoT technologies can go mainstream. However, those who wish to achieve this with a business-driven approach can expect to leave competitors behind in the long run.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


May 9, 2019  12:33 PM

Putting autonomous vehicle legislation in the driver’s seat

Eric Tanenblatt
autonomous vehicle, autonomous vehicles, Connected car, Internet of Things, iot, IoT legislation, IoT regulations, self-driving car

Soon after autonomous vehicle legislation failed to cross the finish line in the 2018 lame-duck session of Congress, Senator John Thune (R-SD) and Senator Gary Peters (D-MI) got to work on a bicameral and bipartisan solution they are hoping will come to pass in 2019.

Sen. Thune, who chaired the Senate Committee on Commerce, Science and Transportation before turning in the gavel to become his party’s majority whip, described the inability of the Senate to pass the AV Start Act last year in one word: “disappointing.”

Yet, after months of silence, a renewed effort to spur a regulatory overhaul at the Department of Transportation is in the works. “There have been bicameral meetings, House and Senate, Republicans and Democrats, to see if there’s a consensus path forward” on autonomous vehicles, Sen. Thune told Politico. “If there is, we’ll pursue that. That gives us the best chance of getting a result.”

It should be noted that the legislation proposed in both the House and the Senate in 2018 was wholly bipartisan. The House bill enjoyed strong support, with lawmakers passing it on a voice vote. That fact is a source of hope among supportive members and their staff, who are optimistic that progress can be made amid a divided government.

However, the new Democratic House majority brings with it an enhanced focus on vehicle safety and oversight of manufacturers. Bolstered by low public trust of driverless vehicles, Democratic members of the House are expected to force a complete overhaul of the 2018 legislation.

That overhaul will come to fruition in three key committees — the House Energy and Commerce Committee, the House Transportation and Infrastructure Committee and the Senate Commerce, Science and Transportation Committee — that are operating under the leadership of new chairs who now hold significant influence over future autonomous vehicle legislation.

The House Energy and Commerce Committee is chaired by former ranking member Frank Pallone, Jr. (D-NJ). As the ranking member, Rep. Pallone was an advocate for the SELF DRIVE Act, but now as Chair, he is expected to place a greater emphasis on consumer protections. Additionally, Representative Peter DeFazio (D-OR), the new Chair of the House Transportation Committee, will likely exercise more stringent oversight of the Department of Transportation.

In the Senate, as noted above, Sen. Thune left the chairmanship of Commerce, Science and Transportation to assume the number two position in Republican leadership as majority whip. Sen. Thune’s growing influence in the upper chamber could benefit autonomous vehicle legislation. Moreover, support from Sen. Peters increases the chances of a bipartisan solution to autonomous vehicles. “We want to try to make sure we work coordinated,” Sen. Peters told Politico. “We did a lot of negotiation on the bill last year. We weren’t able to get it on the floor and move it, but we made quite a bit of headway.”

Notably, if the two chambers are unable to come to an agreement on compromise legislation, Sen. Thune and Sen. Peters vowed to reintroduce the bill that failed last session. That bill would increase the number of National Highway Traffic Safety Administration (NHTSA) safety exemptions, spur an autonomous directed update to the Federal Motor Vehicle Safety Standards (FMVSS), require cybersecurity protections and preempt state laws regulating vehicle safety.

At present, the federal government regulates the vehicle itself — its construction, composition and reliability — while state governments regulate driver competence. But now that distinguishing the driver from the vehicle is increasingly difficult, overlap between state and federal law has left automotive companies without a clear standard.

Without passage of a new law, automakers will continue to be constrained by the FMVSS. These prescriptive standards define how nearly every component of a vehicle is designed and constructed, addressing everything from the position of rearview mirrors to the need for power-operated windows. For example, FMVSS specify how components must react to a driver turning the wheel, pressing the brake pedal and engaging a turn signal, just three of the estimated 30-plus driver-specific vehicle requirements.

As vehicles become more advanced, many of the human controls will be unnecessary — and a burden to innovative design. As such, automakers have, for the most part, temporarily abandoned the idea of constructing new utilitarian vehicles, devoid of human controls, in favor of retrofitting traditional vehicles with autonomous technology. Such vehicles can operate freely, regardless of level of autonomy, as long as the vehicle is compliant with FMVSS and state law.

The need for an FMVSS update, increased NHTSA exemptions and an end to the patchwork of state laws and requirements makes immediate passage of federal legislation all the more necessary. After Congress was unable to address the issue last session, the emergence of bipartisan talks is a welcome sign for autonomous vehicle advocates. Should compromise legislation come to fruition, we will see a more deliberate push to prepare federal standards for the fast-approaching autonomous future.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

