In the physical world, each person is unique, with their own set of relationships, personal preferences, financial profile, physical characteristics, past behaviors, future plans and so on — the attributes that make up their identity.
Being able to recognize each customer’s unique identity makes it possible for companies to do business with them — to know what kind of services to provide and recommend, charge and track payments accurately, measure and enhance satisfaction, and provide the kind of continuity that delivers optimal value for customers and providers alike.
Digital identity is the extension of this concept into the digital realm — and it’s central to modern connected life. The ability to recognize and manage individual customer identities effectively is the foundation of:
- Trust, as companies safeguard each customer’s personal information and use it with consent for their benefit.
- Consistency, by harmonizing identities and connecting user identity records across organizations and industries.
- Experience, making it possible for companies to know their customers, personalize services, simplify online interactions and increase satisfaction.
- Privacy, allowing customer transparency and choice about what, where and how their personal data is used.
- Security, helping companies protect against identity fraud, hacks and breaches.
- Innovation, as companies use identity across industries to capitalize on synergies and deliver new and dynamic connected experiences powered by context.
Most fundamentally, digital identity makes it possible to take a customer-centric approach to business. By building trusted relationships and delivering more personalized and consistent experiences, companies can improve customer retention, strengthen their brand, increase their share of wallet and achieve competitive differentiation.
So, digital identity is the ultimate “vehicle for success” that must underpin the new mobility. To have a clearer understanding of that role, it’s helpful to review a few of its core concepts.
The fundamentals of digital identity
Digital identity can apply to things as well as to people. This is important to keep in mind in our world of connected devices and things. Just as businesses and systems need to know who they’re interacting with, a thing (such as a connected car) needs to be able to recognize another thing (such as another car, a charging station, a drive-through payment terminal, a tollbooth, etc.) to enable secure new mobility functions and experiences.
Authentication is simply the trusted recognition of the user’s digital identity: Who is this? Is this user really who they claim to be?
Authorization goes one step further: Based on their authenticated digital identity, what should this person be allowed to do? What applications and data should they be able to access based on factors such as their business role or relationship, customer subscriptions, account status, current scenario and so on?
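In code, the authentication/authorization split often reduces to a simple policy lookup once the user’s identity is established. The sketch below is purely illustrative; the roles, permissions and field names are invented for the example:

```python
# Invented roles and permissions, for illustration only.
ROLE_PERMISSIONS = {
    "driver": {"start_engine", "view_trip_history"},
    "fleet_admin": {"start_engine", "view_trip_history", "manage_vehicles"},
}

def is_authorized(user, action):
    """Authorization: given an already-authenticated user, is this action allowed?"""
    if user.get("account_status") != "active":
        return False
    return action in ROLE_PERMISSIONS.get(user.get("role"), set())

alice = {"role": "driver", "account_status": "active"}
print(is_authorized(alice, "start_engine"))     # True
print(is_authorized(alice, "manage_vehicles"))  # False
```

Note that authentication (proving who Alice is) happens before this check; authorization only decides what the proven identity may do.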
Single sign-on simplifies the customer journey by allowing customers to log in once for access to all of your applications that they’ve signed up for, rather than having to log in application-by-application. Frictionless login across applications isn’t just convenient; it’s also fast becoming an industry standard. Meeting this expectation is increasingly important for maintaining a brand’s credibility and trustworthiness.
Federation extends single sign-on beyond your organization to encompass your ecosystem partners as well. In addition to making life easier for customers, federation positions your company as a trusted identity provider and go-to access point for a broad range of content and services.
Multifactor authentication is, as its name suggests, the use of multiple factors to authenticate who someone or something is. It typically uses a combination of identity types such as something they know (e.g., a password), something they have (e.g., a key fob or an iPhone app) and something they are (e.g., biometrics, such as a thumbprint or retina pattern).
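The “something they have” factor is often a one-time code from an app or fob. As a sketch, the standard HOTP/TOTP construction (RFC 4226 and RFC 6238) fits in a few lines of Python using only the standard library; the secret shown is the RFC test key:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30) -> str:
    """RFC 6238 time-based variant: the counter is the current 30-second window."""
    return hotp(secret, int(time.time()) // step)

# RFC 4226 test vector: counter 0 with this secret yields "755224"
print(hotp(b"12345678901234567890", 0))  # 755224
```

A server verifies the code the user types against the same computation over a shared secret, combining it with the password factor for two-factor login.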
Privacy is critical. Personal digital data is precious — customers have to be able to trust you with theirs. As the number of connected devices and things grows, companies must be able to secure the user experience wherever and however services are used, tailor it to the customer’s data-sharing preferences and ensure that their data is never used in a way they haven’t approved.
Security becomes more challenging all the time — and more important. As consumers become more mobile and do more online in more ways, businesses need to ensure continuous protection not just at login, but throughout each digital session. This includes responding to threats in context by asking for additional identity verification when something unusual takes place, like a resource request from an unfamiliar location or device.
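A step-up policy of this kind can be as simple as a handful of context checks per request. The session fields, location check and list of sensitive resources below are assumptions made for illustration, not any vendor’s actual rules:

```python
def requires_step_up(session: dict, request: dict) -> bool:
    """Ask for extra identity verification when the request context looks unusual."""
    unfamiliar_device = request["device_id"] not in session["known_devices"]
    unfamiliar_place = request["country"] != session["usual_country"]
    sensitive = request["resource"] in {"payment_methods", "account_settings"}
    return unfamiliar_device or unfamiliar_place or sensitive

session = {"known_devices": {"dev-1"}, "usual_country": "DE"}
print(requires_step_up(session, {"device_id": "dev-1", "country": "DE",
                                 "resource": "playlists"}))        # False
print(requires_step_up(session, {"device_id": "dev-9", "country": "DE",
                                 "resource": "playlists"}))        # True
```

Real systems score many more signals (velocity, time of day, device posture), but the shape of the decision is the same: evaluate context continuously, not just at login.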
From IAM to CIAM
Initially, digital identity was used primarily as a way for businesses to control access to their systems by their own employees. Based on your digital identity, verified through your username and password, you would be granted access to the appropriate applications and data for your role. By the same token, you would also be prevented from accessing applications and data that you shouldn’t, aiding privacy and security.
Digital identity also makes it possible to track your behavior over time, helping companies meet requirements for auditing, regulatory compliance, internal security and the like. Within the tech industry, technologies to manage digital identity fall into the identity and access management (IAM) category.
Digital identity has now expanded to encompass personalization and quality of experience as well. As any successful business knows, the better you know your customer, the better service you can provide, helping you drive loyalty, growth and revenue. The personal information customers share with you to establish an identity with your organization, complemented with personal data from additional sources, helps you understand their individual needs more fully.
This in turn helps you cross-sell, upsell and deliver more personalized experiences. Of course, security and control remain paramount as well. Reflecting the customer-centric orientation of this way of thinking about digital identity, this technology category is called customer identity and access management (CIAM).
How digital identity adds value for new mobility
There are many ways the tools we use to provide and protect a secure digital identity can add value to the present-day development trends in connected and autonomous cars. For example:
Personalization and services: Feature on demand
For the most part, today’s cars are personalized during the purchasing process — not afterwards. If buyers subsequently wish they’d opted for more horsepower, matrix-LED lights or additional connectivity or GPS features, their only option is to try their luck with expensive after-sales projects. With digital identity, both owned and shared connected cars can allow flexible personalization of their software-enabled features on either a per-ride or ongoing basis. The identity of the user is linked with the identity of the car to sync the user’s preferences with the car’s configuration and trigger the corresponding monetization processes.
Identity for privacy and compliance
Some connected car capabilities raise delicate issues for user privacy. As part of predictive maintenance, a car’s ECUs may push alarm messages to the carmaker’s back end to signal a problem with the engine, gearbox or brakes. This message can include driven speeds, gear and RPM history, and geographical locations. And there’s the catch: A driver or user may appreciate the alert there’s something wrong with the car and where to find the next garage, but may not necessarily want to share information about how the car is being used. The carmaker needs a way to let users and drivers choose which data to share — a preference that can be linked to their digital identity.
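One way to implement such a preference is to tag each telemetry field with a data category and filter outgoing messages against the user’s consent record. The categories and field names here are invented for illustration:

```python
# Invented consent categories and field names, for illustration only.
FIELD_CATEGORY = {
    "error_code": "diagnostics",
    "engine_temp": "diagnostics",
    "gps_position": "location",
    "speed_history": "driving_behavior",
}

def filter_telemetry(message, consent):
    """Drop every field whose data category the user has not consented to share."""
    return {field: value for field, value in message.items()
            if consent.get(FIELD_CATEGORY.get(field), False)}

consent = {"diagnostics": True, "location": False, "driving_behavior": False}
alert = {"error_code": "P0301", "engine_temp": 114, "gps_position": (48.1, 11.6)}
print(filter_telemetry(alert, consent))  # {'error_code': 'P0301', 'engine_temp': 114}
```

The consent record itself is an attribute of the driver’s digital identity, so the same preferences follow them into a shared or rented car.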
Connected car security and safety
A modern car’s functions and features are controlled by upwards of 100 complex ECUs whose interaction is critical for the safety of the passenger and of the car. Equipping each ECU with its own unique and secured digital identity makes it possible for these control units to identify themselves to each other when sending messages, helping prevent hackers from injecting malicious messages to cause malfunctions.
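Production vehicles use dedicated schemes for this (AUTOSAR SecOC, for example), but the core idea of a shared key, a message authentication code and a replay counter can be sketched in a few lines. The key handling, frame layout and MAC truncation below are all simplified for illustration:

```python
import hashlib
import hmac

def sign_frame(key: bytes, sender_id: str, counter: int, payload: bytes) -> bytes:
    """Compute a truncated MAC binding the payload to its sender and a replay counter."""
    data = sender_id.encode() + counter.to_bytes(4, "big") + payload
    return hmac.new(key, data, hashlib.sha256).digest()[:8]  # truncated for small frames

def verify_frame(key: bytes, sender_id: str, counter: int,
                 payload: bytes, tag: bytes) -> bool:
    """Constant-time check that the frame really came from the claimed ECU."""
    return hmac.compare_digest(sign_frame(key, sender_id, counter, payload), tag)

key = b"shared-ecu-key"  # in practice, provisioned securely per ECU pair
tag = sign_frame(key, "brake_ecu", 42, b"PRESSURE:80")
print(verify_frame(key, "brake_ecu", 42, b"PRESSURE:80", tag))  # True
print(verify_frame(key, "spoofed", 42, b"PRESSURE:80", tag))    # False
```

An injected message from a hacker fails verification because it lacks the key tied to the legitimate ECU’s identity, and replayed messages fail on the counter.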
These examples show how much even today’s connected cars depend on secure identification of different parts of the cars to each other, as well as of the entire car to its driver or owner, to ensure both a good user experience and the protection of the data being generated during each ride. Designing digital identity and its corresponding tools into the vehicle from the very beginning provides a vital backbone for security, privacy and monetization.
Identity use cases in new mobility
As the industry moves beyond connected cars to fully realized new mobility services, federated digital identities will play an increasingly important role, as illustrated in this next set of examples.
Bring the end user’s digital life to a connected car
One of the most important targets of the industry is to bring the digital life of the user into a connected car — to enable the same set of services during physical mobility as at home or in the office. To make this simple and frictionless, carmakers need to provide a version of single sign-on into the “car as a service,” linking the authenticated sessions of various digital services to the car for the duration of the journey. Digital identity will provide the mechanism for this seamless yet secure experience.
Enable the best user experience for car-sharing or ride-hailing services
User experience is a prime factor in people’s willingness to use shared vehicles rather than their own personal car. Federated mobility services will allow people to handle every part of the journey using a single app, from summoning their vehicle of choice to payment at their destination, with streaming media, GPS and other connected services along the way. The same app even works across fleet providers — no more separate apps for each car sharing or ride hailing service.
Make the connected car interact with the smart city
The examples above illustrate the links between users, services and preferences. As a next step, the car and the driver need to securely interact with the infrastructure of a smart city, such as identifying the car and payment at the charging station, autonomous parking, tolls and so on. Here, digital identity goes beyond the relationship between the car and the driver to manage the interaction of the car and driver with the world around them based on secure authentication.
As we see, digital identity is more than just a mechanism to secure and authenticate cars and devices; it’s also a foundational tool for enabling the entire new mobility and smart city ecosystem. Service providers offering digital experiences from the connected car to payment can collaborate to deliver new mobility services which are solidly built upon the security, trust and interoperability of digital identities across business domains.
That’s vital in building the most critical element of new mobility adoption: the ultimate trust and confidence of new mobility users.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
With the soaring number of mobile and IoT devices and the onslaught of new apps that businesses face on their wireless networks, it is time for innovation that can help IT scale to meet these new requirements. Thankfully, AI and modern cloud platforms built with microservices are evolving to meet these needs, and more and more businesses are realizing that AI is a core component of a learning, insightful WLAN. AI can help bring efficiency and cost savings to IT through automation while providing deep insights into user experience and service-level enforcement. It can also enable new location-based services that bring enormous value to businesses and end users.
At the core of a learning WLAN is the AI engine, which provides key automation and insight to deliver services like Wi-Fi assurance, natural language processing-based virtual network assistants, asset location, user engagement and location analytics.
There are four key components to building an AI engine for a WLAN: data, structure and classify, data science and insight. Let’s take a closer look.
Just like wine is only as good as the grapes, the AI engine is only as good as the data gathered from the network, applications, devices and users. To build a great AI platform, you need high-quality data — and a lot of it.
To address this well, one needs to design purpose-built access points that collect pre- and post-connection states from every wireless device. They need to collect both synchronous and asynchronous data. Synchronous data is the typical data you see from other systems, such as network status. Asynchronous data is also critical, as it gives the user state information needed to create user service levels and detect anomalies at the edge.
This information, or metadata, is sent to the cloud, where the AI engine can then structure and classify this data.
Next, the AI engine needs to structure the metadata received from the network elements in a set of AI primitives.
The AI engine must be programmed with wireless network domain knowledge so that the structured metadata can then be classified properly for analysis by the data science toolbox and ultimately deliver insights into the network.
Various AI primitives, structured as metrics and classifiers, are used to track the end-to-end user experience for key areas like time to connect, throughput, coverage, capacity and roaming. By tracking when these elements succeed, fail or start to trend in a direction, and determining the reason why, the AI engine can give the visibility needed to set, monitor and enforce service levels.
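A toy version of such a metric-and-classifier rollup might look like this. The 2-second service level and the step names are assumptions, not values from any real product:

```python
from collections import defaultdict

SLE_THRESHOLD_S = 2.0  # assumed service level: connect within 2 seconds

def rollup(events):
    """Compute the service-level success rate and classify the failures by cause."""
    met = 0
    failures = defaultdict(int)
    for e in events:
        if e["time_to_connect"] <= SLE_THRESHOLD_S:
            met += 1
        else:
            failures[e["failed_step"]] += 1  # e.g., association, authentication, dhcp
    return met / len(events), dict(failures)

events = [
    {"time_to_connect": 0.8, "failed_step": None},
    {"time_to_connect": 4.1, "failed_step": "dhcp"},
    {"time_to_connect": 1.2, "failed_step": None},
    {"time_to_connect": 5.0, "failed_step": "authentication"},
]
rate, causes = rollup(events)
print(rate, causes)  # 0.5 {'dhcp': 1, 'authentication': 1}
```

The success rate drives service-level monitoring, while the failure classifier points the administrator at the most common root cause.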
Once the data has been collected, measured and classified, the data science can then be applied. This is where the fun begins.
There are a variety of techniques that can be used, including supervised and unsupervised machine learning, data mining, deep learning and mutual information. They are used to perform functions like baselining, anomaly detection, event correlation and predictive recommendations.
For example, time-series data is baselined and used to detect anomalies, which is then combined with event correlation to rapidly determine the root cause of wireless, wired and device issues. By combining these techniques together, network administrators can lower the mean-time-to-repair issues, which saves time and money and maximizes end-user satisfaction.
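Baselining and anomaly detection can be illustrated with a rolling mean and z-score, which is far simpler than a production engine but shows the mechanics:

```python
import statistics

def detect_anomalies(series, window=20, z_threshold=3.0):
    """Flag points that deviate strongly from a rolling-window baseline."""
    anomalies = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mean = statistics.fmean(baseline)
        spread = statistics.pstdev(baseline) or 1e-9  # guard flat baselines
        if abs(series[i] - mean) / spread > z_threshold:
            anomalies.append(i)
    return anomalies

latency_ms = [10.0] * 25 + [50.0] + [10.0] * 5  # one sudden spike at index 25
print(detect_anomalies(latency_ms))  # [25]
```

In a real engine, each flagged anomaly would then be correlated with events (a firmware push, a DHCP outage) to surface the likely root cause.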
Mutual information is also applied to Wi-Fi service levels to predict network success. More specifically, unstructured data is taken from the wireless edge and converted into domain-specific metrics, such as time to connect, throughput and roaming. Mutual information is applied to the service-level enforcement metrics to determine which network features are the most likely to cause success or failure as well as the scope of impact.
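Mutual information itself is straightforward to compute from paired observations. The sketch below estimates I(X;Y) in bits for categorical data; the feature and outcome labels are invented:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) in bits, estimated from paired categorical observations."""
    n = len(xs)
    joint = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * math.log2(c * n / (px[x] * py[y]))
               for (x, y), c in joint.items())

# A feature that perfectly separates success from failure carries 1 bit:
features = ["dhcp_timeout", "none", "dhcp_timeout", "none"]
outcomes = ["fail", "success", "fail", "success"]
print(mutual_information(features, outcomes))  # 1.0
```

Ranking network features by their mutual information with the service-level outcome is one way to find which features most likely cause success or failure.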
In addition, unsupervised machine learning can be used for highly accurate indoor location. For received signal strength indicator-based location systems, there is a model needed that maps RSSI to distance, often referred to as the RF path loss model. Typically, this model is learned by manually collecting data known as fingerprinting. But with AI, path loss can be calculated in real time using machine learning by taking RSSI data from directional BLE antenna arrays. The result is highly accurate location data that doesn’t require manual calibration or extensive site surveys.
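The RF path loss model mentioned above is typically the log-distance model, RSSI = A - 10*n*log10(d). Fitting the exponent n from RSSI samples is then an ordinary least-squares problem; the sketch below uses noiseless synthetic data just to show the mechanics:

```python
import math

def fit_path_loss(samples):
    """Least-squares fit of the log-distance model RSSI = A - 10*n*log10(d).

    samples: list of (distance_m, rssi_dbm) pairs.
    Returns (path loss exponent n, reference RSSI A at 1 m).
    """
    xs = [10 * math.log10(d) for d, _ in samples]
    ys = [rssi for _, rssi in samples]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope, my - slope * mx

# Synthetic data generated with n = 2 and A = -40 dBm is recovered:
samples = [(d, -40 - 20 * math.log10(d)) for d in (1, 2, 5, 10)]
n, a = fit_path_loss(samples)
print(round(n, 3), round(a, 3))  # 2.0 -40.0
```

With the model learned continuously from live RSSI data instead of manual fingerprinting, distance estimates stay calibrated as the environment changes.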
AI-driven virtual assistants
The final component of the AI engine is a virtual assistant that delivers insights to the IT administrator as well as feeds that insight back into the network itself to automate the correction of issues, ultimately becoming a “self-healing network.”
A natural language processor is critical to simplifying how administrators extract insights from the AI engine, sparing them from hunting through dashboards or typing command-line interface (CLI) commands the way legacy systems that lack AI require. This can drive up the productivity of IT teams while delivering a better user experience for employees and customers.
Wireless networks are more business-critical than ever, yet troubleshooting them is more difficult every day due to an increasing number of different devices, operating systems and applications. AI engines are a must-have for businesses that need to keep up with soaring numbers of new devices, things and apps in today’s connected world.
Farming is one of the oldest occupations in the history of mankind, and perhaps among those changing the most in today’s world. Modern farmers face challenges like never before. With rising global population rates, changing dietary demands, resource constraints, climate change and enterprise competition, there is immense pressure on the agricultural industry to produce more food.
In fact, the Food and Agriculture Organization estimates that global food production will need to rise by 70% to meet the projected population demands by 2050. Fortunately, innovations in technology offer new ways for the agricultural industry to meet this global challenge. Systems such as IoT sensors and big data analytics are offering avenues to reinvent antiquated farming practices, creating more cost-efficient processes that produce higher quantities of food with less strain on resources.
IoT impacts every level of the agricultural lifecycle by simplifying and streamlining the collection, inspection and overall distribution of resources. For example, using sensors on farming equipment and in the field provides the opportunity to harvest huge amounts of information, which ultimately offers insights farmers can use to make better-informed decisions. Businesses across the globe have recognized this technology’s potential, and are using innovative solutions to generate safer, better food production and create environmentally smarter practices.
Leveling the field with IoT
Large farms have historically had a market advantage over their smaller competitors. With more room for trial and error, big farms have had the capacity to use large amounts of historical data to yield the best results from their crops and livestock. Additionally, larger farms have the capital necessary to invest in leading technologies. These conditions have left smaller farms at a disadvantage, struggling to compete with less information and fewer resources at their disposal.
Fortunately, recent advances in IoT technologies have leveled the playing field in agricultural production. IoT sensors offer a cost-efficient, low-power system with the capacity to aggregate sensor data on everything from soil and water quality to temperature, humidity and livestock vitals.
Using analytics from this information, farmers can generate predictive insights to optimize their operations regardless of farm size. For example, farmers will know the most opportune planting and harvesting times to yield the highest crop production due to environmental and nutritional factors, no matter the size of their farm.
Putting data to work: NB-IoT and precision farming
Progress in low-power wide area (LPWA) technologies is creating powerful opportunities for the agricultural industry. Narrowband IoT (NB-IoT), a unique LPWA option, has proven particularly suitable for smarter farming practices.
NB-IoT is a low-cost technology that provides ubiquitous, battery-efficient connectivity, offering massive value with the capacity to aggregate data for tens to hundreds of billions of devices.
Today, farmers are using NB-IoT in smarter farming practices such as precision farming. Precision farming uses data insights to guide both immediate and future decisions on everything from where in the field and what quantities to apply chemicals and fertilizers to when it’s best to plant seeds. Smarter farming practices like this offer many benefits, including efficient and sustainable crop production. Additionally, insights from IoT sensor data can enable a more precise use of pesticides and fertilizers. This can save money, deliver better results and lower the impact of chemicals on the environment as well.
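At its simplest, a precision farming rule engine turns per-zone sensor readings into field actions. The thresholds below are placeholders; real agronomic values depend on the crop, soil and climate:

```python
# Placeholder thresholds for illustration; real values vary by crop and soil.
def recommend(zone):
    """Turn one field zone's sensor readings into a list of actions."""
    actions = []
    if zone["soil_moisture_pct"] < 25:
        actions.append("irrigate")
    if zone["soil_nitrogen_ppm"] < 20:
        actions.append("apply_fertilizer")
    if zone["pest_trap_count"] > 50:
        actions.append("targeted_pesticide")
    return actions

zone_a = {"soil_moisture_pct": 18, "soil_nitrogen_ppm": 35, "pest_trap_count": 12}
print(recommend(zone_a))  # ['irrigate']
```

Because decisions are made per zone rather than per field, water, fertilizer and pesticide go only where the data says they are needed.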
IoT and livestock management
IoT plays another important role on the farm beyond crop production — the technology is also used to monitor livestock health.
Despite many advances that have improved livestock health and fertility practices, bovine mortality rates today remain comparable to mortality rates of unassisted human child births from several centuries ago. Fortunately, progressive companies like MooCall are turning to IoT to improve modern calving practices.
MooCall, which specializes in calving and herd management technology, engineered a sensor that uses the Vodafone Managed IoT Global Connectivity Platform to monitor the health and labor of expectant cows. The tail-mounted sensor gathers over 600 pieces of data per second to accurately predict when a cow will give birth, measuring tail movement patterns triggered by contractions. Alerts are sent via cell or app about one hour before birthing. To date, approximately 250,000 calves have been born safely with their owners on site ready to help if needed.
Advances in IoT across the agricultural industry will have an immense impact on the way we’re able to meet the rising demands of our global population. Mass amounts of data gathered through IoT systems allow farmers to act with more information than ever, enabling them to yield higher crop production and create safer livestock birthing practices. The benefits will only continue to grow as this technology evolves.
The ability to predict the future is a powerful form of knowledge for any business. If you know an event that could cause harm to your business lies ahead, like equipment failure, you can take steps to address it and avoid negative consequences, like unnecessary costs or downtime. If you know a beneficial event is ahead, such as an increase in demand, you can harness it to increase revenue, customer satisfaction and more. Use cases like predictive failure, predictive (or forecasting) demand and predictive maintenance are quickly gaining popularity across a range of industries for these reasons. Individually, these IoT-enabled capabilities can pack a strong ROI punch by helping businesses predict the future using real-time data. When layered together, their benefit potential grows exponentially.
Maximizing the uptime of revenue-generating equipment is critical for all industrial companies, which has made predictive failure one of the more desirable predictive capabilities. Failures can cost thousands of dollars or more in unplanned downtime and present the additional threat of expensive collateral damage. Additionally, false positive alerts from equipment sensors pose an altogether different problem by straining service resources and driving up expenses when technicians are deployed to find that no problem exists. Fortunately, sophisticated data analytics make it possible to predict and resolve these problems before they occur, virtually eliminating unplanned downtime and reducing false positives.
Take, for example, a pump failure at an oil refinery. Typically, there’s not a backup option, so the facility is forced into an unplanned shutdown. These sorts of environments are generally highly utilized — often running nonstop, 24/7. Such elevated levels of plant activity mean any production time lost to down equipment is gone forever, due to a shortage of extra capacity. And that’s in addition to the outsized costs associated with emergency repairs. Failures of this nature can also introduce contaminants into the production pipeline, which run the risk of corrupting product quality and damaging equipment downstream. Furthermore, these sorts of breakdowns come with the ever-present danger of causing a line breach, which can carry a steep cleanup price tag all on its own.
With data insights into the pump’s health, as well as like assets across a population, the refinery could have foreseen the upcoming failure. Knowing an asset is headed for trouble, management could have proactively scheduled a repair to minimize the impact on production and prevent unfavorable side effects, such as unnecessary cleanup costs and equipment damages.
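One simple way a refinery might foresee such a failure is to extrapolate a degradation trend, such as rising bearing vibration, toward an alarm threshold. The linear model and numbers below are illustrative only; real predictive models are far richer:

```python
def estimate_hours_to_failure(readings, threshold):
    """Extrapolate a linear degradation trend to the alarm threshold.

    readings: list of (hour, vibration_mm_s) samples.
    Returns estimated hours remaining, or None when there is no upward trend.
    """
    xs = [t for t, _ in readings]
    ys = [v for _, v in readings]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    if slope <= 0:
        return None  # stable or improving; nothing to schedule
    return (threshold - readings[-1][1]) / slope

# Vibration rising 0.05 mm/s per hour, alarm at 3.0 mm/s:
print(estimate_hours_to_failure([(0, 1.0), (10, 1.5), (20, 2.0)], 3.0))  # about 20 hours
```

With an estimate like this in hand, maintenance can be slotted into a planned window instead of forcing an unplanned shutdown.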
Predictive (or forecasting) demand is another powerful capability for businesses. Itron, a leading provider of utility and smart city technologies, uses IoT to gauge demand for electricity. For the longest time, electricity demand patterns were static. Basic forecasting was possible by analyzing the number and density of homes against historical records. However, the introduction of electric cars, solar panels and more has complicated demand forecasting. This added complexity creates an issue as regulations dictate utilities must generate a certain amount of electricity to ensure they can meet consumer demand. Without predictive capabilities, utilities struggle to right-size energy generation, leading to wasted excess and unnecessary costs. Using IoT capabilities for edge devices, such as energy meters, Itron is able to harness real-time data to predict demand more accurately.
That said, demand data isn’t necessarily limited to information coming from a device. It’s easy to focus on device data because that’s the new variable, but there are more pieces to the puzzle. Companies experiencing the most predictive success typically incorporate additional variables, such as enterprise system data and public data sources (e.g., weather, geospatial, etc.), with device data. This sort of supplementary information helps build context to create a more complete picture of what’s actually happening with a device.
Whether it’s a wet versus dry environment or a cold versus warm climate, there are countless external factors that can affect different assets in a variety of ways, with varying degrees of severity. In fact, this secondary information is significant enough on its own that, in some cases, it can provide a general sense of what’s wrong with a device or when an error is likely to occur. But it takes an IoT system that integrates real-time device data with these additional data sources to pinpoint root causes and predict failures with accuracy.
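As a toy example of blending device data with an external source, the forecast below adjusts a trailing smart-meter average using a weather feed. The 24 °C baseline and the per-degree figure are invented purely for illustration:

```python
def forecast_demand_kwh(meter_history_kwh, forecast_temp_c):
    """Trailing average of smart-meter data, adjusted for cooling load on hot days."""
    base = sum(meter_history_kwh[-7:]) / 7           # last week's average daily draw
    cooling = max(0.0, forecast_temp_c - 24) * 0.8   # assumed kWh per degree above 24 C
    return base + cooling

print(round(forecast_demand_kwh([10.0] * 7, 30), 1))  # 14.8
```

Even this crude blend shows the principle: the meter alone cannot explain a demand spike, but meter plus weather can anticipate one.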
Predictive maintenance is another IoT use case with exciting potential applications across the industrial sector. Traditional methods of equipment maintenance are generally reactive — servicing equipment once it fails — or preventative based on static time intervals or amount of use. Reactive maintenance is often expensive due to compounding costs associated with unplanned downtime and emergency repairs. Preventative measures carry the risk of under- or over-servicing equipment, which can inflate maintenance costs and reduce asset longevity.
Condition-based maintenance (CBM) is an important step in the right direction, as it uses real-time operational data, as well as environmental and historical information, to identify when service is truly needed based on actual device performance. While this does help industrial companies improve maintenance and repair processes across all of their connected equipment, CBM is still reactive in that it’s a response to current conditions. Predictive maintenance takes CBM and makes it proactive by flagging service needs in advance, so that companies can take action before those needs arise. In addition to significantly improving asset reliability and longevity, this ability to better plan maintenance can significantly lower service costs and help defer capital expenditures.
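The distinction can be boiled down to which reading drives the decision: the current one (condition-based) or the projected one (predictive). A minimal sketch:

```python
def maintenance_decision(current, projected, limit):
    """Condition-based maintenance reacts to the current reading;
    predictive maintenance acts on the projected one."""
    if current >= limit:
        return "service_now"        # CBM: already out of spec
    if projected >= limit:
        return "schedule_service"   # predictive: will drift out of spec soon
    return "no_action"

# Within spec today, but the trend says it won't be for long:
print(maintenance_decision(current=0.9, projected=1.3, limit=1.0))  # schedule_service
```

The middle branch is where predictive maintenance earns its keep: service is scheduled on the company’s terms, before the asset forces the issue.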
Predictive analytics is transforming many of today’s connected industrial organizations by proactively identifying asset efficiency, performance issues and other key factors before they impact operations. However, a lack of understanding surrounding a key enabling technology — machine learning — has led some businesses down a path of disappointment. While machine learning can recognize patterns in large volumes of data to help identify the different states of equipment operation, it cannot incorporate related factors or time values with those potential state changes — a key factor in predicting future events. This is why digital twins are so valuable. By combining the states identified through machine learning with other variables (time, environment, etc.) influencing those changes, digital twins create a model that makes predictive capabilities possible.
Keep in mind that even with the most advanced technology, the ability to carry out any sort of predictive analysis hinges primarily on having the right data set. So be sure to perform a thorough review of what data you have versus what’s required before beginning any initiative. I also typically recommend starting with the predictive capability that shows the most potential for profitability or cost reduction. Stakeholders deserve a return on investment and shouldn’t have to bear the burden of a work in progress.
Despite what some may believe about the oil and gas industry, the number one priority of its major players is to mitigate health, safety and environment-related risks — yes, even over making money. Oil spills and fracking leaks cripple the industry’s and a company’s reputation alike, which always negatively affects the bottom line. So it goes without saying that nobody wants another Deepwater Horizon, much less a smaller disaster.
To achieve their mission of maintaining top-level safety protocols, protecting the environment and enhancing their reputation, remote offshore drilling providers — and the enterprises that rely on their services — will need to begin embracing IoT technologies if they haven’t already. Fueled by advancements in open source data management, IoT will go a long way toward helping energy companies avoid spills and other catastrophes. A few forward-looking companies are already using these advancements.
Even with the current administration’s intent to relax safety regulations implemented in the wake of Deepwater Horizon, the Bureau of Safety and Environmental Enforcement will still require offshore drillers to monitor safety-critical equipment in real time and archive the data at an onshore facility. In fact, the larger industry has pushed to keep many such laws in place, knowing well what comes of too lax a view on laws that safeguard critical regulations.
An achievable goal: Real-time crisis response in remote locations
That’s where IoT comes in: 1) by harnessing real-time data collection and integrating legacy and new data sources to power advanced analytics that can automatically speed safety measures when danger strikes, and 2) by identifying patterns from older data and using them to influence current and developing workflows and processes.
Remember this: The Deepwater Horizon oil spill was caused when methane gas in the well expanded and rose into the rig where it ignited and exploded, engulfing the platform in flames. A modern IoT system combined with advanced analytics could have helped predict the impending event and provided prescriptive actions to help contain what is considered to be the largest marine oil spill in history.
Today, offshore rigs still collect sensor data almost exclusively through local control systems, which were deployed to monitor operations and signal alarms only when something goes wrong.
With IoT technologies today, it’s possible to exchange and analyze information in near real time among the rig, ships and onshore facilities — and then automate preventive actions to limit any damage.
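The automated preventive actions described above can be sketched as a simple rule engine evaluating each incoming reading. The sensor names and thresholds below are illustrative assumptions, not real rig specifications:

```python
# Hedged sketch: rule-based preventive action on streaming rig sensor data.
# Sensor field names and threshold values are invented for illustration.

def evaluate_reading(reading, max_pressure_kpa=20_000, max_methane_ppm=1_000):
    """Return the list of preventive actions triggered by one sensor reading."""
    actions = []
    if reading.get("wellbore_pressure_kpa", 0) > max_pressure_kpa:
        actions.append("close_blowout_preventer")
    if reading.get("methane_ppm", 0) > max_methane_ppm:
        actions.append("shut_in_well_and_alert_onshore")
    return actions
```

In practice such rules would run both on the rig and onshore, with the near-real-time exchange ensuring each site sees the same readings and triggers the same response.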
IoT is already at work around the globe
In Trinidad and Tobago, where locals depend on the fisheries and the economy depends on the oil and gas industry, the government's Institute of Marine Affairs has deployed a specialized buoy that monitors water quality in real time. It consists of a digital cellular connection and an IoT platform equipped with underwater sensors that detect instant changes in pollution levels and other water-quality indicators. The aim is to accelerate communications and enable both proactive and reactive responses when a spill occurs.
Another global leader, Mitsubishi Heavy Industries, has developed specialized robots powered by IoT and artificial intelligence to inspect oil and gas production facilities. These explosion-proof robots have cameras and can operate autonomously, which frees up working time and keeps human workers out of harm’s way.
The path to IoT always needs to be open
For mission-critical IoT projects to be successful — not just those in oil and gas — the foundational strategy needs to be based on open source software and open architectures. IoT's rise in popularity preceded the creation of connectivity standards. The open source community exists specifically to solve issues like this, by creating common connectors that let data flow through various networks to a vast collection of disparate devices.
If commonly accepted open source code bases are used to drive IoT technologies, those active in open source communities and organizations, such as the Apache Software Foundation, will be able to develop analytics tools on scale-out, open hardware architectures quickly and easily. That will lead to better mapping and reduction of IoT's larger data sets.
Simply put, an open source IoT platform will allow operators to maintain secure control over their data and analytics.
Apache NiFi, incidentally, has been a real game-changer in IoT. NiFi allows you to easily and securely ingest data from a variety of sources and import it into a platform for analytics and business intelligence.
For example, the offshore drilling provider Rowan Companies has installed a complete, global IoT system on 25 rigs that supplies seamless data connectivity for near-real-time monitoring, troubleshooting, diagnostics and performance measurement. Rowan's system represents a state-of-the-art template for IoT oil and gas projects using Apache NiFi. It incorporates:
- An enterprise-ready, open source Apache Hadoop distribution based on YARN, maximizing the value of its data at rest from a range of sources and formats to provide big data analytics;
- An end-to-end, hybrid data analytics platform integrated with Apache NiFi that collects, curates, analyzes and acts on data in motion; and
- An IoT gateway that collects and instantly streams real-time data from multiple systems into its analytics platform, which prioritizes, compresses and encrypts the data before storing it locally. This enables personnel to search, monitor, visualize and analyze the data for complete visibility into its operations.
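The gateway pattern in the last bullet can be sketched in a few lines. This is a minimal illustration, not Rowan's actual implementation; field names are invented, and a real gateway would also encrypt the blob (e.g. with AES-GCM) before writing it to local storage:

```python
import json
import zlib

def gateway_pack(readings, priority_key="severity"):
    """Order readings by priority (highest first), then compress for local storage.
    Encryption is omitted in this sketch; a production gateway would encrypt
    the compressed blob before persisting it."""
    ordered = sorted(readings, key=lambda r: r.get(priority_key, 0), reverse=True)
    return zlib.compress(json.dumps(ordered).encode("utf-8"))

def gateway_unpack(blob):
    """Restore readings so personnel can search, monitor and analyze them."""
    return json.loads(zlib.decompress(blob).decode("utf-8"))
```

The prioritize-compress-encrypt ordering matters: compressing before encrypting keeps the blob small, since encrypted data no longer compresses well.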
Keywords: Fast, precise and effective
A system like Rowan’s can take a huge burden off other oil and gas enterprises that have cautiously relied on maintaining their traditional data monitoring, collection, processing and reporting systems.
More to the point, with today’s processing power available for IoT, decisions can actually be made in real time, enabling companies to shift from reactive to proactive. Moving forward, the next generation of IoT will require improvements to our data management processes and access controls that will allow even faster, more seamless and more secure communication between devices and platforms.
The goal, as it always should be, is to make sure that catastrophic events like oil spills don’t happen. The industry’s health and continued good will depends on that.
Try going a whole day without the internet of things. In 2018, that’s almost impossible. Many of us wake up to IoT when our sleep trackers gently nudge us awake and give us an instant report on the quality of the last six to eight hours (on a good night!).
We get ready for work with an IoT toothbrush that assesses our brushing; we take the last iced coffee out of the fridge and it automatically reorders through Instacart; we leave the house and the smart lock kicks in when we exit our geofence; and the car automatically adjusts to our seat, temperature and musical preferences. All that and we haven’t even made it to work yet!
But the value of the IoT ecosystem is in the personalized digital relationships it can build, by correlating data across users, devices and connected things, and translating it into instant insight that makes our lives — or at least our mornings — better.
Businesses see the flipside. If they can analyze and act on IoT data when it’s most valuable, they can offer us products and services tailored to our wants and needs. To capitalize on this massive market opportunity, though, they can’t let their data sink to the bottom of the typical data lake. They need the right tools to analyze, understand and act on their data in real time.
Toyota, for example, recently went through a global reorganization to expand its work in data science technology. It launched Toyota Connected, focused on data-driven initiatives that include offerings like connected cars that share traffic details, telematics services that learn the customer’s preferences, and insurance models that price according to actual driving patterns. Now, Toyota will focus not just on producing great cars, but on the power of IoT data to reinvent the driving experience — making it more personal, safer and more appealing to customers throughout the life of the car.
IoT is likewise redefining industry, from manufacturing to utilities to logistics. For example, individual smart utility meters can deliver IoT data that helps the utility determine different rates for different seasons or times of day, and gives it the opportunity to offer consumers conservation incentives, as well. And with smart meters, utility companies can proactively replace overloaded and aging equipment that shows evidence of potential failure or fire risk.
Most businesses, however, are still using technologies that rely on the old serial computing paradigm, running on CPUs to store, manage and analyze IoT data. The problem is these technologies are just too slow to extract value from data in real time, to make operational decisions on the go, or to quickly and accurately assess risk.
CPU-powered databases are slow to ingest and query data, require users to decide in advance which elements of the data they think will be important to analyze, and struggle to produce real-time visualizations. Traditional CPU-powered databases aren't designed to handle the increasing complexity of both data sources and analysis; today's data can be big or small, static or streaming, structured or unstructured, human or machine, long-lived or perishable. They're holding the internet of things back.
IoT requires an insight engine powered by GPUs that analyze data simultaneously in real time. By using GPUs, a technology pioneered by Nvidia, companies can process data in parallel.
While a CPU is designed to process a trickling stream of data, a GPU is designed to process a rushing river of data — the difference between a hose and a waterfall. IoT is already generating 100x more data 100x faster than ever before, and requires a different foundation for success.
A GPU database can also take geospatial and streaming data and turn it into visualizations that reveal interesting patterns and hidden opportunities, whether that’s days, times and locations where traffic backups occur, or where it’s safe and profitable for an oil company to drill. It can also apply algorithms to augment human knowledge with artificial intelligence, quickly identifying complex patterns that are hard for humans to pick out, but which only a human can analyze in depth and in context.
At its core, with accelerated parallel computing, a GPU database makes it possible for businesses to process extreme volumes of IoT data — from the streaming to the historical — visualize it, analyze it and instantly feed insight back into the business for immediate action. Across industries, this will be the competitive edge in IoT.
The internet of things is expanding its reach! Early last year, Gartner calculated that around 8.4 billion IoT devices were in use in 2017, up 31% from 2016. It also noted that this number is set to reach 20.4 billion by 2020. In terms of spending, IDC predicted that total expenditures will hit $1 trillion in 2020 and $1.1 trillion in 2021. It's clear that now's the time to start positioning your business to monetize IoT, if you haven't already begun. But what steps should you take to get there?
1. Have a firm grasp of your business model — and nail the fundamentals
Ultimately, everyone wants to drive revenue and create profitable IoT offerings. That will come from new digital business models and operational efficiencies. But the best business model won't help if you don't get the basics right:
- Ship a secure product that's free of known vulnerabilities. Then, monitor and manage vulnerabilities throughout the device lifecycle. Virtually every IoT offering uses open source components, but only a few suppliers manage them diligently.
- Implement a licensing and monetization technology that enables you to actualize new business models, deliver software and updates to devices, capture device and usage insights, and manage compliance.
2. Make it customizable
Use the power of software to differentiate devices and activate premium features for customers at the push of a button. Flexible activation of features is the key that differentiates a smart device from a dumb one. Your customers will appreciate the flexibility and the accelerated time to value.
3. Use the value of data and insight
A successful digital business model is based on insight. When thinking of IoT and data, many just think of the process data that comes back from sensors. But there's more: Which device is using what software? And which customer is using what, and how much? By aggregating device, software, customer and usage data, you build the groundwork for strategic decisions and future business success. You can proactively respond to status changes on devices and serve your customers better. And, most importantly, you can make better decisions for the future because you know exactly how your products are being used.
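The aggregation step can be sketched as a simple join of device and usage records into per-customer insight. The record shapes and field names here are assumptions made for illustration:

```python
from collections import defaultdict

def aggregate_usage(devices, usage_events):
    """Join device records with usage events into per-customer insight.
    devices: list of {"device_id", "customer", "firmware"}
    usage_events: list of {"device_id", "hours"}"""
    hours_by_device = defaultdict(float)
    for event in usage_events:
        hours_by_device[event["device_id"]] += event["hours"]

    insight = {}
    for device in devices:
        row = insight.setdefault(
            device["customer"], {"devices": 0, "hours": 0.0, "firmware": set()}
        )
        row["devices"] += 1
        row["hours"] += hours_by_device[device["device_id"]]
        row["firmware"].add(device["firmware"])
    return insight
```

From a table like this it becomes straightforward to spot, say, a customer whose usage hours are climbing (growth potential) or whose fleet is stuck on old firmware (attrition or support risk).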
4. Monetize the full IoT stack while keeping your customers top of mind
Start monetizing the value of software and be smart about it. Build packages that can be consumed as a service, rather than monetizing components of your IoT offering differently or, in some cases, not at all. One thing’s for sure — recurring revenue models like subscription will outpace the more traditional models. With this trend, a close customer relationship is more important than ever. Know what your customers are using, measure the value they’re getting from it and act at the right time when you identify growth potential or attrition risk.
All you need to embark on this journey is a software monetization platform. So, what are you waiting for?
In today’s IoT world, two things are abundantly clear. First, with the number of IoT devices expected to grow dramatically in the coming years — reaching as many as 31 billion in 2018 alone — the potential business opportunity is extraordinary. Second, only the fittest of businesses will survive.
Want proof? Consider the sobering tale of one trucking telemetry firm that decided to employ IoT modules in its day-to-day fleet activities. The company exhaustively tested its system prior to implementation, but did so under the assumption that the cloud server would always be present. Once in operation, the cloud server failed. Within 24 hours, the IoT modules had consumed so much cellular airtime trying to find the server that the company couldn't afford to pay its network bill at the end of the month and was forced to close.
At every step in the IoT product lifecycle, missteps like these can occur that hamper a business’s ability to compete effectively in the marketplace or derail its efforts altogether. One misstep product makers commonly make is underestimating the true cost of their IoT device. It’s a mistake with profound consequences on a business’s brand, image and profitability.
Failures drive true device cost
Many think the cost of an IoT device is the price a user pays to own it. It's not. Its true cost also includes the price of maintenance and of finding and fixing errors and failures. If the IoT device is involved in processes of great value, as is the case with mission-critical IoT applications, then the cost of those errors and failures can be greater than the value of the device itself. A device selling for $10 could easily end up costing the product maker $1,000. That's not a sustainable business model under any circumstances.
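A back-of-the-envelope version of the $10-device example can make this concrete. The failure rate and per-failure cost below are invented figures, chosen only to reproduce the numbers in the text:

```python
def true_device_cost(unit_price, failure_rate, cost_per_failure, support_cost=0.0):
    """Expected lifetime cost of one device to the product maker:
    sticker price plus routine support plus the expected cost of field failures."""
    return unit_price + support_cost + failure_rate * cost_per_failure

# A $10 device with a 50% chance of one $1,980 field failure:
# 10 + 0.5 * 1980 = $1,000 expected true cost.
```

The lever product makers control is the failure rate, which is exactly why the testing investment discussed below pays for itself.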
Think of it this way: If a smartphone fails, the impact may affect one or two users. If mission-critical IoT devices fail, the impact may be much greater since those devices are often embedded in large, complex systems or in hard-to-reach locations.
Just imagine a medical alarm worn by an elderly patient that fails to operate on the very day it's needed to save the patient's life. And what if an implanted medical device fails and is recalled? It might only require a doctor's visit for a firmware update, but the faulty hardware could also have to be surgically removed and replaced. What about a failure in an IoT sensor that monitors temperature? It might cause a boiler to overheat, which is costly enough, but what if that sensor is used in an industrial cooler and its failure allows a dangerously high temperature that causes food contamination? The resulting costs could be substantial.
Reliability is priority one
Fixing these types of faults requires intervention from a technician. If the technician has to climb a ladder, go down an access hole or venture into a remote location or harsh environment, that fix can easily run to hundreds of dollars or more. For the product maker, these unforeseen expenses drive up the true cost of an IoT device, making it imperative that those devices be as reliable as possible.
How does a product maker ensure their IoT devices are reliable? In a smartphone, users play a role by serving as unpaid technicians (see the Figure). They dutifully reboot their devices when they fail to connect to Wi-Fi. They download software patches to fix problematic updates. And they replace their phones without fuss when failures become too much of a nuisance. When it comes to mission-critical IoT devices, which must be 10 to 100x more stable and reliable than the standard $1,000 smartphone, this level of quality and reliability is, quite simply, not good enough.
Eliminate hidden device cost with test
Avoiding these hidden costs and positioning a business to thrive in IoT requires one key thing of product makers and device manufacturers: test, and lots of it. IoT devices in mission-critical applications must be properly tested to ensure reliable performance, even if that testing is more expensive than the device itself.
The challenge is that IoT devices operate in very different environments. They may be mounted in the air, down a hole or against concrete or metal. To ensure they will work as expected, they must be tested after final assembly and under conditions that simulate real-world deployment. By measuring a device's important performance indicators under normal operation and with production-release software, product makers can catch potential flaws before they have the chance to drive up the IoT device's true cost.
Various approaches can be employed to perform this testing, such as using a golden radio, paired devices or parametric testers. If a less complex, more cost-effective system is desired, an IoT device functional tester can do the trick. Which approach is utilized will depend on the product maker’s specific situation and requirements; however, selecting one that features test automation software can be a dramatic time and cost saver.
The internet of things is ripe with opportunities for those businesses fit enough to overcome its challenges and sidestep its pitfalls. Understanding the true cost of an IoT device is one pitfall product makers can avoid by employing adequate manufacturing and development test to improve product reliability. It’s a smart investment for any business looking to make its mark in the rapidly growing IoT industry.
Ignore the scary “human vs. robots” headlines. The complex reality is actually one of mutual growth and gradual change.
Just over the horizon awaits an army of robots, standing motionless in endless columns, their metal heads gleaming in the moonlight. When the signal comes, they will march forward into our offices and factories, shove us out of our desks and workstations, and take our jobs from us.
That’s what it feels like when you read the news. You’d be forgiven for thinking that the rapid advances in robotics and AI are converging squarely on everything it means to be a productive member of society. When McKinsey estimates that “between 400 million and 800 million individuals could be displaced by automation and need to find new jobs by 2030 around the world,” it’s hard not to be scared.
Some perspective is in order.
Why robots seem so inevitable
Before you begin panicking about robots and AI, it’s important to understand why a business would choose to deploy a robot over a person.
Let’s take manufacturing, which is where my background lies, and which is often considered ground zero for automation.
You’ve probably read about the resurgence of American manufacturing. And you’ve also probably read about the labor shortage faced by American manufacturers. It may come as a surprise to people who are worried about automation, but humans remain absolutely essential to manufacturing. In fact, Boston Consulting Group estimates that up to 90% of manufacturing tasks are still performed by humans. The exact number varies within manufacturing industry verticals, but it’s safe to say that humans are the greatest contributor of value in the factory.
Humans are, unfortunately, also the factory’s greatest contributor to process variability.
So, today’s manufacturers don’t see the problem as one of having too many humans. The real challenges are high turnover, labor recruiting shortages, long training times and worker safety risks — and the unpredictability that these factors create within the value chain. Manufacturing operates on extremely tight timelines (remember kanban?) and low margins; unpredictability is unwelcome because unprofitability often follows.
But if it were as easy as summoning a horde of robots to kick all the people out, it would have been done by now. The truth is that, even with robotics and AI advancing at extreme rates, there are some very physical limitations on robot proliferation. I’ve outlined three of them below.
Robots aren’t rabbits; they multiply very slowly
A robotic system is a very complex manufactured good. Ask Elon Musk about how easy it is to scale up production of a complex manufactured good, even when incentivized by staggering demand. It’s not.
The International Federation of Robotics expects the global industrial robot population to increase from 1.8 million in 2016 to 3 million by 2020. Sound like a large number? Well, according to Goldman Sachs Research, there are more than 340 million people working in manufacturing worldwide. So, the robot population is actually growing at a fairly slow rate relative to the perceived demand.
If economists Daron Acemoglu and Pascual Restrepo are correct that each robot replaces 5.6 workers, the projected growth in robots sounds alarming at first glance. But the workers displaced by that growth would amount to less than 2% of the global manufacturing workforce.
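The arithmetic behind that "less than 2%" figure follows directly from the numbers cited above:

```python
# Rough arithmetic using the figures cited above: the IFR's robot
# projections, the Acemoglu & Restrepo displacement estimate, and
# Goldman Sachs Research's global manufacturing workforce figure.
new_robots = 3_000_000 - 1_800_000      # projected robot growth, 2016 -> 2020
displaced = new_robots * 5.6            # workers replaced per robot
share = displaced / 340_000_000         # global manufacturing workforce
print(f"{displaced:,.0f} workers displaced, {share:.2%} of the workforce")
# prints "6,720,000 workers displaced, 1.98% of the workforce"
```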
The market demands skills that robots don’t have
Henry Ford would have killed to have a few good robots in his factories. His was the ideal use case for robots: a high-volume, low-mix product that rarely changes and has very few variations.
Unfortunately (for robots), today’s manufacturing environments are driving toward the exact opposite trend: from mass production to “mass customization.” Lot sizes of one. Here, the manufacturer strives to deliver flexible and even personalized products without incurring the high unit costs associated with artisanship. The best example comes from Ford: 100 years ago, Henry Ford said, “Any customer can have a car painted any color that he wants so long as it is black.” Today, the Ford F-150 has a staggering 4,147,200 build combinations — or two billion, depending on what you count as a variation.
This kind of environment is precisely where robots struggle. “Machines excel in highly repeatable, high-volume operations,” said Peter Marcotullio, vice president of commercial R&D at SRI International. “Unfortunately for machines, the trend in manufacturing is for mass customization — small production runs, more process variations, constantly changing components — which is very hard to automate because of the intrinsic upfront costs of getting a flexible robotic system tooled and programmed. But it’s very easy for a person to adjust on the fly. People are more flexible and can learn faster than machines.”
Behind every robot is a cadre of … humans
Factories operate on a very delicate cadence, marching to the beat of the takt time. The process engineers know exactly how fast a person is supposed to work, and have designed complex systems to ensure that raw materials and components reach every workstation in exactly the right rhythm to ensure the operator never has to stop and look for parts.
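Takt time itself is a simple ratio: the production time available divided by customer demand. The shift figures below are illustrative:

```python
def takt_time(available_seconds, units_demanded):
    """Takt time: the seconds each unit may take if the line is to meet demand."""
    return available_seconds / units_demanded

# A 7.5-hour productive shift (27,000 seconds) that must yield 450 units
# gives a takt time of 27000 / 450 = 60 seconds per unit.
```

Every workstation, human or robotic, has to keep pace with this number, which is why dropping a faster robot into one station forces the material flow around it to be redesigned.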
If a robot is to work faster than the person it replaces, then the materials flow around that workstation needs to be rebuilt as well. A factory is a complex system, and any change at one node cascades throughout all the adjacent nodes.
That’s why every robot requires an ecosystem of programmers, process engineers and skilled technicians just to get started, and far more significant redesigns to reorient the process around the robot’s natural advantages. The second-order effects cascade through the supply chain, which requires people to address — who, as I’ve already mentioned, are actually the scarcest resource in any factory.
The real way this movie plays out
Everyone loves a good invasion story. Robots, aliens, body snatchers — that’s what fills movie theaters and sells newspapers (or, in this day and age, attracts clicks).
In reality, I expect the relationship between people, robots, AI and future unknowable technologies to continue the same pattern we’ve seen in 200 years of technological change: Some jobs are made redundant, some jobs are enhanced and many new jobs are created.
There’s no doubt our society will reorient and be reoriented by new technology. But the invading robot armies will remain the subject of fiction for a long time to come. The real story — which will be exciting to historians but less so to moviegoers — is one of humans and machines, not humans versus machines.
Our hyper-connected world has created a complex and ever-changing landscape that many IT teams are struggling to navigate. With continuous everything becoming the new norm, every aspect of application development needs a rethink. This includes testing, which needs a complete overhaul in order to avoid delays in deployment, updates and user acceptance.
IoT has accelerated the complexity of systems, with many now consisting of an array of data systems, devices and apps. It is clear that testing in its current state is not ready for this “smart new world.”
Below are five steps that DevOps teams need to take to deliver smart IoT testing today:
Automate the entire testing process
Manual testing can’t be the foundation of test strategies with IoT, due to the complexity and variation of products and services. Automating test execution alone doesn’t suffice: the entire testing process, from creation through to analysis, needs to be automated. This requires intelligent models to auto-generate tests, with AI, machine learning and analytics allowing DevOps teams to analyze data from testing and identify the patterns behind bugs.
Test the product, not the code
Teams need to reorient from focusing on the code to focusing on the product and the actual user experience. With IoT, products and services are composed of multiple technologies from an array of vendors, so user experiences span products from an array of vendors, too. Testing is no longer a compliance function: you have to move beyond testing the code, because smart IoT only magnifies the gap between testing the code and testing the product.
Test channel consistency
Products and services are now accessed through a range of interfaces, including mobile, web, voice interfaces and on-device screens. The system also could be interacting with other products via APIs. If, for example, different information about where an item is located is provided depending on the interface, this will create confusion and result in errors. DevOps teams testing IoT systems need to ensure users receive a consistent view of the service regardless of the interface used. So, it’s essential with smart IoT systems to test for channel consistency.
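A channel-consistency check can be expressed as a test that queries every interface for the same item and fails on any disagreement. The fetchers here are stand-ins; in a real suite each would call one interface (mobile API, web API, voice backend, on-device screen):

```python
def check_channel_consistency(fetchers, item_id):
    """fetchers maps channel name -> callable(item_id) -> reported value.
    Raises AssertionError if any two channels report different values."""
    results = {name: fetch(item_id) for name, fetch in fetchers.items()}
    if len(set(results.values())) > 1:
        raise AssertionError(f"Channels disagree for {item_id}: {results}")
    return results
```

Run against the item-location example above, such a test would catch the web interface reporting a different aisle than the voice interface before users ever see the discrepancy.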
Converge testing and monitoring
By converging testing and monitoring, DevOps teams bring the user into the automated testing process, so testing is not just product-focused, it’s really user-focused.
Teams observe what users actually care about, what impacts their productivity, what impacts their effectiveness and what impacts their sentiment. They can then use this to determine whether the test was a pass or failure for a smart IoT product. IoT is often about bringing technology deeper into our lives, and user acceptance is critical; therefore bringing the user into testing is even more critical in IoT than elsewhere.
Get ready for load testing
As IoT continues to gather steam, DevOps teams need to start thinking about load testing. Many companies add IoT technologies to their existing IT infrastructure, but very few are ensuring their infrastructure can handle the resulting surge in data. Load testing is an essential preventative measure that DevOps teams must undertake to ensure their network can cope with the explosion in data volumes without impacting the product or user experience.
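A minimal load-test harness just fires simulated device messages at an ingest handler concurrently and measures throughput. The handler below is a stand-in for a real ingest endpoint, and the latency figure is an assumption:

```python
import concurrent.futures
import time

def handle_message(payload):
    """Stand-in for a real ingest endpoint; sleeps to simulate processing latency."""
    time.sleep(0.001)
    return len(payload)

def run_load(n_messages=200, workers=50):
    """Send n_messages concurrently and return (messages handled, elapsed seconds)."""
    start = time.perf_counter()
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(handle_message, [b"telemetry"] * n_messages))
    elapsed = time.perf_counter() - start
    return len(results), elapsed
```

Dedicated tools (JMeter, Locust, k6 and the like) do this at far greater scale, but even a sketch like this makes the question concrete: how does throughput degrade as message volume grows?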
By adopting these five steps, DevOps teams will be able to deliver smart IoT testing, ensuring that the digital experience delights.