Across America, the autonomous vehicle trend continues to accelerate. Driverless cars are hitting the streets of California, and several other states are rolling out regulations for autonomous vehicle testing. Competition is fierce as automakers and tech companies race to build the perfect machine. But in March 2018, a self-driving vehicle operated by Uber struck and killed a pedestrian in Arizona, in what is thought to be the first pedestrian fatality involving an autonomous vehicle. It’s a grim reminder that driverless tech is still in its early stages and there is much work left to be done.
However, some numbers paint a positive outlook for a self-driving future. One report claims that autonomous vehicles (AVs) could decrease accidents in the U.S. by a whopping 90%, saving thousands of lives and up to $190 billion annually. Recently, Esurance explored the ways in which data from today’s smart cars will help influence the development of tomorrow’s self-driving technology.
Once this tech proliferates throughout the mass markets, we should expect to see a decrease in the number of accidents. That is just a starting point to a much brighter and more efficient future, however. Along the way, it is important to understand how the next generation of AVs will interact with each other (and us) to keep drivers safe on the roads.
A quick overview of AVs
Autonomous vehicles need three things in order to function properly:
- An internal GPS system;
- A sensor system that recognizes complex road conditions; and
- A computing system that reads information from the previous two systems and transforms it into action.
AVs come equipped with all sorts of cutting-edge technology to help these systems work together — including cameras to see their surroundings, radar to allow for advanced sight (for example, to navigate through unfavorable weather conditions or in the dark) and laser sensors (lidar) that can detect objects down to the millimeter. These features, along with incredibly powerful internal computers, are what get you from point A to point B in an AV.
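As a toy illustration of how a computing system might combine those sensor systems, here is a hedged Python sketch that merges distance estimates from camera, radar and lidar into one confidence-weighted figure. The readings and weights are invented for the example; real AV stacks use far more sophisticated probabilistic fusion (Kalman filters and the like).

```python
# Illustrative sketch, not production AV code: fuse per-sensor distance
# estimates for one detected object into a single figure.

def fuse_distance_estimates(readings):
    """Combine (distance_m, confidence) pairs into one
    confidence-weighted distance estimate."""
    total_weight = sum(conf for _, conf in readings)
    if total_weight == 0:
        raise ValueError("no usable sensor readings")
    return sum(dist * conf for dist, conf in readings) / total_weight

# Invented example: lidar most trusted, camera least (weights are assumptions)
readings = [
    (25.4, 0.9),  # lidar: millimeter-level accuracy
    (25.9, 0.7),  # radar: robust in bad weather and darkness
    (27.0, 0.4),  # camera: degraded in low light
]
fused = fuse_distance_estimates(readings)
```

Weighting each sensor by a confidence score is the simplest way to express the idea that lidar, radar and cameras complement one another under different conditions.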
How AVs communicate
Along with being aware of their surroundings, AVs must be able to “talk” with other vehicles. Vehicle-to-vehicle (V2V) communication helps cars share data with other nearby cars, including overall status and direction, such as their braking status, steering wheel position, speed, route (from GPS and navigation systems) and other information like lane changes. This tells neighboring vehicles what’s happening around them so that they can better anticipate hazards that even a careful driver or the best sensor system would miss. This data can also help an AV “see through” another car or obstruction by sending the same sensor data between vehicles. Soon, your car will be able to see over the vehicle in the left lane that is blocking your view as you try to turn right onto a busy street.
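As a rough sketch of what such a broadcast might carry, the snippet below models a V2V status message with the fields mentioned above. The schema and field names are invented for illustration; real deployments use standardized message sets such as the SAE J2735 Basic Safety Message.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical V2V status broadcast; field names are illustrative only.

@dataclass
class V2VStatus:
    vehicle_id: str
    speed_mps: float           # current speed
    heading_deg: float         # direction of travel
    braking: bool              # braking status
    steering_angle_deg: float  # steering wheel position
    lane_change: str           # "none", "left" or "right"

def encode_broadcast(status: V2VStatus) -> str:
    """Serialize the status for broadcast to nearby vehicles."""
    return json.dumps(asdict(status))

msg = encode_broadcast(V2VStatus("veh-42", 13.4, 270.0, True, -2.5, "none"))
```

A neighboring vehicle decoding this message would know, for instance, that the sender is braking, which lets it anticipate the hazard before its own sensors see it.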
There’s also vehicle-to-infrastructure (V2I) communication, which allows cars to understand and connect with various road infrastructure. This includes traffic lights, lane markings, road signs, construction zones and school zones. Imagine if your car could alert you of a traffic jam or a sharp curve long before you came into contact with it — that will be a reality with V2I technology. In a fully autonomous world, these data sets will combine to help your car find the safest, most efficient route to your destination in real time.
V2V or V2I: An interim solution?
Although driverless vehicles could one day rule the roads, we’re still several years away from this reality. The technology isn’t quite there yet, and consumers are not completely ready to take their hands off the wheel. However, car-to-car communication could deliver real-time safety benefits much sooner. And government officials realize this — in 2017, the Federal Highway Administration issued V2I guidance aimed at improving mobility and safety by accelerating the deployment of V2I communication systems. It’s certainly a step in the right direction for emerging technologies.
How self-driving cars can become a reality
A fully autonomous future isn’t out of the question, but it will require extensive collaboration from many different parties to see it through to fruition. Automakers and tech companies must build safe and reliable products that are virtually fail-proof before consumers can begin to trust the technology. City officials should consider smart road infrastructure to help vehicles better anticipate issues and communicate with each other. Local and federal policymakers need to create laws and regulations to protect our safety. Most importantly, these parties will need to gain the trust and confidence of the American public.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
Analytics and intelligence play a key role in the overall IoT system, where collected data is turned into information and augments business decisions with actionable insights. Data collected from IoT devices can help enterprises reduce maintenance costs, avoid equipment failure, improve business operations and perform targeted marketing. Synchronizing an enterprise IoT ecosystem with core engines, such as artificial intelligence, machine learning, predictive analytics and digital twins, will be the key to unlocking the potential of IoT at the enterprise level.
IoT is all about the data flowing between devices, gateways and central platforms. According to Gartner research, there will be 25 billion things connected to the internet by 2020. The amount of data generated by these things will be enormous, and finding useful information in that pile will be nothing less than finding a needle in a haystack. This is where artificial intelligence will play a big role: filtering that huge mass of data into intelligent, business-friendly insights.
One of the top use cases for the IoT and AI combination is security. Artificial intelligence can learn the regular access patterns in a vulnerable environment, helping security control systems detect anomalies and avoid security failures.
It’s often necessary to identify correlations between a large number of sensor inputs and external factors that are rapidly producing millions of data points. Given the frequency at which IoT devices generate data, a computing technique that can make the best use of this information becomes essential. Machine learning fits this need: it works on huge amounts of historical data to produce cognitive decisions. Thus, the combination of IoT with machine learning becomes a great enabler of business optimization.
Predictive analytics and maintenance can create huge impacts on business economics using IoT data. In the consumer segment, predictive analytics facilitates automated replenishment of consumables. In manufacturing facilities, failures and downtimes of machinery can be prevented using predictive analytics based on IoT data, allowing enterprises to mitigate the damaging economics of unplanned downtime. Industry estimates suggest predictive analytics can reduce maintenance costs by 30-40%.
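As a simplified illustration of the predictive-maintenance idea, the sketch below extrapolates a linear trend in a hypothetical vibration reading to estimate how long until a machine crosses a failure threshold. The sample values and threshold are invented; real systems use far richer models than a straight-line fit.

```python
# Minimal predictive-maintenance sketch: least-squares line through
# (hour, reading) samples, extrapolated to a failure threshold.

def hours_until_threshold(readings, threshold):
    """readings: list of (hour, value) samples. Returns estimated hours
    from the last sample until the fitted trend crosses `threshold`,
    or None if the trend is flat or decreasing."""
    n = len(readings)
    mean_x = sum(x for x, _ in readings) / n
    mean_y = sum(y for _, y in readings) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in readings)
             / sum((x - mean_x) ** 2 for x, _ in readings))
    if slope <= 0:
        return None  # no upward trend, no predicted failure
    intercept = mean_y - slope * mean_x
    crossing_hour = (threshold - intercept) / slope
    return max(0.0, crossing_hour - readings[-1][0])

# Vibration rising about 0.1 units/hour toward a threshold of 5.0
samples = [(0, 2.0), (10, 3.0), (20, 4.0)]
eta = hours_until_threshold(samples, 5.0)  # about 10 hours after the last sample
```

Scheduling a service visit inside that estimated window, rather than after the failure, is where the unplanned-downtime savings come from.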
Using digital replicas of physical entities and systems as part of an IoT architecture allows organizations to run simulations and adjust their ecosystems as needed. Digital twins can generate predictions and insights into the operations of machines, which are most useful when embedded in existing business processes. Triggering appropriate remedial business processes and workflows is one of the main purposes of digital twin projections.
3D face recognition systems are poised to deliver fast, accurate and secure authentication on mobile devices and other applications. By pairing today’s tiny, accurate 3D cameras with powerful AI software, we are entering a new era in providing secure access to smartphones, laptops and tablets. While these two elements are key to the adoption of 3D face recognition for authentication, there is another aspect that, in my opinion, too often gets overlooked.
The fact of the matter is that to deliver fast and accurate 3D face authentication, you need to have a great deal of very diverse data. A huge repository with lots of similar data is not going to cut it.
Big is not always best
One of the major factors that too often gets overlooked when tech companies are developing biometric-based technologies — especially ones that use face recognition for authentication — is the fact that we are all now operating in a global business community. A quite telling article in the New York Times not long ago by Steve Lohr basically decried the current state of mobile face recognition. The reporter’s key takeaway on the present situation was that it works well if you are a middle-aged white male.
Given that the world is flat and continues to get flatter (great book by Thomas Friedman, btw), this is not an acceptable situation. People on the planet have faces of all different shapes, sizes and colors. Plus, tech research firm Gartner predicts that by 2021, 40% of smartphones will be equipped with 3D cameras. Add to this the fact that the number of mobile phone users reached an estimated 4.77 billion in 2017 and is forecast to pass the five billion mark by 2019. The net impact of inconsistent, face-based authentication could be significant, as people increasingly expect to easily access their devices using this approach.
Lots of diverse data to the rescue
Having worked in this space since 2006, I have to say that the inconsistent performance of both standard 2D and the new 3D face authentication is too often the result of simplistically defined or poorly curated databases. This is one of the reasons why face authentication has struggled to achieve mainstream acceptance.
Delivering accurate, consistent and fast 3D face recognition requires the factors I mentioned above: exploiting today’s tiny, accurate cameras and using advanced AI algorithms to capture, manage and rationalize the torrent of data generated. For perspective, today’s state-of-the-art 3D cameras capture 30,000 to 40,000 data points every time they scan a face — but that is not all.
Many companies claim that because they have a large database, their technology is going to be accurate. That is, frankly, inaccurate. More important than size is the diversity of the database for the specific intended use cases. Millions don’t matter; a database can in fact be statistically significant with as few as 250 persons if the appropriate factors are captured.
In order to deliver an acceptable level of 3D accuracy, the data needs to:
- Include people of different races and different genders, with and without glasses;
- Represent people in different positions;
- Be acquired over long periods of time;
- Be captured with the same types of cameras used for recognition on various end user devices; and
- Represent expected use case environments — for example, will people be only sitting in a well-lit office or might they be lounging on the beach in bright sunlight?
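As a rough illustration of auditing for this kind of diversity, the sketch below flags underrepresented attribute combinations in a hypothetical dataset manifest. The attribute names, toy data and 10% threshold are all invented for the example; a real audit would cover many more factors (pose, capture device, lighting and so on).

```python
from collections import Counter

def underrepresented_groups(manifest, keys, min_share=0.05):
    """Return attribute combinations whose share of the dataset falls
    below `min_share`, mapped to their actual share."""
    counts = Counter(tuple(record[k] for k in keys) for record in manifest)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()
            if n / total < min_share}

# Toy manifest: 18 light-skinned subjects without glasses, plus one
# dark-skinned and one light-skinned subject with glasses
manifest = (
    [{"skin_tone": "light", "glasses": False}] * 18
    + [{"skin_tone": "dark", "glasses": True}]
    + [{"skin_tone": "light", "glasses": True}]
)
flagged = underrepresented_groups(manifest, ["skin_tone", "glasses"],
                                  min_share=0.10)
# flags ("dark", True) and ("light", True), each at 5% of the data
```

A check like this, run before training, surfaces exactly the imbalance the New York Times article described: a database dominated by one demographic.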
My recommendation has always been to focus on quality and diversity of data rather than just the size of the database. Over the past 12 years, our team has been able to generate and use a database of millions of people. More importantly, it includes images of users of different races from all over the world acquired with cameras and in environmental conditions and poses representing how they are actually using the technology in real-world situations.
I encourage organizations interested in exploiting the power of 2D or 3D face recognition and authentication to be sure their proposed approach has access to a meaningful and diverse data set. It will ensure consistent results and, in turn, help drive overall customer satisfaction when deploying a face authentication system.
After years of evangelization and waiting for the promises of the internet of things to come true, it seems that we are finally close to reaching the trough of disillusionment, the phase in which we begin to forget all the hype generated so far and focus on reality — a harsh reality that involves selling IoT, not selling hype.
The time to sell IoT is now
The sale of IoT is perhaps more complex than the sale of other disruptive technologies, such as big data, cloud or AI, and maybe as complex as blockchain today. In the article “Welcome to the first ‘selling IoT’ master class,” I discussed how M2M and IT technology vendors should evolve to sell IoT. However, many of these companies still have difficulty training and finding good IoT sellers.
The truth is that nowadays it does not make any sense to sell IoT as a technology. Enterprise buyers only want to buy technologies that provide measurable business outcomes while, on the other side, many IoT vendors want to sell their portfolio of products and services that have been categorized under the umbrella of IoT either as quickly as possible or at the lowest possible cost.
During the last five years, I have been analyzing how IoT companies sell their products and services. Some of my customers, including startups, device vendors, telco operators, platform vendors, distributors, industry application vendors and system integrators, have asked me to create IoT sales material to train their sales teams on selling their IoT solutions and services. Sometimes I also help headhunters or customers search for IoT sales experts.
Based on this varied experience, I have launched “IoT sales workshops” to help companies train their internal teams to sell IoT. Here are some of the lessons I learned:
- There is a time to act as an IoT sales generalist, and a time to act as an IoT specialist.
- You need to adapt IoT storytelling based on your audience.
- Being an IoT expert is not synonymous with being successful in selling IoT.
- You need to show how companies can get more out of IoT by solving a specific business problem.
- Make it easy for the customer to see the benefits of your IoT product or IoT service, as well as the value it adds.
- Given the complexity and specialization of IoT by vertical, explain the need to focus more closely on business cases and IoT business models, as well as the ROI over three to four years before jumping into technology.
- You need to be patient; IoT selling is not easy and takes time to align strategy and business needs with the IoT products and services you are selling.
- Build a strong ecosystem, and make adoption of end-to-end IoT systems easier by collaborating with partners.
- Train IoT business and technical experts to get better at telling stories. Design a new marketing and sales communications playbook. Keep it simple. Build your narrative from the foundation up — one idea at a time.
- If you want an IoT sales expert, you need to pay for one — don’t expect miracles from external sales agents working on commission.
- IoT sales is a full-time job. You will not have time for other enterprise activities.
- Selling IoT to large enterprises takes teamwork.
- Be persistent. Do not expect big deals soon.
- Be passionate, be ambitious and be disruptive to sell IoT.
I do not consider myself an IoT sales expert, nor a superman of sales. In fact, I have shied away from classifying myself as a salesperson, even though over time I have come to appreciate the weight and value of work that once seemed demeaning to me.
Selling IoT is not easy. In a few years, we will forget about IoT and sell newly hyped technologies. But in the meantime, you need to be prepared for disillusionment: long sales cycles and a lot of work, sometimes with poor results. However, maybe by 2020, if you persevere, you will be recognized as a “best IoT sales expert” and will finally earn a lot of money.
Just remember: Be persistent, be passionate, be ambitious and be disruptive to sell IoT.
The history of technology in the trucking industry has had many watershed moments. The advent of diesel engines, which had been mainly used in large stationary operations, brought a large increase in hauling power and fuel efficiency. In the 1970s, electronics began revolutionizing how those engines were controlled, again boosting their efficiency.
Today, this $700 billion industry is once more on the cusp of a technological revolution. This time it’s the industrial internet of things that is poised to drive dramatic improvement in asset performance and utilization.
One of the biggest promises of IIoT is predictive maintenance. But IIoT success requires more than just implementing technology like sensors and telematics — it requires that the technology is integrated into a connected process that aggregates, organizes, filters and prioritizes data into a usable format. There is also significant value in IIoT beyond predictive maintenance — ranging from improving the service process, maximizing the lifetime value of an asset and optimizing purchase and disposition decisions based on real data.
Connecting assets is as easy as installing vehicle telematics systems to get data regarding the condition and operation of the asset. But connecting that data with the management of that asset means connecting assets, service locations, fleet management systems, diagnostic tools, OEM data, call centers and remote service providers. This means that all the related parties in the service supply chain are able to share and collaborate, which drives dramatic gains in asset performance and uptime. Collectively using the data from vehicles to inform and enable an improved approach to service management ensures that the anticipated value of IIoT will be realized.
Today’s IIoT reality: Disconnected data, little service value
The reality for many fleets is that commercial vehicle service and repair operations are still hampered by disconnections. Management information systems are siloed and unable to communicate with each other. Stakeholders, including fleets, service providers and OEMs, are unconnected and forced to make do with data that is incomplete, incorrect, inconsistent, unstructured or out of date. This is the challenging reality of early IIoT implementations for many organizations — a flood of unusable data that hampers the service process rather than speeds it up.
How can fleets, dealers, service providers and OEMs enable IIoT to live up to its potential? They can start by fixing the disjointed ecosystem that currently exists and improving the information sharing and collaboration that is essential to enhancing efficiencies and uptime. This requires thinking about IIoT in the context of a service relationship management (SRM) tool. SRM links all the stakeholders involved in service management, ensuring the right people get the right information, in the right place and at the right time.
Realizing IIoT’s promise: Lower downtime, optimized service processes
There already is evidence that IIoT adoption can positively impact asset performance and service and repair operations. The diagnostic and analytic capabilities enabled by IIoT provide great intelligence to service teams when it is available at the point of service. In commercial vehicles, remote diagnostics allow for data collection both as an issue is reported and before it occurs. This leads to greater accuracy and the ability to provide proactive support. Connected vehicle data also provides for more effective root-cause analysis and enables more robust case-based reasoning, helping address downtime, fuel efficiency, maintenance costs and service optimization issues.
One commercial truck manufacturer already equips all its vehicles with remote diagnostics that provide around-the-clock monitoring and detailed analyses of critical fault codes, enabling identification of emergent issues for action, improvements in planning for repairs, and streamlined diagnostic and service procedures. On top of this, the manufacturer has been investing time and resources into building a consistent, measurable process to improve uptime for its customers and dealers through a combination of IIoT deployment and SRM implementation — and has integrated its telematics and analytics tools into the SRM platform. The results speak for themselves: Service providers in the program have seen reductions in both diagnostic time and service event downtime (36% and 29% less, respectively).
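To make the fault-code flow concrete, here is a hedged sketch of triaging telematics events into service cases. The codes, severities and case fields are invented for illustration and are not any actual OEM fault-code standard or SRM API.

```python
# Hypothetical critical fault codes (invented for this example)
CRITICAL_CODES = {
    "ENG-021": "coolant overtemperature",
    "BRK-104": "brake air pressure low",
}

def triage(fault_events):
    """Turn a stream of (vehicle_id, code) telematics events into service
    cases for critical codes, deduplicated per vehicle."""
    cases, seen = [], set()
    for vehicle_id, code in fault_events:
        if code in CRITICAL_CODES and (vehicle_id, code) not in seen:
            seen.add((vehicle_id, code))
            cases.append({"vehicle": vehicle_id, "code": code,
                          "summary": CRITICAL_CODES[code]})
    return cases

events = [("truck-7", "INF-001"),  # informational, ignored
          ("truck-7", "ENG-021"),
          ("truck-7", "ENG-021"),  # duplicate, ignored
          ("truck-9", "BRK-104")]
cases = triage(events)  # two cases: truck-7/ENG-021 and truck-9/BRK-104
```

Filtering and deduplicating at this stage is what keeps the "flood of unusable data" described earlier from reaching service teams as noise.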
SRM as the on-ramp to IIoT promise
Incorporating asset data in an SRM platform can provide real-time transparency and visibility and enable connected service technology that maximizes uptime and reduces costs across an entire service ecosystem. Further, when advanced data analytics are layered on top of high-quality, consistent data delivered by IIoT, it can provide insights that inform service process improvement for stakeholders across entire service supply chains.
An SRM tool unifies those supply chains to improve uptime, ensure consistent network-wide service delivery, create and strengthen customer relationships, reduce warranty and support costs, and lower goodwill expenses. Using an SRM platform addresses the four C’s of effective service event management:
- Connectivity that facilitates seamless data flow between assets, service points, OEMs and fleets;
- Communication that enables contextual information sharing and collaboration at the point of service;
- Controls that provide tools to reduce risk, increase efficiency and improve decision-making; and
- Consistency across service networks and repair processes, including shared service histories for real-time decision support and post-event reporting that drives accountability and process improvement.
The commercial trucking industry is a critical part of the U.S. economy — the American Trucking Associations estimates it moves about 71% of the nation’s freight by weight. Using IIoT to improve its efficiency promises to spread beneficial effects throughout supply chains for industries as diverse as food production, manufacturing and retailing — all while improving the bottom line and building strong customer relationships.
According to a 2017 McKinsey survey, 92% of senior enterprise stakeholders think the internet of things will generate a positive impact by 2020. Yet, more than half of companies use 10% of their IoT data or less.
The gap is likely due to the fact that stakeholders at many firms do not completely understand how smart logistics will strategically impact their businesses and provide ROI. Understanding how real-time location services (RTLS) can effect positive change is crucial to driving your business into the top percentile of your industry.
RTLS products have not only become better at addressing supply chain visibility as a whole, but have also become cost-effective. Hybrid IoT real-time locating systems that combine the capabilities of GPS, GSM, Bluetooth and Wi-Fi help you monitor at the package level and cost a fraction of what traditional RFID technologies do, sometimes as low as $1 per tag per month and $30 per gateway.
According to an ABI Research report called “Next-Generation Asset Tracking and RTLS Opportunities, Applications and Revenue,” a 1,000-unit active RFID system, including both software and hardware, can cost up to $39,100. By comparison, the cost to implement the same system using Bluetooth is only $10,890, a savings of roughly 72% with a Bluetooth beacon system.
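Working through the report’s figures makes the comparison easy to verify:

```python
# Cost comparison using the ABI Research figures cited above
rfid_cost = 39_100       # 1,000-unit active RFID system, hardware + software
bluetooth_cost = 10_890  # the same system built on Bluetooth beacons

savings_pct = (rfid_cost - bluetooth_cost) / rfid_cost * 100  # roughly 72%
```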
In short, RTLS is poised to bring a “new logistics order” with a world of benefits including:
1. Shift in the balance of control
Logistics providers had near absolute control before the advent of shipment tracking and monitoring technologies, and clients had nothing more than a foggy, filtered view of the condition, handling and, more importantly, current location and ETA of their shipments. Real-time location tracking coupled with condition monitoring will change the relationship between shippers and their logistics partners, giving shippers unfiltered insight into day-to-day operations and the upper hand in service level agreement (SLA) enforceability.
2. Customized benchmarks
Produce shipments don’t need the same stringent precautions or scrutiny as pharmaceuticals, and there’s no need to spring for a premium shipping service if goods can get by with something thrifty that still minimizes risk. Real-time location services, coupled with localized contextual information like ETAs, likely overheads, current local demand for inventory and measurable supply chain risk, give shippers and logistics companies alike an analytical tool that can help them agree on realistic expectations and compensation, keeping both parties happy.
3. Better supply chain resilience
The biggest advantage of real-time location services is the potential to improve supply chain resilience, especially in the face of growing market turbulence. Political unrest, labor shortages, shifting consumer demand — these are just a few things that could force changes in shipping patterns and logistics strategies. Contingency plans and real-time situational awareness of current disruptions and the options available to resolve them (knowing which shipment in transit can make the delivery window for another that’s delayed, for instance) can help minimize service disruptions proactively.
4. Improvement on the go
Most organizations are wary of sweeping changes to operations, even if they’re well-intentioned. Location-based dynamic route optimization systems have the potential to change things for the better, one turn at a time. Such a system can, for instance, measure the stops a driver must make against variables like the window of delivery, route congestion, possible deviations and fuel consumption, continually suggesting different — but more efficient — routes that may not have been evident to the untrained eye. Iterative improvements over time have a greater impact than sweeping procedural changes fraught with risk.
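As a toy example of that kind of trade-off, the sketch below scores candidate routes by combining fuel cost with a penalty for missing the delivery window. The prices, weightings and route figures are invented for illustration, not a real routing engine’s cost model.

```python
# Toy route scoring: fuel cost plus a lateness penalty, all figures invented.

def route_cost(route, fuel_price=4.0, per_minute_delay_cost=2.0):
    """Combine fuel and delivery-window risk into one comparable cost.
    route: dict with 'fuel_gal', 'eta_min' and 'deadline_min'."""
    fuel = route["fuel_gal"] * fuel_price
    lateness_min = max(0, route["eta_min"] - route["deadline_min"])
    return fuel + lateness_min * per_minute_delay_cost

candidates = [
    {"name": "highway", "fuel_gal": 12.0, "eta_min": 95, "deadline_min": 120},
    {"name": "surface", "fuel_gal": 9.0, "eta_min": 130, "deadline_min": 120},
]
best = min(candidates, key=route_cost)
# "highway" wins: avoiding the late delivery outweighs the extra fuel
```

Re-running a score like this as traffic and deadlines change is the "one turn at a time" improvement described above.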
5. Promises become guarantees
Logistics companies have gradually changed tactics; they no longer over-promise without worrying about under-delivering. Instead, they’d rather under-promise so it’s easier to over-deliver. While that helps save face in an industry where service disruptions are inevitable, the underplayed capability makes it harder for logistics companies to win over new business. With the help of location-based logistics performance analytics, better predictability due to real-time data-driven operational optimization, and the assurance that disruptions can be contained and managed, IoT-enabled smart logistics companies can make bigger and bolder service promises without second-guessing their ability to meet them.
6. Cohesive working culture
With real-time shipment data easily available and shareable, everyone’s on the same page. This results in fewer arguments over SLA violations, better transparency about genuine incidents that were beyond a transporter’s control and more accountability when avoidable service disruptions do occur.
It’s important to understand that real-time shipment location data isn’t just feedstock for machine learning; it’s a powerful tool, depending on what a business does with it, of course. It is location monitoring combined with other contextual data from IoT-enabled monitoring systems and APIs that can help deliver better results, drive improvements in operational efficiencies and make logistics companies — and their clients — more competitive.
The internet of shadowy things sounds like a shady place to be, and it is.
In the past, shadow IT was a nightmare for most enterprises — it was known for being outside of IT’s control, with a plethora of security issues. However, with the influx of mobile within the enterprise, this mindset has shifted. Shadow IT is now seen as a signal of where productivity gains can be found; in other words, it’s all about tapping into innovation, securely.
It’s still tempting to go back to the traditional IT playbook, fear the technology entirely and “just say no.” This happened with Wi-Fi in the late ’90s and with iPhones in the late ’00s. But new IoT devices could be the source of real business value. Connected refrigerators seem silly until they help drive both revenue and productivity in a market like pharmaceuticals. IP cameras can help coordinate first responders in emergencies by providing real-time video that improves situational awareness. Digital media players can provide immersive experiences for consumers in retail by ensuring that relevant content is displayed to them in any store, anywhere in the world. These are just a few real-world IoT examples in use today.
A recent report noted, “Our IoT world is growing at a breathtaking pace, from 2 billion objects in 2006 to a projected 200 billion by 2020 — that will be around 26 smart objects for every human being on Earth.” There is no doubt that IT organizations will be quickly overwhelmed.
The answer is to develop the building blocks that let organizations say “yes” to the internet of shadowy things.
There are five tips for tackling the internet of shadowy things:
- Segment the network: Users will bring new devices onto the network that organizations likely don’t want to connect to critical infrastructure. It’s time to add a couple of new SSIDs and VLANs to the network. Some might already have a guest network in place that provides internet connectivity while blocking access to enterprise resources and that’s a start, but IoT devices may need access to some enterprise resources whereas guests need none. IT can decide over time what resources are accessible on the IoT network. Ultimately, an IoT network fits somewhere between the outright-trusted enterprise network and what organizations use for guests.
- Think seriously about PKI and NAC: Organizations don’t want users putting their personal credentials into a refrigerator to get it online, because if that refrigerator is compromised, it is acting on the network as an employee. Public key infrastructure (PKI) can help by ensuring only authorized endpoints enrolled by the user and trusted by IT can connect. Layering in network access control (NAC) ensures that devices are actually trusted and meet minimum security criteria. Less trusted IoT devices are kept segmented on the correct network.
- Block Telnet: If it’s feasible, block Telnet connections from networks entirely. At a minimum, block connections made over Telnet from the outside world. Unsecured connections like Telnet, combined with devices with default passwords, allow worms to spread.
- Think about traffic shaping: Traffic shaping, particularly around suspicious traffic flows, can help mitigate the effects of attacks launched from the network and provide improved connectivity for mission-critical services.
- Manage what’s possible: Employees can bring some connected devices under enterprise mobility management and other security frameworks. If an organization is prototyping the development of its own IoT devices, look to platforms like Windows 10 and Android because their security toolsets are more mature than consumer development platforms. If devices can’t be configured through a central platform, work with employees to set them up in order to disable the types of default configurations that have led to exploitations.
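For the Telnet tip in particular, a quick way to audit your own network is to sweep for devices answering on TCP port 23. Here is an illustrative Python sketch; the subnet prefix is a placeholder, and you should only scan networks you administer.

```python
import socket

def port_open(host, port=23, timeout=0.5):
    """Return True if the given TCP port (Telnet by default)
    accepts a connection on `host`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def find_exposed_telnet(prefix="192.168.10.", hosts=range(1, 255)):
    """Sweep a /24 (placeholder prefix) for devices answering on Telnet."""
    return [f"{prefix}{h}" for h in hosts if port_open(f"{prefix}{h}")]
```

Devices that turn up in such a sweep are candidates for reconfiguration, firmware updates or firewall rules blocking Telnet from the outside world.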
These security best practices are needed for an enterprise IoT foundation. By applying these recommendations, enterprises can lay the security groundwork for future connected devices and make their organizations more secure today.
The traditional chatbot isn’t much better than being stuck in phone tree hell. This is because they are created using a laborious coding process based on extensive decision trees that attempt to mimic every possible user interaction. They provide a stilted experience based on “if-then-else” logic that doesn’t follow a natural conversation.
Then you have the ever-trendy industrial internet of things (IIoT), which is based on sensor data pulled from machines and equipment. It covers all sorts of cool use cases, from medical devices to industrial assets. As IoT continues to evolve, it will have even bigger business ramifications across virtually every industry.
Now, think about these two technologies from another perspective — what if the chatbot were actually intelligent and conversational? What if you could pair a conversational chatbot with the plethora of sensor data from IIoT to manage a response or activity based on a predictive outcome?
Applications of chatbots and IIoT
Can’t quite picture it? Let’s look at a practical example. Say the network operations center at a large telecom provider gets a notification that an IIoT-connected HVAC system in one of its Texas 4G base stations is predicted to fail within the next 90 minutes. It’s the middle of summer and the temperature is 105 degrees Fahrenheit. At that temperature, the tower’s equipment room will overheat in less than an hour once the HVAC stops.
An automated reset fails to clear the alarms, so a field technician is dispatched via an alert to his mobile device. The tech accepts the request and is routed to the location, which is 60 minutes away, leaving him only a small window to fix the problem. To save time on site, he interacts with the chatbot by voice while driving, getting a full briefing on the situation, a summary of the parts required for the repair and a step-by-step overview of the repair procedure. Once he arrives, he puts on his smart glasses (complete with augmented reality and an integrated chatbot interface) to make the repairs.
In this scenario, the intelligent chatbot unlocks a new way for the field technician to operate. Instead of lugging around hefty manuals or tying up other support agents on the phone, the tech can interact with a chatbot to get the step-by-step information he needs. The chatbot can provide information in a more natural context while improving the self-sufficiency of the field agent.
It doesn’t take much to see how this could be applied to other fields as well. Look at healthcare — there is all sorts of patient data streaming in from embedded devices or wearables. Say this data is analyzed and triggers a notification to the patient upon discovering an abnormal reading. The first line of response in addition to automated alerts could be a triggered interaction with an intuitive chatbot that engages the patient to answer questions or schedule a doctor’s appointment.
AI and machine learning: The cornerstones of the future
AI and machine learning form the foundation of this vision. Just as we need to avoid traditional chatbot development approaches to create truly conversational chatbots, we have to do the same for the development and training of the analytical models.
For analytical models, we have to break the dependency on labor-intensive model development — there is too much data being generated to analyze it all manually at the scale required by today’s digital economy. We have to use AI to automate the machine learning process. AI is the only way to create predictive models that produce accurate results on an asset-by-asset level and at the speed required to keep up with a fast-moving production environment.
Similarly, AI is also pivotal to creating the conversational chatbot that doesn’t doom users to unnatural conversations and phone tree hell. AI enables businesses to make chatbot communications conversational and intuitive without intensive programming effort. How, you ask? By eliminating decision trees and replacing them with a declarative, AI-powered chat experience.
With AI, we can train chatbots based on the capabilities that we want the chatbot to provide instead of taking the time-consuming approach of writing code that replicates every possible chat response. Whether you’re providing support to a field tech or customer service to a patient, you can use FAQ-style information and documentation to train the bot. You can also train the bot to collect key information, like part numbers for the field tech or appointment times for a patient. You basically train the bot as you would train an actual employee.
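A rough sketch of that “train it like an employee” idea, assuming a toy bag-of-words matcher standing in for a real NLU engine; the intents and example phrases below are invented for illustration.

```python
# Instead of hand-coding "if-then-else" decision trees, the bot is trained
# from example phrases per intent, the way you'd hand an employee an FAQ.
# The matcher here (word overlap) is a deliberately simple stand-in for a
# real natural-language-understanding model.

FAQ_TRAINING = {
    "order_part": ["I need a replacement part", "what part number do I order"],
    "book_appointment": ["schedule a doctor's appointment", "book a visit"],
}

def classify(utterance):
    """Return the intent whose training examples share the most words with the input."""
    words = set(utterance.lower().split())
    best_intent, best_score = None, 0
    for intent, examples in FAQ_TRAINING.items():
        score = max(len(words & set(ex.lower().split())) for ex in examples)
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent

print(classify("can you schedule an appointment"))  # book_appointment
```

Adding a new capability means adding training examples, not writing new branching logic — which is what makes the declarative approach scale.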
This all might sound futuristic, but it’s not. This challenge is what drove the bold vision for AI-powered platforms like Progress NativeChat, used for creating your own conversational chatbot. AI and machine learning can do a lot of heavy lifting for businesses when applied correctly, enabling them to unlock new capabilities and capitalize on new opportunities.
Just a couple of years ago, the internet of things was more vision than reality. Today, enterprises are increasingly deploying IIoT technologies as part of digital transformation initiatives designed to minimize maintenance costs and maximize asset utilization. Examples include instrumenting commercial aircraft, large commercial buildings, power plants and truck fleets. But large-scale IIoT isn’t easy. It requires the real-time processing and analysis of massive amounts of data, which is a significant challenge for almost any enterprise. To meet this challenge and ensure real-time responsiveness at scale, enterprises have begun taking advantage of hybrid transactional/analytical processing (HTAP) powered by in-memory computing.
The IoT growth projections are staggering. Ericsson expects there will be approximately 18 billion connected devices related to IoT by 2022 and that between 2016 and 2022, IoT devices will increase at a compound annual growth rate (CAGR) of 21%. Machina Research predicts the total number of IoT connections will grow from 6 billion in 2015 to 27 billion in 2025. IIoT investment could surpass $1 trillion over the next 10 years, and GE Digital anticipates $60 trillion in connected industrial assets by 2030. And overall, Nokia Bell Labs predicts IoT’s value will be 36 times that of today’s entire internet, based on the number of devices connected and how users perceive and experience the value of IoT devices and applications.
It’s important to recognize that this growth depends on the wide range of IoT and enterprise IoT use cases delivering as promised — not just in the early implementation stage, but at scale, with ever-soaring data flows and data-intensive real-time analytics.
Consider the concept of the digital twin. A digital twin uses advanced analytics to model the current state of real-world manufacturing and industrial assets — from an automobile to an aircraft to a power plant — using large numbers of IoT sensors on the real-world devices, combined with feeds from other relevant data sources, such as weather, temperature and moisture, plus historical data. All this data is combined and analyzed in real time. By creating this digital model of a complex system, companies can plan maintenance to minimize costs and maximize utilization.
For example, a digital twin can determine if a jet engine requires maintenance without requiring a physical inspection. Inspecting a digital twin instead of the physical object can also reduce the risk of inspecting items in potentially dangerous environments, such as with underwater pumps or power plant cooling systems. Companies have been using modeling for years, but creating a real-time digital twin of a complex physical system requires that sensors be deployed and that the sensor data feeds into a system with the processing power and bandwidth required to benefit from the data. Gartner is predicting that by 2022, IoT will save consumers and businesses $1 trillion a year in maintenance, services and consumables, and digital twins can play an important role in that.
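A minimal sketch of the digital-twin idea described above; the sensor fields and maintenance thresholds are invented, and a real twin would run far richer analytics over many more feeds.

```python
# Toy digital twin of a jet engine: sensor readings stream in and update a
# model of the asset's current state, and a simple rule stands in for the
# advanced analytics that would flag maintenance without a physical inspection.

class EngineTwin:
    def __init__(self, max_vibration=5.0, max_temp_c=650.0):
        self.max_vibration = max_vibration  # illustrative threshold
        self.max_temp_c = max_temp_c        # illustrative threshold
        self.latest = {}                    # current modeled state

    def ingest(self, reading):
        """Merge a new sensor reading into the twin's current state."""
        self.latest.update(reading)

    def needs_maintenance(self):
        """Stand-in for the real-time analytics a production twin would run."""
        return (self.latest.get("vibration", 0) > self.max_vibration
                or self.latest.get("temp_c", 0) > self.max_temp_c)

twin = EngineTwin()
twin.ingest({"vibration": 2.1, "temp_c": 600})
print(twin.needs_maintenance())  # False — no inspection needed
twin.ingest({"vibration": 6.3})
print(twin.needs_maintenance())  # True — schedule maintenance
```

In practice the twin would also fold in external feeds (weather, temperature, moisture) and historical data, as described above, but the inspect-the-model-not-the-machine pattern is the same.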
Many other enterprise and industrial IoT use cases have been equally challenging for organizations. For example, production tracking, inventory maintenance, logistics and patient monitoring have all required a tremendous investment in compute power as the applications scale, prohibiting many companies from realizing their IoT goals.
All of that is changing thanks to HTAP powered by in-memory computing.
HTAP and in-memory computing
The key to successful real-time IoT processing and analysis at scale is the ability to implement HTAP powered by in-memory computing. Traditional approaches rely on an outdated, bifurcated database architecture: online transaction processing (OLTP) databases handle only operational data, while separate online analytical processing (OLAP) databases handle the analytical processes. To bridge the two systems, extract, transform and load (ETL) processes periodically move data from the OLTP database to the OLAP database — introducing a delay that cannot support the real-time analytical demands of IIoT.
HTAP eliminates the delay associated with ETL. It enables real-time analysis on the operational data set without impacting performance. However, until recently HTAP was too expensive for most enterprise budgets, requiring huge hardware and software investments.
Today’s in-memory computing platforms make HTAP affordable. These platforms maintain data in RAM, processing and analyzing it without the delays inherent in reading from a disk-based database. Architected for massively parallel processing across a cluster of commodity servers, they can easily be inserted between existing application and data layers with no rip-and-replace of the existing database. They can also be scaled out cost-effectively by adding new nodes to the cluster, which then automatically takes advantage of the added RAM and CPU processing power. The benefits include massive performance gains, the ability to scale to petabytes of in-memory data and high availability thanks to distributed computing.
In-memory computing isn’t new, but until recently, the cost of RAM and the lack of affordable technologies severely limited adoption. However, the cost of RAM has dropped steadily, approximately 10% per year for decades, and mature, easy-to-install in-memory computing platforms are now available.
This makes in-memory computing perfect for HTAP because the entire transactional data set is already in RAM and ready for analysis. By applying massively parallel processing to the data, sophisticated in-memory computing platforms can run fast, distributed analytics across the data set without impacting transaction processing.
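The contrast with the ETL pipeline can be sketched in a few lines: a single in-memory data set serves both transactional writes and analytical reads, so there is no copy step and no delay between them. The store and schema below are toy examples, not any vendor’s API.

```python
# Toy illustration of HTAP: transactions and analytics run against the same
# in-memory data set, so there is no periodic ETL into a separate OLAP store.
# A real platform would distribute this across a cluster of servers.

class InMemoryHTAPStore:
    def __init__(self):
        self.rows = []  # operational (OLTP-style) records, held in RAM

    def insert(self, sensor_id, value):
        """Transactional write: lands in memory, immediately analyzable."""
        self.rows.append({"sensor": sensor_id, "value": value})

    def average(self, sensor_id):
        """Analytical read over the live operational data — no ETL delay."""
        vals = [r["value"] for r in self.rows if r["sensor"] == sensor_id]
        return sum(vals) / len(vals) if vals else None

store = InMemoryHTAPStore()
store.insert("hvac-1", 72.0)
store.insert("hvac-1", 74.0)
print(store.average("hvac-1"))  # 73.0, computed on the transactional data set
```

In the bifurcated OLTP/OLAP design, the `average` query would run against a separate database populated by a periodic ETL job; here the analytics see every write the instant it commits.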
According to Gartner’s “Market Guide for HTAP-Enabling In-Memory Computing Technologies,” in-memory computing-enabled HTAP can have a transformational impact on a business. It will also help power a new generation of IoT platforms — on-premises software suites or cloud services that monitor and manage various types of IoT endpoints. IoT platforms will eventually make IoT projects easier to launch and simpler to manage, and Gartner predicts that by 2020, 65% of companies adopting IoT will use an IoT platform for at least one project.
Call it a wave or a deluge, refer to it as an era or an age, but IIoT will dramatically change how businesses manufacture, deliver and maintain products. Just looking at the revenue side, Machina Research also predicts overall IoT growth from $750 billion in 2015 to nearly $3 trillion in 2025, growing at a 15% CAGR. IoT platforms and middleware revenue will grow from about $50 billion in 2015 to $250 billion in 2025. But IoT requires a critical technology infrastructure reinvention, which can be enabled only by HTAP powered by in-memory computing. Enterprises developing IIoT strategies should immediately begin exploring the power of integrating in-memory computing with HTAP to help make those strategies a reality.
The internet of things has finally come into its own. Although it’s still pretty green, IoT has established itself as a (reasonably) well-defined, financially sound and stable industry. Even more importantly, it has proven itself capable of providing real, undeniable value. From smart stormwater management systems that prevent flooding to smart home security systems that keep families safe — real IoT is already providing real value.
And now that IoT is standing on its own two feet, some interesting traits are beginning to come to light. Perhaps one of the most interesting of those traits is the actual shape of the IoT market. Or, rather, the lack of one. I often refer to IoT as the third wave of computing — preceded by personal computers and smartphones. And just like PCs and smartphones, IoT’s “childhood” is proving to be a bit chaotic. Major fragmentation complicates the market and leads to many short-lived start-ups and ill-conceived ideas.
Thankfully, for computers and cell phones, the “troubled pre-teen phase” didn’t last too long. Soon, companies like Microsoft, Apple and Google helped the industry “get a haircut and get a real job,” so to speak. With clear industry leaders ruling their respective roosts, the PC and smartphone industries consolidated and continued to innovate.
IoT bucks the trend
But for all the entrepreneurs and corporate innovators out there dreaming of their billion-dollar thrones atop the field of IoT, I’d suggest pursuing a different field. Because I’m extremely confident that IoT will remain fragmented for a very, very long time. Like, a forever long time.
That’s because IoT is fundamentally different than the types of computation that have preceded it. “The internet of things” is an immensely broad term. It includes everything from internet-connected air quality sensors to “smart” robo-pets that double as personal assistants. And there’s a market for both.
So, the next time the topic of IoT comes up in a conference room, just remember that the speaker could be referring to:
- Home automation
- Security systems
- Fitness trackers
- Traffic monitoring systems
- Parking meters
- Points of sale
- Asset trackers
- Manufacturing automation
- Industrial equipment monitoring
- Environmental sensor networks
- Irrigation systems
- Self-driving cars
- Smart grids (i.e., connected energy meters)
- Plus just about any other non-traditional object that could be connected to the internet (e.g. robo-puppies)
Because of this virtually limitless frontier, it becomes supremely difficult to “master IoT” as Bill Gates “mastered the PC.” IoT devices made to monitor the turbidity of volatile chemicals most likely wouldn’t thrive in the robo-pet market.
Unsurprisingly, this has left many eager entrepreneurs and investors scratching their heads. But, what those people find maddening, I find inspiring. That’s because, within this extremely fragmented environment, a culture of freedom, inclusiveness and experimentation is taking root.
I don’t doubt that clear industry leaders will eventually emerge from today’s jumbled landscape. However, I’m confident that the nature of “industry leadership” itself will take on a significantly different shape in the field of IoT. And if any businesses manage to establish the kind of market leadership we’re familiar with today, it will be those that master the building blocks of IoT rather than those that attempt to address a multitude of use cases.
But until then, our emergent industry will benefit immensely from this period of unimpeded exploration, experimentation and innovation. And I have no doubt that this unique stage of development will someday be recognized as essential to the long, life-changing history of the internet of things.
How startups can survive (and even thrive) in the IoT industry
Entrepreneurs looking to enter the IoT marketplace should begin by thinking long and hard about which specific niche they plan to go after. Startups without clearly defined, focused targets will quickly sink in a crowded sea with practically no salt in its waters. My company Particle, for instance, is an IoT device platform, which means that we provide the technology and expertise that companies need to get their IoT devices online. And although our customer base is quite diverse, our role in their products is always essentially the same.
In the simplest of terms, we support companies that join sensors and actuators with microcontrollers to solve problems. We don’t do heavy compute, we don’t do streaming audio and video, we don’t do super-complex systems like self-driving cars, and we definitely don’t do robo-pets. Our market is still relatively broad, but we are unflaggingly focused on our area of expertise.
And that’s why we’re so damn good at what we do. We may never be the “Masters of IoT” like Bill Gates is for PCs; but we’re growing increasingly confident that we’ll be the masters of our own, powerful domain within this near-limitless landscape called “the internet of things.”