I was recently asked what my biggest takeaway was from this previous year, as well as what I thought might happen in 2018. In a nutshell, it comes down to this: With all the hype around IoT, I believe technology firms invested ahead of actual demand. This resulted in a number of products and solutions on the market that don’t yet address the actual needs of the customer’s business.
In 2017, we saw several distinct categories of IoT technologies emerge:
- Hype-chasing fad products, like IoT coffeemakers and home security systems;
- Hardware and software infrastructure products that provided the basis upon which customers could build their IoT solutions;
- Packaged IoT platforms; and
- Development tools and environments.
As customers grapple with determining what technology will actually suit their business needs, I think 2018 will be a year where vendors either drop off or seek to merge and/or partner together for solutions that show a clear return on investment.
Here’s what’s coming
My bet is that we’re going to see a lot of the following this year:
- Customers with a pinpointed focus on IoT projects that can produce measurable success around very specific business outcomes, such as driving out cost, increasing operational efficiencies, or reinventing the customer experience to increase demand.
- Increased interest in a stable, dependable infrastructure, built on open standards, on which to implement IoT projects.
- The need to connect new data sources into existing infrastructure while ensuring smooth integration and data management capabilities.
In addition, I expect we will see potential IoT customers asking themselves these very strategic questions:
- Am I willing to change my business processes to match how a cloud-based or off-the-shelf IoT solution works?
- Does an IoT infrastructure built on open source afford me greater flexibility and will it allow for changes as well as growth into the future?
Supporting data from customers
I have had the opportunity to work with dozens of customers over the past few months, ranging from telecommunications companies and as-a-service providers to G8000 enterprises and government agencies. Their industries included transportation, healthcare, utilities services and retail. In my discussions with them, the following points became clear:
- Many have strategically adopted open source technologies for interoperability needs.
- They are concerned about the security of their data in the public cloud, as well as vendor lock-in.
- IoT is a strategic investment for them, and many already have a pilot or production architecture in place.
- They are looking for modularity and deployment flexibility.
- They want to reduce risk and complexity.
- End-to-end analytics and end-to-end security features are essential.
One of the most important takeaways from all of this is the need for an enterprise-worthy, end-to-end open architecture that customers can rely on for hosting their specific IoT projects. What’s needed is an architecture that enables them to integrate what they have today and evolve as they digitally transform their business.
New year, new ways to address the challenge
IoT projects need to be focused on generating business results. Those who wish to implement IoT know their business, but few have the skills required to develop an entire IoT infrastructure. They need complete platform solutions, and I would argue that they’ve known this from the start.
But early adopters of IoT who invested in proprietary platforms are now finding themselves frustrated by limited functionality, locked into a particular vendor and rethinking their choices. Open source alternatives are recognized as hubs of continuous innovation, but it is a significant challenge to manage multiple open source projects, validate that they work together, integrate them to provide the right functionality and ensure future enhancements.
Envisioning the future
So, what would an end-to-end, open source architecture for IoT look like? Here are key components I think are important and should be included in an IoT architecture:
- Connected “things” that generate device data, with connectivity designed to be more secure and seamless.
- An intelligent gateway stack that simplifies data flow management and offers intelligence and analytics capabilities at the edge.
- An integration hub to manage disparate devices and control the operational flow of data.
- A comprehensive, centralized advanced analytics and data management platform that manages IoT data processing, offers persistent storage and machine learning capabilities, and enables deep business insights and actionable intelligence.
- Application development, deployment and integration services.
The model would essentially stack these components in layers, from connected devices and intelligent gateways at the edge up through integration and centralized analytics to application services.
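As a rough illustration, the flow through these layers can be sketched in a few lines of Python; the function and field names here are hypothetical, and a production stack would involve message brokers, an integration hub and an analytics platform rather than in-process calls:

```python
# Minimal sketch of the layered IoT architecture described above:
# devices -> edge gateway (filter/analyze) -> integration hub -> analytics.
# All names and the threshold are illustrative, not a specific product's API.

def edge_gateway(readings, threshold=50.0):
    """Edge intelligence: forward only readings that merit attention."""
    return [r for r in readings if r["value"] >= threshold]

def integration_hub(events):
    """Normalize disparate device payloads into one operational format."""
    return [{"device": e["id"], "metric": e["value"]} for e in events]

def analytics_platform(records):
    """Centralized analytics: aggregate the data for business insight."""
    if not records:
        return {"count": 0, "avg": None}
    avg = sum(r["metric"] for r in records) / len(records)
    return {"count": len(records), "avg": avg}

readings = [
    {"id": "sensor-1", "value": 72.0},
    {"id": "sensor-2", "value": 18.5},  # below threshold, dropped at the edge
    {"id": "sensor-3", "value": 64.0},
]
insight = analytics_platform(integration_hub(edge_gateway(readings)))
print(insight)  # {'count': 2, 'avg': 68.0}
```

The point of the sketch is the modularity: each layer exposes a narrow interface, so any one of them could be swapped for a different open source component without touching the others.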
Packaged IoT platforms that were promoted in 2017 may work well for completely new businesses building a separate IoT infrastructure from scratch. But few companies have the luxury of completely overhauling their infrastructure just for an IoT project. IoT implementations are part of a journey businesses take toward digitalization. A lot of existing components need to be taken into account, and a lot of future additions and enhancements require a foundation that is flexible enough to adjust to growth and change.
The modular nature of the open, enterprise solution described above could enable system components to be swapped out as needed. This way, businesses can integrate new technology while at the same time preserving existing investments.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
January’s Hide ‘N Seek IoT botnet has reminded us all of the sophistication of attacks that will continue to appear and hamper our collective network management.
These shifting IoT security threats of today will mirror the efficacy and complexity of PC and workstation threats of a decade ago. Having fought this fight before, expectations and game plans of attack and defense (respectively) can at least be estimated and planned for from a reasonably successful template.
In 2018, we can expect IoT deployment of multifactor authentication backed with hardware, even more sophisticated attacks against IoT networks and, for the short term, confusion over standards and certifications.
A trend toward security incident recovery through hardware
With the prevalence of weak passwords and access control on IoT devices leading to a growing number of attacks, there will be a shift toward designing systems to include incident recovery as a core IoT security requirement.
Product designers will begin asking the question, “When this device does get attacked, how do we ensure it can be disabled to prevent further damage and then recovered?” Expect the emergence of design requirements that mandate implementation of key rotation mechanisms at the time of manufacture. Still, during the window of an attack, purely software-based key rotation mechanisms can allow device identity to be spoofed until the attack is mitigated.
Designers and engineers should consider adopting multifactor authentication in their device designs, making use of a hardware security module or other hardware-backed secure element as an additional factor in their identity and access management implementations.
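To make the recovery idea concrete, here is a toy Python sketch of epoch-based key rotation for device identity. The scheme and all names are illustrative, not any vendor's mechanism; in a real design the root key would be provisioned into a hardware secure element at manufacture and never exposed to software, as the comments note.

```python
import hashlib
import hmac
import os

# Toy sketch of key rotation for device identity (illustrative only).
# In practice the root key would live inside a hardware secure module and
# never be readable by software; here it is a plain variable for clarity.
ROOT_KEY = os.urandom(32)  # would be provisioned at manufacture, in hardware

def derive_key(root_key, epoch):
    """Derive a per-epoch device key; bumping the epoch rotates the key."""
    return hmac.new(root_key, f"epoch-{epoch}".encode(), hashlib.sha256).digest()

def sign_challenge(device_key, challenge):
    """Device proves identity by signing a server-issued challenge."""
    return hmac.new(device_key, challenge, hashlib.sha256).hexdigest()

challenge = b"server-nonce-1234"
response = sign_challenge(derive_key(ROOT_KEY, 1), challenge)

# Server-side verification against the current epoch:
expected = sign_challenge(derive_key(ROOT_KEY, 1), challenge)
assert hmac.compare_digest(response, expected)

# After an incident, rotating to epoch 2 invalidates responses produced
# with the old derived key, so epoch-1 material can no longer spoof identity.
epoch2_expected = sign_challenge(derive_key(ROOT_KEY, 2), challenge)
print(response != epoch2_expected)  # True
```

The hardware root key is the second factor here: even if an attacker exfiltrates a derived per-epoch key, a rotation pushed after the incident restores a trustworthy identity without re-provisioning the device.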
Existing vulnerabilities will allow single event damage with maximum payload
Hide ‘N Seek and other attacks have shown us that exploits are becoming complex enough to attack a wide variety of device types in varied deployment types, no longer aimed at a specific product, stack or environment.
As a result, a single attack can adapt to maximize its payload and spread faster than device-specific or stack-specific patches and upgrades can be applied comprehensively.
We can expect at least one major attack that causes damage of astronomical, never-before-seen proportions. This attack will affect several architectures and singlehandedly cause damage greater than multiple major prior attacks combined.
How can we prepare? 2018 will be a good time to revisit open ports and services running on devices, and to consider adopting cloud- or controller-based configuration interfaces rather than running administrative services directly on devices.
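As a starting point for that audit, a short script can probe a device for commonly exposed administrative ports. The host address and port list below are placeholders to adapt to your own inventory, and of course only scan devices you are authorized to test:

```python
import socket

# Sketch of auditing a device for open administrative ports. The ports
# listed are common defaults (telnet, SSH, HTTP, HTTPS, alt-HTTP);
# tailor the host and list to your own devices and network policy.
ADMIN_PORTS = {23: "telnet", 22: "ssh", 80: "http", 443: "https", 8080: "http-alt"}

def audit_open_ports(host, ports=ADMIN_PORTS, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = {}
    for port, name in ports.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports[port] = name
    return open_ports

if __name__ == "__main__":
    found = audit_open_ports("192.168.1.50")  # hypothetical device address
    for port, service in sorted(found.items()):
        print(f"port {port} ({service}) is open -- is it needed on-device?")
```

Every port that shows up open is a candidate for the cloud- or controller-based configuration approach described above, rather than an administrative service running on the device itself.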
Confusion over security standards to continue
Several efforts have birthed various certification and self-assessment initiatives; however, with the exception of specific industries, like payments and healthcare, don’t expect to see patterns of adoption emerge until either regulation or a critical mass of buy-in is reached, both of which take time.
As such, new efforts for security accreditation and standardization won’t have a widespread positive impact by the end of 2018. What can we do in the meantime? Continue to back the certifications and standards that make the most sense for your applications — the more thoughtful voices we have, the sooner that critical mass of adoption and regulation will happen.
Because of the success in mitigating malware on PCs, workstations and phones, the devices the world employs are, quite frankly, assumed to be secure. Blind trust is often attributed to the devices and the data they serve — even life-saving devices. We’re in a battle to succeed and make the internet of things a secure place, whether or not anyone notices.
The real challenge for enterprises in deploying an IoT solution today is the lack of neutral network integrators able to deliver solutions using any combination of LPWAN technologies. The LPWAN community is supported by several specialists using either Sigfox, LoRa, Ingenu or Weightless technologies, each with its strengths, weaknesses and distinct business models. Several mobile operators are promoting their own cellular LPWAN IoT networks, including LTE-M (CAT-M1) and NB-IoT (CAT-NB1). Enterprises often require true global coverage, and these different LPWAN technologies will likely live alongside each other for quite some time. Traditional IT system integrators, including Accenture, Deloitte, PricewaterhouseCoopers, IBM and HP, have focused on integrating complex components and analytics to deliver a full IoT solution.
Wireless 20/20 believes there is a lack of neutral network integrators able to deliver IoT network solutions using multiple technologies. This article focuses on the key role played by IoT connectivity providers that can use any combination of technologies and match the right IoT network service to each use case and application. No one connectivity technology can meet all IoT requirements, and any solution will likely require multiple connectivity technologies just for coverage purposes for applications deployed today. Multiple connectivity solutions may be required just to meet the use case requirements, independent of coverage. Many solutions providers are tied to specific network technologies — and a neutral party is needed to sort through to the right answer.
Low-power wide area network, or LPWAN, is a set of wide area wireless networking technologies that interconnect low-bandwidth, battery-powered devices over long ranges. The leading specialists in the LPWAN technology community include Sigfox, LoRa, Ingenu and Weightless. The primary advantage of LPWAN technologies is the ability to support a greater number of connected devices over a larger area, at lower cost and with greater power efficiency, than traditional mobile networks. The main challenge for enterprises is that none of the LPWAN networks is mature, so coverage is limited, although improving every day.
Here is a quick summary of the strengths, weaknesses and distinct business models for the leading specialists in the LPWAN community — Sigfox, LoRa, Ingenu and Weightless:
Sigfox
- Strengths: Solid technology, large ecosystem, many devices, low cost, low and predictable power consumption, offered as a managed service.
- Weaknesses: Relies on ultra-narrowband technology combined with DBPSK and GFSK modulation using 200 kHz of unlicensed spectrum in the 868 to 869 MHz and 902 to 928 MHz bands, depending on region. Limited network coverage in the U.S., since the company did not reach its 2017 buildout goals. Immature ecosystem, but improving rapidly.
- Business model: Subscription-based connectivity at low cost, a model whose economics remain open to question. Countries are franchised to a variety of carrier and utility partners.
LoRa
- Strengths: Solid technology; large ecosystem, though largely private (not generally accessible) and improving; low cost; low, if less predictable, power consumption.
- Weaknesses: No central service manager or clearinghouse to interoperate networks (not a managed service, which is both a pro and a con); wide-scale coverage in its infancy; needs a more robust open ecosystem. A near-monopoly (thus far) on chips makes devices a little more expensive.
- Business model: Senet is the largest U.S.-based provider of LoRaWAN IoT connectivity, with network coverage in 225 markets, and recently launched a low-power wide area virtual network to offer IoT connectivity solutions for cable, CLEC and wireless operators. Comcast is also expanding its LoRa-based LPWAN network to 12 U.S. markets.
Ingenu
- Strengths: Its Random Phase Multiple Access (RPMA) technology uses the unlicensed 2.4 GHz ISM band and supports higher speeds, closer to LTE in functionality. Ingenu reports its U.S. network currently covers around 45 million POPs.
- Weaknesses: Expensive network and solution. CEO John Horn abruptly left Ingenu during the summer of 2017, and the company looks unlikely to survive.
- Business model: Subscription-based.
Weightless
- Strengths: Narrowband modulation scheme; can operate in both sub-1 GHz license-exempt and licensed spectrum; offers full acknowledgement of 100% of uplink traffic for unmatched QoS in a system operating in unlicensed ISM and SRD spectrum.
- Weaknesses: The standard’s stated aims, to guarantee low cost and low risk and to maximize user choice and ongoing innovation, remain largely unproven in commercial deployments.
- Business model: Open global standard promoted by a SIG rather than a specific company with a proprietary technology.
Here is a quick summary of the strengths, weaknesses and distinct business models for the leading cellular LPWAN IoT network solutions, NB-IoT (CAT-NB1) and LTE-M (CAT-M1):
NB-IoT (CAT-NB1)
- Strengths: Carrier-backed, with near-ubiquitous coverage because it is an LTE upgrade, and higher data rates are possible.
- Weaknesses: More expensive in terms of power consumption and device cost than other LPWAN technologies such as LoRa or Sigfox. The requirement for a SIM and some form of pairing may also be an issue, compared to LPWAN devices that are paired at the factory.
- Business model: T-Mobile is the only major U.S. carrier to deploy NB-IoT as its low-power wide area network of choice. Verizon recently completed its first successful NB-IoT guard-band data session trials and plans to deploy an NB-IoT network across its nationwide footprint in 2018.
LTE-M (CAT-M1)
- Strengths: Becoming pervasive, and offers higher data rates than other LPWAN technologies.
- Weaknesses: Its cost and power consumption put CAT-M at the high end of the LPWAN segment.
- Business model: Verizon, AT&T and Sprint have all opted for LTE-M, another LPWA technology standardized by 3GPP, and completed their initial nationwide LTE-M launches in 2017.
T-Mobile became the first U.S. operator to launch NB-IoT commercially and has committed to extend coverage to a nationwide NB-IoT network during 2018. This would be a major feat if accomplished, and the announced pricing is aggressive and will give the LPWAN players pause. T-Mobile parent Deutsche Telekom (DT) recently updated its plans for its narrowband-IoT rollout across Europe. DT launched NB-IoT technology in Germany in 2017, and the network is now available in approximately 600 towns and cities across its home market, with more than 200 companies trialing the technology. This DT NB-IoT network deployment in six of its European markets was on track at the end of 2017, including Germany, Poland, Slovakia, Czech Republic, Hungary and Greece.
Wireless 20/20 believes these different LPWAN technologies will likely live alongside each other for quite some time. The major technologies (Sigfox, LoRa, NB-IoT) will live alongside each other for a while, but only one may survive in the long run unless dual-mode chips become common.
Key enterprise verticals and horizontal applications require true national coverage and need to integrate multiple IoT network technologies. Within each of the following verticals, some enterprises can integrate their own IoT network solutions, while others cannot develop solutions on their own:
- Shipping and logistics/supply chain: Much of the supply chain is national in scope and needs national coverage — pallets, shipping containers, truck trailers, etc. But a portion is also local. Use case is to track location and sense temperature and shock/vibration. Local = the last mile of product distribution — think local warehouse to convenience store for consumer goods, for example. Multiple modes are needed. Examples: A pallet en route may need to report location only when it starts or stops (LPWAN), but a pallet in a warehouse may need to report its contents to a smartphone (Bluetooth).
- Agriculture: Reasonably localized per customer, but national in scope with big areas in California and the Midwest. Ag requires large coverage areas with modest populations. Use cases = many and varied: tracking, soil sensors, moisture sensors, tank monitoring and many more. Multiple technologies may be required — LPWAN for remote areas and fields, Bluetooth for applications expected to communicate to a smartphone.
- Facilities and spaces: Localized per customer, but national in scope … such as major hotel chains like Marriott or Hyatt. Shopping malls and office complexes and resorts and casinos and theme parks are largely localized, but there may be many facilities under the same owner nationwide. Use cases include a varied portfolio — door open/close, irrigation on/off, soil moisture monitors, water leak detectors, buttons to provide a service and many more.
- Industrial: Usually localized coverage when talking about the factory or enterprise, but national when talking about supply chain. Use cases are varied: track inventory or tools within a factory facility, fuel tank levels, panic buttons for personnel, personnel tracking for safety, etc.
Traditional IT system integrators, including Accenture, IBM, HP, Deloitte and PricewaterhouseCoopers, have focused on integrating complex components and analytics to deliver a full IoT solution. Most of the traditional large system integrators focus on the higher end of the technology stack; connectivity is usually assumed to exist. Certainly, some of these companies can do large-scale Wi-Fi implementations and have solid experience with cellular services, but the core of their offerings focuses on analytics and business process integration with legacy ERP and other systems. Newer connectivity solutions such as LPWAN are not yet fully integrated into their domain expertise.
The current demand for IoT network integrators is driven precisely by this gap: the capability to determine the right network connectivity solution for lower-tier access, in terms of power consumption and cost, is nascent, and this very capability determines whether new projects can be delivered at a reasonable ROI.
IoT network integrators and connectivity providers can use all these technologies and match the right IoT network technology to each use case and application. IoT is not a one-size-fits-all environment in terms of connectivity. Every application and use case is different and connectivity requirements need to be evaluated on an individual use case basis. There are many parameters — cost, power consumption, latency, range, application, etc. Bluetooth may be the right answer for some, Wi-Fi may be the answer for others, LPWAN for still others.
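As a toy illustration of that per-use-case evaluation, the parameters above can drive a simple rule-based selector. The thresholds and categories below are invented for illustration, not industry definitions, and a real integrator would also weigh cost, coverage and QoS per deployment:

```python
# Toy rule-based selector for IoT connectivity. Thresholds are
# illustrative only; real evaluations weigh cost, coverage, latency,
# QoS and more on a per-deployment basis.

def pick_connectivity(range_m, data_rate_kbps, battery_years):
    if range_m <= 100 and data_rate_kbps <= 1000:
        return "Bluetooth"  # short range, e.g. pallet-to-smartphone in a warehouse
    if range_m <= 300 and data_rate_kbps > 1000:
        return "Wi-Fi"      # local, higher-bandwidth applications
    if battery_years >= 5 and data_rate_kbps < 50:
        return "LPWAN"      # long range, tiny payloads, multi-year battery life
    return "Cellular (LTE-M/NB-IoT)"  # wide area with moderate data needs

# The same pallet, in a warehouse vs. in transit, gets different answers:
print(pick_connectivity(range_m=10, data_rate_kbps=100, battery_years=3))     # Bluetooth
print(pick_connectivity(range_m=20000, data_rate_kbps=1, battery_years=7))    # LPWAN
print(pick_connectivity(range_m=20000, data_rate_kbps=500, battery_years=1))  # Cellular (LTE-M/NB-IoT)
```

The pallet example from the supply chain bullet above falls out naturally: the same asset needs Bluetooth inside the warehouse and LPWAN on the road, which is exactly why a single-technology provider cannot serve the whole use case.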
In our next blog post, Wireless 20/20 will examine how some of the leading neutral IoT network integrators can provide a deep and rich set of services to ensure the highest quality of service for mission-critical IoT network management solutions in the U.S. market.
The idea of startup studios is becoming more popular, and with good reason. There are fundamental repeatable processes that can equate to substantial leverage when creating a startup from within a studio. And the idea that the internet of things is a significant and broad advance of technology in our world is no longer up for debate. It’s not a fad; it’s real and here to stay. That said, we still see mountains of IoT pilot projects and far, far fewer production IoT examples, much less ones that are fully integrated and compliant with the organization’s larger information technology architecture. By looking a little deeper into why the startup studio model is gaining steam, and why the broader IoT market is struggling to move from pilots to production, we can spot a correlation that reinforces the idea of the studio model for IoT.
Aren’t studios like WeWork?
Let’s start with a basic definition of a studio. Many people conflate studios with “tech incubators,” of which there are many. Incubators generally begin with shared working-space environments offering general resources, like offices, cubes, internet access and meeting rooms, where small startups work in order to access infrastructure without building it themselves. But good incubators will also be the site for tech meetups, additional resources like beer taps and ping pong tables designed to create the right “vibe” for young tech companies, and may even include entrepreneurs in residence as mentors, or access to service providers, like venture investors, lawyers and accountants.

A studio, by contrast, may or may not be located in an incubator, but operates under a completely different charter. The studio is a company: a team of people who work together to start companies. The studio team will have a range of skills applicable to any startup; this might include a CEO, CFO, chief product officer, CTO and probably a VP of sales, business development and/or marketing. But the studio team applies its collective knowledge across its portfolio of companies with the express goal of getting these companies funded or sold, whereby they move from the studio to a bigger and better opportunity.
Think of the team as the mother bird in the nest, protecting and feeding the babies until it is time for them to fly. They protect and feed the babies, in this case, by applying knowledge, skill sets and processes that are repeatable. They engage in constant ideation to evolve the opportunities and generate new ones. This does not suggest they lack the passion of the founder with the laser vision, although that is a reasonable concern. The laser vision of the studio is to breed strong new companies. And while the primary issue with many startups is those founders who may have passion and good ideas often lack the experience and knowledge base to know how to go from the idea to execution, the studio model should provide that in spades.
So why is this important for IoT?
The evolution of IoT was described well in the 2014 Harvard Business Review article by Michael Porter of Harvard and Jim Heppelmann of PTC, in which they forecast the IoT market’s progression from “products” to “smart products” to “smart connected products” to “product systems” to “system of systems.” They appear to be spot on, although the progression is taking longer than many expected. This may be due to the means by which most IoT-enabled devices (“smart connected products”) made their way into the market, combined with the slow adaptation and understanding within organizations as to how to effectively use IoT. Said differently, organizations are struggling to move from being “organizations that use IoT-enabled products” to becoming “IoT-enabled organizations.” The difference is profound.
The product makers saw and increasingly capitalized on the ability to deliver “IoT-enabled products.” But the delivery of such products generally included capture of the data coming from the product by the product provider, who, in turn, used that information to better serve the customer and, to some degree, provided additional value-added data back to the customer who purchased the product. This all makes complete sense, but it explains how organizations become ones that “use a lot of IoT-enabled products.”

Over time, organizations begin to understand that there is a massive amount of information being generated by these products, and that if most of that information is owned by the product makers and goes directly to them, the makers can then prescribe what data the organization can and cannot receive. At that point, the organization begins to realize the imperative of consolidating that wealth of information into its own enterprise architecture. By doing so, it contextualizes the information and likely cleanses and enriches it with other enterprise and external data to create a dramatically more granular and more impactful data repository, from which it can drive better insights for each and every constituent important to the organization. But this architectural model is a long way from what you experience in deploying a silo — which is what we get with most pilots.
As the market matures, it will demand that IoT offerings be capable of integrating into common platforms, be they AWS IoT, Google, Hitachi Lumada, SAP Leonardo, PTC ThingWorx or others. Many companies lack the internal expertise to adapt IoT-enabled products into their own architecture. The movement from pilots to production must come with consideration of the foundational elements of security, privacy and data governance, including what data can be delivered to which constituent and in what form, as well as the delivery of the analytics stack, including operational analytics, investigative analytics, predictive analytics and, ultimately, machine learning. A studio approach can bring together a team with this scope of expertise to apply to companies in its portfolio, bringing market-ready solutions that contemplate the needs of the organization amid a maturing IoT market. In other words, studios can go from creating IoT-enabled product companies to creating, in a repeatable fashion, companies suitable for “IoT-enabled organizations.” This means they contemplate the foundational elements, addressing configuration, deployment (AWS, Google, etc.) and other key concerns, especially for the small to medium-sized firms that might otherwise lack the internal expertise for doing this.
It is also possible, though less likely, that companies coming out of incubators, lacking the experience or the platform for iterating this process, will be as adept at delivering such capabilities. Hence, the intersection between the studio model and the market evolution of IoT should be quite symbiotic. We’ll see.
5G — or “fifth generation” wireless — is the catalyst industrial mixed reality technologies need for today’s primary use cases to take hold. With 5G’s promise of virtually eliminating latency, augmented reality (AR) and virtual reality (VR) will finally thrive within the enterprise in instances when time is of the essence.
Two primary use cases established for mixed reality in an industrial context are informational overlay utilizing AR (think technician repairs on- or off-site), and training and education in high-stress or dangerous situations (think training for an oil spill without the risk). While both of these use cases have seen success in experimentation and small-scale use, operators are still challenged by achieving real-time data flow to complete tasks in virtual and real-world environments — especially in areas of low signal.
For example, a surgeon is needed to operate on a patient, but not at her hospital; the patient is halfway around the world. Utilizing a virtual reality headset, haptic gloves and virtual twin robotics onsite, the surgeon is able to operate without being physically present. This theoretical healthcare industry use case is only possible if there is zero lag between the actions performed by the surgeon and the effects happening in real time across the globe. For now, it’s relegated to training scenarios only.
Or, consider a construction company throwing out its training and operator manuals and relying on augmented reality glasses, like Microsoft HoloLens, to deliver all technician information in real time as needed to assist in repairs, part replacements, on-the-job training and more. This will only achieve operational efficiencies if employees’ AR hardware works appropriately in all locations served, sending and receiving data with ease even in the most rural of locations. This is especially true in an emergency scenario where real-time information is critical to reducing equipment damage or human danger. See what Caterpillar is doing along these lines today.
With 5G connection, the aforementioned use cases — and many more — are not only made possible, but made better, with less margin of error. This is due to one of 5G’s greatest promises: low end-to-end latency that increases the amount of interactivity possible. According to Qualcomm, truly interactive remote experiences require low end-to-end latency, often ranging from 40 ms to 300 ms. 5G offers potential to bring over-the-air latency down to 1 ms.
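To put those figures in perspective, a back-of-the-envelope budget shows how much headroom a 1 ms air link leaves for the rest of the interaction loop. The component values below are illustrative assumptions for comparison, not measurements:

```python
# Illustrative end-to-end latency budget for a remote AR/VR control loop.
# Component values are rough assumptions for comparison, not measurements.

def end_to_end_ms(air_link_ms, core_network_ms=10, processing_ms=15, render_ms=11):
    """Sum a simple latency budget: radio + core network + compute + render."""
    return air_link_ms + core_network_ms + processing_ms + render_ms

lte_loop = end_to_end_ms(air_link_ms=50)  # assume ~50 ms over-the-air on 4G
nr_loop = end_to_end_ms(air_link_ms=1)    # 5G's ~1 ms over-the-air target

print(f"4G-class loop: {lte_loop} ms, 5G-class loop: {nr_loop} ms")
print(nr_loop <= 40)  # True: only the 5G loop fits under the 40 ms bound
```

Under these assumptions only the 5G loop fits inside the 40 ms lower bound of the interactive range cited above; on the 4G-class link, the air interface alone consumes most of the budget before any processing or rendering happens.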
Plan for the industrial mixed reality impacts of 5G now
Though we may not see mass U.S. availability of 5G at a consumer level until 2020 (at the earliest!), both B2C and B2B enterprises are beginning to experiment today with prototype devices and simulation networks. If you are utilizing AR and/or VR within your organization, it’s time to start considering the operational and managerial impacts 5G will have on your current mixed reality implementations and initiatives:
- Increased hardware needs: When more employees can access information on-demand using AR glasses, goggles or helmets — in any situation, anywhere — expect to feel the effect on your IT budget. Continually measure the increased efficiencies achieved to justify expenditures.
- Greater visibility into operations: Alongside faster internet connections and increased bandwidth comes the potential to increase the information garnered from onsite sensors within your industrial IoT network. Consider the software and hardware implications in storing, analyzing and accessing the extra data quickly and seamlessly through a new tactile internet — as well as the ability to build applications on the fly.
- New revenue streams and scalability: Once 5G is available on a global scale, the potential scalability of workers in virtual environments will bring on new business models. Think of the possibility of scaling one worker’s abilities to hundreds of locations without the expense of travel. Or, scaling training initiatives using AR and on-demand educational content.
- Renewed focus on change management: As with any new technology implementation in the enterprise, change management principles guide employees into newly supported “digital first” roles. Internal education, along with an acceptance of failure and learning by experimentation, can help your workforce understand the benefits of utilizing mixed reality to perform their duties.
5G’s effect on industrial AR/VR programs is one of many ways we’re beginning to see mixed reality take a stronger foothold in employee operations, site and product maintenance, training, development, sales and more. It’s an enabling technology that serves as the foundation for increased use of mixed reality, as well as other IIoT implementations that rely on fast, reliable data connections, coverage and speed.
How are you planning for 5G’s role in your mixed reality and IIoT initiatives?
From computers to robots to artificial intelligence, multiple key technology advances have been inspired by the need to replicate or emulate human intelligence, sensing capabilities and behavior.
Various sensors, such as acoustic, vision and pressure sensors, have been inspired by human hearing, vision and pressure-sensing abilities.
Undoubtedly, one of the most important human sensing abilities is vision. Vision allows humans to see the environment and to interpret, analyze and act on what they see.
Human vision is a remarkably complex, intelligent “machine” that occupies a significant part of the brain. Neurons dedicated to vision processing take up close to 30% of the cortex area.
Making devices, objects and things “see” the environment visually, as well as analyze and interpret, has been a key research area for a number of years.
Technological complexity, massive computation power requirements and prohibitive costs previously restricted vision sensing and intelligence to security and surveillance applications using surveillance cameras. However, the situation has changed dramatically; the vision sensor market has exploded. Cameras are getting embedded everywhere and in all sorts of devices, objects and things, both moving and static. In addition, computation power available on the edge and in the cloud has increased dramatically. This has triggered the embedded vision revolution.
The right sensor/camera price point, advances in vision sensor resolution and dynamic range, and the amount of computation power available to process video and imaging are leading to mindboggling growth and diverse applications.
Vision intelligence, enabled through a combination of classical image processing and deep learning, has become possible in today’s world of connected embedded systems, devices and objects, taking advantage of both edge computing power on the device itself and cloud computing power.
This has triggered rapid growth in self-driving vehicles, drones, robots, industrial applications, retail, transportation, security and surveillance, home appliances, medical/healthcare, sports and entertainment, consumer augmented and virtual reality, and, of course, the ubiquitous mobile phone. Vision and its intelligence have taken the IoT world by storm, and their impact is only going to increase.
In fact, no other sensor has had such a dramatic impact. So prevalent has video become in day-to-day life that most people take it for granted. From live streaming systems to video on demand to video calling, it’s easy to forget the dramatic impact vision sensors have had in a world of internet-connected environments and devices; the vision sensor is truly the unsung hero of IoT. Combine it with vision intelligence and the whole market takes on a new dimension.
The growth in prevalence of embedded vision has its roots in the explosive growth of mobile phones with embedded cameras. Prior to the mobile phone revolution, video cameras and video intelligence remained associated with security and surveillance. But then mobile phones with embedded cameras arrived, and this aligned with simultaneous massive growth in computation power for video analytics and intelligence, both on the edge and in the cloud. This explosive combination has led to remarkable growth, and vision sensors started getting embedded everywhere, from robots and drones to vehicles, industrial machine vision applications, appliances and more.
There are various types of vision sensors, but complementary metal-oxide semiconductor, or CMOS, sensors by far have had the largest impact and have led this explosive growth of vision sensors in various embedded systems and smartphones.
Sensors are everywhere — and are numerous. Self-driving cars today have more than 10 video cameras, drones have three to four video cameras, security surveillance cameras are everywhere, mobile phones are streaming live video. Video data from these sources is streamed for further intelligence in the cloud, while real-time edge processing is happening on devices and things themselves.
Vision sensor resolution, dynamic range and the number of deployed sensors continue to scale up with no end in sight. With massive amounts of video data produced by these sensors, the computation power required is naturally huge, as are the transmission and storage requirements.
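A quick back-of-the-envelope calculation shows why. The resolutions and camera counts below are illustrative, but the arithmetic makes the scale of the problem concrete:

```python
def raw_video_gbps(width, height, fps, bits_per_pixel=24):
    """Uncompressed data rate of one camera, in gigabits per second."""
    return width * height * fps * bits_per_pixel / 1e9

one_camera = raw_video_gbps(1920, 1080, 30)  # a single 1080p30 camera
fleet = 10 * one_camera                      # e.g., a car with 10 such cameras
print(round(one_camera, 2), round(fleet, 1))
```

Even before considering higher resolutions or frame rates, a single uncompressed 1080p camera produces on the order of 1.5 Gbps, which is why compression, edge processing and careful storage planning are unavoidable.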
Previously, there was a rush to stream video to the cloud for real-time or stored vision analytics. The cloud offered immense computation power, but the bandwidth necessary for transmission, even after compression, was high. Huge storage requirements, the latencies involved, and security and privacy concerns are making customers rethink the cloud and instead consider vision analytics at the device/object level, with offline video processing done in the cloud.
With the promise of low-latency, high-speed 5G connectivity, there is a push to distribute real-time video processing between the edge and the cloud. However, it remains to be seen how much of this is possible — if it is at all — and whether it makes sense for millions of endpoints to transmit real-time compressed video to the cloud, hogging transmission bandwidth.
The importance of edge analytics has created a market for various systems-on-a-chip (SoCs), graphics processing units (GPUs) and vision accelerators. The cloud, with GPU acceleration, is used for non-real-time video analytics or for training neural networks on large amounts of data, while real-time inference happens on the edge with the accelerators.
With deep learning and optimized SoCs now available, along with vision accelerators for classic image processing, the edge analytics trend is likely to continue, with additional events, parameters and intelligence pushed to cloud for further analysis and correlation. The cloud will continue to remain important for offline stored video analysis, while some systems can still do real-time analysis there.
Vision applications in the real world
Vision and the vision intelligence market continue to evolve rapidly. There are some striking technology trends happening, and they are expected to fuel the next massive growth of vision over the years. Here are a few examples:
3D cameras and 3D sensing. 3D cameras, or more generally 3D sensing technology, allow depth calculation in a scene and the construction of 3D maps of the scene. This technology has been around for a while, popularly used in gaming devices such as Microsoft Kinect, and more recently in the iPhone X’s 3D sensing for biometrics. Here again we see the market on the cusp of taking off, with smartphones providing the needed acceleration for a much wider set of applications. In addition, robots, drones and self-driving cars with 3D cameras can recognize the shape and size of objects for navigation, mapping and obstacle detection. Likewise, 3D cameras and stereoscopic cameras are the backbone of augmented, virtual and mixed reality.
Deep learning on the edge and in the cloud. Neural network-based AI has taken the world by storm, and again it’s the computation power available today that makes deep learning networks possible. Other contributing factors include the massive amount of data (videos, photos, text) available for training, and cutting-edge R&D by universities and tier 1 companies along with their contributions to open source. This in turn has triggered a lot of practical applications for neural networks. In fact, for robots, autonomous vehicles and drones, deep learning inference running on GPUs/SoCs at the edge has become the norm. The cloud will continue to be used to train deep learning networks, as well as for video processing of offline stored data. Split-architecture processing between the edge and cloud is also possible, as long as network latencies and video pipeline delays are considered acceptable.
SLAM in automotive, robots, drones. Simultaneous localization and mapping, or SLAM, is a key component of self-driving vehicles, robots and drones fitted with various types of cameras and sensors such as radar, Lidar, ultrasonic and more.
AR/VR and perceptual computing. Consider Microsoft HoloLens; what’s behind it? Six cameras with a combination of depth sensors. Microsoft even announced the opening of a computer vision research center for HoloLens in Cambridge, U.K.
Security/surveillance. This article does not focus on this area, which video and video analytics have traditionally dominated; it is a large market in itself.
Mobile phone- and embedded device-based biometric authentication. Biometric authentication can trigger the next wave of mobile apps, and again it’s the camera sensor, combined with video analytics at the edge and in the cloud, triggering this. As the technology matures, it will spread into various embedded devices.
Retail. The Amazon Go store is an example of using cameras and high-end video analytics. Soon we are going to have robots in aisles assisting humans, all outfitted with multiple cameras and vision intelligence along with other sensors.
Media. Video-based intelligence is already used heavily in the media industry. Video analytics can allow you to search through large video files for a specific topic, scene, object or face.
Sports. Real-time 3D video, video analytics and virtual reality are going to enable the next generation of personalized sports and entertainment systems.
Road ahead, challenges, motivations and concerns
A need for ever-increasing video resolution, wide dynamic range, high frame rates and video intelligence has created an ever-growing appetite for computation power, transmission bandwidth and storage capacity. And it’s hard to keep up.
A few companies are taking a different path to solve this problem. In the same way that neural networks are biologically inspired, ongoing research and commercialization have produced bio-inspired vision sensors that respond to changes in a scene and output a small stream of events rather than a sequence of images. This can result in a large reduction in both video data acquisition and processing needs.
This approach is promising and can fundamentally change the way we acquire and process video. It has high potential to reduce power consumption as a result of much-reduced processing requirements.
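The scale of the potential savings can be sketched with rough numbers. An event sensor's output varies enormously with scene activity, so the event rate and bits-per-event below are purely illustrative assumptions:

```python
def frame_stream_bits(width, height, fps, seconds, bits_per_pixel=24):
    """Bits produced by a conventional frame-based sensor (uncompressed)."""
    return width * height * fps * seconds * bits_per_pixel

def event_stream_bits(events_per_second, seconds, bits_per_event=64):
    """Bits produced by an event-based sensor; each event carries pixel
    coordinates, a timestamp and polarity (~64 bits is a rough figure)."""
    return events_per_second * seconds * bits_per_event

frames = frame_stream_bits(640, 480, 30, seconds=1)
events = event_stream_bits(events_per_second=100_000, seconds=1)
print(frames // events)  # ~34x reduction for this illustrative, mostly static scene
```

For a busy scene the event rate rises and the advantage shrinks; for a largely static one it grows further, which is exactly why these sensors are attractive for always-on, battery-powered vision.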
Vision will remain the key sensor fueling the IoT revolution. Likewise, edge video intelligence will continue to drive the SoC/semiconductor industry down its path of video acceleration using GPUs, application-specific integrated circuits (ASICs), programmable SoCs for inference, field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), accelerating classic image processing and deep learning while giving developers room for programmability.
This is a battlefield today, and various large established players and startups are aggressively going after this opportunity.
Low-power embedded vision
With the growth of vision sensors and embedded intelligence in millions of battery-powered objects, low-power embedded vision remains one of the prime factors for the industry's growth in the next era, yet it also remains one of the key problems to solve. Building products and systems with embedded vision and intelligence is going to raise privacy and security concerns that need to be handled properly from the design stage.
Despite the challenges, the future of embedded vision in IoT is bright and the market opportunity huge; the companies solving these challenges are going to reap huge rewards.
While the common belief is that the internet of things is inherently insecure, with the right security measures in place, the technology can be used to enhance network security. Indeed, with IoT visibility technology, safe authentication mechanisms and firmware updates in place, IoT is well-suited for a fruitful friendship with IT security professionals.
Let IoT take the reins
IoT devices are expert data collectors. Their capabilities to mine data on processes can streamline mechanical and manual processes in a way that not only automates them, but improves performance with machine learning over time. Indeed, IoT is a kind of intersection between the rapidly evolving worlds of automation, artificial intelligence and machine learning, which are key in helping reduce overhead from IT staff.
If an issue arises with a mechanical arm on the factory floor, or the office HVAC system is bugging out, IT staff no longer needs to directly address the issue — they can use data from the connected device that controls them to better understand the issue and reach an appropriate plan of action. Being able to address these issues from “afar” not only saves time that could be devoted to more productive tasks, it helps solve issues so as not to affect immediate productivity.
In addition, IoT devices can help paint a picture of potentially looming issues and vulnerabilities on the network, which can be factored into larger policy decisions and security strategy. While IoT security concerns should be considered, and steady awareness of potential vulnerabilities maintained, the devices can be used for IT benefit by saving precious time and reducing unnecessary effort.
Gain actionable intelligence
As IoT devices contain an array of valuable data, they can be used in an IT context to better understand the network and optimize security strategy. For instance, through an IoT device, it is possible to monitor network activity — which devices or appliances are taking up the most bandwidth, or where there may be a loophole in the network architecture. By analyzing and understanding the data provided by IoT devices, it’s possible to control for vulnerabilities before they ever occur, transforming that data into actionable intelligence. IoT can be used to provide intelligence all the way to the periphery of the network, as observed by Microsoft’s Jim Gray, who predicted that IoT devices would start working as mini-databases, collecting information on everything from servers to sensors. Though there is the immediate difficulty of processing all of this data, with the right data management tools in place, early adopters of IoT-IT technologies could see drastic improvements in the areas of network and connectivity management.
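The bandwidth-monitoring idea above can be sketched in a few lines. This assumes per-device flow records are available from some monitoring or collection tool; the record format and device names here are hypothetical:

```python
from collections import defaultdict

def top_talkers(flow_records, n=3):
    """Rank devices by total bytes transferred.

    flow_records: iterable of (device_id, bytes_transferred) pairs, e.g. as
    exported by a hypothetical flow collector on the network.
    """
    totals = defaultdict(int)
    for device, nbytes in flow_records:
        totals[device] += nbytes
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

records = [("camera-1", 900), ("hvac", 50), ("camera-1", 600), ("printer", 200)]
print(top_talkers(records, n=2))  # [('camera-1', 1500), ('printer', 200)]
```

A report like this, run regularly, is one concrete way the "actionable intelligence" described above surfaces: an unexpectedly heavy talker is exactly the kind of anomaly worth folding into security strategy.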
Understand IoT’s potential and limitations
Any good friendship is based on a mutual understanding of value — the value that you bring to the friendship and the value that they bring to the friendship. While it may seem utilitarian to view friendship in this way, there is a subconscious understanding between good friends that their immediate and long-term value is what is hanging in the balance.
The same is exceptionally true of IoT devices and IT professionals. IoT needs IT to function correctly and safely, and in some cases to have all the data that it collects understood. IT needs IoT to increase efficiency, reduce overhead and provide actionable intelligence. However, an IT professional that blindly embraces IoT without taking its limitations or security risks into account is bound to be let down (as an IoT device is an inanimate object, it doesn’t really work the other way around). Therefore, a clear understanding of IoT security risks, such as low computing power, weak memory, special-purpose operating systems and limited patch/firmware updates, is essential for the friendship to work and bear fruit.
The first step for IT professionals looking to appropriately embrace IoT technology is to put together a detailed implementation plan. It may involve a slow “phasing in” of simple IoT devices, like a smart coffee machine or connected security cameras, which can help IT pros better understand what needs to be in place before full-force deployment. In addition, it provides ample opportunity to assess the company’s network security stack, ensuring that the right technologies are in place. Visibility, control and management systems are essential if IoT is to work and live up to its end of the bargain. Following a small-scale deployment, and with the proper device discovery tools in place, IT teams are likely to be ready for a full-fledged friendship with IoT devices.
When it comes down to it, IT professionals don’t have much of a choice when it comes to embracing IoT technology. According to the majority of projections, there will be at least 30 billion IoT devices connected by the year 2020. Any IT professional that cares about their organization’s culture of innovation and security would be wise to befriend IoT now, before they are engulfed by a wave of devices to integrate, see, control and manage. At this stage, IoT is essentially asking IT, “Why can’t we be friends?”
IoT is one of the most talked about digital transformation initiatives impacting industrial enterprises today. Its ability to significantly improve decision-making has exceeded expectations. In fact, we recently found (note: registration required) that 95% of respondents believe IoT has a significant or tremendous impact on their industry at a global level, and 73% plan to increase their investments over the next 12 months. But without killer reporting and visualization tools, this transformative experience wouldn’t be possible. Teams would drown in their data, miss critical insights and more. So one caveat I always share with businesses new to IoT is to start planning, early in the process, the reports and visuals they anticipate carrying the most meaning for the business. Waiting until the last minute could result in frustration or missing the business objective altogether.
To avoid unfavorable results, here are five key factors to consider when building reports and visualizations that make it easy to understand what you — and your peers — want to learn from your connected enterprise.
#1. Pinpoint what needs to be communicated. IoT deployments collect more data than can possibly be utilized. You don’t have to look at it all, either; you can prioritize data based on your particular business objective(s). Matching the output report to your initial goal, such as improving asset uptime or reducing costs, is critical to extracting the most value from your data. Anything else can be a distraction. So, ask your stakeholders a) what they want to learn from the data, and b) what decisions they want to make from it. Incorporating only those data sets that align with the desired outcome can help you achieve success.
#2. Choose the right reporting style for your audience (and there will be more than one). There is a plethora of reporting options available natively in most platforms or applications. With some customization, you should be able to get up and running quickly. Reporting style preferences will vary depending on the audience. For example, dashboards and email alerts may be more fitting for the day-to-day needs of asset managers, whereas less frequent, higher-level reporting may be preferred across the C-suite.
But be warned that complex representations of data can easily lead to confusion, which I’ll go into next.
#3. Edit the story. Simplicity is the key to effective data visualization. However, it is very easy to create chaos in reports by underestimating the complexity of your assets and their associated variables.
Let’s consider a simple example from the transportation industry. For a semi-trailer rig, how is fuel consumption (e.g., mileage) related to engine horsepower? Generally, you can picture what the relationship would look like — as engine horsepower goes up, so too would the consumption of fuel (i.e., a positive correlation between a single independent variable and a dependent variable).
Now let’s make this simple example more complex: How would you visualize the relationship between fuel consumption, type of route traveled (in-city, long haul, short haul), engine horsepower, truck configuration, time of day and driver experience level, all in the same visual — that’s not easy. But, each of these variables could be analyzed and characterized separately in a very meaningful way.
So, when the usage scenario is complex, the best approach is to break the problem into multiple layers of analysis, each limited to two or three variables.
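Each of those simple layers can be as basic as one two-variable relationship at a time. A minimal sketch of the horsepower-versus-fuel view, using a plain Pearson correlation and hypothetical fleet numbers:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# One simple two-variable layer at a time; figures are hypothetical.
horsepower = [350, 400, 450, 500, 550]
fuel_per_100km = [30.0, 32.5, 34.0, 36.5, 38.0]
print(round(pearson_r(horsepower, fuel_per_100km), 3))  # strong positive correlation
```

Repeating the same two-variable view for route type, truck configuration or driver experience keeps each visual simple while still covering every factor, which is exactly the layered approach described above.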
#4. Consider enterprise data integrations. Cross-system reports can extend the value teams can get out of IIoT data sets. IoT applications can plug into popular reporting platforms, such as Amazon QuickSight, Power BI from Microsoft and Tableau. Here, asset managers can connect IoT asset information with other enterprise system data to create new data views that can be critical for business success.
For example, a quick-serve restaurant may want to forecast food demand in order to reduce inventory costs and waste. Sensors in entryways and the parking lot provide part of the picture, but it’s also necessary to tie that to historical point-of-sale data, as well as weather and traffic information. This additional context improves the accuracy of the predictions.
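A toy sketch of that kind of cross-system join, with made-up numbers and a deliberately naive weighting (a real forecast would come from a trained model, not fixed factors):

```python
# Hypothetical hourly data joined from three systems, keyed by hour of day.
baseline_traffic = {11: 40, 12: 100, 13: 80}    # historical entries per hour
historical_sales = {11: 120, 12: 300, 13: 260}  # meals sold in that hour, on average

def forecast_meals(hour, live_traffic, rain_expected=False):
    """Scale historical sales by live foot traffic; damp the estimate when rain
    is forecast. Purely illustrative weights, not a real demand model."""
    meals_per_visitor = historical_sales[hour] / baseline_traffic[hour]
    estimate = live_traffic * meals_per_visitor
    return estimate * (0.8 if rain_expected else 1.0)

print(forecast_meals(12, live_traffic=110, rain_expected=True))  # 264.0
```

The point is less the arithmetic than the join: neither the sensor feed nor the point-of-sale history alone produces the estimate; the value appears only when the enterprise data sets are combined.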
#5. Set best practices. Because a variety of teams may need to communicate IoT data to the same audiences, it’s important to set templates where possible so reporting is streamlined and cohesive. Publishing to a shared location with templates will help save teams time and allow them to standardize on what works well.
Regular tweaking of reports will also be required to keep aligned with your maturing IIoT deployment. Some other tips for creating compelling visuals include: orient text horizontally, clearly label data and use some color (but not too much color). The key is to let the visual do the talking. If you’re finding that you need to explain every detail, it’s time to fine-tune.
Effective reporting is one of the biggest drivers behind IIoT maturity. It enables industrial organizations to build on success and quickly spot and fix issues. As a result, businesses are able to reach the latter stages of IIoT maturity — analytics, orchestration and true edge computing — faster. It is also essential for getting the broad number of stakeholders involved in an IIoT deployment on the same page to ensure maximum value from your investments.
For every hit IoT product, there are dozens that unnecessarily met their demise.
Take Juicero, the creator of an unquestionably cool connected juice machine. Complete with a custom-built scanner, microprocessor and wireless chip, Silicon Valley’s darling juicer crushed sealed bags of fruit and vegetables to produce a drink.
Investors poured millions into the startup only to find that customers could squeeze the bags themselves to accomplish the same result more quickly. In his defense, Juicero founder Doug Evans was quick to point out that he spent years developing a dozen prototypes of the machine.
Some who hear Juicero’s story might take from it that IoT product prototyping just isn’t worth the trouble. True, prototyping an IoT product tends to be more costly and complex than other tech products. If anything, though, it’s more important with IoT products than standalone software. Prototyping is creators’ best tool for judging technical feasibility while also validating the product’s market fit — two areas in which IoT products are particularly likely to be tripped up.
Same results, fewer costs
To be sure, IoT prototyping is well worth the money. But if you’re concerned about costs, rest assured that there are plenty of ways to do it inexpensively and effectively:
- Build a simpler mousetrap. Your prototype does not have to be perfect; in fact, it shouldn’t be perfect. Identify and test only your product’s riskiest assumptions. For Juicero, for example, this should have involved testing whether users even needed the device to squeeze their juice. Think of your prototype as a test to see whether your product warrants further investment, not a production run.
- Test with a minimum of users. Don’t skip user testing because you think you need a big research department to do it. Find just five users to put your prototype in front of. Use their insights to cheaply tweak your design. For hardware components, in particular, post- or mid-production tweaks could cost millions. Let your prototype testers save you from your own blind spots.
- Focus on the service, not the features. Young IoT companies often try to load as many features as possible into their products. Instead, focus your prototype on the service your product will provide. Ultimately, you’ll wind up with a product that’s cleaner, easier to use, and cheaper to produce.
For example, why did Juicero bother to embed a scanner and blocking system into its app? Couldn’t it have just used a QR code scanner to log the packets and check their expiration dates? Save money and reduce the risk of being scooped by a competitor by building a no-nonsense prototype.
- Manage the device from an app. Another common mistake greenhorn IoT creators make is trying to do too much on the device’s firmware. By their very nature, IoT devices are connected to a mobile or web app. Why not let that software handle interface control and processing instead? Trust me; it’s much easier to update an app than it is to push a firmware update to a hardware device. Taking this route will lower development costs and shorten cycle times down the road.
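The "just five users" advice above echoes the classic usability-testing model, often attributed to Jakob Nielsen, in which the chance of surfacing a given problem with n testers is 1 - (1 - p)^n. A quick sketch of why five is usually enough:

```python
def discovery_rate(p, n):
    """Chance that at least one of n testers hits a problem that an individual
    tester finds with probability p: the classic 1 - (1 - p)^n model."""
    return 1 - (1 - p) ** n

# With the commonly cited p of roughly 0.31 per tester:
print(round(discovery_rate(0.31, 5), 2))   # five users surface ~84% of issues
print(round(discovery_rate(0.31, 10), 2))  # doubling testers adds relatively little
```

Diminishing returns set in quickly, which is why a handful of testers catches most blind spots at a fraction of the cost of a research department.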
Yes, prototyping takes time and money. But the alternative is to let the market test your IoT device’s most daring assumptions. Considering the costs of hardware production — not to mention the value of your company’s reputation — that’s one lemon that you don’t want to wait to squeeze.
If you are involved with IoT, you have witnessed a surge of activity around the idea of digital twin. The digital twin concept is not a new one — the term has been around since 2003, and you can see an example use in NASA’s experiments with pairing technologies for its Apollo missions. However, until recently, technology barriers made it difficult to create a true digital twin. Today, asset-heavy enterprises and others are using the breakthroughs in technology to plan or implement digital twin products or manufacturing processes. We can expect this interest and growth to continue: Gartner predicts that by 2021 half of all industrial companies will use digital twins and see an average efficiency gain of 10% from them.
The simplest definition of a digital twin is a digital image connected to a physical object by a steady stream of data flowing from sensors, thereby reflecting the real-time status of the object. The data stream connecting the physical and the digital is called a digital thread. In some implementations, the digital twin not only reflects the current state, but also stores the historical digital profile of the object.
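Under that definition, the core of a digital twin can be sketched in a few lines: a current state fed by the digital thread, plus a historical profile. The class, sensor names and asset ID below are illustrative, not any standard API:

```python
import time

class DigitalTwin:
    """Minimal sketch of a digital twin: mirrors a physical asset's latest
    state from a stream of sensor readings (the digital thread) and keeps
    a historical profile. Field names are illustrative."""

    def __init__(self, asset_id):
        self.asset_id = asset_id
        self.state = {}     # latest value per sensor
        self.history = []   # (timestamp, sensor, value) records

    def ingest(self, sensor, value, timestamp=None):
        ts = time.time() if timestamp is None else timestamp
        self.state[sensor] = value
        self.history.append((ts, sensor, value))

twin = DigitalTwin("press-07")
twin.ingest("temperature_c", 71.5, timestamp=0)
twin.ingest("temperature_c", 73.2, timestamp=60)
print(twin.state["temperature_c"], len(twin.history))  # 73.2 2
```

Everything a production twin adds (visualization, role-based views, benchmarking against historical data) builds on exactly these two pieces: the live state and the stored profile.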
We cannot overstate the importance digital twins will have for a number of industries, especially for manufacturing installations and processes that require close interaction between machines and humans. There are two key reasons for this: visualization and collaboration.
If you were to measure human senses in bandwidth, sight would rank highest. As a result, human decision-making relies on being able to see the situation in full and take the necessary action. This is why factory managers usually had an office overlooking the factory floor. Today, with manufacturing installations and machines becoming more complex, that advantage of being able to see the processes has largely disappeared. Instead, computerized systems feed data to shop-floor managers to enable decision-making through data sheets or basic charts.
A digital twin can combine the best of both worlds by presenting data to decision-makers in real time as an exact visual replica, including information previously not as easily available, such as temperature or internal wear and tear. In fact, a digital twin greatly enhances the efficiency of the visual bandwidth by removing non-critical information, processing basic information into a format that is much more easily absorbed and providing a more flexible (e.g., 360-degree or micro/macro) view of the system.
Finally, the visual aspect also helps immediately benchmark and compare across historical data or best-in-class data in real time. The potential of this aspect is tremendous as it identifies areas of improvement, shows areas of immediate concern and enables fast decision-making — all in real time.
The second critical aspect of a digital twin is the ability to share this digital view of machines irrespective of the viewer’s physical distance. This, therefore, allows a large number of individuals to see, track and benchmark manufacturing installations globally. This ability also removes the delay in reporting alerts to management, removes single points of failure due to human error and makes seeking expert help easier.
A digital twin expands the horizon of access of the shop floor to product managers, designers and data scientists. Armed with this new understanding of how processes and machines are working or not working, they can design better products and more efficient processes, as well as foresee problems/issues much earlier than before, saving time and reducing materials wasted on building physical models. They can also see the gaps between desired and actual, and do root cause analysis.
So, what should a business considering digital twins examine? First ask, “What do I need to know about my manufacturing operations that will allow me to drive decisions?” This forms the basis of what kind of data to capture and what kind of visualizations to implement. The follow-up question is, “What are the top three to five roles in my business for which I primarily want the digital twin?” The answer to this question can effectively clarify what views to create from the captured data. Digital twins, by definition, are customized to roles to ensure only relevant data is shown, thereby reducing visual clutter. The final step is to create an incremental roadmap to make the digital twin richer over time. This can be done by either adding more relevant data sets to the existing imagery or by providing access to a wider set of roles within the business. A great example of how to build an incremental digital twin is Google Maps. Google Maps today emulates location and traffic data in much more detail and more accurately than it did a decade back. It has constantly evolved over time in terms of richness of data and hence utility.
The benefits will be worth the preplanning that a digital twin requires. Industrial companies that have digital twins will be able to create sustainable competitive advantage due to better products, higher efficiency and faster release cycles (from product ideation to market). The key, therefore, is to start, even with smaller projects, and keep reinvesting the benefits/ROI to create better or more complete systems in the near future.