The internet of things, big data and machine learning technologies are shaping the next generation of business applications. These smart applications power innovative digital enterprises by being:
- Intelligent: They make recommendations and guide users to take the next best action;
- Proactive: They predict what is likely to happen and trigger workflows telling users what to do when; and
- Context-aware: They are personalized, aware of users’ location and embedded in their processes.
Over the next few years, the app user interface as we know it will slowly disappear; interaction with apps will shift to push notifications, messaging systems, conversational UIs like chatbots, and plug-ins in existing productivity software.
These complex smart applications are in high demand from businesses, which are looking to differentiate and learn more about their users, and from users, who expect personalized and contextual experiences. However, building these applications is challenging and requires many different components and building blocks. Let’s break down the anatomy of a smart application.
The anatomy of a smart application
When thinking about what makes up a smart application, each building block fits into one of the following columns:
- Compute: Smart apps compute information; this translates to behavior.
- Communicate: Smart apps communicate information; this is the messaging.
- Store: Smart apps store information; this translates to the state of information.
We can apply this categorization to each layer of the anatomy stack necessary to build a smart app.
The real endpoint for IoT is obviously the thing that should be connected, whether physical products, like cars, jet engines and lighting systems, or other “things,” like livestock, crops, human beings or spatial areas like rooms or outdoor space. In order to build a smart app, you need to be able to connect to things. This is the first layer in the anatomy of a smart app.
This layer is made up of sensors that collect and report data on the actual status of things to which they’re connected. Sensors could be mounted on or embedded in things to monitor temperature, pressure, light, motion, location, etc.
Along with sensors, this layer is made up of actuators that control the physical or logical state of a product through signals they get from IoT apps or other systems, like opening a valve or turning a camera, motor or light on/off. This includes commands sent to embedded software, e.g. to reboot or update configurations.
Lastly, this layer includes the local storage in case the device is not connected when it receives new data.
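The buffering role of that local storage can be sketched in a few lines. This is an illustrative sketch, not any particular device SDK; the `EdgeBuffer` name and bounded-queue policy are assumptions:

```python
# Device-side buffering sketch: readings queue up locally while the
# uplink is down and are flushed in order once connectivity returns.
from collections import deque

class EdgeBuffer:
    def __init__(self, capacity=1000):
        # Bounded queue: if storage fills, the oldest readings are dropped first.
        self.queue = deque(maxlen=capacity)

    def record(self, reading):
        self.queue.append(reading)

    def flush(self, send):
        """Drain buffered readings through `send`; returns how many were sent."""
        sent = 0
        while self.queue:
            send(self.queue.popleft())
            sent += 1
        return sent
```

The drop-oldest policy is one possible choice; a real device might instead refuse new readings or compact them, depending on how much local storage the hardware allows.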
Next, it is necessary to feed the data into the cloud, requiring a layer to ingest all of the data. This layer includes agents, components that mediate between a set of IoT devices and act as a bridge between the sensors/actuators and the cloud, deciding what data to send and when. In reverse, they also process commands and updates coming from the cloud.
The layer is where, for example, Amazon IoT, Azure IoT and Bluemix IoT live. They can filter, transform and act on events, as well as provide device management.
Within the communication column of this layer, protocols are needed to actually bridge the physical and digital worlds. For IoT device communication, the physical layer is distinguished from the communication protocols. As far as the physical layer is concerned, gateways, mobile devices, mesh networks, and direct or broadcast device communication are alternatives that may or may not be suitable depending on the use case. The choice of physical layer will determine which communication protocols are most suitable (e.g. MQTT, CoAP, HTTP(S), AMQP, Zigbee, Z-Wave).
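Whichever transport is chosen, the application still has to agree on an addressing scheme and payload encoding. A minimal sketch, using an MQTT-style hierarchical topic and a JSON payload (the `site/device/sensor` layout and field names are assumptions for illustration, not a standard):

```python
# Build an MQTT-style topic and JSON payload for a sensor reading.
import json
import time

def make_publish(site, device, sensor, value, ts=None):
    # e.g. topic "plant1/pump-07/temperature"
    topic = f"{site}/{device}/{sensor}"
    payload = json.dumps({
        "v": value,                                   # the reading itself
        "ts": ts if ts is not None else int(time.time()),  # epoch seconds
    })
    return topic, payload
```

A constrained device might swap JSON for a binary encoding such as CBOR, but the topic/payload split itself carries over across MQTT, AMQP and similar brokers.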
The next layer is the cloud infrastructure, made up of containers to run services and applications, messaging middleware and traditional database platforms as a service.
On top of cloud infrastructure comes the layer of app services that are crucial to building smart applications. These app services include predictive analytics, machine learning and cognitive services, along with the REST APIs to communicate with the things and trigger actions, and the data warehouses and big data stores to store all the information from the things.
When looking at all of the previous layers, it is evident that it is quite complex to build smart applications. There are many different components and building blocks necessary to build smart apps, most likely from a range of different providers. This means it is necessary to master a lot of skill sets to be able to build smart applications. In order to fill this skills gap, another layer must be added on top of these IoT, big data and machine learning services.
Model-driven platforms, also known as low-code platforms, provide a model-driven environment for collaborative, visual development of smart apps. In addition, core services for software configuration management, as well as branching and merging, are needed for development teams to commit their work, and create builds and application packages.
Model-driven platforms support omnichannel UI and the development of cross-platform, responsive and multichannel apps optimized for specific form factors.
These platforms provide a range of out-of-the-box connectors to connect with all the underlying smart services from different providers. They also include the domain models and data mappings to connect all the incoming data in a visual way to your application.
Last but not least, these platforms provide seamless integration with enterprise back ends and third-party services needed to manage workflows and make smart apps contextual.
The elements that define the anatomy of a smart app may come across as overwhelming. The type and level of sophistication of the system will determine how many of the elements and services described are needed to create an end-to-end solution.
Nevertheless, it’s clear that the diverse set of endpoints, network technologies, protocols, IoT software and application development services pose a challenge for enterprises planning to adopt IoT to transform their business operations. The question is: How do you make IoT technology development manageable? The answer lies in adopting a platform approach and including the additional model-driven platform layer.
Adopting a model-driven, low-code platform will significantly simplify the process of connecting, managing, getting insight from and building apps for IoT-enabled products and services. These high-productivity platforms are uniquely suited to help IT leaders and their teams address the challenges of lack of agility, technical complexity and skills shortage by facilitating rapid, iterative development essential to smart app success.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
Carmakers are definitely in the running for pole position in new mobility. Their core challenge? To master the new digital technologies now taking root within the vehicle itself, as well as in the secure delivery of new digital services directly to the consumer.
The digital and connected car already poses significant security challenges for the automotive industry. The electronic backbone of most cars — the CAN bus — was designed decades ago, well before people thought about cybersecurity.
Since then, innovations such as connected HD displays, 4G connectivity, assisted driving and a myriad of comfort features have each added their own electronic control unit (ECU) to the CAN bus. Today, upwards of 100 ECUs can be found in the typical new car, all listening and talking to the CAN bus.
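The traffic those ECUs exchange is framed very simply, which is part of why the CAN bus is hard to secure: frames carry no sender authentication. As a hedged illustration, here is how a classic CAN frame can be unpacked from Linux SocketCAN's 16-byte wire layout (32-bit ID, 1-byte DLC, 3 padding bytes, 8 data bytes); this is a parsing sketch, not a security tool:

```python
# Unpack a classic CAN frame in SocketCAN's 16-byte layout.
import struct

CAN_FRAME = struct.Struct("=IB3x8s")  # can_id, dlc, padding, data[8]

def parse_can_frame(raw):
    can_id, dlc, data = CAN_FRAME.unpack(raw)
    return {
        "id": can_id & 0x1FFFFFFF,  # mask off flag bits, keep the arbitration ID
        "data": data[:dlc],         # only the first `dlc` bytes are meaningful
    }
```

Note what is absent: nothing in the frame says which ECU sent it, which is exactly the property attackers exploit on an unhardened bus.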
Connection extends beyond the vehicle
Connected cars now extend both services and data beyond the vehicle itself. Most new cars already communicate with their drivers through remote keyless systems and smartphone apps controlling features like programmable heating, GPS and in-vehicle infotainment.
Connected cars also feed data on location, RPM, speed, mission-critical ECU failures and diagnostic functions back to the automaker’s back end. Full internet coverage, including streaming music and video, will soon be widespread in the market, and new standards like automotive Ethernet are on the horizon.
There’s much more to come. Assisted and autonomous driving will add more ECUs and data-generating sensors to the vehicle’s architecture. Cars will exchange data with other cars and surrounding infrastructure such as charging stations, traffic lights, stop signs and even pedestrians crossing the street around the corner. Autonomous cars will also free up time for passengers to watch movies and listen to music.
Moreover, with the rise of the internet of things, there will be even more opportunity for connection between vehicles and drivers’ personal devices, whether smartphones, wearables, health monitoring devices, sensory augments like smart glasses or other devices we can only speculate about right now. The types of interactions between automobile and driver that may become feasible in a decade — or less — will put even more burdens on connected cars.
In early 2017, Intel reported that autonomous cars will soon generate 4 TB of data each day, including inter-ECU traffic as well as external communications. That kind of high-volume connectivity creates tremendous potential for hackers to read or manipulate data. The ideal solution would be “security by design,” rebuilding the electronic backbone of the car around today’s knowledge and experience of cybersecurity.
Assembling a digital technology ecosystem
Digital technology is far from the traditional core competency of an automotive OEM. This includes not just the technical enablement of the connected car, but also its relevant services platforms and KPIs, as well as the entire commercial and technical framework for new mobility services (car, payment, connectivity, media, etc.).
For an OEM undergoing transformation, the question is: Shall I rebuild my entire ecosystem with all the needed services infrastructure and build an OEM-specific walled garden? Or shall I focus on my own core competencies and experiences, and build the APIs needed to interface with third-party services that already exist on the customer side? In either case, a perfect, seamless user experience requires several services categories to be combined:
- Connectivity to bring connected cars online — not the traditional connectivity of telematics, but the customer-focused connectivity to enable user services from voice calling to video streaming that turn the car into the third living room.
- Payment to activate m- and e-commerce services to pay for everything from onboard digital content to tolls, energy, parking and drive-thru food.
- Public services to seamlessly allow a consumer to drive or use a specific category of vehicles at a given moment, including public transport.
- Energy to recharge or refuel a car, or let the car recharge or refuel itself.
- Travel- and event-related services beyond pure mobility that integrate seamlessly into the purpose of the trip, such as digital event tickets, digital hotel keys and digital ambient tour services.
Services in all of these categories are already available, accessed through digital, user-specific service credentials or logins — and they’re already used and paid for by consumers in their daily lives.
Digital identity drives the ecosystem
Considering the complexity of this environment — especially in light of the fact that the user is in motion, transitioning constantly between networks and nodes — the need for robust digital identity technologies is all too apparent.
Without them, the entire ecosystem described above isn’t simply at risk; it is impossible to imagine as a viable reality. User identities and the services associated with new mobility, from financial to communication to even social media, will depend on portable yet defensible digital user identities for any integration to succeed.
In our next article, we’ll delve into how new mobility will impact the very business models that carmakers and other technology contributors will need to evolve in order to deliver this new model. Without that evolution, they’ll be under a very real threat of extinction.
Companies are doing a lot of interesting things in the IoT and industrial IoT space, but it often seems to me that while there are a lot of exciting and flashy features taking place at the edge, not a lot is being done with the collected data afterwards.
For example, we are being inundated with smart retail, smart manufacturing, self-driving cars, robotics and other amazing use cases. These things are incredible, but we don’t see much trend analysis afterwards or real-time reaction to collected data. We should be seeing far more insights from the edge. We see basic things, but the advanced analysis we have come to expect from machine learning and predictive analytics is missing. Furthermore, we are not seeing the real-time data analysis and reaction that are possible.
This, however, was just my general impression of the industry, based on many conversations I’ve had with developers, customers and partners. This article — “Internet of things data proving to be a hard thing to use” by Sharon Shea — takes a much deeper look and confirms my suspicion. It states, based on research by Forrester, that only 1% of IoT data collected is ever used. That is staggering, but it should not be surprising.
Why are companies collecting so much data, yet using so little? I believe the answer to be very straightforward — traditionally this has been a very hard problem to solve. This is compounded by the fact that most of the people solving these problems, while incredibly smart, are first and foremost electrical engineers and do not have strong backgrounds in big data.
Most engineers implementing IoT projects, with good reason, are heavily focused on hardware challenges like battery life, device size and selecting the right manufacturer. They are also obviously focused on solving their primary use cases and getting to market quickly, and not as concerned with value-added analytics, which normally come later.
They do not have time to design and build the traditional big data architectures required to consume IoT data and transform that data into valuable insights, identify patterns and make it actionable. They don’t have the time, skill set or incentive to fully optimize their data value chain.
How could they? The technologies available on the market either offer very basic functionality, are very complex or are very expensive. Often a data value chain optimized for trend analysis and analytics comprises five to seven different products. These might include a caching mechanism on the device, an IoT gateway, a NoSQL database, a SQL database, a MapReduce technology and middleware to tie it all together. This requires a large investment in licenses, infrastructure and technical resources, on top of the capital already spent on IoT hardware.
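Even the first link in that chain, the on-device cache, is real engineering work. A minimal sketch of what it does, using SQLite (a common embedded choice) to batch readings before they are forwarded upstream; table and function names here are illustrative assumptions:

```python
# Device-local SQLite cache: readings accumulate between uplink windows,
# then drain oldest-first when the gateway is reachable.
import sqlite3

def open_cache(path=":memory:"):
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE IF NOT EXISTS readings (ts REAL, sensor TEXT, value REAL)")
    return db

def cache_reading(db, ts, sensor, value):
    db.execute("INSERT INTO readings VALUES (?, ?, ?)", (ts, sensor, value))

def drain(db):
    """Return all cached readings oldest-first and clear the cache."""
    rows = db.execute("SELECT ts, sensor, value FROM readings ORDER BY ts").fetchall()
    db.execute("DELETE FROM readings")
    return rows
```

Multiply this by a gateway, two databases and middleware, and the integration burden the article describes becomes concrete.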
As a result, they often turn to technologies that provide the most basic of functions and allow them to move forward with the least resistance, planning that, after reaching market, they will circle back to optimizing their data value chain to monetize the other 99% of the data they are collecting. As we know, this rarely, if ever, happens.
As I mentioned above, solving big data problems is really complicated. Database vendors have taken the path that offloads that complexity onto the developer instead of internalizing it and presenting simple solutions to complex problems. Furthermore, the people innovating within the technology landscape are very different from those of 15 years ago, yet the most popular database technologies, like MongoDB and SQLite, are 11 and 18 years old respectively.
Folks driving innovation today are electrical engineers, data scientists, artificial intelligence researchers and quantum physicists. These are all incredibly smart folks, but they are not first and foremost computer programmers. Furthermore, most new developers do not have four-year degrees in CS, but rather graduated from code school. As a result, the data management industry needs to focus on providing solutions to massive big data problems that empower these folks to focus on building incredible innovations, and not worry about overly complex and expensive big data architectures that leave 99% of data unused.
As programming becomes a more cross-disciplinary skill, it is unreasonable to expect database end users to have 10, 15 or 20 years of experience implementing data management systems. These technologies need to be easy to install, easy to maintain and easy to use. This is especially important in IoT. In other verticals and workloads, the solution to the above has recently become DBaaS. That does not work in IoT, where your data value chain begins far outside the cloud.
Adoption of IoT, in turn, is going to continue to drive adoption of hybrid cloud. As a result, the solution to these problems cannot simply be providing a managed service for an incredibly complex product. The product itself needs to be simple and easy to use, yet highly scalable, so that it can be deployed and managed directly on the edge and in the cloud with the same level of effort and skill set. In order to continue to drive and support innovation, the data management industry will need to transform to support a new generation of innovators.
Organizations around the world are often flooded with “big data” buzzwords that detail how to better use customer data to drive business decisions. But what does this mean for the industrial sector, where data issues are often less understood by the general public? How are emerging technologies legitimately addressing industrial challenges such as preventative maintenance, machine-asset uptime and other real-world industrial problems? Lastly, what can engineers and analysts do to get better answers from their data?
As industrial operations continue to mature, customers in manufacturing, oil and gas, transportation, energy and utilities are challenged with making sense of industry trends and determining how and why they apply to their day-to-day activities. Here are five key technology trends Industry 4.0 is talking about today that are making manufacturers rethink planned versus unplanned maintenance.
Internet of things
The commercialization of IoT is rapidly decreasing costs and opening up the opportunity to not only create, but also correlate new data streams with existing industrial data. By retrofitting machine-assets with inexpensive sensors, then correlating sensor data with traditional industrial control systems, operators are able to monitor the conditions of assets in real time, shifting their maintenance strategy from reactive to proactive.
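That reactive-to-proactive shift can be illustrated with a toy condition monitor: a retrofit sensor feeds readings in, and maintenance is flagged when a rolling average drifts past a limit. The class name, window size and threshold below are invented for illustration:

```python
# Toy condition monitor: flag maintenance when the rolling average of a
# retrofit sensor's readings exceeds a configured limit.
from collections import deque

class ConditionMonitor:
    def __init__(self, limit, window=5):
        self.limit = limit
        self.window = deque(maxlen=window)  # keep only the last `window` readings

    def update(self, reading):
        self.window.append(reading)
        avg = sum(self.window) / len(self.window)
        return avg > self.limit  # True -> schedule proactive maintenance
```

Averaging over a window rather than reacting to single readings is what distinguishes condition monitoring from a simple alarm: one noisy spike won't trigger a work order.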
Big data
Big data comes in many different shapes and forms and for most, it is challenging to discern real business value from large volumes of data across IT, OT and IoT systems. Data is often isolated across islands of information with no shared understanding. Emerging big data technologies are simplifying the process of correlating and combining data from alarm management systems, for example, with real signals coming from equipment, providing a holistic view across multiple sources. This new wave of big data for industrial operations enables engineers to troubleshoot issues in a fraction of a second, and moves the organization closer to predictive maintenance and zero unplanned downtime.
Machine learning and advanced analytics
New statistical methods and algorithms allow engineers and statisticians to use large volumes of historical and real-time information to calculate and predict outcomes in ways impossible or impractical for humans alone. Data science and machine learning are opening new opportunities to provide objective investigation of asset data in order to detect novel conditions, make previously unknown connections between assets and data sources and, as a result, forecast future performance with astounding accuracy.
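At its simplest, forecasting from historical readings means fitting a trend and extrapolating. A minimal least-squares sketch, purely illustrative (real deployments would use proper ML tooling, and the function name is an assumption):

```python
# Fit a least-squares line to evenly spaced historical readings and
# extrapolate it `steps_ahead` intervals past the last observation.
def forecast(history, steps_ahead):
    n = len(history)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(history) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    return intercept + slope * (n - 1 + steps_ahead)
```

For a wear indicator climbing toward a failure threshold, even this crude extrapolation gives an estimated time-to-failure; the statistical methods the paragraph describes refine exactly this idea with seasonality, confidence intervals and multivariate inputs.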
Artificial intelligence
Artificial intelligence is no longer science fiction — it’s real. Manufacturers can use AI to move beyond predicting maintenance needs and into forecasting trends and business requirements. Beyond accurately predicting the time, location and reason for future failure, AI holds the promise of being able to recommend the most effective mitigation of potential failures and provides the foundation for autonomous maintenance and repair.
Augmented reality
Augmented reality (AR) has the most impact on the way maintenance technicians and operators interact with assets in the field. In the past, operators were required to bring static information like technical and procedure documentation to the job site. Now, mobile technology gives the mechanic real-time access to the same information, but the hands-on interaction is prone to human error. AR technologies give mechanics hands-free access to maintenance-related information such as audio and video. This real-time data provides a new dimension of problem detection, changing maintenance strategies on the fly.
As industrial businesses continue to embrace machine data, they should look at technology that is most relevant to their business outcomes. With minimizing unscheduled downtime and advancing maintenance strategies top of mind for industry, analysts and engineers will better serve their business if they are able to communicate these trends and clearly demonstrate the long-term business value each provides.
There has been lots of hype recently about the increasing use of biometrics to transform our lives, especially in the area of face recognition for identity verification. To be fair, Samsung and other device makers have offered 2D unlocking technology on mobile phones for a while, but unfortunately, the results have been inconsistent at best.
The latest approaches to face authentication use today’s tiny 3D cameras and AI-driven software to provide a much better technology. Apple, as it so often does, raised the bar again when it introduced FaceID on its latest handset, the iPhone X. While it is certainly convenient to be able to unlock an overpriced smartphone using the relationship between your eyes, nose and mouth as credentials, this is still a very localized and comparatively low-impact use case.
But the next wave is coming. Imagine being able to recognize people using a 3D camera and AI-driven software, regardless of lighting conditions and not impacted by environmental factors like weather, body angle, positioning and location.
Welcome to the exciting new world of in-motion 3D face authentication.
We are starting to see the application of 3D and AI software in the domain of transparent automated gates. More and more 3D cameras will be deployed to support in-motion settings, enabling security personnel to more easily monitor access to large public venues. Airports, train and bus stations, malls, popular tourist attractions, government buildings, hospitals, ports and more could potentially benefit from this new surveillance technology. 3D face authentication in these settings has the potential to deliver a transformative effect, way beyond facilitating access to your latest selfies.
The use of 3D cameras removes variables caused by environmental conditions, such as camera location and subject size, as well as behaviors including walking speed, face direction and sight line. The resulting high-definition 3D images are being used by security services to harvest valuable information captured in realistic outdoor environments regardless of the time of day — in bright sunshine at noon or total darkness in the middle of the night.
By analyzing multiple images from different directions and extracting key information, the tools can create a 3D version of a person’s face, which can then be rotated and viewed from different angles. Skin reflectance data is also captured and can be used to produce synthetic poses of any face captured by the device.
While using images from video cameras for face recognition has been around for a while, we are about to see powerful, tiny 3D cameras and AI-enabled software transform in-motion security. Virtually spoof-proof face authentication is the next phase in high volume people identification, making the jobs of security personnel easier and citizens safer.
Earlier this year, botnets attacked networked security cameras, shining a light on the vulnerability concerns around industrial systems and the industrial IoT. As with any device connected to the internet, these types of devices have characteristics that make them a compelling target for botnet authors, as well as other types of malware. These devices typically have full-time, high-speed network connections, run embedded Linux, and lack monitoring systems and screens or logs that might alert a user to a hack. Additionally, many of these systems are designed for limited rollout, or come from a company that has paid limited attention to hardening or security. This combination of powerful networked systems and easily breached defenses allows botnets to thrive. In the last few years, malware such as Mirai and Bashlite has taken advantage of vulnerabilities in these IoT devices, and these weaknesses should be kept in mind as the industry designs the next generation of IoT and IIoT devices.
The typical Linux system embedded in IIoT devices uses dozens to hundreds of open source packages. While these components are typically high quality, all software contains defects and, over time, vulnerabilities in these components are discovered and eventually taken advantage of. Many of these devices aren’t designed to be auto-updated and depend on software from commercial and open source organizations that have vulnerabilities discovered every few weeks to every few months.
It’s becoming a best practice to pay attention to a device’s software bill of materials, with special attention to components with known vulnerabilities as seen in places such as the National Vulnerability Database. For IIoT software providers, keeping track of the list of components used in the operating system as well as the application itself is a necessary precaution to stay ahead of malware authors — especially when implemented in tandem with a rigorous patching system.
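In essence, a bill-of-materials check is a join between the device's component list and a vulnerability feed. A hedged sketch of that lookup, with a hand-written advisory table standing in for a real NVD query (the function name and data shapes are assumptions; the OpenSSL/Heartbleed pairing is real):

```python
# Compare a device's component versions against a table of
# known-vulnerable releases and report the matches.
def audit_bom(bom, known_vulns):
    """bom: {name: version}; known_vulns: {(name, version): advisory_id}."""
    return [(name, ver, known_vulns[(name, ver)])
            for name, ver in sorted(bom.items())
            if (name, ver) in known_vulns]
```

Production software composition analysis tools do the same thing at scale, matching version ranges rather than exact versions and re-running the check as new advisories land.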
The irony is that sometimes update systems can be used by malware authors to spread their malware. This occurs when secrets, such as hardcoded passwords, are shared across multiple devices or device families. Many current malware systems use this trivial vulnerability to spread themselves, but as this vector gets locked down, many are moving to taking advantage of common vulnerabilities — such as those seen in OpenSSL, Bash or shared commercial firmware, as seen in DVRs or camera boards.
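One common fix for the shared-secret problem, sketched under simplified assumptions: derive a unique per-device credential from a factory master key and the device's serial number, so compromising one device reveals nothing about its siblings. Key storage and rotation are deliberately omitted here:

```python
# Derive a per-device secret with HMAC-SHA256 over the device serial,
# keyed by a factory master key that never leaves manufacturing.
import hashlib
import hmac

def device_secret(master_key: bytes, serial: str) -> str:
    return hmac.new(master_key, serial.encode(), hashlib.sha256).hexdigest()
```

The derivation is deterministic, so the backend can recompute any device's credential on demand, while no two devices ever ship with the same hardcoded password.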
Today, products and services are available that are designed to help IIoT system designers keep track of their use of open source and commercial dependencies, as well as get alerts when new vulnerabilities are discovered in the components they’re using. This allows them to create products that don’t contain known vulnerabilities when first shipped, and to stay on top of components as they age out when deployed in the field. This type of scanning and management software is known as software composition analysis software.
Maintaining a software bill of materials and practicing ongoing monitoring and frequent patching are two crucial steps IIoT software manufacturers need to take to ensure their products are safe from hackers. Software composition analysis software can help developers manage these requirements and ensure companies are shipping a device that respects the open source community, as well as protects the company’s users from attacks.
For many people, the cloud has come to represent the backbone of the industrial internet of things. But, enterprises really making progress with their IIoT visions are starting to realize that cloud is only one part of their IIoT universe. Operations that need their computing done in real time are discovering that there are certain things that cannot or should not be pushed to the cloud — whether it be for security, latency or cost concerns — and are therefore beginning to push more and more computing to the edge of their networks.
The growth in edge computing has not only created more data, but also a greater need for speed in making that information available for other systems and analytics. Cloud computing is convenient, but its connectivity often just isn’t robust enough for certain industrial situations. Some computing will always need to live at the edge, such as real-time processing, decision support, SCADA functions and more. There’s no sense in forcing these functions into the cloud when 100% cloud adoption just isn’t necessary; the cloud can instead be used for non-real-time workloads like post-processing analytics or planning.
A real-world example
Consider an example from the energy industry that demonstrates edge and cloud playing their most appropriate role. Companies can have hundreds of oil drilling rigs dotted across a region, with the company headquarters where the data center or cloud resides being hundreds or even thousands of miles away. At each of the oil rigs, or the edge, it’s necessary to have systems that provide continuous monitoring and analysis of key parameters — like well pressure levels — with the ability to identify when critical thresholds are at risk of being exceeded, allowing operators to take immediate action to mitigate them. It could pose an unreasonable risk to wait for this data to travel back to the data center, undergo analysis and direct actions back to the rig.
In this instance, the cloud would be better suited to support planning and trend-spotting by collecting metrics from all of the oil rigs and periodically sending them to the data center or cloud where they can be aggregated and analyzed.
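The split described above can be sketched in a few lines: the edge node reacts to threshold breaches immediately, while only a compact summary travels upstream for trend-spotting. Field names and the pressure limit are illustrative assumptions:

```python
# Edge-side processing sketch: raise local alerts for out-of-range
# pressures right away; ship only an aggregate summary to the cloud.
def process_at_edge(pressures, limit):
    alerts = [p for p in pressures if p > limit]  # act on these locally, now
    summary = {                                   # periodic upload to the cloud
        "count": len(pressures),
        "max": max(pressures),
        "mean": sum(pressures) / len(pressures),
    }
    return alerts, summary
```

Note the bandwidth asymmetry: however many raw readings arrive, the cloud-bound summary stays a fixed size, which is exactly why this division keeps both latency and transfer costs down.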
Operational technology teams have managed these systems for years. They understand the network edge and what it requires. But there’s a cultural divide between OT and those IT professionals pushing cloud-based IIoT. The IT teams often equate the approach necessary for industrial automation with that of enterprise IT deployments.
But even traditional IT enterprises have figured out that it’s a hybrid cloud world. An industrial automation engineer I spoke with recently told me that 15% of their data generated from the plant floor needs to be sent to the cloud to be immediately available to other systems. What happens with the other 85%? That portion must be aggregated and analyzed to determine how it can be valuable. If it’s all being pushed up to the cloud, however, industrial businesses are now paying for all that capacity when they really only need a fraction of it. That’s a major cost issue.
For those beginning to implement an industrial IoT strategy, what’s important to remember is to not make an investment decision before first carefully evaluating your workloads and flow of information. The cloud is absolutely a necessary piece of IIoT deployments, but that doesn’t mean you should abandon edge computing systems that can continue to keep valuable and mission-critical information safe and quickly attainable. Finding the right balance will help operators find success with IIoT.
Predictions about the emergence of “smarter” cities are gaining momentum in popular conversation. Envision a network of city sensors that captures data to better optimize processes and react to events. These sensors identify things like potholes, gunshots, air quality, traffic patterns, water leaks and a whole host of other municipal issues.
The end result is a better living experience for city residents — but that’s not the only driver. Businesses have generated a plethora of ideas to compete for municipal budget. Cities see the opportunity to gain efficiencies and further stretch precious tax dollars. Together, they are working to best implement the city of the future.
What’s driving cities into IoT?
Building smarter cities meets a lot of critical requirements for civic leaders. Smart cities enable them to:
- Identify, quantify and remove operational overhead,
- Improve the safety and happiness of citizens, and
- Increase the population and tax base.
Embracing IoT isn’t just about making life better for citizens. It opens up new opportunities for cities to gather, process and act upon events happening locally.
And once cities start ingesting data in a “smart” way, they can continually improve processes and further extend tax revenues — giving citizens more for their money.
And that virtuous cycle of improvement furthers growth in forward-thinking cities. By optimizing processes and increasing citizen satisfaction, smart cities are transforming themselves into the ideal location for exactly what they need: more tax-paying, technologically-savvy citizens.
We see it happening already, as technology hubs entice more residents each year — versus their counterparts that fail to innovate. Smart cities can enable ecosystems, entice technology pioneers and provide gravitational pull for diverse arts and ethnic communities.
A better approach
Municipalities that take a proactive approach to IoT are learning sophisticated adoption techniques and the resulting benefits. They view IoT as an enabling technology, rather than just deployed and managed software. Cities are beginning to treat IoT like a utility — a service that should be provided to all citizens.
Take electricity, for example. Rather than installing lights in homes, the city simply provides an agreed-upon voltage and wattage to each dwelling. The city’s electrical grid provides a standard power supply for refrigerators, televisions and hot, clean water. This standard and the supporting base infrastructure enable healthier citizens, greater productivity and the availability of new lifestyles — with zero municipal implementation required.
Cities can take this same approach with IoT by ensuring that core city services are available for connection and integration. Providing an IoT platform to citizens means they can build their own IoT technologies connected to municipal infrastructure. Budding entrepreneurs can find ways to optimize traffic and parking, while established property management companies can support more safely monitored homes and businesses.
Basic qualities like data availability, integration with city management functions and access to artificial intelligence empower residents as agents for change. And cities will now be able to meter residents’ activities individually. This innovation will allow municipalities to generate revenue in ways they never imagined.
An IoT city vision
Picture a city that provides enabling technologies to citizens through a variety of vendors. LPWAN providers, IoT software providers, cloud vendors and many more will work together with civic leaders and citizens to achieve common goals.
Cities looking to future-proof themselves will choose integrations built on open standards, across many clouds and on any hardware, all with the ability to easily migrate to new providers in the future. These technologies will improve the lives of residents and give local companies a competitive advantage.
Cities have a unique opportunity now to skip the temporary solution phase of IoT and jump directly into empowering citizens to change communities for the better.
I recently returned from the Mobile World Congress in Barcelona and was amazed by the number of new sensors, devices and technologies on offer. Mobile connectivity is no longer just about cell phones. Now, everything from augmented reality glasses to connected cars is improving lives in industries from healthcare to retail. But all of these new technologies have one common need: bandwidth. Complex technologies often need to move lots of data, sometimes up to one gigabyte per second, even from austere environments. Coming back to Amsterdam, I sat down with a team of IoT leaders from across Europe and asked a seemingly simple question: “How can all these devices be connected?”
It is an exciting time for mobile connectivity, with new technologies, processes and protocols introduced seemingly every day. Given the diversity in devices, it is perhaps not surprising that there are many different options for connecting IoT. There are new protocols for IoT devices built on shared spectrum or cutting-edge processes, such as space division multiplexing. All the while, familiar standards such as cellular are getting exciting new upgrades like 5G that will offer significant new features. But this wide variety of choices makes it difficult for organizations to choose the right connectivity for their specific business use case.
So how do you determine which connectivity technology is right for you? Before making any decision, it is important to understand what you are choosing between; that is, the different factors that govern connectivity performance. These factors include:
- Max range
- Max data throughput
- Power consumption
- Encryption and security
- Scalability (via network topology)
- Cost of manufacture and sustainment
Just reading through this list, it should become clear that these factors are not independent. Rather, they tend to vary together. For example, if you increase data throughput, you may lose range or increase cost. Increase range and you will cause a corresponding increase in power consumption. A change to one parameter induces changes in other parameters. As a result, connectivity options tend to cluster into three main groups: wired, short-range wireless and long-range wireless technologies.
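One way to see this coupling is to tabulate rough relative profiles for the three clusters and check that none of them beats another on every factor. The rankings below (1 = worst, 3 = best) are illustrative orderings drawn from the trade-offs just described, not measured values.

```python
# Illustrative relative profiles for the three connectivity clusters.
# Rankings are rough assumptions, not specifications.
PROFILES = {
    "wired":                {"range": 1, "throughput": 3, "power_efficiency": 3, "low_cost": 3, "mobility": 1},
    "short_range_wireless": {"range": 1, "throughput": 2, "power_efficiency": 3, "low_cost": 2, "mobility": 3},
    "long_range_wireless":  {"range": 3, "throughput": 2, "power_efficiency": 1, "low_cost": 1, "mobility": 3},
}

def dominates(a, b):
    """True if option a is at least as good as option b on every factor."""
    return all(a[factor] >= b[factor] for factor in a)

# No cluster dominates another: every choice trades one factor for another.
no_free_lunch = not any(
    dominates(PROFILES[a], PROFILES[b])
    for a in PROFILES for b in PROFILES if a != b
)
```

Because no row dominates any other, there can be no universally "best" option, which is why the choice has to start from the use case rather than the technology.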
Even though wired solutions might seem “outdated” at first glance, they can turn out to be important connectivity options in the IoT context. Wired solutions provide very high data rates at very low cost, albeit without much mobility.
Short-range IoT connectivity technologies are used to transfer data over short physical distances. The distance between the sensor or device that collects data and the gateway that processes the data is usually less than 150 meters.
The strength of short-range wireless solutions, then, is low power consumption and small size, at the cost of shorter range and often smaller bandwidth.
Long-range wireless solutions come in two main flavors: cellular- and non-cellular-based solutions. Both of these offer greater range and bandwidth than shorter-range options, but often at higher power consumption and cost.
You define what is ‘best’
With such a wide range of connectivity options, each with different strengths and weaknesses, there is no single best solution. Some options may be very well-suited to one particular use case while being a poor choice for others. Therefore, choosing a connectivity solution — or any IoT technology for that matter — is not a case of finding the best technology, but rather finding the right fit for your business case.
Take the example of precision agriculture. To monitor sensors spread across many acres of fields, this use case needs long range, but it does not need to transmit large amounts of data, so throughput can be small. Finally, the thin margins of farming mean that connectivity must also come at low cost. Taken together, this makes non-cellular, long-range wireless solutions such as low-power wide area networks (LPWAN) good choices. On the other hand, take the example of medical devices. These need to transmit only a short distance (from the wearer to a phone, for example), but must be small enough to wear comfortably, and so require low power consumption and high reliability. This makes low-power cellular solutions such as LTE NB-IoT or Cat M1 a good choice. Connected cars offer yet another profile. These require high data rates and long range. However, they also have access to near-unlimited power from the car, so cellular-based long-range wireless would be a good choice.
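The matching exercise in these three examples can be sketched as a small requirements table. All scores below (1 = low, 3 = high) are hypothetical illustrations chosen to mirror the reasoning above, not benchmark data, and the option names are informal labels rather than product categories.

```python
# Hypothetical minimum requirements for the three use cases discussed above.
USE_CASES = {
    "precision_agriculture": {"range": 3, "throughput": 1, "reliability": 1, "budget": 1},
    "medical_wearable":      {"range": 1, "throughput": 1, "reliability": 3, "budget": 2},
    "connected_car":         {"range": 3, "throughput": 3, "reliability": 3, "budget": 3},
}

# What each illustrative option offers, and roughly what it costs.
OPTIONS = {
    "lpwan":              {"range": 3, "throughput": 1, "reliability": 1, "cost": 1},
    "low_power_cellular": {"range": 2, "throughput": 1, "reliability": 3, "cost": 2},
    "cellular_broadband": {"range": 3, "throughput": 3, "reliability": 3, "cost": 3},
}

def match(needs):
    """Cheapest option meeting range, throughput and reliability within budget."""
    viable = [
        name for name, offer in OPTIONS.items()
        if offer["range"] >= needs["range"]
        and offer["throughput"] >= needs["throughput"]
        and offer["reliability"] >= needs["reliability"]
        and offer["cost"] <= needs["budget"]
    ]
    return min(viable, key=lambda name: OPTIONS[name]["cost"], default=None)
```

Running `match` over the three use-case profiles reproduces the pairings argued for in prose: LPWAN for the fields, low-power cellular for the wearable, cellular broadband for the car.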
Technology serves business, not the other way around
Choosing between connectivity options does not have to be terrifying or confusing. The key is to start by thinking about your business, not about technology. After that, your business needs can be a ready guide through technical choice after choice.
So what advice did our team of experts in Amsterdam come up with to help you navigate these choices? Three simple ideas to keep in mind:
Think about your business — While finding the shiniest new technology may be attractive, finding the right technology can only begin once you understand the needs of your particular use case and what you want it to achieve.
Think about funding — Once you have identified the chief business goal you want IoT to accomplish, it is also helpful to consider how to achieve that goal over time and how to pay for it. Generally speaking, short-range and LPWAN solutions require more Capex, whereas cellular may require more recurring Opex. Furthermore, it is important for any organization to preserve future flexibility and avoid the high cost of change that comes with vendor lock-in to one specific set of hardware.
Think about scaling — Finally, don’t forget to consider how an IoT system will grow and change over time. While a current solution may not need high bandwidth now, what about in the future with technologies like AR/VR? While a smart warehouse system may tolerate high latency now, consider what will happen in the future if robots or self-driving forklifts are added.
Choosing the right connectivity technology does not have to be difficult. Understanding the performance factors and a few simple guidelines can demystify the process. But the secret to choosing connectivity — like any part of IoT — is about focusing on your business not on the technology.
The IoT market is not as large, or growing as fast, as once thought. While the internet of things is an undeniable trend with monumental implications for global businesses, industries and governments, the technology phenomenon has a few blind spots that have slowed customer adoption and integration, especially within industrial operational technology (OT) networks.
The missing ingredient
There’s a lot to be gained by adopting connected IoT or IIoT technologies within OT networks and industrial control systems (ICS) environments. By using common internet protocols combined with the cost-savings of using connected terminals, industrial operations can utilize real-time analytics and multisite connectivity to improve efficiencies across numerous industrial verticals. So, why have ICS practitioners and stakeholders not adopted these new technologies? One word: security.
As OT networks begin to integrate more intelligence, such as intelligent human-machine interface and cloud SCADA, ICS practitioners are now unable to reconcile the new security risks that have been created as a result. Since OT networks control critical infrastructure and processes, network failure inherently comes at a greater consequence than in typical IT networks. The potential for substantial financial loss, environmental damage and even loss of human life resulting from a security breach is a real possibility in the industrial realm. According to a 2017 study from Strategy Analytics, the impact of lagging cybersecurity investments is evident.
In the 2017 study, Strategy Analytics interviewed IT decision-makers across nine vertical markets in the U.S., UK, France and Germany, and found that investment in and growth of IoT/IIoT systems have been lower than once forecast or hoped. For example, over 70% of current IoT deployments in the United States involve fewer than 500 devices, and around 66% of businesses in the survey spent less than $100,000 on IoT/IIoT projects in 2016.
From a global perspective, 35% of firms with IoT/IIoT deployments reported fewer than 100 connected devices. This reality does not reflect what many leading technology companies serving the IoT market have forecast over the last few years, which leads to an obvious question: why is there such a large disparity between expectations and reality?
Another interesting detail in the study was the vertical distribution of Strategy Analytics’ findings. The three largest verticals represented in the study were primary processing, security and utilities, together accounting for about half of all IoT/IIoT market spend in 2016. By 2025, automotive, security and primary processing are each projected to generate $50 billion annually in IoT/IIoT revenue. In other words, modernizing OT/ICS with connected systems drives much of the innovation activity, as well as the spend, and industrial IoT is on the path to becoming the largest market for connected and automated systems under the greater IoT umbrella.
Clearing the path
IoT/IIoT concepts have progressed from experimental to mainstream. Now, general IoT/IIoT technologies must compete for a share of IT/OT budgets, which isn’t always easy to do. Businesses and public sectors are implementing general IoT/IIoT systems, but they’re doing so cautiously due to associated cybersecurity concerns and the consequences of systems failures, especially at the OT level. Until investment in ICS cybersecurity technology parallels investment in connected and automated systems, IoT/IIoT growth will be challenged.