As 2018 draws to a close and another year of IoT adoption, deployment and management becomes history, it’s time to reflect on the last 12 months and how far the IoT market, still in its infancy, has come.
In analyzing some of the most thought-provoking posts from our IoT Agenda guest contributor network in 2018, it’s interesting to see how the IoT conversation is swiftly transitioning from why organizations should adopt IoT to how to build a successful IoT strategy, how to tackle one of IoT’s biggest challenges (security) and what other technologies can be used in conjunction with IoT to boost its effects and accomplishments.
Without further ado, here is what you were most interested in learning about IoT in 2018 and how the views of our guest contributors are creating the building blocks of IoT in 2019 and beyond.
IoT strategy: Building blocks to success
Imagine walking into a store and being offered a custom-tailored overcoat that will be ready in four weeks for $1,000, or a one-size-fits-all hoody that you can walk out of the store with at that moment for $10. Now translate that to IoT. You can settle for the easy, readily available off-the-shelf IoT platform that may or may not meet your enterprise’s needs, or you can wait a few months (or longer) and pay a higher price to get the perfect platform. This was the scenario Deloitte Digital’s Robert Schmid presented in Turnkey IoT: Balancing customizability and ease of implementation — but, he said, it fortunately doesn’t have to be this way.
An IDC survey reported that while 38% of IoT producers currently make revenue from hardware, that number will slip to 33% over the next two years, while in contrast, the number of producers making money from IoT services will increase from 32% to 38% in the same time period. As IoT revenue streams transition from static hardware to as-a-service offerings, it’s time to rethink monetization strategies. Flexera’s Eric Free discusses this — and the challenges of using SaaS to monetize IoT — in his post, Subscription and pay-per-use IoT revenue models.
It’s no secret IoT devices create data — and a lot of it. For that data to be useful enough to improve business processes, it’s critical that it can be shared across the entire business and not get stuck in technological or organizational siloes. Tata Consultancy Services’ Regu Ayyaswamy outlined the importance of a connected digital enterprise to put IoT data to work and gave real-world examples of CDEs in action, as well as tips to compete in the business 4.0 world, in his post, Get more answers from your IoT data in a connected digital enterprise.
The General Data Protection Regulation landed on May 25 — the EU law on data protection and privacy that affects organizations worldwide. Companies interacting with users and businesses in the EU must adhere to this regulation that mandates how companies collect, process and store the personal data of their customers. In What does the GDPR mean for IoT?, Openwave’s Aman Brar took a deep dive into how the GDPR would affect the internet of things and explained why the IoT community should address privacy and security from the get-go.
2018 was also the year of discussing which specific technologies can help enterprises tackle the ever-present IoT security issue. In Is blockchain the answer to IoT security?, Portnox’s Ofer Amitai outlined the challenges of securing device-to-device communication and providing authorization for IoT devices, and explored why blockchain is one of the most promising technologies to provide a tamper-evident record of everything that happens in an IoT environment.
In Trusted execution environments: What, how and why?, Trustonic’s Richard Hayton wrote that TEE is one technology that can help manufacturers, service providers and consumers alike ensure trusted identification to the ever-growing proliferation of IoT devices. Read his post to find out what trusted execution environments are, how they work and why TEEs can provide the trust and scalability IoT systems require.
IoT’s plus one
Just as IoT isn’t one single technology, readers this year learned why it was important to combine it with other technologies to be successful. Progress’ Mark Troester’s Key considerations of AI, IoT and digital transformation took a look at the convergence of IoT with artificial intelligence and digital transformation and why it will be critical to the future of your business, if it isn’t already. Read on to learn why, if applied correctly, AI and IoT can drive the digital transformation movement.
AI and IoT were a popular duo this year. Aricent’s Ben Pietrabella discussed the promise 5G holds for IoT, but said it will remain just that, a promise, without an all-in commitment from smart mobile operators to use AI in conjunction with 5G. Check out his post, Why smart operators are turning to AI to prepare for 5G, to learn the role AI has to play in 5G and IoT, and why the success of a 5G network depends on operators’ ability to harness AI in an IoT network.
Another hot topic over the past year was digital twins. It wasn’t that long ago that digital twin use was an out-of-reach goal, said Wipro Limited’s K.R. Sanjiv in his post, Digital twins: The final frontier of VR? But thanks to IoT, he wrote, augmenting the physical with virtual is rapidly becoming a reality. Read on to learn why digital twins are only going to grow in both prominence and power, and get help building a new virtual ecosystem for your enterprise.
As 2018 comes to a close, there will be plenty of lists rounding up the trends of the past year. Autonomous vehicles are obvious candidates, but after the non-story of Waymo One’s launch, I expect to be underwhelmed in 2019 as well. Here are some big themes we expect will really come to the forefront this year.
The past year was huge for micromobility, a term that barely existed before now. Dockless bikes and then electric scooters rained down on the streets of cities around the world, presenting both opportunities and challenges. The biggest challenge for cities was how to regulate these new mini-vehicles, which can be parked almost anywhere. Some cities, like Washington, D.C., established quotas for each company to control the number of vehicles on the road.
In 2019, we’re going to see cities start using open and proprietary data for performance monitoring and to implement performance-based quotas. The number of scooters allocated could be based on how often each is used, how many car trips these electric vehicles are taking off the road or other goals such as equity.
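A performance-based quota can be sketched in a few lines. The sketch below is purely illustrative: the operator data, utilization target and caps are hypothetical assumptions, not any city’s actual policy.

```python
# Hypothetical sketch of a performance-based scooter quota: an operator's
# next allocation scales with how intensively its current fleet is used
# (trips per vehicle per day). All numbers are illustrative.

def performance_quota(deployed, daily_trips,
                      target_trips_per_vehicle=3.0, cap=2500):
    """Return the next period's vehicle cap for one operator."""
    if deployed == 0:
        return cap  # a new entrant starts at the default cap
    utilization = daily_trips / deployed  # trips per vehicle per day
    # Scale the fleet toward the city's utilization target.
    proposed = int(deployed * utilization / target_trips_per_vehicle)
    return max(100, min(proposed, cap))  # keep within a floor and a ceiling

# A heavily used fleet earns a larger allocation...
print(performance_quota(deployed=1000, daily_trips=4500))  # 1500
# ...while an underused one is trimmed back.
print(performance_quota(deployed=1000, daily_trips=1500))  # 500
```

A real program would feed in the open or proprietary trip data mentioned above, and could weight the formula toward other goals, such as equity or car trips displaced.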
We’re also going to see the continued development of mobility platforms integrating multiple types of transportation — and more experimentation with pricing mobility “as a service,” like how we pay for mobile plans or Netflix. In 2018, Uber and Lyft began piloting new modes (including micromobility and transit) in some of their U.S. markets, but this will expand nationally and globally in 2019.
Expanding mobility platforms and service-based pricing are two key steps toward universal basic mobility — the point at which every person has equal access to getting around affordably and with ease.
Precise location services
We’ve seen plenty of location-based services already, but 2019 will take them to a new level of precision. If you’ve been in an airport lately, you may have seen a great example of this with table-delivery technology. In Washington’s Reagan National Airport, travelers in Terminal A can sit down, order food from a kiosk and have it delivered to their exact seat.
The way restaurants, retailers and the cities that house them approach pickup and delivery will change dramatically in the coming year, whether it’s brick-and-mortar retail deploying proximity systems such as Radius Networks’ FlyBuy or Amazon delivering orders directly to your car with Amazon Key. Taking this a step further, restaurants will be able to deliver food directly to a customer’s location using only mobile location services — so you can even take a scooter to instantly grab your tacos.
Commercial buildings get smart
The smart building trend has been growing for a while, and the number of building-specific smart apps has grown along with it. This year, we’ll hit a tipping point where it will be more unusual for a major urban commercial office or apartment building to not have an app.
From controlling the temperature of the exact space around your desk at work to making sure your apartment door is locked even after you’ve left the house, apps have been making a splash. Some of these apps also include transportation information to increase the ease of getting around with so many micromobility options available. For property managers and owners, the advantage is clear — your location and the amenities associated with it are as likely to be digital as physical. For renters and tenants, the plus side is having the same level of comfort and productivity at your office that you do at your home.
All of these trends have one thing in common: using smart city technology to increase efficiency and convenience for the end user. A city is only as smart as its citizens, and 2019 will be all about using information to make individuals as smart (and happy) as possible.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
Smart home device shipments grew 55% in 2018 over 2017 to total 252 million units with more than 70 million homes worldwide now having one or more smart home device, according to ABI Research. In parallel, research by Nielsen found that almost one in four U.S. homes (24%) now has a smart speaker, and IHS Markit predicted smart home revenues will reach $28 billion this year.
With those kinds of stats, you could be forgiven for thinking that the smart home is doing everything right. But I don’t think it is. In fact, I believe the potential demand for smart home devices dwarfs the above growth numbers by an order of magnitude. Yet that growth isn’t happening quite yet, which raises the question: Why?
The traditional home security model…
To address that question, let’s take a quick look at the traditional home security market. Until quite recently, if you wanted to install an alarm system in your home, you’d typically turn to a security company. This meant your security system was installed by a trained professional, regularly tested (and so less likely to raise false alarms) and continuously monitored.
But it also meant being locked into one vendor and a hefty, legally binding monthly fee. And if you wanted to add, say, fire detection or CCTV monitoring, you’d have to go back to the same vendor and pay them to install an additional set of sensors and cameras of their choosing. Plus, your hefty monthly fee would get even heftier.
…tells us consumers hate feeling trapped
Home security firms have their place and aren’t going to disappear any time soon. But the rising popularity of do-it-yourself security systems is sending a clear message. (Maybe, like me, many consumers hate being locked into a single vendor — and that monthly fee.)
Equally revealing is that the growth in DIY security systems has occurred despite these systems not being particularly easy to set up or use. This situation has improved significantly with the evolution of various smart home ecosystems. But like schoolchildren who have formed self-selected cliques, these ecosystems don’t play nicely with anyone outside their group.
As someone who works for a wireless chip company supplying the smart home sector, and as a keen smart home enthusiast myself, I find this lack of end-user choice carries echoes of the traditional home security model mentioned above. And that concerns me.
If it’s not easy to use it won’t sell
The second thing that concerns me in the modern smart home market is ease of use. This includes consumers buying a device if it gives them the basic functionality they need but ignoring any “advanced” features they deem too complex, and the rise of professional smart home installation firms that will come and do all the heavy lifting for you. Neither of these scenarios is ideal from a demand-creation perspective.
That said, smart home manufacturers do recognize that their devices have been too complex for consumers to set up and use. This is why they are focusing their development efforts on solving this. (GE, for example, has introduced some connected lightbulbs that work directly with Google Home using “Actions on Google” for pairing and communicating; users don’t even have to pair the Bluetooth-based bulbs with the hub.)
But all this is still happening within each smart home ecosystem silo.
Yes, any ease-of-use improvement will help further increase sales of smart home devices within each ecosystem. But it won’t be enough to deliver on the full potential of the smart home.
Making a smart home for everyone
The smart home is a wonderful, expanding, highly inventive new industry sector. But right now, it’s also a mess.
Consumers want smart home devices that just reliably work — regardless of manufacturer or vendor ecosystem. They want to go out, buy any devices they like, bring them home and within minutes have them all up and running. And they want to be able to do it by themselves.
Consumers love to mix and match. And from a technical perspective this means the smart home industry adopting a multiprotocol, common industry standard. One that enables devices from any vendor or ecosystem to work seamlessly together. I think we have some way to go before this will happen.
And ultimately it will be cold, hard, commercial reality that will force the industry to drop the ecosystem cliques and learn to play nicely together.
Who’s going to make this happen is anyone’s guess at this stage. But the day it does, the smart home will rapidly and finally become the de facto feature it deserves to be in every modern home.
According to the most recent Ericsson Mobility Report (November 2018), the number of cellular-based IoT connections is expected to grow 27% annually from 2018 to reach 4.1 billion by 2024. The challenge for CIOs, CTOs and their operational teams is how to access a connectivity infrastructure and ecosystem that can support all these devices.
When figuring out the best approach to building an IoT connectivity strategy, there isn’t necessarily a single perfect strategy — it really depends on the use case and the nature of the devices to be connected. To help break it down, consider the three W’s: What is the device to be connected? Why do I want to connect it (in other words, what data do I need to extract)? And where will these devices be deployed?
Let’s examine these questions, and some other key elements, a little more closely.
Device tracking and data extraction
You’ll need to understand what type of data is being collected and how often the device is communicating back to a monitoring or analytics application, which will increasingly be in the cloud. If it’s a large amount of data that is needed in near-real-time, then edge computing may be required to process information more quickly than sending it to the cloud.
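To make the edge-processing trade-off concrete, here is a minimal, hypothetical sketch of an edge gateway collapsing a window of raw readings into one compact summary before anything is sent to the cloud. The field names and sample values are assumptions made for illustration.

```python
# Illustrative edge-side preprocessing: summarize a burst of raw sensor
# readings locally and forward only the aggregate, cutting the volume of
# data that must cross the network to a cloud analytics application.

from statistics import mean

def summarize_window(readings):
    """Collapse one window of raw readings into a single small payload."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "avg": round(mean(readings), 2),
    }

# e.g. one second of temperature samples from a single device
raw = [21.0, 21.2, 20.9, 35.7, 21.1]
payload = summarize_window(raw)
print(payload)  # one small message to the cloud instead of five raw ones
```

Whether this kind of local summarization is enough, or whether full edge analytics is needed, depends on how much of the raw signal the cloud application actually requires.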
Next, consider the location of the devices, both from a geographical and physical standpoint. For example, are the devices going to be deployed in a specific site — like a factory or campus – and will they be in the same country or possibly be deployed in multiple countries? Also think about the device or asset itself. Does the thing to be connected have access to a power source (like a car via the battery or a vending machine that’s plugged in) or does it require its own power/battery pack? Is the device easy to access and replace the battery, or will it be in a remote location, where battery life becomes critical?
The location and the type of data being passed back and forth also determine the type and level of security that is needed. Use of near-field communication (NFC) or some types of low-power wide area network (LPWAN) can mean that security is compromised. Cellular connectivity, by contrast, brings a legacy of secure device access, identity management and authentication techniques, combined with security techniques developed to protect IP-based communication services. This creates a secure foundation for end-to-end IoT communications.

Of course, cellular connectivity is not always appropriate for IoT deployments. A fitness tracker that pairs with an app on your smartphone only needs a short-range connection, such as Bluetooth, while a car or truck may have a range of connectivity requirements that call for a pervasive cellular connection. It is important to identify the best type of network access for the IoT system, whether cellular, satellite, Wi-Fi, private LTE, Bluetooth, LoRa or some other type of LPWAN connection. The level of security, coverage requirements, communication costs, and the impact on hardware and battery design should all be given equal consideration when designing an IoT connectivity strategy.
Relationship to the network
Another important factor is the interdependency between the device and the network. The more basic a device is in terms of the data it records and its transmission rate, the less likely it is to be fully secure or to work across multiple network types.
On the other hand, the more flexibility and capabilities that are built into the device (or the device-side application), the more capabilities can be meshed between the device, application and network. But this typically requires more memory/OS on the device and higher data transport over the network — which means higher battery usage.
Network technology and connectivity decisions are very important factors to consider as an integral part of the overall strategy for any IoT project. Will the devices rely on one type of connectivity, or will they need to be able to interact with multiple connectivity options?
Partner, partner, partner
Because there are multiple elements to delivering an IoT system, it is imperative that an enterprise either works with a systems integrator to ensure the various components work together, or selects providers that have a clear reference model with other partners and experience working cooperatively.
At Tata Communications, we are partnering with semiconductor companies, chipset manufacturers and other OEMs to ensure our platform works with a variety of devices and applications. We are working with vertical solution SaaS providers and public cloud service providers (including Microsoft Azure and AWS) to ensure secure connectivity all the way from the device to the cloud and back again. An example is the work we are doing with Thailand-based fleet management specialist DRVR, helping transportation companies in Asia better manage their vehicle fleet activity and save on fuel costs. DRVR’s connected sensors can identify location, driver behavior, engine idling time and other parameters to alert drivers or fleet operators to take action.
Expect the unexpected
It is in the operational details that connectivity strategy and network selection are critical to building a successful IoT project. There are many pitfalls, and unexpected delays in testing and deploying a system are common. In what remains a very fragmented market, it is vital to ensure that the various components — device, network, cloud, application, analytics — work together, so that an IoT project can succeed from proof of concept all the way to growth at scale.
Large-scale, multinational IoT projects are still relatively new and immature. As a result, nobody should assume that connectivity “just works.” The key is to have a technically complete plan, consider the key factors and, perhaps most importantly, be patient and work through the problems with qualified and like-minded partners in order to optimize the business case or the return on investment.
In today’s fast-paced world, people expect personalized information and services in the moment across almost every aspect of their lives — and especially when it comes to technology. Modern applications need to be fast and smart to stay ahead. Yet, despite this growing need for speed and personalization, many companies are clinging to legacy data infrastructures built on traditional relational database management systems (RDBMSes) that do not provide the necessary scale, data management at the edge, and virtual/cloud deployment capabilities required to keep pace with digital transformation strategies.
The internet of things is one of the biggest market drivers for retiring these legacy systems. Most legacy data infrastructures simply cannot meet the stringent requirements to power IoT applications and respond to the explosive growth in connected devices. In Gartner’s most recent IoT report, it predicts that the number of connected devices in use will hit 14.2 billion in 2019 and grow to 25 billion by 2021. Organizations are dealing with millions, if not billions, of connected assets that are streaming vast amounts of data into the enterprise every second.
Simply capturing, processing and analyzing all this data poses a huge challenge on its own — but more importantly, organizations need to make that information useful by applying the insights to their business. The most powerful IoT applications must be able to ingest data and then make decisions on it in real time. Smart meters that monitor usage and conditions such as weather, for example, must be able to adjust the price of services according to usage and conditions that can change in an instant. Likewise, if a consumer forgets to turn off his oven at home, unnecessarily driving up his utility costs, he must be informed in real time so he can turn it off and save that money. To get the most out of these meters and IoT applications, utilities need real-time streaming, monitoring and decision-making capabilities.
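The real-time decision step can be reduced to a simple streaming rule. The sketch below is a deliberately minimal illustration; the baseline load, alert threshold and readings are invented for the example, not drawn from any real meter.

```python
# Minimal sketch of stream-side decision-making: each meter reading is
# checked as it arrives, and an alert fires the moment usage looks
# anomalous (e.g. an oven left on). All thresholds are illustrative.

BASELINE_KW = 0.8   # assumed typical idle household load
ALERT_FACTOR = 4.0  # flag sustained draw well above the baseline

def check_reading(reading_kw):
    """Return an alert message for one streamed reading, or None."""
    if reading_kw > BASELINE_KW * ALERT_FACTOR:
        return "High usage: %.1f kW - check for appliances left on" % reading_kw
    return None

stream = [0.7, 0.9, 3.5, 0.8]  # simulated readings arriving over time
alerts = [msg for kw in stream if (msg := check_reading(kw)) is not None]
print(alerts)  # only the 3.5 kW spike triggers a notification
```

A production system would replace the fixed threshold with learned usage patterns, but the shape is the same: decide on each event as it streams in, rather than after a batch load into a central database.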
Looking specifically at industrial IoT, organizations are deploying sensor technology that provides automation and drives long-term cost efficiencies. However, many companies are sticking to their legacy data infrastructures built on traditional RDBMSes that simply don’t provide enough bandwidth to bring all the data back to the legacy systems, analyze it and send the results to the edge. In those cases, it’s often too late to make important decisions as the moment of opportunity has passed. Some organizations try to temporarily “fix” the problem by adding NoSQL caches to mask the challenges. However, these legacy systems simply don’t address the additional data volume and velocity issues every digital organization is now facing.
When it comes down to it, legacy systems do not have enough bandwidth to push all the data from every single sensor back to the central database, especially in a timely manner. Not only is this cost-prohibitive, but legacy RDBMSes also cannot grow to handle such astronomical scale and simply are not fast enough for real-time decision-making when an event like a natural disaster occurs. For these reasons and more, data must be aggregated at the fog layer — closer to the edge, in a distributed, in-memory database.
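One way to picture that fog-layer aggregation is a rolling in-memory window per sensor, so local decisions never wait on a round trip to a central database. This is a toy sketch under assumed names and window sizes, not a production design.

```python
# Toy fog-layer aggregator: readings from many sensors are held in
# bounded, in-memory rolling windows near the edge, so aggregates are
# available locally in real time. Sensor IDs and sizes are illustrative.

from collections import defaultdict, deque

class FogAggregator:
    def __init__(self, window=100):
        # One bounded buffer per sensor; old readings fall off automatically.
        self.buffers = defaultdict(lambda: deque(maxlen=window))

    def ingest(self, sensor_id, value):
        """Record a reading in the sensor's rolling window."""
        self.buffers[sensor_id].append(value)

    def rolling_avg(self, sensor_id):
        """Local, real-time aggregate - no central-database round trip."""
        buf = self.buffers[sensor_id]
        return sum(buf) / len(buf)

agg = FogAggregator(window=3)
for v in [10.0, 20.0, 30.0, 40.0]:
    agg.ingest("pump-1", v)
print(agg.rolling_avg("pump-1"))  # only the last 3 readings survive: 30.0
```

The bounded window is the point: memory stays constant no matter how long the stream runs, which is what lets the aggregation live at the fog layer rather than in a central store.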
It is clear that legacy systems have no place in the growing IoT ecosystem. Complementing them with real-time alternatives allows organizations to use the most current data when making critical decisions around operations and the customer experience. Manufacturers and other organizations must be able to stream data from their connected devices and equipment to help improve operational efficiencies and productivity, and reduce costs, while also gaining important insights to help shape business decisions and assess risks. As the amount of data generated by IoT devices continues to climb, having an agile, end-to-end IoT architecture that can both support real-time streaming analytics and scale to handle large volumes of data will only become more vital. In the end, all of the insights being driven by IoT data mean nothing unless organizations can apply them to their businesses before opportunities are lost.
IoT’s influence on crafting products that outperform their predecessors has been a great boon, revolutionizing the way people look at the internet. And with a change in how such products are made comes a change in how the machines are tailored to consumers, particularly in user experience.

Creating the perfect customer experience for IoT products has been both a challenge and a pursuit that has become more intriguing with time. With more tools and platforms at the disposal of companies, IoT products have been enjoying a rapid progression to the next best thing — automation and UI.
Why customer experience matters
The product is the sum of all the company’s work in creating something exceptional and useful. And ergonomics, the study of how people behave with products, tells us that not all consumers operate machines with the same proficiency.

The goal of any good customer experience is to ensure that when a consumer gets the product in his hands, he can use it with simplicity and ease. The interaction should feel like a natural action and reaction for the customer, rather than something forced on him.
It is also important to build in room to avoid mishaps and mistakes, even when the consumer gets confused or overwhelmed by the product at hand. Doing this will ensure the customer relates to the product and the brand. He will use it frequently, and ultimately it will become part of his lifestyle. For businesses, this means more trust, stickiness, barriers to entry for competing products, opportunities to cross-sell and upsell, and ultimately more revenue.
Creating a phenomenal customer experience for IoT products
There are numerous steps to making a superior IoT product:
- Making it smart. First, start by integrating smart components like sensors and microprocessors, along with the best data storage techniques tailor-made for the right operating systems. Always invest in devices that can handle multiple streams of data and can process them simultaneously.
- Clean design. Bring clean design standards to the setup. Start small but start strong. This means adding user experience (UX) functionality only when it aligns with the core values of the product. Keep reasonable white space and avoid cluttered text, with a proper call to action.
- Automate and innovate. Bring in automated technology when the time feels right with IoT-enabled functionality that can detect a range of data such as facial features, motion tracks, air content, temperature and so on. One way to integrate such systems is by finding a correlation between them and the actual conditions. For example, a smart refrigerator that can learn the difference between the surrounding temperatures and the designated conditions can automate them quickly.
- Personalize, but factor in scale. Personalize the system to the way the user would best operate the device; personalization is an important step in keeping that link established. Since most users are more confident with touch than with other forms of interaction, having the machine respond differently based on the number of clicks is a smart way to infuse UX.
- Accommodate for immersive technology. Bringing in capabilities for augmented reality as well as virtual reality can further enhance the experience for users who may not be too tech-savvy or perhaps seem to be overwhelmed by smart technology.
- Make them feel comfortable. Sometimes it pays to think outside the box. Instead of sticking to the same buggy interfaces that are based on screens, expand to more advanced concepts such as systems that respond to gestures, sounds and movements from the users. The point here stresses getting users to feel comfortable with technologies that might otherwise be tough to grasp and use.
Platform the experience: Understanding the consumer
Smartphones and companion applications can be difficult to use for remote access, and relying on them as user experience management tools may prove too complex. The key is to design for scenarios that enable automatic remote access, as in the smart-system example discussed earlier.
The best possible approach is to choose an IoT platform that is properly centered on who the end consumer is. Not all devices are designed for the average person, to be precise, or even for humans, for that matter (there are smart devices that cater specifically to pets).
This is again all possible if the system has a personalized and automated interface that’s simple to use and easier to adapt to, even when shifting from other architectures.
The key takeaway is to always look at product development from the consumer’s perspective and decide what could make things more convenient. That rests on making actions so simple that users never have to remember commands, and on finding what is comfortable for the average mind.

This barely scratches the surface of what can be considered best practice for engaging customers of IoT products. Having good data to work with is crucial from the start, as it will allow you to gauge what makes consumers tick and how they would best use any device. The next step is bringing those insights to life in the UI and UX.
Always keep the customers at the center of the design and build around their needs. They are, after all, a company’s best asset.
A few days back, I was at the sixth IoT Summit Chicago, an annual IoT event hosted by the Illinois Technology Association. As part of a panel on AI, machine learning and industrial IoT, I was asked how enterprise IoT pilot projects can be managed better to shorten the IoT adoption cycle. Interestingly, this topic dominated most of the sessions at the summit!
IoT providers are rightfully interested in shortening the purchase cycles for their IoT products and systems and accelerating business. However, when we add buyers’ sentiments to the equation, many barriers and bottlenecks pop up that make IoT adoption slower than we want it to be.

As our panel discussed that question, many useful strategies surfaced that could address the problem head on. Let me share some of those in this blog, with a caution to readers that there’s no silver bullet here; as an industry, we still have work to do to make IoT pilots faster and smoother.
Businesses care about the bottom line
Those of us on the technology side of the fence have fascinating things to say about IoT and digital — about how they can transform the face of businesses and industries. However, for an enterprise, it is not so much about embracing digital; digital is only a means to an end. The goal typically is (and should be) to solve a business problem, such as opening new revenue streams, cutting Opex, improving customer satisfaction and so forth.
Now, when IoT is offered to solve a specific business problem, (for example, adopting predictive maintenance to reduce factory downtime), are we able to define a how-to roadmap to achieve the intended results? Identifying a clear view of the steps and challenges upfront to achieve the results is a number-one requirement of any adoption strategy. However, given the fluid nature of most IoT systems, right now that’s the piece missing in many pilot projects. In a frenzy to get ahead of the market or to do cool stuff with a trendy technology, we lose sight of the road that’ll take the enterprise to its intended goal.
Clearly defining a roadmap at the beginning of a pilot project, one that identifies the various technical nuances, dependencies and tools, is crucial to successfully reaching the finish line. This needs to happen irrespective of the size of the pilot.
Enterprise IoT is not the same as buying a smartwatch
We don't need a pilot or proof of concept before purchasing a consumer product such as a smartwatch. Enterprise IoT products are quite the opposite, as they involve considerable investment and risk. The investment is more than just capex; people are a huge piece of the puzzle. Adopting and successfully deploying digital technology can impact organizations in a big way. Are we prepared to absorb and skillfully navigate the organizational barriers? The importance of communication is often underestimated.
A pilot project is not just about a technical roadmap; it's also about a people roadmap. When adopting enterprise AI systems, workforce augmentation is most often part of the equation. Unless that's addressed well at the planning stage, the pilot may never execute successfully.
Similarly, key performance indicators and scorecards need to be well-defined, not just as final exit criteria, but also for the intermediate milestones of the pilot execution.
Adoption of any digital technology impacts the organization to different degrees. A pilot must factor those impacts in for a faster, more successful transition to production and long-term success.
We’ll continue to discuss some more successful strategies in my next blog.
What are your thoughts on accelerating IoT adoption by fast-tracking pilots?
If you’ve been following the news around autonomous vehicles lately, then you know the tech industry is moving rapidly to make cars fully driverless. The advancements are happening — and they’re happening quickly.
But autonomous vehicles, or AVs, wouldn’t be nearly as advanced today if it weren’t for the data currently being collected from cars. Automakers and innovators are using driving data to better understand real-life driving scenarios, which helps them improve products and make the technology safer.
What data is being collected?
If you’re driving a car manufactured after 2014, it’s collecting data and sending it back to the car manufacturer. According to a study by Esurance, this is happening constantly, potentially at a rate of 25 gigabytes per hour. With the average American driving more than 13,000 miles each year, that’s plenty of data to go around.
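To put those figures together, here is a rough back-of-the-envelope estimate of the annual per-car data volume. The 25 GB/hour rate and 13,000 miles/year come from the sources above; the 30 mph average driving speed is an assumption added purely for illustration.

```python
# Back-of-the-envelope estimate of annual per-car data volume.
# GB_PER_HOUR and MILES_PER_YEAR come from the figures cited above;
# AVG_SPEED_MPH is an assumed average speed for illustration only.
GB_PER_HOUR = 25
MILES_PER_YEAR = 13_000
AVG_SPEED_MPH = 30

hours_per_year = MILES_PER_YEAR / AVG_SPEED_MPH
tb_per_year = hours_per_year * GB_PER_HOUR / 1000
print(f"~{hours_per_year:.0f} driving hours/year -> ~{tb_per_year:.1f} TB/year")
```

Even with a generous assumed average speed, that works out to on the order of ten terabytes per car, per year.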
If you're in a vehicle that predates 2014, your car is likely still collecting info on your driving habits and storing it in a black box, similar to the devices used on commercial airplanes. Some of the data collected from older vehicles includes the speed of the vehicle, whether seatbelts were being used and the condition of the vehicle before and during a crash. New technology may even take this a step further, and we may start to see cars that record your eye movements, hand positions on the steering wheel, smartphone connections and even whether you're texting while driving.
How is the data being used?
People are more concerned about their privacy than ever — and rightfully so. In the case of automobile-collected data, which is generally owned by the automaker, there are few regulations about how the data can be used.
Think about how you currently use the data your car gives you. It can monitor your speed so that you drive safely, let you know when you need to fuel up and tell you when your tire pressure is running low, just to name a few things. If you have a navigation system, it’ll even help you get to where you need to go.
Now, layer that data on top of the data collected by today’s newer vehicles: how a car is driven differently in inclement weather, when the average driver is turning on their lights, if using a blinker decreases lane-change-related accidents and even if there’s an imminent collision. These are all elements that an autonomous vehicle needs to track, understand and use for improving processes and making everyone on the roadways safer.
How is this data helping build the auto of the future?
As the Society of Automotive Engineers illustrates with its levels of driving automation, we are currently at the tipping point for AVs: between level 2 (partial automation, such as cruise control combined with lane-centering) and level 3 (conditional automation, such as advanced lane-change assistance and applied braking).
In order to get AV tech to levels 4 and 5 — where our vehicles function fully on their own — engineers must use machine learning on all the data that’s being collected on drivers in order to inform AI-based systems.
Informing those AI-driven systems requires incredible amounts of data, and data scientists are working closely with automotive designers and engineers to build the car of the future. From AV-related patent licensing programs from Microsoft and Toyota to the marriage of data science and AV development, we are beginning to see the way that data helps build driverless cars.
There are many predictions about the expected growth of IoT, but what is clear is that billions of devices are already connected worldwide, and that number is expected to increase significantly in 2019. Devices are becoming smarter and easier to manage, driving an associated rapid increase in IoT-related data, and a growing number of IoT-related platforms and services are being offered in the marketplace. While devices and applications are the initial point of focus for organizations, cities and municipalities considering IoT systems, careful consideration must be given to the networks chosen to host these devices and run these applications.
Given the sheer number of devices, the data they generate and especially the increasingly important tasks and services being based on IoT, network performance must be high on the list of considerations for those developing, launching and managing IoT systems. Within the healthcare IoT context, network performance monitoring is especially critical. If a device becomes disconnected from the network, or is simply switched off, monitoring can act as an early warning system and help avert what could be a potentially life-threatening incident. It can also act as a dashboard that brings together traditional IT and, in this case, medical devices.
LANs, low-power wide area networks (LPWANs) such as Sigfox and LoRa, and cellular IoT technologies designed for low-power machine-to-machine (M2M) communications, such as narrowband IoT (NB-IoT), are all currently being used to host IoT applications. Each has its strengths and weaknesses, with cost, data transmission and related power consumption being key considerations.
Sigfox, in particular, is one dedicated IoT network getting a lot of attention due to its ability to radically decrease the costs and energy consumption associated with IoT connectivity. The network uses the 868 MHz wireless radio spectrum in Europe and 902 MHz band in the U.S., and features a wide-reaching signal that easily passes through solid objects and covers large areas. Using a unique approach to wireless connectivity that draws on the highly reliable and interference-resistant ultra-narrowband frequency to provide an exceptionally wide range while simultaneously requiring very little power, Sigfox has radically altered the economics associated with the internet of things.
IoT network monitoring specifics
IoT device managers and especially systems administrators should have the ability to monitor and visualize the functionality and measurement data emanating from Sigfox-enabled IT infrastructure sensors, as well as from other objects, devices and machines that are equipped with or have adaptive Sigfox connectivity. Sensors in the context of network monitoring are the basic monitoring elements. One sensor usually monitors one measured value in the network (for example, the traffic of a switch port, the CPU load of a server or the free space of a disk drive); on average, several sensors are necessary per device — in some instances, one sensor can be used per switch port.
When it comes to monitoring an IoT network, network administrators should check for the availability of sensors designed to display data in messages received from IoT-capable devices, such as Sigfox devices, that are pushed via an HTTPS request to the network monitoring tool. In addition, the IoT monitoring system should provide a URL that can be used to push encrypted messages to the monitoring probe system via HTTPS. These capabilities are especially useful when it's necessary to push data to a hosted (cloud-based) IoT monitoring system.
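A minimal sketch of what such an HTTPS push looks like from the device side, using only the Python standard library. The probe URL, token and sensor names are hypothetical placeholders; the actual URL format and authentication scheme depend on the monitoring tool in use.

```python
import json
import urllib.request

# Hypothetical probe endpoint and access token; a real monitoring tool
# would supply its own push URL and authentication scheme.
PROBE_URL = "https://monitoring.example.com/probe"
TOKEN = "example-token"

def build_push_request(device_id, readings):
    """Package a device's sensor readings as an HTTPS POST request."""
    payload = json.dumps({"device": device_id, "data": readings}).encode("utf-8")
    return urllib.request.Request(
        f"{PROBE_URL}/{TOKEN}",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Example: a Sigfox-style sensor reporting temperature and battery level.
req = build_push_request("sigfox-sensor-01",
                         {"temperature_c": 21.5, "battery_pct": 87})
print(req.full_url)      # https://monitoring.example.com/probe/example-token
print(req.get_method())  # POST
```

Sending the request is then a single `urllib.request.urlopen(req)` call; the monitoring system parses the JSON body and maps each reading to a sensor.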
IoT network monitoring technologies should be available in both hosted and on-premises versions. They should also be highly flexible, giving stakeholders all the functionality required to see exactly what is happening in real time across their IT infrastructures, pooling data from networks, systems, hardware, applications and devices into a single view.
The best of these technologies will offer multiple methods of initiating messages from the network to the monitoring system about overall functionality and the status of any measurement data coming from sensors and devices. Advanced IoT network monitoring systems should generate alerts or notifications immediately once performance thresholds of the user's choosing are crossed, ensuring that IT is always the first to know when a problem arises. Event triggers that automatically launch applications to provide a fix are a huge bonus in any good monitoring system.
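The threshold-alerting idea can be sketched in a few lines. The sensor names and limit values below are illustrative assumptions, not the configuration of any particular monitoring product; real tools let the user define these per sensor.

```python
# Minimal sketch of threshold-based alerting. Sensor names and limits
# are illustrative; a real monitoring tool makes these user-configurable.
THRESHOLDS = {"cpu_load_pct": 90, "port_traffic_mbps": 950, "disk_free_gb": 5}

def check(sensor, value):
    """Return an alert string when a reading crosses its threshold."""
    limit = THRESHOLDS.get(sensor)
    if limit is None:
        return None  # no threshold configured for this sensor
    # Free disk space alerts when it drops BELOW its limit; the others
    # alert when a reading rises ABOVE the limit.
    breached = value < limit if sensor == "disk_free_gb" else value > limit
    return f"ALERT: {sensor}={value} (threshold {limit})" if breached else None

print(check("cpu_load_pct", 95))  # triggers an alert
print(check("disk_free_gb", 12))  # within limits, returns None
```

In a real system, the returned alert would feed a notification channel or an event trigger that launches a remediation script.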
Additionally, those using IoT network monitoring tools will find highly customizable dashboards very useful as they can be configured to show exactly what is important, from the overall health of the network to granular details like the speed of server cooling fans.
The Sigfox network, as noted above, is designed to connect billions of devices to the internet via its LPWAN and cloud services while dramatically decreasing the cost and complexity of the systems involved.
Sigfox is enabling organizations across many industries, including those associated with supply chains, smart cities, manufacturing and automation, to realize the promise of the internet of things. The traditional definition of the “network edge” has now become fluid, and it’s more important than ever before for sysadmins and IT teams to know what is happening on their changing networks and be in control at all times.
Technology moves fast! Sometimes you have to move to adjust to conditions. When it comes to travel and technology, I have learned a few things, and I have also had to make some adjustments that made me rethink my technology practices. In late 2018, I had shoulder surgery, which forced a sudden change in how I do many technology and travel things: typing wasn't the same (or possible), phone usage wasn't the same and everything was more difficult. Here are a few technology tips I have incorporated over the years in my travels, in spite of the temporary setback.
Voice recognition has come a long way, and it is easier than ever. One of the things that I do a lot is writing, including long-form writing of blog posts and white papers. After the shoulder surgery, I was toying with purchasing voice recognition software, but it turns out speech recognition is built into Windows 10. You simply display the keyboard from the system tray, click it, then select the microphone. I was completely amazed at how accurate Windows was with voice recognition without any voice "learning." So much so that I wrote an entire white paper with this engine. This technology made typing much more accessible. The figure below shows how to enable the microphone to transcribe what you are saying:
I feel this capability is more advanced than what we see in our mobile phones, as it has been more accurate in my experience. A pro tip: use the microphone for transcription as if you were presenting to a large group (speaking naturally, but a bit slower) and put in pauses and breaks as needed. Truth be told, this was the easiest white paper I've ever written.
Get the mobile app. If you travel a bit, get the mobile application of your airline. This is one of the best ways to keep up to date on travel plans, including when things change. Additionally, if you are the type that tracks luggage, most airlines will give you tracking accuracy of your luggage on par with that of your favorite parcel delivery service! Don’t fly much? Install it for your trip, add your trip information, then delete it when your travel is complete!
Plan on portable Wi-Fi. Consider a Wi-Fi hotspot to take with you on your daily travel. I've encountered this in only one international hotel (in Hong Kong), but hotels are starting to provide portable Wi-Fi hotspots; it was a great way to get high-speed broadband on the go, and in this example at no cost. You can also rent or purchase plans for portable hotspots at destinations you visit, which is a nice approach if you plan on sharing with multiple phones, tablets or computers. This can also provide higher-speed access (and possibly cost savings) compared to enabling a roaming plan on your phone if one is not already in place.
Know your ABCs: always be charging. You have to stay charged, and hunting for an outlet in the corner of your lunch venue to charge up is not a good plan. In addition, you may forget to take your phone or cable! This is also helpful from an accessibility standpoint: charge whenever you can, as it may be difficult to source charging later (or even move into a proper position to charge).
I recommend having a good portable power pack, ideally one that is over 10,000 mAh and can be charged by conventional USB cables rather than a brick power supply. I am not wild about units of more than, say, 15,000 mAh; I'd rather carry two smaller ones. I recommend 10,000 mAh or more because such a pack can fully charge a typical smartphone around two times and can usually be recharged itself overnight, while some of the larger battery packs take more than a typical overnight to charge fully.
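A quick sanity check on the "around two full charges" figure. The phone battery capacity and conversion efficiency below are rough assumptions for illustration; real-world numbers vary by device and cable.

```python
# Rough check on how many full phone charges a 10,000 mAh pack provides.
# phone_mah and efficiency are assumed, illustrative figures; USB
# conversion losses typically eat a noticeable share of rated capacity.
pack_mah = 10_000
phone_mah = 4_000   # assumed typical smartphone battery
efficiency = 0.8    # assumed usable fraction after conversion losses

full_charges = pack_mah * efficiency / phone_mah
print(f"~{full_charges:.1f} full charges")
```

Under these assumptions, a 10,000 mAh pack lands right at about two full charges, which matches everyday experience.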
Travel light. A messenger bag works for me. I see many tech professionals carrying the almighty backpack, and I've reassessed the practice. After shoulder surgery, I could not put the same weight on the bad shoulder, so I had to adjust. I made myself migrate to a smaller messenger-style bag, and I love it. I can fit a laptop, mouse, power, cables, sunglasses, business cards and a few papers inside, and clip a headphone case to the side. What more do we really need?
Set up Android Auto or Apple CarPlay. This is part safety, part ease of use. I love these services as they make driving with navigation, calling, music and a few other services feel safe. If you have not looked into this, you should. In many situations, your car does not even need to support it; for example, in my car I install my phone in a universal mount and the screen changes to the Android Auto display with simple controls for music, phone and navigation.
A bonus tip here, especially for those who drive on trips, is to download offline map data. Recently during my vacation to Iceland, I downloaded much of the country map data so I had full navigation services, even if I did not have Wi-Fi or data service on my phone. Add starred destinations and the navigation process is a snap.
Secure a "white noise" setup. I sometimes find myself in a hotel (or on an airplane) wanting to sleep, but the noise level dictates otherwise. To solve this, I've taken the approach of a white noise application; my favorite is actually the "brown noise" option in the app. Couple that with a proper set of headphones and I can sleep anywhere.
What tips do you have for traveling and technology? Share them below!