This is the first of a five-part blog series.
So, one of my New Year’s resolutions was to blog more, but to date I’ve been blogging about as much as exercising. That said, as my IoT strategy and planning team grows at Dell Technologies, it’s freeing up a little time and there’s no time like the present. This is the first in a five-part series about the importance of digital transformation, why I like to say we’re in the “AOL stage” of IoT, and how we get to IoT advanced class, including through enabling a cloud-native edge.
Let’s kick it off with a buzz
I get it, digital transformation is a buzzword. Nonetheless, it’s a very important concept to embrace and one that will likely determine the future growth or demise of your business.
In fact, Pierre Nanterme, CEO of Accenture, has stated that digital is the main reason just over half of the companies on the Fortune 500 have disappeared since the year 2000. If you want to know the reasons why, grab a coffee, get comfortable and read on.
It’s about rapid innovation
Here’s my short prediction of how most businesses will operate going forward: They’re going to develop applications to power a smart service or device, which will drive an entirely new customer engagement and, in the process, transform their business.
That business will generate data, which will be analyzed to improve the performance of the application and identify what new features are required. In turn, this will provide new insights to generate a whole new set of applications, which will lead to additional customer services. And so the cycle of innovation will continue.
It’s about rapidly delivering outcomes
Critically, the speed at which a business can react and iterate on this cycle represents competitive advantage. I believe this is the way every digital business will operate in the future. It's all about pace of innovation. Think of it as shifting focus to outcomes and services rather than just producing widgets.
Power by the hour
Remember the pioneering Rolls-Royce initiative “Power by the Hour,” introduced in the 1960s? Under this arrangement, Rolls-Royce retained ownership of the jet engines, managed maintenance and repair, and charged customers on a fixed cost-per-flying-hour basis. Over the years, we’ve seen many industrial manufacturers follow this path by offering performance-based contracts tied to product availability or usage.
Heating things up
Nest is a great early IoT success story, where it redefined the thermostat to keep your home comfortable while using less energy. As a result, other thermostat makers were forced to redefine their products to compete by offering similar outcomes.
In another well-known example, many years ago, Xerox had the vision of creating Managed Print Services, where it managed fleets of printers and multifunction devices, even those made by other companies, for enterprise clients using a simple click-charge pricing model. Xerox took care of maintenance, upgrades, paper and toner replenishment, with clients only paying for pages printed. All in all, a better customer experience at a lower operating cost.
Air as a service
When the 100-year-old company Kaeser Compressors began building intelligent air compressors, it was initially about offering predictive maintenance as a service to customers. This has since led to an entirely new business model — “air as a service,” with pricing based on the consumed quantity of compressed air. Interestingly, performing maintenance on customer equipment used to be a revenue generator for Kaeser, but under this model, if one of the machines fails, not only does the revenue stop, but the repair becomes a cost for Kaeser to bear.
New business opportunities
So, now there’s new commercial pressure to build reliable products and minimize maintenance downtime, using predictive capabilities. While this type of transformation requires a major change in mindset, it opens up exciting and completely new business opportunities to drive continuous value from these products.
A new mindset is required
If you think about it, the “old way” of developing products has been reactive to customer needs, whereas the “new way” is about trying to predict customer needs and adapt through continuous improvement. This is increasingly possible because we have a more intimate understanding of user behaviors and context based on continuous telemetry from connected products.
Value is cumulative
The value is no longer just in the product itself at the time of purchase, but the perceived future value from feature updates and a growing ecosystem.
So here are my three takeaways for the new product mindset:
- The new control points are personalization and continuous connection with customers;
- The network effect is a critical consideration — either find the ecosystem your product participates in or build one yourself; and
- It’s key to break down large capital investments and create stickiness to ensure recurring revenues. Your smartphone is a prime example of this model today.
Nurturing existing products
Putting some of this in context, Sonos is a great example of a company that believes in continuing to nurture its existing product line through regular software updates. When the Play:1 launched five years ago, it required a device called a bridge to hook into your home router. Today, updated software allows those same Play:1 devices to stream songs without a bridge. And if you already happen to have a bridge, a software update repurposes it as a network extender, so Sonos speakers can be used in the far reaches of a large house.
Ecosystems — build one or join one
More recently, Sonos released Trueplay to breathe new life into users’ existing products. This feature, which arrived via an update to the Sonos mobile app, helps map a room’s unique acoustic properties to optimize the way each speaker plays. This is all about tweaking your business model to adapt to user needs and building an ecosystem in the process.
Another example of this constant customer connection and delivery of continuous value is how Tesla updates its cars over the air with new features. For example, the company recently pushed out the automatic ride-height feature to clear known bumps based on location.
It’s all about results
Let’s switch to the goal: business results. Consider a handful of stock gains since the recent low point in the U.S. market in January 2009. Netflix has grown over 9,000%, Domino’s Pizza over 6,000%, Amazon over 3,000%, Apple just shy of 1,500% and Google just over 600%. [Insert scratching record sound here.] You might notice that one of these companies is not quite like the others!
Let’s talk pizza
The Domino’s story is great. At the end of 2008, Domino’s was threatened by declining sales and distressed franchisees. For starters, it realized its pizza just wasn’t good anymore. Domino’s fixed that (and being partial to pizza, I must admit it’s now pretty tasty), but to accelerate winning customers back, the company recognized that data was a huge asset and decided to embrace digital transformation.
Digital transformation for a bigger piece of the pie
So, in recent years, Domino’s has been finding all the digital ways possible to allow you to order pizza with minimal human effort. Fast Company named Domino’s one of its “Most Innovative Companies of 2017,” in part because of its “zero-click” ordering app, which, when opened, automatically sends in the customer’s preloaded preferred order after a 10-second countdown. Then there’s the cloud-based AnyWare platform that lets users order from their Samsung Smart TV, PlayStation 4, Amazon Echo and even their car (using Ford’s Sync service).
Get this — you can even tweet a pizza emoji via Twitter or open a chat on Facebook Messenger.
All in all, Domino’s offers at least 10 ways to order pizza online. The company now gets more than 60% of its orders through these various digital platforms. However, Domino’s isn’t just leading the way in ordering innovations. The company has also partnered with Ford to test autonomous delivery vehicles that feature an actual oven. There is even a wedding registry for Domino’s — you couldn’t make this stuff up! (And where was this when I got married?!)
As a result, the Domino’s Pizza share price has since increased over 6,000% — more than a bunch of other foundationally high-technology companies. The company is now worth $12 billion and went from having a 9% share of the pizza industry pie (pun intended) to 17% today, recently surpassing its biggest rival, Pizza Hut.
Let’s not forget, this is a pizza company. So, if you think you can’t or shouldn’t digitally transform, think of Domino’s!
Key to this new mindset is the speed at which you can build and release software. Think about it — modern software companies are releasing features many times a day, globally.
This quote from James McGlennon, CIO of Liberty Mutual and a Pivotal customer, sums it up well: “One of the things we’ve learned is that if you can’t get it to market more quickly, there is no doubt that the market will have changed.”
Cloud-native as a key enabler
I believe that the concept of cloud-native is a big foundational enabler for this necessary scale and agility. There are different interpretations of what cloud-native means out there, but here are some key attributes, borrowed from the Pivotal website within the Dell Technologies family:
- DevOps is the collaboration between software developers and IT operations with the goal of constantly delivering high-quality software that solves customer challenges. It has the potential to create a culture and environment where building, testing and releasing software happens rapidly, frequently and more consistently.
- Continuous delivery, enabled by agile product development practices, is about shipping small batches of software to production constantly, through automation. This means that organizations can deliver frequently, at less risk and get feedback faster from end users.
- Microservices is an architectural approach to developing an application as a collection of small services. Each service implements business capabilities, runs in its own process and communicates via HTTP APIs or messaging. Each microservice can be deployed, upgraded, scaled and restarted independent of other services in the application, typically as part of an automated system, enabling frequent updates to live applications without impacting end customers.
- Containers offer both efficiency and speed for deploying individual microservices, but they can of course be deployed via other means as well.
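To make the microservices attribute above concrete, here is a minimal sketch in Python using only the standard library: a tiny “inventory” service exposing one business capability over a plain HTTP API, consumed by another process. The service name, endpoint and data are invented for illustration; a production microservice would add health checks, authentication and structured deployment, typically in containers.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical in-memory state for the toy "inventory" service.
STOCK = {"widget": 42}

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # One small business capability: report stock for an item.
        item = self.path.strip("/")
        body = json.dumps({"item": item, "stock": STOCK.get(item, 0)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Run the service in its own thread on any free port (port 0).
server = HTTPServer(("127.0.0.1", 0), InventoryHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Another service (or a test) consumes the capability over HTTP.
url = f"http://127.0.0.1:{server.server_port}/widget"
with urllib.request.urlopen(url) as resp:
    payload = json.loads(resp.read())
server.shutdown()

print(payload)  # {'item': 'widget', 'stock': 42}
```

Because the service runs in its own process and speaks only HTTP, it can be deployed, upgraded or restarted independently of whatever consumes it, which is the core of the microservices idea.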
A primary tenet of cloud-native is platform-independence so you’re not reliant on one infrastructure, especially a public cloud that can get very expensive when the data meter really gets humming. Teams retain the ability to run apps and services where it makes the most business sense — without locking into one vendor’s cloud.
However, don’t let the “cloud” in the name fool you — the term “cloud-native” is about how software is built and deployed, not where it’s run.
Tune in for my next blog, which will talk about how I like to say that we’re in the “AOL stage of IoT,” which leads to a paradigm I like to call “Pi and the sky”. In the meantime, I’d love to hear your comments and questions.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
As IoT becomes an established part of everything from advanced manufacturing to home security, it is easy to forget that it’s not done evolving. New use cases are coming online every day to solve some of the most difficult problems facing companies and people. IoT systems in remote medicine, augmented reality and drone delivery are tantalizingly around the corner. All that these systems need to take off is widespread, low-latency connectivity to devices spread out throughout the world.
That is exactly what 5G wireless aims to bring. 5G is a set of protocols and new technologies that link different wireless communications networks together into a “network of networks” capable of greater speed, higher bandwidth and lower latency than ever before. Those new capabilities, in turn, allow IoT to take on new challenges that were previously too difficult or too costly. For example, the increased bandwidth of 5G can allow firefighters and ambulance crews to make use of augmented reality, pulling up data about the model of a car involved in a crash or care procedures for a patient with a rare condition. Similarly, the low latency of 5G networks can allow doctors to perform delicate surgeries remotely, bringing advanced medical care to isolated communities for the first time.
New opportunities, but also new confusion
While 5G can allow IoT to do more than ever before, it also presents some unique business challenges both to communication providers and IoT users. Previously, communications could be a neatly circumscribed layer to any technology. With 5G, that becomes more complicated. As Dr. Usman Javaid, head of digital products and solutions at Vodafone, described it, “While it was clear in 4G, in 5G the boundaries between what is infrastructure and what is an application is less clear.” For example, getting the low latency necessary to conduct remote surgeries not only requires new infrastructure from wireless carriers, but application developers must handle data in certain ways as well, otherwise the application could end up slowing down the fast communications from the larger network. The flexible software-defined nature of 5G makes this sort of integration possible, but it also demands wireless carriers and solution developers work together in unprecedented ways to make an IoT system work.
Approach IoT on a use-case by use-case basis
Every new IoT technology will likely have slightly different communication needs. Some will need high bandwidth, others long battery life and still others ultra-low latency. Since these communication needs drive how wireless carriers and solution developers should work together, the relationships between carriers and developers will be driven by each particular use case. Carriers “need to think about the use cases that 5G enables and find the players involved in those ecosystems to make it happen. Carriers have an opportunity to be a part of the service solution,” Javaid said.
That level of cooperation can be a big change in approach for both carriers and IoT users who are used to a more typical seller-buyer relationship. However, adapting to a new approach is crucial for the long-term success of carriers in a 5G-for-IoT world. While there is value in pure connectivity, the real value 5G creates will be in the new IoT use cases it enables. If carriers want a slice of that value, they need to be involved in the IoT solution development.
That means that future IoT use cases will increasingly be the product of ecosystems and not just lone tech wolves tackling problems. As Pim Den Uyl, VP of business development at Ericsson, said, “No one can do all of this themselves, so what role do you play and who do you work with?”
The start of many beautiful relationships
So how should wireless carriers and IoT solution developers approach these new relationships? Den Uyl provided some key insights:
- From an operator perspective, the first challenge is to understand the different industries — what are their pain points and what value can you create? Then consider with which industry you want to prioritize working.
- The second challenge — for all players — is how does the ecosystem evolve and what is your role? With whom do you partner?
While this use-case driven approach to partnerships may feel unusual, it is not wholly new. Deloitte’s research into IoT ecosystems has uncovered a few considerations that can help turn a traditional strategic planning process into one that cultivates the right ecosystem of partners:
- Understand where there are gaps in the ecosystem. In any industry, there can be gaps and imperfect connections between participants. Identifying where the gaps currently exist in a market or where customer needs are not being fully satisfied can help uncover the use cases where a company should focus.
- Identify the ecosystem partners needed to fill those gaps. Once a use case within an ecosystem is identified, companies should begin to assess what is needed to close that gap. In IoT, rarely is any single company able to provide all of the technology and expertise necessary to make any system work. Therefore, identifying the right mix of carriers, technology providers, developers, integrators, customers and others can be a critical first step toward platform development.
- Add capabilities to fill that gap. With an initial set of partners in place, a potential platform provider can begin to assess what organizational and technological capabilities it needs to make the new IoT system work. Considering the relative strengths of each partner is key to understanding what role each should play.
- Attract participants in the race to scale. As a new system begins to bear fruit, understanding how different partners capture value from it is key to scaling and long-term success. If only one partner benefits from a new technology, it is unlikely to last very long.
5G communications are set to take IoT to new heights, enabling heretofore undreamt of applications and use cases. But with entirely new technologies comes a new way of doing business — one built on partnerships and shared expertise. While that may seem new and uncomfortable, a few easy considerations can help any company — whether wireless carrier or IoT solution developer — grow into the new world of 5G and IoT.
Almost daily we hear about face recognition. From the contested accuracy of Apple’s FaceID on its latest iPhone to testing by NIST or even apprehension at the ACLU, there are numerous active discussions. As a technologist and entrepreneur involved in the biometric space for over 20 years, these discussions often become simplified to the point of being potentially misleading. I feel compelled to weigh in because the waters have been somewhat muddied around the value that accurate face recognition really can provide in 2018.
For perspective, machine vision has been evolving for several decades. Many consider the father of facial recognition to be a mathematician named Woodrow Wilson Bledsoe who developed a system to classify photos of faces using what’s known as a RAND tablet nearly 60 years ago. This device let people input horizontal and vertical coordinates on a grid using a stylus that emitted electromagnetic pulses as a way to capture and register a facial profile.
Fast forward to 2018 and we are seeing lots of face recognition use cases gaining traction, from accessing mobile devices to identifying people by the police.
This is all good news, as face recognition — and particularly a more accurate version that provides true face authentication — has the ability to improve our daily lives without invading our privacy. What gives me pause is that many of these articles promote technologies that meet narrow, research-oriented criteria rather than demonstrating systems in real-world use cases. They often get great press, but are disconnected from how face authentication can deliver value while preserving people’s privacy across a myriad of potential applications. When in-field pilot trials fail, it generates confusion, mistrust and false perceptions. Just look at the recent ACLU test of Amazon’s online face recognition technology.
As another example, Shanghai-based company Yitu Tech recently achieved a notable success in the Face Recognition Vendor Test, a competition conducted by NIST. While impressive within the limited set of conditions tested, a careful read reveals that the best results were achieved using mugshots and visa pictures, captured under controlled lighting conditions and with controlled views of the tested faces. Even the “in the wild” data set is far more controlled, from a lighting and position perspective, than what a camera sees in the real world.
These evaluations were conducted in a highly controlled environment that is great for academia, but this approach has limited predictive ability when it comes to trying to verify live people under many real-world conditions. For pragmatic simplicity, this type of testing does not take into account many critical factors that impact accurate face authentication results, such as lighting, motion artifacts and physical position relative to the camera to name just a few. For highly controlled conditions, such as recognizing someone at a passport machine, this might be fine.
In many respects, this testing is an exercise in “fun with numbers.” The results focus on the false non-match rate, or FNMR, also known as the false rejection rate, which is of limited practical use given that the tests were conducted under highly controlled conditions.
While some may get excited over these types of test results, I am passionate about the fact that in the real world, face authentication is poised to initiate a whole new paradigm for secure, frictionless and convenient interactions while creating practical new markets and supporting innovative applications. To do this, you need more than just great FAR (false acceptance rate) and FRR (false rejection rate) scores. You need technologies that work in all the environments they are asked to perform in. This often requires critical supporting technologies, such as 3D recognition and authentication, to accommodate the challenges of real-world conditions.
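For readers less familiar with these metrics, here is a toy illustration of how FAR and FRR fall out of a similarity threshold. The scores below are made up for demonstration; real systems compute these rates over millions of comparisons.

```python
# Hypothetical similarity scores from a face matcher (0 = no match, 1 = perfect).
# "Genuine" = comparisons of the same person; "impostor" = different people.
genuine_scores = [0.91, 0.85, 0.78, 0.60, 0.95]   # illustrative values
impostor_scores = [0.30, 0.45, 0.72, 0.20, 0.10]  # illustrative values

def far_frr(threshold):
    # FAR: fraction of impostors wrongly accepted (score at/above threshold).
    # FRR: fraction of genuine users wrongly rejected (score below threshold).
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

# A stricter threshold trades FAR down for FRR up.
print(far_frr(0.5))  # (0.2, 0.0) -> one impostor score (0.72) slips through
print(far_frr(0.8))  # (0.0, 0.4) -> two genuine scores are rejected
```

The point of the article stands out clearly here: a threshold tuned for lab conditions says nothing about how scores shift when lighting, motion and camera angle degrade in the field.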
Machines are increasingly taking over tasks traditionally conducted by humans, often working independently to improve business productivity. But if we want our security measures to keep pace as more and more machines come into existence, we need to be able to reliably determine which machines should be trusted and which shouldn’t.
In order to identify the trustworthy machines, first we must, well, give them identities. In the recent past, this wasn’t necessary. Tom’s hammer or Joe’s plow didn’t really need to prove they really were what they appeared to be. But then things turned digital and now we have all sorts of devices — from cars to wireless routers to medical devices to home and industrial IoT components — making autonomous connections and making machine identity suddenly very important.
One of the most secure ways to establish a machine ID is by assigning a unique certificate or key to it. This identity is then checked against a central authority each time it connects to the network in order to establish a chain of trust. It’s sort of like when you cross the border and an agent scans your passport. You get permission to cross only when your passport details are checked against a central authority that validates your “key” or identity, confirming you are who you say you are.
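The passport analogy above can be sketched in a few lines. Real deployments use X.509 certificates and a public key infrastructure; the HMAC signature below stands in for that so the example stays self-contained, and all names and secrets are invented for illustration.

```python
import hashlib
import hmac

# Held only by the central authority (hypothetical demo secret).
AUTHORITY_SECRET = b"demo-authority-secret"

def issue_identity(machine_id: str) -> bytes:
    """The authority signs the machine ID, like stamping a passport."""
    return hmac.new(AUTHORITY_SECRET, machine_id.encode(), hashlib.sha256).digest()

def verify_identity(machine_id: str, credential: bytes) -> bool:
    """On each connection, recompute the signature and compare in constant time."""
    expected = issue_identity(machine_id)
    return hmac.compare_digest(expected, credential)

cred = issue_identity("thermostat-042")
print(verify_identity("thermostat-042", cred))  # True  - valid passport
print(verify_identity("impostor-001", cred))    # False - credential doesn't match
```

The chain of trust comes from the fact that only the authority can produce a valid signature, so a device presenting one must have been issued it.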
The challenge with assigning keys to every single machine is that there are so many of them, and their numbers are exponentially growing. When organizations start to accumulate keys, they need to be able to keep track of where they are stored and who controls them. They also need to rotate the keys periodically and revoke keys when machines are decommissioned in order to maintain good security hygiene. In short, key management becomes a very serious undertaking.
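At minimum, that key-management undertaking means an inventory: where each key lives, who owns it, when it was issued and when it is due for rotation. A minimal sketch, with an invented 90-day rotation policy and made-up machine names:

```python
from datetime import date, timedelta

# Hypothetical rotation policy: keys older than 90 days must be rotated.
MAX_KEY_AGE = timedelta(days=90)

# Toy key inventory: storage location, owner and issue date per machine.
keys = [
    {"machine": "pump-7",   "owner": "ops", "issued": date(2018, 1, 5)},
    {"machine": "camera-3", "owner": "sec", "issued": date(2018, 6, 20)},
    {"machine": "hvac-1",   "owner": "ops", "issued": date(2018, 7, 1)},
]

def keys_due_for_rotation(inventory, today):
    """Flag machines whose keys have exceeded the maximum age."""
    return [k["machine"] for k in inventory if today - k["issued"] > MAX_KEY_AGE]

print(keys_due_for_rotation(keys, date(2018, 8, 1)))  # ['pump-7']
```

A real system would also track revocation for decommissioned machines, but even this simple age check is the kind of hygiene the paragraph above calls for.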
Staying on top of this is tremendously important. Recent data breaches have demonstrated that hackers can compromise machine identities to conduct an attack, whether by stealing a trusted identity to get onto a network, establishing a fake identity of their own or, in the case of an unsecured network, getting in without one at all. The infamous Target breach is a prime example: the attackers stole network credentials from a third-party HVAC vendor and used them to gain access to the part of Target’s network where customers’ credit card information was stored.
Despite all the challenges, it’s not impossible for security teams to keep pace with the ongoing rise of the machines. They just need to apply the same foundational components that they would in other types of information security — confidentiality, integrity, availability, accountability and auditability. Issuing machine identities securely is the crucial first step for executing on these concepts and ensuring that a secure foundation is in place for IoT.
Nearly every week, we hear about a new cyberattack or security breach. In fact, more than 1.7 billion identities have been exposed in data breaches in the past eight years. As the world becomes increasingly interconnected, organizations become susceptible to more risk. While IoT presents new opportunities, it also creates new security risks that enterprises must address. Having a high number of new endpoints due to IoT requires an increased focus on security.
As organizations accelerate their digital transformation plans and processes, many companies will choose to stop updating or supporting old versions of their software, exposing themselves to cyberthreats, system failures, increased costs and future planning limitations.
On top of increased emphasis on advanced technology and systems, organizations are now accountable for protecting, structuring and identifying data, and complying with new regulations like GDPR. Preventing hacks and managing technology risk can be a challenge for organizations of any size. The consequences of technology hacks can be detrimental, with organizations facing legal liabilities, penalties, costs and fines, not to mention the impact on brand reputation. Luckily, there are easy steps organizations can follow to successfully manage IoT security and assess risk in the enterprise.
Avoid technology obsolescence
Actively assessing the technical fit of applications is one of the first lines of defense against risk in the enterprise. Technology obsolescence is one of the biggest factors leaving your organization exposed to hackers and outside threats. Unfortunately, many companies do not know the true lifecycle of their technology and software, and fail to run updates, which leads to risk. Many enterprises run on complex legacy technology and applications and ignore the dangers of end-of-life technology. The technology risk for applications and business capabilities needs to be evaluated based on the underlying IT components; by identifying the components that put applications at risk, you can mitigate those threats to security. Regular, frequent software updates re-engage existing users, fix bugs and patch problems before hackers can exploit them. Using a dashboard to track and assess the risk of your organization’s application landscape and IT components is an easy way to plan and manage lifecycle information and retire technology in the enterprise as needed.
Up-to-date technology product information is a critical and efficient resource for enterprise architects to assess internal technology risks. Setting up a basis to manage and automate updates and application lifecycles through a standard catalog provides a single view of vendor and application information, which can help enterprise architects visualize and proactively update and prevent technology obsolescence. Using up-to-date and easy-to-read reports, enterprise architects can quickly analyze the business impact of each application and understand the severity of risk, should an outage or breach occur. Understanding the impact and dependencies of each application can help determine when and where risk lies.
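The lifecycle check behind such a dashboard can be sketched simply: flag any application whose underlying IT components have passed end-of-life. Component names, dependencies and dates below are invented for illustration.

```python
from datetime import date

# Hypothetical end-of-life dates for underlying IT components.
components = {
    "OS-v1": date(2017, 4, 1),
    "DB-v9": date(2020, 12, 31),
}

# Hypothetical applications and the components they depend on.
applications = {
    "billing":   ["OS-v1", "DB-v9"],
    "inventory": ["DB-v9"],
}

def at_risk(apps, comps, today):
    """An application is at risk if any component it depends on is past EOL."""
    return sorted(app for app, deps in apps.items()
                  if any(comps[c] < today for c in deps))

print(at_risk(applications, components, date(2018, 9, 1)))  # ['billing']
```

Commercial enterprise architecture tools layer vendor catalogs and business-impact scoring on top of exactly this dependency walk.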
Determining how device networks will communicate, how data will be processed, which applications or systems to invest in, and which teams will oversee IoT endeavors will be vital to managing IoT risk in the enterprise. IoT generates massive amounts of data, and organizations must be prepared to maintain the high-volume influx. From devices to cloud platforms and analytics platforms, transparency throughout the infrastructure is critical to connect to all data, process it and deliver relevant pieces to the business owners. Microservices enable organizations to quickly deploy, maintain and account for the volumes of data that IoT brings, while breaking down silos.
The demands of IoT alone, with its integrations of heterogeneous connections, devices, applications, sensors, protocols and servers necessary for an enterprise to digitize, would slow any monolithic architecture to a snail’s pace. Not only are microservices a cost-effective solution for enterprises, but they enable agility and innovation separate from monolithic legacy architecture. With applications being developed and deployed independently, enterprise architects can also easily maintain each lifecycle and the security of every application. Microservices help organizations better pinpoint internal bugs and areas of vulnerability, and quickly patch and revamp without shutting down the entire system, as a monolithic architecture would require.
Firm up physical and digital security processes and standards
Ensuring end devices and physical technology are safely stored is a simple step to protecting your organization from curious minds who want to tamper or tinker with new hardware devoted to IoT endeavors.
On the digital side, there are several measures that should be taken to combat risk, including deploying firmware updates to every IoT device, as these updates may contain important security patches that protect your organization from unauthorized access. Assess the security and strength of your authentication process when it comes to accessing IoT devices in your network. Avoid using default logins and simple passwords that hackers can easily guess or steal to manipulate your IoT devices. Strengthening authentication processes and implementing firewalls in order to limit access and better monitor devices is an important layer of IT security and key to risk prevention.
From a network perspective, consider isolating your IoT devices using virtual LANs, routing or creating separate networks for devices to run on. Secure data by deploying end-to-end encryption, protecting it as it crosses the network and while it’s stored on the back-end server. If embedded IoT devices cannot perform natively, use infrastructure techniques such as encrypted tunnels to properly secure data.
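The segmentation rule above is easy to picture as code: devices are assigned to VLANs, traffic within a segment is allowed, and anything crossing segments must go through a controlled gateway. This is a toy model with invented device names and VLAN IDs, not a real firewall configuration.

```python
# Hypothetical VLAN assignments: IoT devices on their own segment (20),
# corporate machines on another (10), with one controlled gateway.
VLANS = {
    "camera-1": 20,
    "thermostat-2": 20,
    "laptop-9": 10,
    "gateway": 1,
}

def allow_traffic(src: str, dst: str) -> bool:
    """Permit same-segment traffic; cross-segment traffic must use the gateway."""
    return VLANS[src] == VLANS[dst] or "gateway" in (src, dst)

print(allow_traffic("camera-1", "thermostat-2"))  # True  - same IoT VLAN
print(allow_traffic("laptop-9", "camera-1"))      # False - cross-segment, blocked
print(allow_traffic("camera-1", "gateway"))       # True  - inspected at the gateway
```

In practice, the same policy is expressed in switch and firewall rules rather than application code, with the gateway enforcing encryption and monitoring on everything that crosses segments.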
Create a plan
Even with the most stringent security protocols in place, breaches may still happen. Creating a plan for how to respond is quite possibly the most important step of all when preparing for and being proactive when it comes to managing risk in the enterprise. Having a plan in place will ensure your team knows what to do in the event of an IoT security breach, and can ensure things don’t get worse in an emergency situation.
Such a plan requires transparency and accuracy about what you have regarding applications, processes and IT components, as well as their relation to each other and their context to projects, user groups, business capabilities and services offered by IT or any other department — and this is exactly what we bring to the table.
In a world where it’s nearly impossible to avoid digitization, IoT is uncovering opportunities and insights at an unprecedented pace. As organizations eagerly adopt new technology and applications, it’s important to ensure your enterprise architecture can handle the demands of IoT and that there are protocols in place to provide total data security. Being proactive and preparing can save organizations from paying the price — both financially and in loss of consumer trust — when it comes to security breaches.
The internet of things has advanced technology in many global industries, but one sector in particular that has benefited from IoT is healthcare. The global study “The Internet of Things: Today and Tomorrow,” published by Aruba, a Hewlett Packard Enterprise company, found that roughly 60% of healthcare organizations have already introduced IoT devices into their facilities.
Healthcare industry leaders are using IoT for patient monitors (64%) and energy meters (56%). The results are already very positive, with 57% of respondents noting increased workforce productivity and 57% saying IoT helps save costs.
As more healthcare facilities invest in IoT upgrades, it’s interesting to note the improvements felt by patients, providers and stakeholders alike. Furthermore, IoT is bettering individual health through IoT apps and monitoring that give people more knowledge and control over their daily health. Here’s a look at how IoT is revolutionizing the healthcare industry as a whole, from hospital upgrades to at-home apps.
IoT launches smart hospitals
A smart hospital uses connected assets to improve maintenance, procedures and capabilities. Though many hospitals have not yet upgraded their technology, it’s likely that the next five years will see the launch of many smart hospitals around the world. Connected assets improve the functionality of all hospital equipment. Maintenance managers’ jobs will be much easier as they can remotely monitor the hospital temperature, humidity and air regulation.
Managers will also have access to predictive analytics, which means far better regulation of all machines in the hospital. For example, if something happens to an MRI machine in a hospital, it can mean delays for patients that do not have time to wait. Predictive analytics means managers can fix a problem before it happens.
Smart beds will be able to monitor a patient’s blood pressure, temperature, heartbeat and more. This technology can ease the burden on already overworked nurses who continuously monitor this information. If there is a sudden change in a patient’s vital signs, an alert will be sent to a supervisor who can address the situation.
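The alerting logic a smart bed might apply can be sketched as a simple threshold check. The ranges and reading format below are illustrative assumptions for demonstration, not clinical values.

```python
# Hypothetical "normal" ranges per vital sign (low, high); assumptions only.
NORMAL_RANGES = {
    "heart_rate": (50, 110),       # beats per minute
    "temperature": (96.0, 100.4),  # degrees Fahrenheit
    "systolic_bp": (90, 140),      # mmHg
}

def check_vitals(reading: dict) -> list:
    """Return alert messages for any vital outside its normal range."""
    alerts = []
    for vital, (low, high) in NORMAL_RANGES.items():
        value = reading.get(vital)
        if value is not None and not (low <= value <= high):
            alerts.append(f"ALERT: {vital}={value} outside [{low}, {high}]")
    return alerts

print(check_vitals({"heart_rate": 130, "temperature": 98.6}))
```

In a real system, each alert would be routed to the supervising nurse's station or paging system rather than printed.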
Connected wearables automatically track patients
One problem healthcare providers often face is understanding what happens to their patients after they are discharged. Patients have to come in for multiple follow-up visits and report any incidents, feelings or problems after they leave the hospital. Wearable technology can make this much more manageable for both patients and healthcare providers. For example, a wearable device can measure a patient’s heart rate, blood pressure, temperature, breathing and more, which can then be tracked remotely by doctors and nurses. If a patient is having breathing problems, an alert can be sent to the person monitoring the device, who can then take necessary action, such as sending an ambulance. Imagine the implications of this technology for elderly patients or patients who live alone.
Wearables mean that patients do not need to go to the hospital when it’s not necessary because doctors and nurses can track progress remotely. This allows hospital staff to work on more pressing and urgent cases. Furthermore, when the patients do go to the doctor for a follow-up, staff can review data collected by the wearable for an accurate diagnosis.
IoT revolutionizes patient data collection and records
Connected devices mean endless patient data collection. Imagine a patient in a smart bed — the amount of data that medical staff can view will be extremely useful and recorded automatically. Again, electronic, connected records help nurses because it means less time chasing down information on each patient. The data collected from the patient’s wearable device can also be stored safely in their electronic health record, making diagnosis easier when all information is in one place.
Individualized health assessments
Patient care will change completely over the next five years, but what’s also changing is that individuals can now improve their health themselves with accurate assessments. Many apps have sprung up to give people more control over their health.
Sleep apps monitor sleep cycles to make sure everyone gets enough rest. Fitbits track exercise routines to help people stay in shape. Calorie planner apps help with meal preparation, with many apps offering a tailored nutrition plan so individuals know what to eat and what foods to avoid. On the horizon for the individualized experience are ingestible cameras and internet-connected sensors that can monitor whether or not patients are taking their medication.
A bright future for IoT and the healthcare industry
By 2019, 87% of healthcare organizations plan to implement IoT technology, which is slightly higher than the percentage of businesses that plan to implement IoT by this time (85%). Connected technology can revolutionize the industry and generate products, such as smart inhalers, smart pills and more, all of which will help individuals become healthier. Multiple stakeholders, and especially patients, will all benefit from the expansion of IoT in healthcare, which is why many are looking to invest in this sector.
Pharmaceutical shipping is a particularly compelling application of IoT technologies because of the special challenges and stringent regulations involved. Pharmaceuticals have special requirements with regard to temperature, making precise control critical during transport. An increasing array of pharmaceutical products requires cold chain transport, including all vaccines, many drugs and a significant proportion of biological samples and diagnostic tools.
Most vaccines require storage temperatures of 35 to 46 degrees Fahrenheit and lose potency or physically change after exposure to excess heat or cold, or if they are left out too long. Over the last 20 years or so, new vaccines have emerged with different temperature requirements that make their storage even more complex. The potency of most vaccines can be affected by heat during transport or storage, and some vaccines are more sensitive to heat than others. In many cases, vaccines shipped to third-world countries are rendered useless by heat exposure. Patients who have been injected with spoiled vaccines are actually put at greater risk, believing they have been immunized when they have not.
Heat is not the only environmental condition that can affect these types of medications. A number of vaccines must also never reach freezing temperatures, especially those that use adjuvants such as aluminum. According to the World Health Organization, between 75% and 100% of vaccine shipments are exposed to freezing temperatures. When a vaccine is damaged by freezing, its loss of potency is permanent and cannot be restored.
These requirements are supported by comprehensive, often global, regulations governing the handling, shipping and packaging of medical products. Deviation from temperature, light, humidity and other prescribed conditions can result in significant fines for noncompliance in addition to the loss of effectiveness of the products. This challenge is especially acute in small or personal shipments from a doctor or medical facility to one specific patient. Depending on the distance, the shipment might travel via several transport modes handled by different logistics companies, including small companies or independent contractors during the last-mile segment of the shipment. It is especially difficult and important to ensure and record the correct temperature when a shipment is handed off from company to company or from mode to mode.
The addition of a smart sensor tracking device enables the shipment itself to provide visibility into its whereabouts and environmental condition. The key benefit of always-on, real-time data is that it validates the unbroken integrity of proper temperature control from point of origin to destination or patient. Smart IoT devices continuously log data and transmit it via a cloud-connected gateway, enabling the shipper and logistics partners to make decisions while the shipment is in transit rather than downloading the information only after it has reached its destination. Power requirements can be low, as the devices can communicate via Bluetooth or another low-power network with cloud-connected gateways on a truck, shipping container or other conveyance. The gateways collect and process raw data compiled into actionable reports or send alert notifications to the carrier and its partners.
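The gateway-side processing described above can be sketched as a pass over the sensor log that flags excursions. The 35-46°F band matches the vaccine storage guidance cited earlier; the log format is an assumption for illustration.

```python
# Safe cold-chain band for typical vaccines, in degrees Fahrenheit.
SAFE_LOW_F, SAFE_HIGH_F = 35.0, 46.0
FREEZING_F = 32.0  # freeze damage causes permanent potency loss

def find_excursions(readings):
    """readings: list of (timestamp, temp_f) tuples from the shipment sensor.

    Returns a list of (timestamp, temp_f, event_type) excursion events
    suitable for alerting the carrier and its partners.
    """
    events = []
    for ts, temp in readings:
        if temp <= FREEZING_F:
            events.append((ts, temp, "FREEZE"))
        elif not (SAFE_LOW_F <= temp <= SAFE_HIGH_F):
            events.append((ts, temp, "OUT_OF_RANGE"))
    return events

log = [("08:00", 38.5), ("09:00", 47.2), ("10:00", 31.0)]
print(find_excursions(log))
```

Because freeze damage is irreversible, a real gateway would escalate `FREEZE` events immediately rather than batching them into end-of-trip reports.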
Regardless of transportation mode or carrier, adding real-time visibility into individual pallet or package shipments using IoT technology allows shippers and customers alike to monitor the location, condition and predicted arrival time of important and sensitive cargo such as pharmaceuticals. When shared among supply chain partners, this information empowers collaboration that helps improve shipment efficiency, optimize operations, lower costs and ultimately provide superior customer service.
Ask any company these days if they are making the internet of things a priority and a clear majority will answer with a resounding yes. Ask them exactly what they are doing to make it a priority and the answer gets murky.
It’s an interesting phenomenon and one that foretells a growing problem. The IoT industry is so overhyped these days that virtually every company claims association with it; far fewer have found a business case with repeatable economic benefits that customers are willing to pay for. But that’s exactly what needs to happen for companies to rise above the IoT white noise and plot a course to business success.
And that raises the question: How does a company make the leap from just talking about IoT to deploying IoT products that consumers want or need? As it turns out, it’s not that easy. Here’s why. Often, the decision to pursue an IoT product strategy is made at a management level. As that desire filters down to engineering, however, it takes on another context altogether and therein lies the problem.
Many research and development engineers don’t consider themselves IoT designers. For most, IoT concepts weren’t part of their educational lineup. Instead, what they learned in college, and perfected in the workplace, was how to design widgets of all shapes, sizes and functionality. So, when they get a mandate from management that their widget now needs to communicate wirelessly with other things, they don’t necessarily consider it an IoT device. To them, it’s just a new design requirement.
This disconnect over modifying a widget to communicate wirelessly versus designing a fully functioning IoT device is a crucial misstep and a potentially costly mistake.
Virtually any mundane item can be converted into a wirelessly connected device. Creating one to stand the test of time and onslaught from competing products is much trickier. To make devices “smart,” advanced technologies must be utilized and that introduces complications. A radio must be added to the device to enable its communication, and it must be able to send and receive information. The device must work in an environment with other devices sending and receiving information at the same time, and potentially interfering with its communication. It may even have to operate unattended for long periods of time. Standards must be considered and industry regulations complied with.
The device’s operation in the real world must also be carefully thought through and optimized to avoid issues — a prime example of which is a smart oven (see the Figure). What happens if there is interference from an unintended source, or if its communication circuitry generates unintentional electromagnetic interference? The oven’s operation might be compromised or behave in an unexpected or even hazardous manner. What if its communications function is not properly secured? A cybercriminal could easily exploit this vulnerability to take over the oven’s operation or, worse yet, gain access to the home’s secure network.
Suddenly, creating a device that wirelessly connects to things becomes a highly complicated undertaking. It’s one that demands complete company alignment, starting with management and trickling down to the engineers, and it requires a great deal of heavy lifting. That heavy lifting is essential for any company wanting to do more than just add to the IoT hype.
For any company looking to make that transition, here are five helpful tips to start you down the right path:
- Learn what you don’t know. Fully research and understand the market where your product will play. Who are its consumers and what are their expectations? Are there any specific design requirements that must be met? What do competitive offerings look like and are there any gaps between the functionality they deliver and what consumers want or need?
- Create a strong IoT foundation. Ensure management’s IoT goals and the engineering team’s understanding of those goals are fully synced. Make sure all parties are aligned on the nuances of developing an IoT product and how it differs from previous products the company developed.
- Build a strong team. Make sure the development team is staffed with the right skill sets and ensure each designer has access to critical training and IoT educational resources. Designing a widget that connects to the internet does not make someone an expert in developing IoT products. Consider hiring IoT-specific designers if needed.
- Utilize the right design and test technologies. Choosing a technology because it has a low price often costs a company more in the long run, especially when bugs are discovered during manufacturing or products must be recalled. Instead, look for precision systems with a range of functionality that can be scaled as needs change. Precision measurement can make all the difference between, say, a battery that runs days versus hours on a single charge.
- Ask for help. You may not have the IoT expertise you need in-house to tackle an IoT project and that’s okay — if you aren’t afraid to ask for help. A trusted test and measurement vendor can be a great place to start. To develop tools to test IoT, they’ve had to be intimately involved in the IoT industry, the various standards bodies and industry organizations. Their expertise and insight into how to avoid common IoT development mistakes can be a huge boon to your efforts.
Despite all the overhype, IoT is clearly not going away. It would be a mistake for companies to think that succeeding in this market requires little more than ensuring their products can wirelessly connect to things. Developing and deploying IoT products that consumers will pay for requires hard work. That work must start with building a strong IoT business plan that’s supported by the right ideas, people and tools.
The purpose of bringing industrial-connected equipment to market is to embed your products and services inside the customer value chain and break the spiral toward hardware commoditization. To accomplish this goal, you must first understand each link of the chain, mapping the requirements, skills and constraints of each actor from development and delivery through to sales and consumption.
In many cases, this mapping activity will touch actors in the value delivery chain with which your company has not previously had a direct relationship. How will you determine the appropriate levels of investment and capabilities required at each system layer to ensure each actor has what they require from the beginning, avoiding costly rework and market failures after your production release?
Over the past decade, Bright Wolf has helped some of the largest companies in the world to understand and solve these and other challenges of industrial IoT and connected product systems.
Using heavy machinery human-machine interface (HMI) development and operation as an example, there are likely to be the following roles:
- Software developer
- Internal engineer
- Machine system integrator
- Service technician and dealer
- Owner and operator
At each level there are more individuals, starting with the core development team and ending with the individual consumers of the products produced. When you combine the roles by size and activities by frequency into a single chart, you will see clear patterns emerge for guiding your product strategy.
Right away this tells you there will be a lot of users wanting specific layouts and display widgets on their HMI dashboards. If you don’t build in the ability for individual owners to easily configure their own screens, then either your small team of software developers is going to be incredibly busy following up on support tickets to make necessary code changes for each HMI operator — or your product is going to fail in the market due to a fixed one-size-fits-all user interface that nobody wants to buy.
On a related note, there are certain configuration changes that your service techs and dealers are going to need to be able to make that end users should not be allowed to access. How are you going to restrict different levels of configurability for your equipment in the field? This ability to enable or disable functionality based on user role (and the methods for properly authenticating each user) must be part of your overall system architecture or this simply will not be possible. IoT product managers must plan accordingly and include this as part of the initial specification and requirement documentation, budgeting for sufficient development resources to accomplish these tasks.
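The role-based gating described above can be sketched as a simple permission table. The role names mirror the value-chain roles listed earlier; the action names are hypothetical, and a production system would pair this with proper user authentication.

```python
# Hypothetical mapping from value-chain role to permitted configuration
# actions. Service techs and engineers get capabilities end users do not.
ROLE_PERMISSIONS = {
    "owner_operator": {"edit_dashboard_layout"},
    "service_technician": {"edit_dashboard_layout", "calibrate_sensors"},
    "internal_engineer": {"edit_dashboard_layout", "calibrate_sensors",
                          "update_firmware"},
}

def is_allowed(role: str, action: str) -> bool:
    """Check whether a (previously authenticated) role may perform an action."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("owner_operator", "calibrate_sensors"))    # end user blocked
print(is_allowed("service_technician", "calibrate_sensors"))
```

The important design point is that this table lives in the system architecture from day one; bolting role checks onto a finished product is far more expensive.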
We’ve seen another scenario occur in connected product systems that is equally problematic if not accounted for upfront. How will the system enable customers to configure I/O during installation? In these situations, manufacturers typically include a set of open I/O ports in addition to an initial set of supported sensors, such as humidity and altitude. When new data types are desired in the future, for example, temperature or vibration, operators are able to plug the appropriate sensors into these channels and begin immediate collection — at least at the physical level. Critical information will be lost, however, unless the digital system has been architected to accept, pass along and process this new data and its associated configuration.
Beyond this, the architecture will also need to account for displaying these new values inside mobile apps and web applications. While new data types as a concept may be anticipated, the specific types of data are unknown until the customer identifies a new requirement or opportunity. Therefore, the architecture must allow for system operators to configure the application user interface and data reporting after deployment. Otherwise, the new sensor values may appear in the list of metrics for each asset in the field, but with fixed, unhelpful names like “DataPoint8” and “Item9” with values shown in generic number formats that must be deciphered manually.
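The post-deployment configurability described above amounts to keeping channel metadata separate from the data itself. Here is a minimal sketch, with illustrative field and channel names, of mapping raw I/O channel IDs to operator-supplied names and units so new sensors don't surface as "DataPoint8".

```python
# Operator-editable registry: channel_id -> display metadata. In a real
# system this would live in the platform's configuration store.
channel_config = {}

def configure_channel(channel_id: str, name: str, unit: str) -> None:
    """Let a system operator assign a readable name and unit to a channel."""
    channel_config[channel_id] = {"name": name, "unit": unit}

def display_value(channel_id: str, raw_value: float) -> str:
    """Render a reading for the UI, falling back to the raw channel ID."""
    meta = channel_config.get(channel_id)
    if meta is None:
        return f"{channel_id}: {raw_value}"
    return f"{meta['name']}: {raw_value} {meta['unit']}"

print(display_value("DataPoint8", 4.1))           # before configuration
configure_channel("DataPoint8", "Vibration", "mm/s")
print(display_value("DataPoint8", 4.1))           # after configuration
```

Because the mapping is data rather than code, a customer plugging a new sensor into an open port needs only a configuration change, not a software release.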
The specifics around how much configuration to make possible for actors at each point in the value chain will vary across industry verticals and the products themselves. IoT product managers must consider all of the actors involved in creating, operating and using the system from end to end so they can provide sufficient specifications for the IoT development teams designing the system architecture. Each actor must be well-understood upfront — their requirements, goals and the tools they’ll need based on their likely skill sets — in order to deliver a commercially successful connected product system.
In my last article, I discussed how the onslaught of billions of connected devices coming in the next few years necessitates the creation of a naming structure which can accommodate the eventual shift toward IPv6 addressing.
The internet of things in general will drive the adoption of IPv6 on a wide scale. According to a Gartner report, 25 billion “things” will be connected to the internet by the year 2020. With this many new IoT devices entering the market each year, connectivity — i.e., allowing network-connected devices to “speak” to each other — is vital. In this article, I’d like to expand upon that concept and offer some predictions for how IoT will evolve in an IPv6 internet environment.
Every device will have an IP address. Currently, we have enough IPv6 addresses to give every human in the world 65 million addresses of their own. This is one of the main reasons why IPv6 is such an important innovation — it gives IoT products a platform to continue operating on for a long time.
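The sheer size of that address space is easy to check with a back-of-envelope calculation using Python's standard `ipaddress` module. The 8-billion world population is an assumption, and the per-person count depends on which allocation unit you slice by; however you slice it, the space dwarfs any plausible device count.

```python
import ipaddress

total = 2 ** 128                                 # every possible IPv6 address
subnet = ipaddress.ip_network("2001:db8::/64")   # one standard LAN-sized prefix

print(total // 8_000_000_000)   # raw addresses per person (assumed population)
print(subnet.num_addresses)     # hosts available in just a single /64 network
```

Even a single /64, the size routinely handed to one home network, contains about 18 quintillion addresses, which is why exhaustion is not a practical concern for IoT under IPv6.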
What people might not realize is that IPv6 has been around for 20 years. What brought it to the forefront more recently was the need for additional security measures. (IPv6 is natively ready for IP security. More than a namespace, IPv6 offers a way to securely transport data natively versus with an additional protocol.) Recently, many enterprises and service providers have doubled down on IPv6 for this reason. You might compare this sudden adoption to that of air bags being added to vehicles. General Motors tested air bags in vehicles as an experiment in 1973. It wasn’t until the 1990s that they became mandatory. Why? For people’s security.
An average home router gives the user about 255 addresses within its namespace. All devices that are talking within the home network use the same IPv4 or IPv6 host address to get to the internet. As the name implies, the router is responsible for routing traffic within the network to the appropriate devices. A home router is the gateway to all the devices within the home, and the business router is the gateway to all devices within the business.
Although IPv4 addresses will continue to work, what may eventually happen is the notion of the home router will go away. Devices might not rely on a local area network for internal addressing, but rather have their own connection to the internet. Local management of the devices might operate more like a mesh network within the home where all devices can communicate to one another — along the lines of what software-defined networking has done in other areas of networking. For the thermostat to “talk” to the smartphone, the smartphone must know the thermostat’s address. These types of technologies and connection requirements will make IPv6 more necessary.
The domain name system (DNS) will become more relevant in the world of IPv6 because no consumer, no matter how tech-savvy they are, will want to remember every address of every device in their house. Think of DNS as serving a role similar to that of a password manager which helps a user keep track of all their passwords. The trends we see today will take on new and interesting forms as IoT becomes more pervasive and as we gain more devices to control. Again, we can compare the current state of technology to what was happening in the ’70s and ’80s when there were many personal computer players, and new iterations of hardware, software and operating systems. It took Microsoft and Apple to step up and say, “this is what we’re doing” to move adoption along in a standard way. Right now, no one is ready to say when exactly we’ll cut over to IPv6 exclusively.
When someone figures out how to license IoT, and its protocols, they’ll be the next titan of industry. But that’s more challenging than it was in the era of the PC and Mac. The first letter in IoT stands for internet, which is owned by no one and everyone at the same time. For the first time in history, we have a protocol not owned by any one person, entity or organization, and we’re using it to power devices that we never could have imagined having remote access to. Who would have thought we’d be able to start a car from 35 miles away, or set a home thermometer from another state or country?
The golden age of computing has evolved from hardware to software to online, and it’s only going to get more amorphous. What was merely a hobby has become a business necessity in five years. The internet turned a purpose-built machine that did specific tasks into a much more widespread, general-purpose machine that could do many things. Charles Sun, technology co-chair of the U.S. Federal IPv6 Task Force, said, and I agree, “Without the extensive global adoption and successful deployment of IPv6 as the primary version of the Internet Protocol, the IoT won’t be possible.”