The Wi-Fi bubble
With homes and businesses becoming smarter, the number of IoT devices operating in these areas is on the rise. In most cases, Wi-Fi is the perfect connectivity option, offering high bandwidth at very low cost, since it uses existing infrastructure and common, tried-and-tested technology.
For most noncritical applications, such as smart appliances, Wi-Fi is fully capable and reliable enough to get by. If the bubble bursts and devices are left with no connectivity, it’s no big deal. However, for applications such as fire alarms, security and healthcare applications, Wi-Fi’s occasional unreliability is potentially dangerous. These devices will need their own connectivity option that is guaranteed to send the necessary data, whatever the conditions.
Bigger areas call for bigger bubbles
For a business that operates over large areas, such as a mine or farm, Wi-Fi is not usually the best option because it is fundamentally designed for short-range operation. Essentially, a wider area needs a bigger connectivity bubble. For these applications, low-power wide area networks (LPWANs) such as LoRa, Sigfox and cellular-based LPWANs come into their own. These networks allow for a much larger connectivity bubble and are ideal for operating sensor-based devices across a wide area at a low cost. Devices correctly optimized for these networks also use very little power to send and receive data, meaning batteries can last a very long time in the field without recharging.
These networks give devices a great deal of freedom with regards to moving around inside the coverage area of the network, but unfortunately, that might be as far as it goes. A LoRa network, although capable of covering very large areas, is still essentially a localized solution. If devices are expected to leave base, they will also be leaving the network.
A sea of Sigfox bubbles
Sigfox is a good option for moving devices, offering coverage in over 50 countries worldwide. Like LoRa, Sigfox can offer a bigger connectivity bubble, meaning devices can operate over a wider area. On top of this, Sigfox is a global network, meaning that unlike LoRa, devices will connect in London just as well as they will in Paris or Berlin.
There is, however, one drawback with Sigfox: Your devices will only work where there is a Sigfox network. Presently, Sigfox offers good coverage across many major cities in the countries covered, but not so much in rural or remote areas where there is less investment in technology and infrastructure. If your business case requires your devices to be connected and contactable everywhere they go, Sigfox might not be the answer for this reason.
Is there such a thing as a ubiquitous global LPWAN?
If the business case requires devices to move around and staying in contact is mission-critical, a truly global connectivity option is required. At present, there are a couple of ways to achieve this. One option is cellular data. Due to the ubiquity of GSM mobile, you can now find 3G or 4G LTE data coverage almost anywhere in the world. However, you may need to use and manage multiple SIMs and contracts, and relying on mobile data can lead to spiraling roaming data costs. On top of that, there are still some areas of the world where there is nothing more than 2G mobile coverage, meaning devices set up for mobile data just won’t work.
The second option is to use a global IoT connectivity service, such as that provided by Thingstream. Using the global GSM network, devices in more than 190 countries worldwide can connect as long as they are within reach of a GSM cell tower. Connected devices are ensured ubiquitous connectivity by connecting to the strongest GSM signal available. The network is accessible via 2G, 3G and 4G masts, making it possible to connect almost anywhere in the world, from big cities to the tiniest backwaters.
Happy trees: A perfect use case for Thingstream
When South African farmer Robert Carlton-Shields bought his 100-hectare macadamia nut farm in 2011, it presented him with a challenge: getting the right amount of water to each tree as and when required. To achieve this, he deployed moisture probes across the farm to collect the data needed to inform his irrigation decisions.
Despite having accurate data, this solution still presented Robert with problems. He would have to drive out to each sensor, download the data and process it before deciding how much water was needed. This process proved time-consuming and costly. A wireless system was needed.
In remote areas such as the location of this farm, businesses are limited by what is available. In rural South Africa, that leaves very few options. Cellular data was tried first, but the process of topping up and managing multiple SIMs was still too inefficient and expensive.
A final working solution was found in systems integrator Pylot, who equipped each device with a Thingstream SIM card and software, allowing them to stay connected and report environmental conditions back to base in real time.
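As a concrete illustration of the reporting side, a field probe's message can be as small as a compact JSON payload. The sketch below uses hypothetical field names, and in a real deployment the payload would be published over the cellular network rather than printed.

```python
import json
import time

def build_telemetry(device_id, moisture_pct, battery_v, timestamp=None):
    """Package a soil-moisture reading as a compact JSON payload.

    Field names here are illustrative; a real deployment would use
    whatever schema the connectivity platform expects.
    """
    return json.dumps({
        "dev": device_id,
        "ts": timestamp if timestamp is not None else int(time.time()),
        "moisture": round(moisture_pct, 1),  # % volumetric water content
        "vbat": round(battery_v, 2),         # battery volts, for fleet health
    }, separators=(",", ":"))                # compact encoding saves airtime

payload = build_telemetry("probe-07", 23.46, 3.714, timestamp=1550000000)
print(payload)
```

Keeping the payload small matters on low-power networks, where every byte sent costs battery and airtime.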
“I can now efficiently manage the irrigation of a 100 hectare farm and optimise machinery workflow, all from one cost effective and reliable IoT dashboard.”
– Robert Carlton-Shields, owner, R&K Estates Macadamia Farm
In finding the right solution, testing is key. It may seem obvious to test before choosing, but we’ve seen many cases where it has happened the other way around, often at great cost. Other LPWA options were also considered and tested on the macadamia farm, but when it came to the critical topics of cost and coverage, Thingstream was the only immediately practical solution offering coverage of the whole farm.
“For IoT solutions out in the field there were previously two options: a LoRa network or a Sigfox solution. Both of these have significant challenges. With LoRa there’s a long and costly process of setting up a bespoke network. We’ve run into planning issues for the masts and this can be just to provide a trial of the service – it’s simply not quick enough or cost effective. With Sigfox we’d be reliant on their network coverage – if you are lucky this can be available in densely populated areas, but it’s not a viable option for a remote farm covering hectares of land. Thingstream has a ubiquitous coverage footprint, is low power, cheap and can be up and running in hours. Combine that with the data collection and reporting platform they provide and this is an IoT revolution.”
– Trevor Hart-Jones, CEO, Pylot
Choose your IoT connectivity wisely
IoT connectivity is not a decision to be taken lightly. Every installation is different and must be assessed to find the best fit for the business case. Connectivity comes in many flavors and each has its merits; from the many available options, very few will actually be a perfect fit. The key takeaway here is to choose wisely based on research and testing. Making the wrong choice could bring the whole project back to square one.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
A new report by the World Economic Forum found that keeping things “within” our economy, rather than letting items pass “through” it, could unlock a global growth potential of $4.5 trillion while addressing challenges ranging from resource depletion to climate change. This is the core tenet of the circular economy, an ecosystem in which sustainability is embedded throughout a product’s lifecycle. In a circular economy, products are designed with recycled components, produced with longevity in mind, built to be refurbished wherever possible and ultimately broken down into reusable metal and plastic parts at end of life.
Increasingly, tech companies are employing corporate sustainability programs that follow the principles of a circular system. Popular tech buyback programs and refurbishments remind us of the importance of the circular economy toward the end of a product’s lifecycle, but what many don’t realize is that the process begins at a product’s inception and is increasingly facilitated by emerging technologies such as the internet of things.
How IoT powers the circular economy
According to a European Commission report, 80% of a product’s environmental impact is influenced during the design process. Here, product designers and engineers make the production and manufacturing decisions that determine the energy efficiency, refurbishment costs and reusability factors that end users will face years into the future. By deploying IoT sensors, manufacturers can maintain a record of each component part’s makeup and origin to determine its environmental impact, and to help end customers make ethical decisions about the products they purchase and utilize. This attention to detail in a product’s initial stages will be critical throughout the life of the product as IT leaders use IoT sensors that are intuitive enough to enable visibility into real-time conditions, resource demands and future maintenance needs by analyzing data and performance.
From day one, end users are using IoT trackers on assets in the field to monitor key functions and essentially manage, diagnose and repair their enterprise tech products both at the plant or remotely. These assets in the field can include anything from machinery on a factory floor to wind turbines in an energy grid to IT equipment in a data center.
While the average lifespan for a piece of enterprise technology is about three to five years, IoT sensors can read product performance and match it with data intelligence throughout that lifespan, alerting IT leaders to potential future systems issues, necessary system-wide updates or maintenance needs before anything goes haywire. This level of intelligent and autonomous diagnosis via IoT can save millions of dollars in repairs, replacements and failure costs in the long run. What would traditionally have required downtime or technical support is simplified through IoT sensing, minimizing downtime and unnecessary equipment overhaul.
Data collected through IoT sensors is also providing valuable business insights to IT and finance leaders alike. Valuable data analyzed is unveiling needs and inefficiencies in energy use, underutilized assets, materials consumption and stock inventories to help leaders make informed, data-backed decisions while fueling sustainability.
Refurbishment and accountability
As a product runs through its lifecycle, it may naturally pass through a number of users, undergoing refurbishments that allow it to remain in use in the circular economy. As security threats continue to evolve, so must the technologies to defend against them. IoT systems, combined with emerging AI and blockchain technologies, can also defend against cyber-risks throughout the product lifecycle by ensuring secure authentication and advanced scanning and detection. Increasingly, value chain players are utilizing IoT-generated “product passports,” which give a comprehensive view into the product as it passes through the economy: past service tickets, recent software updates and potential future hiccups or security breaches. The product passport can also include repair and disassembly instructions to facilitate reuse and recovery in the hands of a future owner. These passports are a great alternative to shredding hard drives to protect information.
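The passport idea can be sketched as a simple append-only record. The class and field names below are illustrative, and a production system would anchor these events in an IoT or blockchain backend rather than in process memory.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class PassportEvent:
    """One immutable entry in a product's history."""
    timestamp: str
    owner: str
    action: str        # e.g. "manufactured", "serviced", "refurbished"
    detail: str = ""

@dataclass
class ProductPassport:
    """Append-only record of a product as it moves through the economy."""
    serial: str
    events: List[PassportEvent] = field(default_factory=list)

    def record(self, timestamp, owner, action, detail=""):
        self.events.append(PassportEvent(timestamp, owner, action, detail))

    def history(self, action=None):
        """Full history, or just the events of one kind."""
        return [e for e in self.events if action is None or e.action == action]

passport = ProductPassport("SRV-2041")
passport.record("2019-01-10", "FactoryCo", "manufactured", "recycled-Al chassis")
passport.record("2020-06-02", "FirstOwner", "serviced", "fan replaced")
passport.record("2021-03-15", "RefurbCo", "refurbished", "SSD wiped and reimaged")
print(len(passport.history()), len(passport.history("serviced")))
```

Because every refurbishment and wipe is recorded, a future owner can audit what was done to the device without needing to trust the previous one.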
While the prospect of IoT has delivered unprecedented connectivity and data integrity, it has also helped pave the path for a more sustainable product lifecycle. This not only spares the environment from unnecessary waste and production byproducts, but it also helps organizations and IT leaders maximize their technology and trim costs associated with product repairs, unnecessary downtime and underutilization.
With the rapid growth of connected IoT devices, device management strategies have become increasingly important for industrial IoT deployments. In fact, Gartner is predicting that the number of IoT devices will surpass 20 billion by 2020. This explosive growth across industries such as manufacturing, healthcare and more has generated greater demand for scalable, turnkey IoT device management technologies.
Industrial organizations are looking for better ways to monitor and control fleets of intelligent devices, with seamless connectivity and the flexibility to accommodate an ever-changing device landscape. Some organizations have anticipated this growth and developed a correspondingly robust device management strategy. For these organizations, this rapid proliferation — both in number of devices deployed and their diversity — isn’t daunting. But if you fall into the category of an organization that has yet to adopt a device management strategy, then it is time to act.
Before diving headfirst into developing a skunkworks solution or asking IT to allocate budget, set yourself up for success by first addressing the basics. These most commonly asked IoT device management questions can help you to evaluate the structure of your organization and determine the best device management strategy to achieve your IoT goals.
1. What is the nature of IoT devices?
IoT device management often deals with mission-critical, industrial equipment, where uptime is crucial. These are the types of devices that perform functions that are key to a business’ core operations. So, if one is compromised or fails, the repercussions are felt throughout the entire organization. The variety and complexity of IoT devices is also extremely broad. The spectrum stretches from two-dollar temperature sensors to million-dollar wind turbines, which is why IoT device management systems are designed to control a range of device types with varying levels of sophistication.
2. What is the focus of IoT device management?
Organizations that turn to IoT device management are looking to take their IoT to the next level and enable advanced functionality. For example, some companies use IoT device management to get more from their digital twins — virtual representations of physical objects in the digital domain, whose information is typically stored and updated in a device registry. Advanced digital twin design allows companies to analyze devices as a group and model their behavior as a collective.
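A minimal sketch of the registry idea, assuming a simple in-memory store (the method names are illustrative, not any particular platform's API):

```python
class DeviceRegistry:
    """Device registry holding digital-twin reported state."""

    def __init__(self):
        self._twins = {}

    def report_state(self, device_id, **state):
        """Device (or its gateway) pushes its latest reported state."""
        self._twins.setdefault(device_id, {}).update(state)

    def get_twin(self, device_id):
        """Read the twin without touching the physical device."""
        return dict(self._twins.get(device_id, {}))

    def aggregate(self, key):
        """Analyze devices as a group: average a numeric property
        across every twin that reports it."""
        values = [s[key] for s in self._twins.values() if key in s]
        return sum(values) / len(values) if values else None

registry = DeviceRegistry()
registry.report_state("pump-1", temp_c=61.0, firmware="1.4.2")
registry.report_state("pump-2", temp_c=67.0, firmware="1.4.2")
print(registry.aggregate("temp_c"))  # → 64.0
```

The point of the design is that analytics run against the registry's cached state, so modeling the fleet's collective behavior costs no extra traffic to the devices themselves.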
IoT device management can also help businesses use predictive capabilities by extending its reach into the field. They can analyze historical data across devices such as device state, telemetry and prior failure information, which can be matched against current fault data and other devices for root cause analysis. For example, an oil refinery can use data insights about a pump’s health, as well as like assets across a population, to foresee an upcoming failure.
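The root-cause matching described above can be sketched as a toy comparison of current telemetry against historical pre-failure signatures. Real systems would use trained models over far richer data; every metric name and threshold here is illustrative.

```python
def failure_risk(current, failure_profiles, tolerance=0.15):
    """Return the name of a known pre-failure signature that the
    current telemetry resembles, or None if nothing matches.

    current / profiles: dicts of metric -> value.
    tolerance: relative closeness required on each shared metric.
    """
    def close(a, b):
        return abs(a - b) <= tolerance * max(abs(b), 1e-9)

    for name, profile in failure_profiles.items():
        shared = set(current) & set(profile)
        if shared and all(close(current[m], profile[m]) for m in shared):
            return name
    return None

# Telemetry captured shortly before past pump failures (illustrative)
history = {"bearing-wear": {"vibration_mm_s": 7.2, "temp_c": 88.0}}

print(failure_risk({"vibration_mm_s": 7.0, "temp_c": 85.0}, history))
print(failure_risk({"vibration_mm_s": 2.1, "temp_c": 60.0}, history))
```

A hit against a known signature is what lets the refinery schedule maintenance before the pump actually fails.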
3. How much can IoT devices scale?
As the population of connected devices grows exponentially by the day, there’s seemingly no limit to how far the ratio of devices to people can skew in favor of the former. In a modern IoT deployment, it is not uncommon for the fleet to grow to hundreds of thousands, millions or even tens of millions of devices. The sheer number of devices generates countless extra details and issues, leading to scalability problems that a robust device management strategy must be built to handle.
4. How frequently do IoT devices need to be updated?
In a modern IoT deployment, cloud-based systems need to update IoT devices often. Although devices vary broadly, many types include human-facing aspects used for marketing or training purposes. For example, a connected vending machine may require content updates that can include images or videos, which go well beyond the capabilities required for your average configuration change or software update. It’s not inconceivable that this content will need to be updated as frequently as daily.
As connected devices become more prevalent and IoT adoption continues to spread, the challenges surrounding device management will only grow. The best way a business can equip itself to navigate the changing landscape is to approach digital transformation efforts with careful planning and the right strategies. To set yourself up for success, continue to innovate on features that keep you focused on business goals and make sure key enablers are in place to support the long-term scalability of your device management strategy.
Connected cars are moving past commonplace and becoming standard. According to data from Statista, by 2020 connected cars will account for 98% of the global car market, with 100% market saturation by 2025. That’s just six years from now. The implications for connectivity at this scale are immense, especially on the security infrastructure side.
Within the connected transportation ecosystem, the actual vehicles are the most visible part. But behind the scenes, the most important part of such an environment is the underlying electronic standards that make it possible. With any distributed and connected system you have to get everyone working on the same page. Connected cars are no exception; the engineers, auto manufacturers, maintenance personnel and regulators need to all be on the same technical page so they can work together seamlessly.
Making standards standard
There’s a precedent for useful and connective standards in the automotive industry. For example, there’s the on-board diagnostics II (OBD-II) standard, which has been mandatory for all cars made or sold in the United States since 1996. This standard specifies the diagnostic connector (typically located near the driver’s left knee), its pinout, the available signaling protocols and the messaging format. Vehicle parameters and guidance on encoding the data are also included.
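Two well-known examples of that encoding guidance are the engine-speed and coolant-temperature parameters. The formulas below are the standard Mode 01 decodings for PIDs 0x0C and 0x05, applied to the raw data bytes a scan tool reads off the connector.

```python
def decode_rpm(a, b):
    """Mode 01, PID 0x0C: engine speed from the two data bytes A and B.
    Formula: ((A * 256) + B) / 4, giving quarter-rpm resolution."""
    return ((a << 8) + b) / 4.0

def decode_coolant_temp(a):
    """Mode 01, PID 0x05: coolant temperature in deg C (single byte,
    offset by -40 so the range covers -40..215 deg C)."""
    return a - 40

# A response of 1A F8 to PID 0x0C decodes to ((0x1A*256)+0xF8)/4 rpm
print(decode_rpm(0x1A, 0xF8))        # → 1726.0
print(decode_coolant_temp(0x7B))     # → 83
```

It is exactly this shared encoding that lets any mechanic's scan tool talk to any compliant car.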
The results of the OBD-II standard were and remain profound. Drivers were no longer tied to the dealer for repairs; they now had the flexibility to take their car to any experienced mechanic, at any repair shop, to have their car serviced and maintained using OBD-II standard diagnostic tools. Within IT there are analogous standards. For example, the standardization of the Transport Layer Security (TLS) protocol enables users of any browser — Chrome, Firefox, Safari, Edge, etc. — to reasonably expect that they can securely navigate to any webpage hosted by any webserver — Apache, NGINX, Tomcat, IIS, LiteSpeed, etc. — without worrying about compatibility.
Open standards are necessary for connected cars to provide enhanced safety and better use of data. Manufacturers, regulators and service providers need such standards to build vehicles independently yet still create interoperable products. The vehicles need to communicate seamlessly and securely not only with each other, but also with other elements in a smart city, regulatory agencies and other outside services.
How do we get these standards? It doesn’t require a lightbulb moment of discovery, but diligent and patient work. Teams of people will need to collaborate in a similar fashion to the Internet Engineering Task Force (IETF), whose members generously donate large amounts of their time to developing the standards that make the internet work.
Current vulnerabilities and potential safeguards
While there are stories of engineers hacking connected cars, it’s important to remember that these attacks in non-lab conditions are statistically insignificant. There’s no epidemic of breaches, as we’re at the very nascent stages of the smart car era. This means the industry still has time to step back and assess, and it’s during this period that the industry must get things right. That means working with the government and organizations such as the IETF and the Society of Automotive Engineers (SAE), whose J1939 standards define higher-layer vehicle network protocols that run over the CAN bus. Academia needs to jump deeper into automotive IoT to help create meaningful and verifiable standards for connected vehicles. The connectivity, security, maintainability and reliability of the vehicles all warrant protection through smart design and universal standards. Without such agreement from manufacturers, subcontractors, OEM vendors and regulatory organizations, the connected car promise is no longer practical and certainly not economical for those involved.
There’s a convergence of operational technology (OT) and IT within the industry. We have OT-related technology such as the CAN bus, a standard allowing manufacturers to build vehicle control systems in which microcontrollers and devices communicate without a host computer. Cars are essentially moving industrial control systems, and the core requirements for these systems are reliability and safety. Contrast that with IT, such as that found in mobile phones: users can accept devices that work flawlessly 96% of the time in exchange for occasional restarts and better graphics. Automakers can’t take that remaining 4% risk of brake or transmission failure.
Problems arise in current automotive manufacturing when shortcuts are taken. For example, manufacturers might converge IT and OT onto a single “super-bus” network. IT must be open to communicate with the outside world; consumers want apps such as Pandora or Waze in their cars. The problem is that the open interface making these apps possible also opens the system to bad actors. When IT and OT systems are converged, the avenue of attack goes beyond audio controls and navigation into steering, braking or engine control.
Securing device-to-device communications and interoperability presents multiple challenges in the IoT environment. Without a prevailing standard for automotive security, many industry and technology experts struggle to navigate a variety of protocols, processes and compatibility issues. The auto industry outpaces many others in the IoT space with a comparatively high number of in-automobile devices (all requiring security technologies) that are now being connected. In cases where traditional encryption methods simply don’t fit well, such as public/private key certificate management between small components, a reliable lightweight commercial encryption mechanism is needed.
A non-PKI framework for exchanging strong symmetric encryption keys between devices (within the car), while also capable of scaling to cloud and global services levels, would be a real game-changing and disruptive technology that meets critical security needs head-on. Integrating scalable security systems in the automotive and telematics spaces is still a loosely defined practice, ready for a next-generation solution.
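One way to sketch that non-PKI idea: both the device and the backend hold a factory-provisioned master key, and each derives the same per-session symmetric key from public values (device ID plus a fresh nonce) without exchanging certificates. This is an illustration only; a real design would use a vetted KDF such as HKDF and a careful key-rotation scheme.

```python
import hashlib
import hmac

def derive_session_key(master_key: bytes, device_id: str, nonce: bytes) -> bytes:
    """Derive a 32-byte per-session key from a pre-shared master key.

    Both sides can compute this independently from public inputs, so
    no certificate exchange or PKI handshake is needed between small
    in-vehicle components.
    """
    return hmac.new(master_key, device_id.encode() + nonce,
                    hashlib.sha256).digest()

master = b"factory-provisioned-32-byte-secret"   # illustrative value
nonce = b"\x01\x02\x03\x04"                       # fresh per session

device_side = derive_session_key(master, "ecu-17", nonce)
backend_side = derive_session_key(master, "ecu-17", nonce)
print(device_side == backend_side)   # → True
```

The appeal for constrained automotive components is that HMAC-SHA256 is cheap enough for a small microcontroller, while certificate parsing and asymmetric operations often are not.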
Mundane and memorable at the same time? That’s how I felt on my virgin self-driving voyage at CES in Las Vegas earlier this month. Lyft, the rideshare company, and Aptiv, the mobility company building driverless technologies, offered me the chance to ride the Strip and nearby highways in their new retrofit BMW Series 5. My ride was both underwhelming (I’ve had more clutch moments in a Vegas cab) and overwhelming (all that technology) at the same time.
Turns out that I’m not so special either. There are 30 of these cars in Las Vegas that are now part of the Lyft fleet. Lyft riders in Vegas can actually opt-in for a self-driving ride right on the app. If there’s one nearby, it’s yours. And Lyft has already completed 30,000 of these rides around the country.
The self-driving experience turns out to be a bit oxymoronic. Driverless cars in Vegas have drivers. Actually two. I had a human driver behind the wheel and an Aptiv representative in the passenger seat. Plus, I had a Lyft PR person and my photographer. That made for a crowded ride.
The human driver is there as a precaution should the system fail. It’s also still illegal to have driverless cars on private property in Vegas, which meant that we were on human control while entering and exiting the hotel grounds. Once we got into the driverless mode, I had to peer over the driver’s shoulder to make sure that there was no hanky-panky with a foot on the brake or a hand on the wheel. The driver was pretty chill and into the experience, especially when you consider he’s on his way to obsolescence.
You have to be a bit of a geek to appreciate the exciting part. The driverless experience is like a bunch of IoT devices that forgot to include you in the party. There are 21 different sensor devices. The car’s lidar sensors, which work like radar but use laser light, are mounted at the front beneath the grille and on the mirrors, making it look like any other normal car, unlike the bizarre lidar-topped Google cars that were so popular a few years back. There are also radar sensors mounted on either side of the vehicle.
State-of-the-art GPS, along with inertial measurement unit (IMU) sensors, works to pinpoint the location of the car to within a quarter of an inch at all times. Altimeters, gyroscopes and magnetometers work in unison to constantly transmit the car’s exact position. At the same time, the cameras survey the landscape for lane-changers, crazy pedestrians and other obstacles. The combined effort of these thousands of dollars’ worth of sensors is basically digitizing the world around you and making it actionable by your car.
You get to watch the whole driving show on a visual display that looks a lot like SimCity. Buildings appear as squares, cars appear as moving car-blobs and roads show their traffic stops, turn lanes and more. It’s by no means realistic, and most of us, including our guides, couldn’t quite explain what the color-coding meant on the display. In the back seat of the car, however, was a full-screen GPS display showing where we were at all times.
Of course, there are lots more features, some of them already found in semi-autonomous cars. Real-time data can analyze the fastest route through Vegas traffic, and they’re even beginning to predict potential problems faster than a human could — like a driver cutting lanes.
My ride was short and totally uneventful. We kept within the excruciatingly slow speed limit. We passed one car (yeah!) and had zero close calls. Any bets on when I’ll be taking my first self-driving ride without the human driver backup? Not sure when, but I’m pretty sure it will be at CES.
Blockchain has been encroaching into all areas of life for more than five years now. Financial services are at the forefront of blockchain development, but logistics, utilities, security and government services are quickly gaining ground. An ever-increasing focus on efficiency and security make this the technology to watch. However, the interaction between the all-encompassing IoT and blockchain has been the subject of much debate, and this uncertainty is the reason many companies are reluctant to invest in blockchain technology.
A look at the challenges
Centralization vs. decentralization
The central values of both IoT and blockchain seem incompatible. IoT gravitates towards centralization, ensuring all IoT devices are connected to one main server or network for all of the benefits this affords, while the premise of blockchain is decentralization. Blockchain ensures that not all participants in an interaction have to trust each other by utilizing a disparate network of nodes, thus creating the legendary security for which blockchain is known.
IoT devices rely on speed, one of the major benefits of the centralized processing system. Whether it’s a phone, a smart card or some other customer-oriented device, timeliness is central to the usefulness of IoT devices. However, blockchain is inherently slower. It takes time for a decentralized system to come to a consensus, and this kind of delay would be impractical for many IoT devices. For example, you can’t have a smart car waiting 10 minutes to make a decision that needs immediate attention.
Blockchain is widely lauded for its security benefits, achieved through asymmetric cryptography. Users hold a private key to create a digital signature for every asset on the blockchain, proving ownership. If anyone gains access to this key, they effectively own your property. In IoT, however, each device transacts on its own, with no human interaction between the device and the network it connects to. How to marry these two concepts is one of the major hurdles for any IoT blockchain technology: Would each device store its own private key? What happens when that device is hacked?
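The ownership mechanism can be illustrated with a signed asset record. Here HMAC with a device-held secret stands in for the asymmetric signature a real blockchain would use (the standard library has no ECDSA), but the "what happens if the device key leaks" problem is identical in both cases.

```python
import hashlib
import hmac
import json

def sign_record(device_key: bytes, record: dict) -> str:
    """Sign an asset record with the device's secret key. Whoever
    holds the key can produce valid signatures, which is exactly why
    a leaked key means losing ownership of the asset."""
    body = json.dumps(record, sort_keys=True).encode()
    return hmac.new(device_key, body, hashlib.sha256).hexdigest()

def verify_record(device_key: bytes, record: dict, signature: str) -> bool:
    """Any tampering with the record invalidates the signature."""
    return hmac.compare_digest(sign_record(device_key, record), signature)

key = b"device-held-secret"     # illustrative; would live in secure storage
record = {"asset": "meter-42", "reading": 1043, "ts": 1550000000}

sig = sign_record(key, record)
print(verify_record(key, record, sig))                   # → True
print(verify_record(key, dict(record, reading=9999), sig))  # → False
```

With asymmetric keys, anyone can verify a signature using the public key while only the device can produce one; that asymmetry is what the HMAC stand-in cannot capture.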
Finally, there is the disparity caused by the age of the blockchain market versus the age of the IoT market. Many IoT sensors are designed to be operational for years at a time, but blockchain technology is in its infancy, constantly changing and, at times, quite volatile in its development. Any IoT device incorporating blockchain technology must take this into account to ensure that it doesn’t immediately become obsolete.
A look at the possibilities
If the logistics can be managed, blockchain implementation in IoT devices promises a solution to the difficult problem of running secure, efficient and independent IoT systems. Privacy and security in IoT is a fundamental concern, especially given the vast amounts of personal data IoT collects and transfers on a daily basis. Ensuring a secure process between validated, legitimate stakeholders is of primary importance, and blockchain ensures this through user verification and provenance checks.
Blockchain also allows for consensus and agreement models to detect hackers and unverified users, thereby mitigating cyberthreats. And on a system with tens of thousands of IoT nodes, the possibility of hacking the network is remote at best.
Blockchain allows for automated interactions to occur between different nodes in the system, predicated on preset embedded criteria, without the need for communicating with the central network. This means that business logic can be executed automatically, without human intervention, significantly streamlining processes and increasing efficiency of IoT devices while still maintaining high levels of security.
Model tracking allows a system to record metadata and the results of logic executed by the network, creating an immutable history of what has happened and what (and why) certain decisions were made during IoT processing. This allows regulatory compliance, troubleshooting and model improvement to become an integral part of every system.
IDC estimated that by 2019 as many as 20% of IoT deployments will include blockchain technology. This growth isn’t coincidental: many large companies have been exhibiting their IoT blockchain technologies for several years.
In 2017, SAP and IBM partnered to demonstrate how IoT and blockchain can automate a pharmaceutical supply chain. Individual units needed to be refrigerated to specific temperatures which were measured via sensors, and that data was fed back to the main network. SAP’s Leonardo software platform was combined with IBM’s blockchain cloud service to create a system that could track these sensors, manage unit refrigeration and ensure unit security from shipping through to final retail delivery, with the transactional history of the process visible to all parties.
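The monitoring half of that demo can be sketched as a simple excursion check over a unit's temperature log. The 2-8 degrees Celsius band is a common pharma cold-chain range, used here purely as an illustrative default; the real system would anchor the log on the blockchain so every party can audit it.

```python
def cold_chain_excursions(readings, lo=2.0, hi=8.0):
    """Scan a unit's temperature log and return every reading that
    falls outside the allowed refrigeration band.

    readings: list of (timestamp, temperature_c) tuples.
    """
    return [(ts, t) for ts, t in readings if not (lo <= t <= hi)]

log = [(1, 4.1), (2, 5.0), (3, 9.3), (4, 4.8)]   # (hour, deg C)
print(cold_chain_excursions(log))                 # → [(3, 9.3)]
```

An empty result tells the retailer the unit stayed in spec for the whole journey; a non-empty one pinpoints exactly when refrigeration failed.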
This same methodology is being replicated in IBM’s newly launched IBM Food Trust, which is already being used by Walmart to track vegetables from farm to table. IoT provides the monitoring and tracking infrastructure, and blockchain provides a trusted way to audit whether the movement of goods complies with policy.
The promise of IoT blockchain benefits, unfortunately, does not mean solutions to the challenges listed above. A lot of infrastructure, integration, technology and governance work are needed before integrated IoT and blockchain technologies are widely available. Small steps taken by companies like IBM are the first in a long journey toward designing an IoT system with in-built blockchain capabilities.
Digital twins continue to gather steam, with Gartner predicting that the use of digital twins will triple by 2022. A digital twin is a digital representation of a physical object or system. They are widely used within supply chain management to track items as they move between companies, often alongside blockchain technology, and within automated factories to track wear and tear and perform predictive maintenance to reduce downtime.
Digital twins are also becoming synonymous with IoT as people increasingly model "things" so that analysis and operations can be done with minimal calls to the outside world (e.g., reducing power consumption). For example, creating a digital twin of a freight truck lets all of your fleet management, supply chain visibility and vehicle maintenance systems operate on the digital twin, rather than each having to talk constantly to the physical vehicle with its own requirements.
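As a minimal sketch of that idea, the class below models a hypothetical freight-truck twin (the `TruckTwin` name, fields and telemetry format are illustrative assumptions, not any vendor's API): the physical vehicle pushes a reading once, and every downstream consumer queries the local twin instead of the truck.

```python
import time
from dataclasses import dataclass, field

@dataclass
class TruckTwin:
    """Hypothetical digital twin of a freight truck: a local,
    always-queryable mirror of the vehicle's last reported state."""
    vehicle_id: str
    state: dict = field(default_factory=dict)
    last_update: float = 0.0

    def apply_telemetry(self, message: dict) -> None:
        # The truck pushes readings once; fleet management, supply
        # chain visibility and maintenance all read the twin instead
        # of polling the vehicle over the network.
        self.state.update(message)
        self.last_update = time.time()

    def fuel_low(self, threshold_pct: float = 15.0) -> bool:
        # Answered entirely from the mirrored state.
        return self.state.get("fuel_pct", 100.0) < threshold_pct

twin = TruckTwin("truck-42")
twin.apply_telemetry({"fuel_pct": 12.0, "odometer_km": 180_450})
print(twin.fuel_low())  # True, with no call back to the truck
```

The design choice is the point: one write from the device fans out to any number of local reads, which is how the twin reduces power consumption and round trips.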
This digital twin can test and monitor the performance of the IoT ecosystem. However, its unique characteristics need to be considered to ensure your testing strategy is effective. Digital twins are relatively uncontrolled systems, unlike a typical software system where inputs and outputs are well established, so simply defining what is and isn't a failure can be difficult. A digital twin also mirrors the physical world and, as a result, carries far more variability: from temperature sensors to tire pressure sensors, there is a vast number of possible input combinations, coupled with the business logic and the many independent parts of the digital twin.
The complexity of the digital twin means that traditional testing strategies that rely on hundreds of test cases simply will not suffice. You need to move beyond traditional script-based testing to intelligent testing driven by a combination of AI and machine learning, with auto-generated tests and learning algorithms to determine the pass or fail.
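A rough sketch of what auto-generated testing can look like, using a made-up maintenance rule as the system under test (the rule, its thresholds and the input ranges are all illustrative assumptions): instead of hand-writing hundreds of cases, generate readings across the input space and assert invariants that must hold for every one.

```python
import random

# Hypothetical business rule from a truck digital twin: flag a tire
# when pressure drifts more than 15% from nominal, or it runs hot.
def needs_maintenance(pressure_psi, nominal_psi=100.0, temp_c=20.0):
    drift = abs(pressure_psi - nominal_psi) / nominal_psi
    return drift > 0.15 or temp_c > 90.0

def generate_cases(n, seed=42):
    """Auto-generate sensor readings across the plausible input space."""
    rng = random.Random(seed)
    for _ in range(n):
        yield rng.uniform(0.0, 200.0), rng.uniform(-40.0, 120.0)

# Property-style checks: invariants, not enumerated expected values.
for pressure, temp in generate_cases(10_000):
    flagged = needs_maintenance(pressure, temp_c=temp)
    if temp > 90.0:
        assert flagged, "hot tires must always be flagged"
    if abs(pressure - 100.0) <= 15.0 and temp <= 90.0:
        assert not flagged, "in-spec readings must not be flagged"
```

Ten thousand generated cases cost a few lines; the learning-algorithm layer mentioned above would then sit on top, steering generation toward the combinations that expose failures.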
Another consideration is that digital twins are not static and move between software systems. Going back to the freight truck management example, the digital twin of a motor part may move across several companies as ownership of the motor changes. This means you need to test the compatibility of all those different software systems interacting with the digital twin. The complexity of a digital twin requires not only AI and machine learning to cover the vast number of complex scenarios, but a test automation tool that is able to test the entire ecosystem.
Digital twins are here to stay, and have the potential to transform the way products are designed, manufactured and maintained. As digital twins become more commonplace, the adoption of intelligent AI-driven testing will accelerate to understand the entire IoT ecosystem and maximize outcomes.
It was an exciting year in the IoT space. And if we’ve learned one thing, it’s that our only barrier to innovation is our own imagination. In 2018, IoT connected the unexpected — like prosthetic limbs and tectonic plates — to improve daily life with smart systems. In the coming months, I believe we’ll see 2019 continue leading and delivering IoT’s potential. But, as we kick off the new year, here’s a look back at a few unexpected things IoT made a bit smarter:
1. Beehives
You better believe IoT connected bees. More specifically, their hives. Using AT&T's DataFlow capabilities, YHY Group is piloting a technology that tracks a beehive's humidity levels, temperature and other aspects to ensure optimal hive health. Our capabilities provide a new way for YHY Group to track data, and the company anticipates continued enhancement of the system in 2019.
2. Robot couriers
Looking for reliable delivery? Call your friendly neighborhood robot. We are providing the connectivity for Marble Robot, which is creating a fleet of robots to deliver groceries and package goods in ultra-urban environments. While still early in overall rollout, these robots securely transport your goods and allow workers to focus on longer distance deliveries.
3. Office furniture
Are you tall and constantly hitting your knee against your desk? Or small and in a never-ending battle with your chair to achieve an optimal height? Well now, whether you're at your desk or another workstation, your office space can be automatically tailored to you. We are helping Herman Miller create smarter workplaces with connected office furniture all around the world. Employees can not only improve their health and posture, but also get data on how desks, tables and chairs are used to help organizations maximize their asset usage.
4. Porta potties
Did you know that one of the biggest deterrents to keeping construction crews staffed is bathrooms? That's why the AT&T Foundry went to work with a major construction company to develop a battery-powered IoT device that helps ensure the onsite porta potties stay clean. A sensor in the device detects movement inside the restroom — movement means a clean facility, but a hastily slammed door means it's likely time for maintenance. It's helping porta potties pass the smell test on a daily basis.
5. Airport luggage dollies
The holiday season is one of the busiest times of the year for airports. And while many travelers don’t think about what happens behind the scenes, a lot goes into ensuring everything runs smoothly. Today, ground handlers drive around the tarmac in search of missing luggage dollies. But, Springshot and AT&T are providing a solution. With the help of our solar-powered devices and DataFlow capabilities, Springshot has deployed a system that allows airline operators to track utilization and location of dollies and, more importantly, get travelers’ luggage to their destination. The eventual goal? Provide a one-stop shop for tarmac management.
6. Tectonic plates
Yes, you read that right. We are making mountains and islands smarter. UNAVCO is a nonprofit, university-governed consortium that places sensors on tectonic plates to detect geologic and seismic activity across North America, the Caribbean and South America. Using our global connectivity, these sensors assist with early earthquake warnings and with broader seismic activity research and analysis.
7. Snow removal and mowing robots
Imagine a world where you don’t have to go outside to shovel the snow from your driveway or mow on a hot summer’s day. That world might be a little further away for consumers, but for commercial customers like property management companies, property owners and schools, that reality is within reach. We provide connectivity for Left Hand Robotics, a company producing autonomous robots. Using advanced GPS technology, the robots follow a predetermined path to mow or remove snow, and are equipped with sensors to avoid obstacles during operation. The weather outside might be frightful, but these robots won’t think twice about facing the elements.
8. Prosthetic limbs
Running, bathing, dressing, driving — many take for granted how essential mobility is to our everyday lives. Now imagine losing a limb. Those experiencing limb loss face an unfamiliar journey to finding their way to restored mobility. Some who don’t know where to begin might not communicate their challenges with their prosthetic clinicians, potentially leading to lack of use of their prosthesis. So AT&T and Hanger, Inc., a leading provider of orthotic and prosthetic patient care services and offerings, worked side-by-side to prototype the industry’s first standalone, LTE-M network-connected device for prosthetic limbs to improve visibility and communication with patients. This device syncs directly to the cloud via our network, allowing Hanger Clinic, the patient care subsidiary of Hanger, to receive data on patients’ prosthetic usage beyond the clinical setting. Equipped with these insights, clinicians can proactively contact patients to address potential issues impacting prosthesis usage, such as fit and comfort, to in turn increase mobility. And ultimately, they can help patients get back to navigating everyday life and doing what they enjoy.
You won’t find the tech valley of death on a map, but it’s real. It’s how startup and R&D communities describe the perilous gap between new technology and adoption, a place littered with the bones of good ideas that didn’t make it.
Industries like defense and aerospace, where new technologies are table stakes, face the valley every day, and have done a lot of research into how to successfully get to the other side. Here’s what IoT strategists can learn from them.
The IoT proof of concept
Crossing the valley means balancing patience with strategic action. Patience is important because even the best ideas take time to develop adoption and revenue; action because new technologies need continued investment during that time. For perspective, the typical time for new defense technologies to move from research to application is more than a decade. That’s an awfully long time to support a new technology with no guarantee that it will produce the results you want. That’s where proofs of concept come in.
These small-scope projects can quickly prove out key parts of a new IoT technology, informing whether a wider implementation is worthwhile. Proofs of concept should be kept as simple as possible, focused on testing a single, pivotal point of value. Get that proof, learn from it and then move on to the next. Of course, some of these proofs will fail. In fact, according to some surveys, most IoT projects — 74% of them — end in failure. This is where it’s important to understand why.
Why promising proofs die
IoT proofs of concept typically fail for two reasons: technology and market fit.
Unlike aerospace failures, IoT failures aren't particularly dramatic; they usually come down to data processing issues. IoT relies on complex, AI-enabled analytics, so misses in input data or training for machine learning can produce startlingly bad results. In other cases, net new proofs of concept may fail to account for integrations with real-world legacy systems. One IoT proof of concept couldn't even work in its own office, as the walls were too thick and curved for a good Wi-Fi signal.
Even when the technology doesn't fail outright, leaders can underestimate the complexity of IoT projects. This can quickly put projects behind schedule and over budget, likely resulting in cancellation and a plot in the valley.
Though technology can be a vexing issue, many IoT projects fail because of the market. This can mean target consumers simply won't use a new IoT service, or that enterprise customers remain stubbornly attached to legacy business processes despite new, efficient IoT technologies. In either case, market failures can be traced back to common sources:
- Poor stakeholder knowledge. Research consistently suggests that the chief factors in the success of IoT systems are deep knowledge of end users and market needs. IoT proofs of concept planned at a purely technical level are unlikely to succeed.
- Value isn’t understood. This scenario happens when the business case for a proof isn’t fully articulated, or when successes aren’t communicated to the right levels within an organization, leaving key leaders to guess at what they’re funding. Clearly communicated value is especially important when spending by one business unit accrues benefits in another.
- Lack of plan for scaling. IoT leaders need to be ready for failure, but they also need to be ready for success. A two-person team may be fine for running a small proof of concept, but is there a plan for scaling to wider scopes and geographies, with the talent to run that expansion?
What to do about it
The good news is that the potential solutions to each of these pitfalls are pretty straightforward. The same common sources of failure can also become blueprints for success.
Fixing technology failures:
To minimize the risk of technical failures, start on the edges of the architecture. Begin with limited dependencies on the core systems and business processes that you can’t control. This can help ensure that a proof of concept isn’t stalled by technical issues it can’t influence.
Next, be clear about the value to the organization you’re trying to prove. Is it efficiency? Customer satisfaction? New revenue streams? How will that value be measured and what does success look like?
Ultimately, there’s no way to eliminate all technical risk. So accept reasonable risk, then test and iterate quickly. If you’re successful, scale (best practices for scaling pilots for digital supply networks can be useful here); if you’re not, move on to another proof of concept.
Flipping market failures:
Many market failures stem from a lack of stakeholder alignment. This can be internal (wrong sponsorship) or external (wrong partners). One solution is to bring that collaboration up front. The planning for any proof of concept should include identifying and engaging the end users, enablers and other stakeholders outside of the IT organization.
Next, think about success in terms of value, not just cost or revenue. Proofs of concept can add value by demonstrating something new or by unlocking performance not easily described in dollars and cents. Carbon fiber and other advanced materials in America’s Cup boats and Formula 1 cars didn’t immediately save or earn money, but they did make for lighter, stiffer boats and cars, and set the stage for massive performance gains down the line.
Finally, don’t just blindly expand scope or geography when scaling successful proofs. Rather, think about scaling as another opportunity to improve by fixing things that barely worked and smoothing out rough edges.
You may not be building a stealth aircraft or a space-based laser, but success is no less important to your organization. With these simple guidelines, you can approach your IoT proof of concept with confidence — and avoid the valley of death.
The rapid growth of the internet of things has brought a corresponding rise in the adoption of APIs. In IoT, most devices communicate with the cloud via RESTful API calls, which are well suited to machine-to-machine communication; standard user interfaces, on the other hand, are meant for human consumption.
While APIs are better suited for today’s business model, they present their own security challenges to address. One of those challenges is access authorization.
Since session cookies are impractical for IoT deployments, developers will often use different authorization methods, such as issuing a JSON Web Token after successful device authentication. Unlike users, who log onto a web portal, execute a few tasks and log out, IoT devices require constant communication to their cloud infrastructure. In order to simplify the implementation, some developers will opt for so-called “long-term tokens” that can be valid for days or even weeks, instead of adopting the best practice of rotating the access token every few minutes.
Unfortunately, simplifying implementations also has its risks as an attacker has the opportunity to hijack and reuse a token for days after it’s issued. The situation can escalate further if other security practices, such as TLS encryption, are not properly implemented.
Moreover, even when transport encryption and authorization techniques are correctly implemented, there are still potential attack vectors that can result from a lack of security protocols in the application design. Attacks such as SQL injection can be executed against a vulnerable API endpoint much as they would be against a regular web portal. And there simply aren't many good options to mitigate these attacks, as most web application firewalls (WAFs) still do not parse or examine JSON inputs.
The solution? Design, build and connect to your APIs from the start with security in mind. There are five main points to consider when doing so:
- Strong authentication, ideally using asymmetric encryption and a private key stored securely on the device;
- Relatively short-lived tokens. Rotate them at least once every 30 minutes;
- TLS transport encryption;
- Standard authentication/authorization needs enforced for every endpoint. Avoid publishing unprotected endpoints; and
- Sanitize user input, even if it’s coming from a device. A malicious user could manually manipulate requests to exploit faulty business logic.
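The last point is worth a concrete illustration. The sketch below (an assumed in-memory SQLite table standing in for an API endpoint's data store) contrasts splicing device input into a query with a parameterized query, which treats the input strictly as data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (device_id TEXT, value REAL)")
conn.execute("INSERT INTO readings VALUES ('thermo-1', 21.5)")

# A request field exactly as it arrived -- a classic injection payload,
# whether sent by a compromised device or a tampering client.
device_id = "thermo-1' OR '1'='1"

# Vulnerable: the payload rewrites the query and every row leaks.
unsafe = conn.execute(
    f"SELECT value FROM readings WHERE device_id = '{device_id}'").fetchall()

# Safe: the placeholder keeps the input out of the SQL grammar,
# so the malicious string simply matches nothing.
safe = conn.execute(
    "SELECT value FROM readings WHERE device_id = ?", (device_id,)).fetchall()

print(unsafe)  # [(21.5,)] -- the injected OR clause matched everything
print(safe)    # []
```

The same discipline applies to any backend a device-facing endpoint touches: never assume input is benign just because it came from "a thing" rather than a browser.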
Keeping the above in mind is a critical step. In addition, implementing an API-aware WAF can also go a long way in mitigating security issues and providing visibility into flows.
APIs will continue to grow in lockstep with the rapid growth of IoT, so you need to start thinking about adopting a robust security strategy now to reduce risk and protect your customers.