The internet of things provides businesses with an incredible opportunity when it comes to analyzing customer data. With the right technological infrastructures, businesses can now capture customer usage data at levels previously unimaginable, helping to inform new market strategies and product innovation.
While being able to collect immense quantities of customer data is proven to help companies facilitate the buyer journey and improve customer experiences, it shouldn’t stop there. Instead, companies should find ways to monetize it as well.
Customer data is among the most important assets of any enterprise, if not the most important. The best customers will be identified by data; the customer experience will be enhanced by data; and new products and services will be developed based on data.
That probably explains why industry analysts have labeled enterprise data everything from “the new cash cow” to “the new gold rush.” IoT companies, in particular, are constantly under pressure to turn usage data into new sources of revenue, and that pressure is only poised to intensify in the years ahead.
But how do businesses transition from simply collecting data to monetizing it too?
It starts by looking at usage trends. With usage trends, businesses can identify the best customers to target for cross-selling of additional products or services. They can additionally predict, with a high degree of accuracy, which customers are likely to accept an upsell to a higher level of service.
Usage data also can reveal customers with patterns of low usage who may be unsatisfied or who may not be fully utilizing a particular product. By making use of this type of data, companies can reduce customer churn and ultimately improve the average lifetime value of their customers.
In order to fully embrace the value of data at the enterprise level, companies need a data monetization platform that can benefit the entire organization and marry front- and back-office infrastructures. The system should be able to collect, monitor, measure and record real-time digital customer activities, such as application usage or consumption of power, data, storage and bandwidth. Businesses should also be able to easily bill customers based on those activities.
In IoT, this is particularly important. The amount of granular data generated by IoT deployments creates a demand for dynamic billing solutions that can turn connected products directly into profit from the get-go.
As businesses increasingly compete on their ability to leverage customer data, here are some factors to keep in mind when deploying a data monetization engine:
- Flexibility and scalability: As customer preferences change, so will the way businesses want to use and monetize their customer data. Having a platform that’s flexible and scalable with fluctuating customer and market demands is key to monetizing changing quantities of customer data.
- Usage preprocessing: As the amount of usable data explodes, it can be easy to overwhelm back-office software systems and bog down servers with data that is not needed to monetize products and services. That’s why monetization platforms should use a usage rating engine to preprocess usage data externally, sending only the relevant data to a company’s system of record.
- Rules-based rating: The usage rating engine should use rules that can be configured easily to control entitlements, services and containers. Being able to leverage a powerful, agile rules-based rating engine enables businesses to streamline billing operations and keep costs under control.
- High-value real-time usage: As stated previously, collecting usage data at a high degree of quality and granular detail is key to success. The right monetization engine will leverage this real-time usage data to automate critical events within a bill cycle — that could include anything from sending upgrade offers to charging for overages.
- Service identifier mapping: Tracking the physical components assigned to or used by a customer is a crucial step in leveraging usage data. Cloud application and infrastructure providers gain much better visibility into customer usage when multiple device identifiers can be applied to client accounts and used to customize billing.
- Flexible pricing paradigms: Most ERP systems are limited by the pricing paradigms they can support, but a multidimensional data monetization platform has almost unlimited flexibility in packaging and pricing. Building rules logic in a monetization platform should be as easy as building a financial model with a spreadsheet formula.
- Product bundling: Many systems are limited by the product bundles and packaging strategies they support. This inevitably leads to product catalog creep, sometimes exploding into hundreds of thousands of needless variations created because of system limitations. The right monetization solution should allow businesses to bundle products or services together and create sub-allocations for cost basis. The monetization platform should also be able to manage discounts and pricing tiers, accurately and at scale.
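To make the usage preprocessing and rules-based rating ideas above concrete, here is a minimal Python sketch. Every field, service name and rate in it is hypothetical, invented purely for illustration; a real rating engine would be configuration-driven and far richer:

```python
# Minimal sketch of a rules-based usage rating step. All rule and record
# fields are hypothetical, not from any particular platform.

RULES = [
    # Each rule: service name, units included in the entitlement,
    # and the price per unit of overage.
    {"service": "storage_gb", "included": 100, "unit_price": 0.05},
    {"service": "api_calls", "included": 10_000, "unit_price": 0.001},
]

def rate_usage(records):
    """Aggregate raw usage per service, then price only the overage.

    This is the "preprocessing" step: raw records stay here, and only
    billable charges flow on to the system of record.
    """
    totals = {}
    for rec in records:
        totals[rec["service"]] = totals.get(rec["service"], 0) + rec["quantity"]

    charges = []
    for rule in RULES:
        used = totals.get(rule["service"], 0)
        overage = max(0, used - rule["included"])
        if overage:
            charges.append({
                "service": rule["service"],
                "quantity": overage,
                "amount": round(overage * rule["unit_price"], 2),
            })
    return charges
```

The point of the pattern is that pricing behavior lives in the `RULES` data rather than in code, so billing operations can change entitlements without a software release.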
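The "as easy as a spreadsheet formula" flexibility mentioned above can be illustrated with tiered pricing expressed as plain data plus one small function. The tier boundaries and rates here are invented for the sketch:

```python
# Hypothetical tiered pricing table: (upper bound of tier, price per unit).
# Like a spreadsheet formula, repricing means editing data, not code.
TIERS = [(1_000, 0.10), (10_000, 0.08), (float("inf"), 0.05)]

def tiered_price(units):
    """Charge each unit at the rate of the tier it falls into."""
    total, lower = 0.0, 0
    for upper, rate in TIERS:
        in_tier = max(0, min(units, upper) - lower)
        total += in_tier * rate
        lower = upper
    return round(total, 2)
```

A bundle or discount scheme would be more data of the same kind, which is what lets a monetization platform support packaging strategies an ERP system cannot.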
As the internet of things continues to grow and reach new heights, the need to analyze customer data and bill for new products quickly and easily is more important than ever before. To do this, companies should leverage the customer data they’re already gathering — or should be gathering — and implement a flexible, agile and scalable data monetization platform that can support their broader business strategy. Then, they’ll be empowered to turn their customer data directly into an enterprise-level cash cow.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
Launching your first enterprise integrated IoT solution is a monumental moment for any organization. The initiative represents a first step towards an unprecedented level of automated industrial and integrated business processes. We can now create tailored user experiences based on real-time information leveraging the emerging technologies of edge processing and machine learning, along with enhanced strategies for product design and IT digitalization.
IoT integration required
No matter how you run your first IoT project (relying on a service provider, leveraging a software platform or hacking away on your own), the key enterprise success criteria will depend on the ability for your IoT application to integrate with existing business systems. Like a mobile app that doesn’t engage users with push notifications or an e-commerce site that doesn’t have a shopping cart, IoT cannot deliver its promises without integrating with existing and legacy systems.
So how do you start on this IoT integration challenge?
IoT integration: Thou shalt…
I. Secure everything
It’s tempting to build IoT rapidly and start moving data between systems. A beta version demo showing a connected factory floor immediately sends your boss, marketing team and executives into an excited frenzy over a Jetsons-like future. The last thing anyone wants to think about is the consequences of those devices being compromised, remotely monitored or even captured as part of a botnet. Sadly, this outcome is very likely, and it is proven to be happening all around us. When doing your IoT integrations, always ask yourself:
- Do I have authenticated trust with this device or user?
- Should the information being transmitted be encrypted?
- Should this trusted device have authority to do what it’s doing?
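Those three questions can become explicit gates in an integration's message path. The sketch below is illustrative only: the HMAC token scheme and the ACL table are invented, and question two, encryption, is best answered at the transport layer (TLS on the MQTT or HTTP connection) rather than hand-rolled:

```python
import hashlib
import hmac

# Assumption for this sketch: each device is provisioned with a secret
# at manufacture. Real deployments should use per-device secrets and a
# proper identity platform, not a shared literal like this.
SECRET = b"per-device-provisioned-secret"

# Hypothetical authorization table: which actions each device may perform.
ACL = {"sensor-42": {"publish_telemetry"}}

def authenticate(device_id, payload, signature):
    """Question 1: do I have authenticated trust with this device?"""
    expected = hmac.new(SECRET, device_id.encode() + payload,
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def authorize(device_id, action):
    """Question 3: should this trusted device be doing what it's doing?"""
    return action in ACL.get(device_id, set())
```

Rejecting a message at either gate, before it touches a back-end system, is what keeps a compromised device from becoming a compromised enterprise.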
II. Pick up where the existing system left off
With the mobile app wave, many companies spent years becoming mobile-ready before attempting to build an app. This was a massive inhibitor and falsely implied that old systems needed to be replaced. Instead, recognize that the vast majority of existing systems already have integration points built into them. There is a method for accessing them, even if that method isn’t currently in “tech-vogue.” SOAP is still a viable integration option even if Reddit dismisses the idea. Whether you are integrating with a mainframe database, a Java archive, a massive service bus or a DLL, understanding the architected communication protocol for your legacy systems is an important place to start.
III. Speak standards
Standards-based protocols aren’t a magic bullet, but they give you a leg up on integrating and future-proofing your enterprise. Choosing MQTT or REST as your IoT standard may still pose challenges as you communicate with MTConnect or OPC-UA, but by picking a standard, you make a whole community of libraries and resources available. Ultimately, the benefit of selecting an open standard for your solution is the ability to leverage a vast number of existing open projects.
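Whichever standard you pick, the practical payoff is consistent, self-describing messages. As a rough sketch, here is a sensor reading packaged for an MQTT-style publish; the topic convention and payload fields are assumptions for illustration, not part of the MQTT standard itself:

```python
import json
import time

def make_message(site, device_id, metric, value, unit):
    """Build an MQTT-style topic and a JSON payload for one sensor reading.

    The "factory/<site>/<device>/<metric>" topic scheme is an invented
    convention; the value of standardizing is that every consumer can
    subscribe and parse the same way.
    """
    topic = f"factory/{site}/{device_id}/{metric}"
    payload = json.dumps({
        "device": device_id,
        "metric": metric,
        "value": value,
        "unit": unit,
        "ts": int(time.time()),   # epoch seconds, so consumers can order readings
    })
    return topic, payload
```

With a convention like this in place, a dashboard, an alerting service and a billing system can all consume the same stream without device-specific parsing code.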
IV. Adapt and advance
The need to use legacy protocols alongside open standards creates competing software designs. To overcome this conflict, an adapter layer is critical. This adapter will perform the task of relaying data from one protocol to another as efficiently as possible. This means the adapter shouldn’t perform any complex logic or refactoring. Instead, it should be as lightweight as possible and simply act as a cross-channel for communication. There are connector software vendors that can help with this task, especially if your solution is cloud-based. Alternatively, if you have a subject matter expert, the adapter can quickly be built in-house.
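To show how thin that adapter should be, here is a sketch that relays a legacy fixed-width record (an invented format, standing in for whatever your legacy system emits) into JSON. It maps fields and does nothing else, no business logic, no refactoring:

```python
import json

def adapt_legacy_record(raw: str) -> str:
    """Relay a hypothetical legacy fixed-width record as JSON.

    Assumed layout: machine id (8 chars), status code (4 chars),
    reading (8 chars). The adapter only maps fields across the
    protocol boundary; interpretation belongs to the consumers.
    """
    record = {
        "machine": raw[0:8].strip(),
        "status": raw[8:12].strip(),
        "reading": float(raw[12:20]),
    }
    return json.dumps(record)
```

Keeping the adapter this dumb is a design choice: if it ever grows conditionals about what the data means, business logic has leaked into the transport layer.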
V. Not duplicate
Over the years, many enterprise integrations have been built by simply copying the database nightly. “DO NOT MAKE COPIES OF YOUR DATA TABLES.” Excuses for this terrible practice are based around security, performance or a general lack of skills to do the actual integration. As far as IoT integration goes, this is one of the worst things you can do, for several reasons: it immediately creates questions about which system is the system of truth, it doubles the work effort to constantly sync conflicting rows, and it creates developer confusion. When you’re tempted to copy data, it’s time to review your architecture design and the technical debt you will incur.
VI. Not duplicate
Another temptation every integration author feels is the desire to duplicate business logic. For the enterprise this is terribly risky, as business rules become ingrained in many different code bases and difficult to understand. Many companies struggle for years to understand how their own processes work and how they can modify them safely going forward. Protect yourself now and approach your IoT with a reusable API focus. A single point of interface into your systems of record will let you know exactly where and what is running your business.
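As a hedged illustration of the single-point-of-interface idea, a thin facade can own the business rule while every caller, IoT included, goes through it instead of re-implementing the logic. All names and the discount rule below are invented:

```python
def apply_loyalty_discount(order_total, years_as_customer):
    """The business rule, defined exactly once (hypothetical rule:
    10% off for customers of five years or more)."""
    rate = 0.10 if years_as_customer >= 5 else 0.0
    return round(order_total * (1 - rate), 2)

class OrderAPI:
    """Facade: web, batch and IoT callers all price orders through here,
    so the rule never gets copied into another code base."""

    def price_order(self, order_total, years_as_customer):
        return apply_loyalty_discount(order_total, years_as_customer)
```

When the rule changes, it changes in one place, and an audit of "what is running your business" starts and ends at the facade.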
VII. Cache where you can
The biggest argument for duplicating data is performance, and it is a fact that many data stores of the past just aren’t fast enough to handle the IoT workload. Think caching rather than falling prey to duplicating. Caching is the best of both worlds: it lets you put information in a location that is highly available and scalable, while never representing the system of truth. By using middleware like Redis or etcd, you can take the read load off the enterprise system and then use well-established queuing systems to keep your cache in sync.
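The cache-aside pattern behind this advice looks roughly like the following sketch. A plain dict stands in for Redis so the example is self-contained; a real deployment would use a client such as redis-py, set TTLs and sync invalidations through a queue:

```python
cache = {}                 # stand-in for Redis: fast, but never the system of truth
db_reads = {"count": 0}    # instrumentation so the sketch can show the read savings

def load_from_db(key):
    """Pretend enterprise system of record: slow but authoritative."""
    db_reads["count"] += 1
    return f"value-for-{key}"

def get(key):
    """Cache-aside read: serve from cache, fall back to the system of record."""
    if key in cache:
        return cache[key]
    value = load_from_db(key)
    cache[key] = value      # in production, set a TTL and invalidate via a queue
    return value
```

The second read of the same key never touches the enterprise system, which is exactly the load relief the paragraph describes, without ever duplicating the tables themselves.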
VIII. Hold the refactor
Often, an IoT integration task turns into a system-wide refactor initiative as developers get lost in the rabbit hole of the legacy system. Strategically, if the system was important enough to refactor, that work should have happened before your IoT initiative; the burden of updating it shouldn’t be part of your IoT project. Instead of attempting to fix the vast number of other things that are wrong with that system, hold your nose and just get at the information you need. The technical debt of the old system should not impede the success of your IoT initiative.
The internet of things is clearly the most exciting opportunity for enterprises to optimize and grow their business since the inception of the internet itself. To create actionable insights and achieve the expected ROI from IoT, integration will be critical to connect user, machine and device information into the systems our enterprises use every day.
We like to hack stuff. So much so, that we organize events to galvanize the security research community to hack stuff right alongside us. Stretching back to 2013, when we published a piece of security research showing that all of the 13 most popular routers were vulnerable to remote attack, local attack or both, we’ve organized well over a dozen hacking events all over the United States. The purpose of these events is to shine spotlights on areas that may need security improvement, and organize a volunteer army of some of the brightest minds in the security industry to collaborate on addressing these many, often complex, security challenges.
Two such events we organize are IoT Village and SOHOpelessly Broken. IoT Village is a security research community featuring talks, workshops, hacking contests and press events. SOHOpelessly Broken is a hacking contest that started as the first-ever router hacking contest at the esteemed security conference DEFCON and has since expanded in scope to include all connected devices.
Among many, one of the great benefits of organizing hacking events is that we get a first-hand glimpse into themes across some of the most salient security issues of our time. One such issue pertains to the security considerations introduced by connected devices. During DEFCON 24, which happened in Las Vegas over August 4-7, 2016, we hosted both of these events, which produced some fairly eye-opening results, including a new wave of security findings: 47 new zero-day vulnerabilities across 23 different device types and 21 different manufacturers.
Abstracting from those success metrics, we observed several pronounced themes:
- Fundamental issues persist. During its inaugural run last year at DEFCON 23, IoT Village uncovered 66 new zero-day vulnerabilities across 27 device types and 18 different manufacturers. Many of those vulnerabilities were design-level violations of well-(mis)understood security principles, leading to issues with privilege escalation, remote code execution, backdoors, services running as root, lack of encryption, key exposure and many more. Fast forward to this year, and many of the same basic design flaws persisted, including use of plaintext passwords, buffer overflows, command injection flaws, hardcoded passwords, session management flaws and many more. These were all found on a new crop of devices beyond those investigated last year, suggesting that the scope of the issue not only continues to be systemic, but is expanding as IoT adoption accelerates.
- The scope of IoT is expanding. Last year, research focused heavily on the smart home. This continued to be an area of importance this year; however, we also saw similar issues across connected transportation and even the energy grid. In one harrowing example, a security researcher showed how an attacker could shut down the equivalent of a small- to mid-sized power generation facility by exploiting a flaw in solar panels manufactured by Tigro Energy.
- Interest in IoT security is increasing. IoT Village doubled its overall floor space, yet was still standing-room-only for all of the talks. The CTF track of the hacking contests grew from 11 competing teams to 51 competing teams. DEFCON awarded a coveted “black badge” to the contest winners, an exclusive designation given out only on extremely rare occasions and DEFCON’s version of the Hall of Fame.
- Manufacturers are starting to get more proactive. This year, two different manufacturers (FitBit and Canary) got involved with IoT Village, donating devices for researchers to investigate. Both FitBit and Canary hoped to engage the community in helping make their products more resilient against attack. Another manufacturer (Trane) created a new vulnerability disclosure process across its enterprise as a result of research into one of its thermostat products. Trane is trying to make it easier for researchers to report security flaws, so issues can be remediated more quickly.
- The government is starting to take notice. Top executives from both the Federal Communications Commission (FCC) and the Federal Trade Commission (FTC) delivered speeches on the IoT Village stage. Rear Admiral (ret.) David Simpson, the bureau chief of the FCC, spoke to a packed audience about how security research in general — and events like IoT Village in particular — are doing a good job of “making things harder” for malicious hackers. Terrell McSweeny, commissioner of the FTC, presented about the law enforcement actions that her organization is pursuing related to IoT.
It is our hope that by fostering a community of research, we can be a catalyst for change in ways that benefit consumers, business and entire industries. If you are a security researcher, a device manufacturer, a member of the law enforcement community or anyone else with even a passing interest in addressing these challenges, please contact us to get involved.
It doesn’t take long working in the world of IoT to feel the pulse of the clock. From idea inception to implementation, there is a sense of urgency to move fast. Each tick is a reminder that the competition is working diligently to provision a patent first, get to market first or be the first to scale.
Fast-paced development is nothing new to software development teams, big or small. Quick and nimble development — often called “agile” — is at the heart of the startup culture that has driven the IoT boom, and it has created an expectation that these products can be delivered fast — especially if you work with a small team. On the other hand, the typical process that ensures enterprises deliver a controlled, predictable and consistent product — often called “waterfall” — can get in the way of fast innovation. As a result, many enterprises work with small teams or create internal innovation teams to spearhead research and proof-of-concept efforts. Enterprises leverage smaller teams because they aren’t tightly coupled to the extended business processes that support everyday business, like regulations, quality assurance, billing and customer service. The smaller, focused team aims to fail fast, iterate, improve and repeat, quickly homing in on a minimum viable product. In the case of IoT especially, it is extremely helpful, I’d even say critical, to get out in the field and learn what you couldn’t experience in a lab before getting too far down the design road. IoT, unlike pure software, includes a hardware element that may not be nearly as flexible, because a manufactured physical product is part of the core solution. You may not have the luxury of making an iterative improvement on a sensor once it’s been manufactured and is sitting in a warehouse.
Certainly, this is the nature of innovation in general, but IoT introduces a software component and culture that are often new to traditional product manufacturers. In many cases, IoT is digitally transforming products that have had no technical dependencies other than parts procurement or resource planning. The transformative combination of hardware and software creates new value beyond the widget itself, but it also creates dependencies that disrupt long-standing workflows in manufacturing and product lifecycles. It’s often at odds with the enterprise processes supporting the product itself outside the innovation garage.
Blending the manufacturing processes of things with software
Even enterprise software development teams familiar with agile may have a hard time with IoT projects because an IoT solution doesn’t consist of a single app. IoT solutions are truly distributed applications spanning several layers of hardware, software and transport borders. These layers have different lifecycles but tight dependencies on each other. It can be difficult to make changes to one layer without having to consider the full stack. Here, for example, are some high-level layers you can expect in an IoT solution, all relying on firmware and software to some extent:
- Device and sensors
- Local area transport
- Wide area transport
- Data ingestion and back-end integration services
- Dashboard/reporting interfaces
Ideally, there are clear delineations between dev, QA and production, but early on it may not be practical or cost-effective to have completely segregated environments across all of these layers. Some low-power, low-cost devices may sacrifice over-the-air update functionality. This means they may be permanently tied to a specific build pointing to a specific environment, limiting the iterative improvements that are core to the agile process. This can have a significant impact on feature validation and testing timelines for early iterations. Transport layers such as cellular or LPWAN may have to be shared early on, precluding testing that may risk stability. Some of the back-end services or integrations may be too complex or time-consuming to set up in parallel for each party involved. While these aren’t best practices, they are the practical reality that enables IoT projects to get off the ground. That’s not to say the product is reliable or, more importantly, safe. The checks and balances expected in manufacturing automation, financial transactions and medical devices require due process control, but I’d argue getting off the ground is just as important early on.
Embrace agile failure in pilot and respect waterfall quality for commercial rollout
The transition from pilot to production is where attention to detail becomes the top priority. No matter how flexible or progressive an enterprise may be, some level of process and control will become necessary. Arguably as important as innovation, enterprises have an underlying core responsibility to brand reputation. Failure, a core tenet of the agile process, runs counter to brand reputation. That’s not to say enterprises can’t be innovative; it’s simply a salient reality that everyone involved in commercializing a product must appreciate. Enterprises passionately want to avoid failure once the product hits customers’ hands. This is why waterfall still plays an important role. It’s most apparent when an IoT product moves beyond version 1.0; this is when agile starts to take a back seat to waterfall and the integration into all of the supporting business systems begins. All of those integrations collectively make up the end customer experience and the profitability of the business. Therefore, the pace slows, scrutiny increases and the infamous red tape goes up. It’s at this point that agile teams feel the jarring effect, because they’re still under pressure to commercialize fast in an environment that expects hard delivery dates.
The fact is, it’s hard enough to manage resource planning for manufacturing. Adding dependencies on a software layer, or several software layers, can be absolutely jarring, especially for teams unfamiliar with software development styles like agile. It’s natural for software teams to update the product as soon as an improvement is available, but it would be unthinkable to pull a completed product sitting in a warehouse to change the physical shape of a widget. Enterprises leverage rigid, gated processes because that’s what ensures they get it right the first time and deliver a quality product, on time and at scale. Orchestrating and planning across the entire spectrum is just plain hard when things are changing on the fly … but changing on the fly is what makes software great.
Getting agile and waterfall cultures to work well together takes an extra level of commitment to communication and appreciation for each team’s responsibilities. Ironically, one of the most popular agile software development methodologies called “Kanban” came from the large-scale manufacturing world. It’s certainly a testament to the fact that common ground exists. The key is to acknowledge there will be a transition as the solution matures and work to compromise wherever possible, ultimately leveraging the strengths of both approaches. My recommendation is to stop thinking agile versus waterfall and embrace agile and waterfall.
Every day companies are hearing about the internet of things — maybe it’s inquiries from their customers, their board, their investors or the like. These groups are hearing the market projections and seeing examples of how IoT is changing business for the better, and they want to get in on it. The problem is that for the most part, IoT feels only achievable to those companies with unlimited resources to make it happen. Unlimited research and development budgets, unlimited resources, unlimited ability to make mistakes and try again. These requirements leave out 90% of the companies out there. And even the remaining 10% that are ambitious enough to embark on an IoT journey quickly encounter technology and development hurdles that stall or even cancel their IoT projects.
This is a big problem for our industry. IoT has been a futuristic concept for years. We’ve been dreaming about a connected world ever since George Jetson pulled up that first video chat with Mr. Spacely. Nearly half a century later, we are finally at a point where some form of the Jetson reality can become ours. We have the know-how to create a connected world, but adoption is growing slower than any of us would like. And there is a good reason for that: It doesn’t feel accessible for companies looking to start connected product projects, and it doesn’t yet feel like a necessity for people buying them.
Let’s dig in on that a bit deeper.
Because of the small number of companies that have resources to put toward IoT projects, there are only a few examples making their way onto store shelves. We see smart products — like Nest thermostats or the Tesla car — and think “that’s cool,” because connected products aren’t everywhere yet. Right now, there are many “gadgets” making their way to market, but they don’t all work together. Because companies are willing to connect anything, whether the product adds value or not, IoT is still viewed as a novelty rather than a way of life.
In order to deliver on the true promise of IoT, more companies need to be able to see their IoT visions come to life. There are still way too many companies sitting on the sidelines looking at IoT as an insurmountable challenge. It’s up to the vendor community to democratize IoT and make it more available and accessible to companies of all sizes. The more brainpower we have out there connecting products, working with standards and showing the real value of IoT, the quicker we’ll see real mainstream adoption from consumers and businesses alike. Then, larger concepts like connected cities can become real-life.
IoT is making distributed computing cool again. The distributed computing lexicon has historically been relegated to conversations within the walls of military organizations, tech enterprises and the halls of academia. ARPANET technology in the 1960s begot the internet. Salesforce helped make “software as a service” a household term in 2000. Researchers have talked about distributed computing for years. Today, those distributed computing concepts will be critical to the success of internet of things initiatives. Investments like Ford Motor Company’s $182.2 million into Pivotal, a cloud-based software and services company, signal distributed computing’s migration from the halls of academia to the boardroom.
Enterprises are starting to place their bets on how they will capitalize on the significant IoT opportunities that are starting to emerge. These investments will have ramifications for a company’s ability to function and deliver the experience its customers demand. The applications that result from these multimillion-dollar bets need to provide an always-on, reliable, accurate and cost-effective service. To do this, it will be essential that the C-suite understand the distributed computing lexicon.
If you’re not yet familiar with terms like “eventual consistency,” “vector clocks,” “immutable data,” “CRDTs” or “active anti-entropy,” you should ask yourself the following questions to ensure you’re approaching distributed data properly. These are all terms familiar to those involved in the science of distributed systems. This two-part series will examine the answers to these questions and help illuminate how organizations can develop cost-effective distributed architectures that ensure resiliency, usability and accuracy.
How can you architect to ensure your data is available?
The distributed world’s guiding principle is the CAP theorem, formulated by Eric Brewer, a professor of computer science at UC Berkeley: Consistency, high Availability and tolerance to network Partitions. The CAP theorem suggests that a distributed computer system can have, at most, two of those three properties. In a distributed system, availability refers to the idea of independent failure: when one or more nodes fail, the rest of the system continues to function so that the information the system processes is always available to the user. Though it predates the CAP theorem, ARPANET is an example of a distributed system architected for availability. It was constructed to link smaller networks of computers to one another to create a larger, latticework network that researchers and scientists could access even if they were not located near a mainframe or network hub. If one of the network computers went down, researchers would still be able to access the data crisscrossing the network. Availability has been thrust to the forefront in the internet age. Highly trafficked sites such as Facebook and Amazon have favored availability over consistency. After all, it’s not as if you’ll get annoyed with Amazon if the latest product review isn’t available within subseconds. You are likely to be annoyed if you can’t log onto the site, however.
In today’s customer-centric business world, IoT initiatives are bringing back the idea of high availability and architectures built to withstand failure. A city government may choose to implement an IoT-enabled traffic grid. Each traffic light (equipped with a number of sensors) must communicate with the other traffic lights around it, smart vehicles in the vicinity and a local computing node that processes or reroutes the sensor data depending on its use. The system will likely employ a number of nodes throughout the traffic grid to collect the data and make it available to the applications. If one node fails, however, the data it collects and processes must still be available to the rest of the system and possibly to other central applications. Boardrooms typically assume their data will always be available to the application that needs that data, even in a complex distributed architecture. If they wish to implement IoT-enabled systems, they must understand those systems have to be built with failure in mind.
How do you minimize latency and performance degradation to achieve usability?
Distributed systems fight physics. A system can only move so much data before it slows down and latency grows to an untenable point. E-commerce websites were some of the first to use distributed architectures to achieve usability. They keep product information for each item in their inventory in centralized data stores. They’ll also take the most-used portion of their product assortment — the top 25% best-selling items, for instance — and cache that information in the cloud at the edges of the network. Replicating and storing the most-accessed data in a distributed location helps keep website transactions from overwhelming the central database and helps ensure users get fast response times. Distributed e-commerce websites are designed with end users in mind: if the central database becomes overwhelmed and the site slows down, customers will leave before making their purchases.
Today’s IoT initiatives have adopted distributed computing concepts to ensure the data they generate and analyze remains usable, even when that data must traverse large geographic distances. Companies must likewise design their IoT initiatives with the end user in mind. Consider a weather company’s sensor network: the company must analyze some of the data each sensor generates and send it, in real time, to the weather application on users’ mobile devices. The sensors take readings frequently; some of that data goes back to the core for analysis, but the high-frequency readings must be processed near the sensor. These are the readings that look for conditions, such as sudden barometric pressure drops, that warrant weather alerts. To ensure usability, weather companies institute a distributed infrastructure with nodes that facilitate data analysis for a cluster of sensors. They also perform edge analytics to determine which data is worth sending back for further analysis.
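An edge-analytics check like the pressure-drop alert above can be very small. The sketch below is illustrative only; the class name, the five-reading window and the 3 hPa threshold are assumptions, not any weather company's actual parameters:

```python
from collections import deque

# Illustrative edge-analytics filter: watch a short window of barometric
# readings at the node nearest the sensor and flag a sudden drop, so only
# alerts and summaries (not every raw reading) travel back to the core.

class PressureDropDetector:
    def __init__(self, window_size=5, drop_threshold_hpa=3.0):
        self.window = deque(maxlen=window_size)   # last N readings only
        self.threshold = drop_threshold_hpa

    def ingest(self, reading_hpa):
        """Return True if this reading completes a sudden drop."""
        self.window.append(reading_hpa)
        if len(self.window) < self.window.maxlen:
            return False  # not enough history yet
        return (self.window[0] - self.window[-1]) >= self.threshold

detector = PressureDropDetector()
readings = [1013.2, 1013.0, 1012.4, 1011.0, 1009.8]  # falling fast
alerts = [detector.ingest(r) for r in readings]
print(alerts[-1])  # -> True: 1013.2 - 1009.8 = 3.4 hPa over the window
```

Because the detector keeps only a bounded window, it runs comfortably on a modest edge node while the bulk of raw readings never leave the cluster.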
Your data is available and usable. Now what?
Organizations must architect their systems under the assumption of failure to achieve availability. They must architect their systems under the assumption that data analysis in one centralized location could render data unusable for distributed end users. Even if organizations are able to architect for availability and usability, other issues remain.
With so many different applications pouring data into, and pulling data from, distributed infrastructures, accuracy will be an issue. How do you know that the data you use to generate predictive insights is giving you a useful picture of the future? How do you know all of your applications are running smoothly?
The next part of this series will discuss how to architect for accuracy. And, most importantly, it will examine how to develop a distributed data system that is cost-effective. Boardrooms are making multimillion-dollar investments in today’s infrastructure tools because IoT is making distributed computing cool again; those tools must assure a strong ROI if a modern infrastructure is ever going to receive approval.
The U.S. exports about $1.5 trillion worth of goods and imports about $2.2 trillion. That is a total of $3.7 trillion in goods moved by logistics. The total value moved globally exceeds $17 trillion. Logistics is as real as death and taxes.
Knowing where the money is spent in logistics helps identify the opportunities for making money. We learned many lessons replacing the global procurement pricing system for the #1 supply chain company in the world. Three key lessons stand out:
- Bad data is used to make decisions in the best of companies
- Logistics costs are not visible
- Cost savings have unintended consequences
Bad data is a major hidden cost. Bad data includes missing data, late data and wrong data. Users and systems compensate for bad data with “versions of truth.” The perpetual recurring cost of bad data is too embarrassing to admit.
How can IoT reduce bad data?
Automated data collection using IoT creates a clean new source of data at digital speeds, 24/7. The speed and accuracy of IoT data make it possible to automate process decisions. Keeping IoT data and processing separate from bad data is key to better decisions. Subscribing to information instead of data is smarter and eliminates non-value-add costs.
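One way to read "subscribing to information instead of data" is that the edge collapses a raw sample stream into the few events that actually matter. The sketch below is a hypothetical illustration; the state names and the 8.0 alarm threshold are invented for the example:

```python
# Minimal sketch of "information instead of data": the edge turns a raw
# reading stream into state-change events, so downstream subscribers
# consume a handful of facts rather than every sample.

def to_information(raw_readings, alarm_threshold=8.0):
    """Yield an event only when the derived state actually changes."""
    events, state = [], None
    for ts, value in raw_readings:
        new_state = "ALARM" if value >= alarm_threshold else "NORMAL"
        if new_state != state:   # information = a change, not a sample
            events.append({"ts": ts, "state": new_state, "value": value})
            state = new_state
    return events

raw = [(0, 2.1), (1, 2.3), (2, 9.4), (3, 9.9), (4, 3.0)]
print(to_information(raw))
# three events instead of five raw samples: NORMAL -> ALARM -> NORMAL
```

The clean IoT stream also stays isolated from legacy "versions of truth": decisions downstream are driven by these derived events rather than by reconciled spreadsheets.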
Invisible logistics costs
Procurement focuses on material costs and not logistics costs. Logistics in most companies is outsourced. It is complex and involves many third parties. Unraveling the costs and accessorial charges is another business function that is outsourced to billing audit specialists.
In a global logistics scenario there could be many custody transfers of goods in transit. For example, transporting a container involves a truck and a chassis to carry the container. Two different service providers may be involved in each transfer. The Port of Long Beach handles about 7.2 million twenty-foot equivalent units (TEUs) per year. The Port of Shanghai handles about 35 million TEUs. Together that is roughly 42 million TEUs; at about 1.5 TEUs per container (a mix of 20- and 40-foot boxes), that works out to some 28 million chassis moves at just two ports. How efficient do you think chassis management is?
How can IoT make logistics costs visible?
IoT in logistics captures data on the physical movement of goods. This is a structural change in the level of data granularity for supply chain visibility. Every service touch point that incurs cost can now be digitally visible. This is also a basic building block for blockchain technology.
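To make the "building block for blockchain" point concrete, here is a toy custody log in which each transfer record is chained to the previous one by its hash. This is a sketch of the tamper-evidence idea only, with invented field names, not a real blockchain implementation:

```python
import hashlib
import json

# Illustrative sketch: record each custody transfer (a cost-incurring
# touch point) as an entry linked to the previous entry by its hash,
# so any later edit to the history is detectable.

def add_transfer(chain, carrier, location):
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    entry = {"carrier": carrier, "location": location, "prev": prev_hash}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(entry)
    return chain

chain = []
add_transfer(chain, "trucker-A", "factory gate")
add_transfer(chain, "port-ops", "Port of Long Beach")
add_transfer(chain, "ocean-carrier", "vessel")

# Each entry points at the hash of the one before it, so altering an
# earlier record breaks the linkage of everything that follows.
assert chain[1]["prev"] == chain[0]["hash"]
```

With IoT sensors supplying the entries automatically, every custody transfer becomes both visible and hard to dispute after the fact.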
The harsh reality of cost savings is that someone loses revenue. This creates friction and resistance to change, with unintended consequences. Cost savings do not trickle down to people at the bottom of the supply chain. Compare your spend on resources and systems (ERP, S&OP, WMS, TMS, etc.) to fulfill customer orders with the income of a driver who makes the deliveries. What benefit does the driver realize from the investments? Even Uber faces the same challenges and is responding with driverless vehicles to address driver shortages. In emerging markets, national interest in economic growth favors increasing the employment of drivers for transportation.
How can IoT reduce unintended consequences?
IoT in logistics can deliver value on first use: no waiting on promises of ROI. IoT deployments in logistics sit at the ends of physical supply chains, and their benefits can be accelerated if the people working at those ends realize direct benefits first. For example, consider the hours a driver spends waiting to pick up or drop off a delivery. The reimbursement process for waiting hours is a constant source of friction and distrust. IoT reduces that friction by providing visibility and exposing process inefficiencies. Drivers can make more trips and earn more, while equipment utilization and operational efficiency rise. A win-win for all.
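The waiting-hours dispute can be settled directly from IoT events. The sketch below is hypothetical: the event names, timestamp format and `waiting_hours` helper are assumptions for illustration, standing in for whatever a geofence or telematics feed actually emits:

```python
from datetime import datetime

# Hypothetical sketch: derive billable waiting hours directly from IoT
# gate events (arrival vs. loading start), replacing disputed paper logs.

def waiting_hours(events):
    """events: list of (timestamp_iso, event_name); returns hours waited."""
    times = {name: datetime.fromisoformat(ts) for ts, name in events}
    wait = times["loading_started"] - times["arrived_at_gate"]
    return wait.total_seconds() / 3600.0

events = [
    ("2017-05-02T06:15:00", "arrived_at_gate"),
    ("2017-05-02T09:45:00", "loading_started"),
]
print(waiting_hours(events))  # -> 3.5 hours, no argument over the clock
```

Because both timestamps come from the same sensor feed, neither the driver nor the shipper has to trust the other's clock, which is exactly the friction the paragraph describes.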
Where’s the money?
Look for big repeatable opportunities in global exports and imports. Think simplification to eliminate cost and create affordable IoT-based solutions across industries. Examples of rapid growth or change are e-commerce, food supply chains and 3rd party information logistics (3PIL).
Global e-commerce double-digit growth rates cannot be ignored. Amazon, Alibaba and Flipkart are examples of e-commerce on different continents. The success of Amazon and the rapid growth of Alibaba are undeniable. They recognize that logistics controls the path to profitability.
India presents one of the largest new opportunities from initiatives promoted by the Indian Government such as “Digital India,” “Make in India” and the most comprehensive tax reform by passing the Goods and Services Tax Bill. Supply chain logistics can play a key role in ensuring the success of these initiatives to achieve GDP growth targets of 7.2%. Think logistics and not just tax accounting.
The food supply chain
A McKinsey & Company article on food wastage states that in emerging economies, 32% of total loss occurs during production. In developed countries, 38% of loss occurs during consumption. Agri inputs can represent 60% of the cost of goods sold. If doing good is in your thoughts, this is worth solving. The African continent presents the largest immediate opportunity in the world to manage the food supply chain lifecycle with affordable IoT. Emphasis is on affordable IoT. Agriculture-based economic transformation of the African continent has increased commercial farming. The impact of climate change raises the level of urgency to address this issue for all food companies sourcing from Africa.
3rd party information logistics (3PIL)
Supply chain problems are common global problems, so IoT solutions in logistics have global market applicability. The largest combined global opportunity is in transforming 3rd party logistics (3PL) services into 3rd party information logistics companies. 3PLs are constantly challenged to deliver faster, cheaper and better service with visibility. 3PLs that transform themselves from pre-IoT technologies to IoT-driven digital networks will survive the competition from new tech-savvy 3PILs.
Logistics is one of the few domains where Millennials and experienced supply chain professionals can combine the value of unconstrained thinking with pragmatism to deliver value on first use.
In Part 2, I will present successful examples of IoT delivering value on first use.
If you’re like me, you’ve been a believer in the potential of the internet of things since the beginning. It’s hard to deny the positive societal impacts a connected world can drive, but does that make it a vital investment for companies? As with most emerging technologies, there are always questions about whether you can make a true business case for it. While some reports forecast that nearly $6 trillion will be spent on IoT solutions over the next five years, for that to occur IoT needs buy-in at the enterprise level. Spending on IoT projects will only continue to climb if it helps companies realize true ROI and ultimately drives positive business outcomes.
Since IoT is seeing a strong adoption rate around the world, it’s clear to me that there are significant benefits and ROI opportunities proving that IoT technology can have true business value. We’re seeing companies move from simply adopting internet of things technologies to actually putting them in a position to drive their business. IoT is producing measurable results, and the top performers who see the greatest results treat their IoT initiatives as business projects, not IT purchases.
Let’s take a look at some of the key proof points from Vodafone’s 2016 IoT Barometer that outlines how IoT is becoming a vital investment for companies:
- Recognizing IoT ROI: Yes — a lot of acronyms, but IoT cannot move forward if there aren’t true return on investment (ROI) opportunities for organizations. Luckily, 63% of IoT adopters are seeing “significant” ROI in IoT projects, up from 59% in 2015. Whether it’s connected supply chains for manufacturers or smart office capabilities for employees, businesses are seeing significant results from their IoT deployments, changing the way they do business. In fact, adopters are seeing a 20% improvement in key business indicators like revenue growth and cost reduction as a result of investing in their IoT programs.
- Driving future success: Successful businesses always have an eye toward the future. For businesses in the Americas — e.g., U.S., Canada and Brazil — IoT was reported to be a top business focus, as 74% of companies view IoT as critical for the future success of their organization. Additionally, 48% of companies globally are using IoT to support large scale business transformations such as helping to change a manufacturing business into a service company.
- Uncovering new opportunities: IoT is driving ROI for companies by facilitating new partnerships to serve customers in new ways. In fact, 61% of businesses say they “consistently” see IoT as an integral part of wider business initiatives. To meet the rapidly changing demands of today’s customers, companies are continually forced to redefine their business strategies in order to stay relevant and continue to see profitable growth. IoT data is informing business strategy in new ways, as 64% of businesses consistently use big data and analytics platforms to support decision-making.
These three key points illustrate how companies that are investing in IoT are seeing real results on their bottom line. For businesses, the internet of things is no longer just an IT project to help internal functionality. Today, investing in IoT is a business development strategy that drives growth, profitability and depth into what companies can offer their clients. The proof is in the ROI — the internet of things is a vital investment for the modern company.
Technology companies often focus on innovators who started with a clean slate. Unfortunately, a clean slate is far from reality for most businesses: most organizations are pursuing critical initiatives to become digitally native and agile enough to meet customer expectations. My employer helps companies with both legacy and modern applications, which is the reality in most enterprises today. New modernization initiatives still rely on legacy systems that contain critical data and business logic. Although some organizations are proactive in attempting to understand their highly complex systems, most are struggling, and the technology to help is not as widely deployed as it needs to be. Some application performance monitoring (APM) solutions provide technology to build maps and paths of transactions via tracing, but even fewer work at scale in production. This technology not only provides visibility into interactions at system boundaries, but also measurement that allows for the correction of performance and scalability bottlenecks. Adding performance testing (load testing) products and services, coupled with APM, helps immensely.
Many organizations decide to build these new systems and apps using elastic cloud capabilities or services, thinking this will protect them from flash traffic or the need to scale. The oversight is that these elastically scalable applications rely on legacy technology housed in traditional data centers without elastic scalability. As these new apps launch, they may cause cascading failures. Digital transformation, unfortunately, typically requires interfacing legacy systems with new systems of engagement.
An example: many enterprises are adding capabilities to meet IoT requirements. Just last week I had one such discussion; the enterprise was adding location awareness to a mobile app. It wanted to present context-relevant app functionality and offers depending on the user’s profile, preferences and history. To meet this new requirement, a lot of additional data was being collected and fed into legacy systems to allow these new capabilities to function. In this case, the legacy system was brittle and couldn’t handle the additional transactions and data. The enterprise realized this too late to make the required changes, and the net result was a rather embarrassing launch that had to be rolled back. The overloaded system affected day-to-day business operations, not just the mobile app users.
The root cause was that proper end-to-end performance testing and scalability analysis were overlooked, and the business suffered. The organization reactively implemented visibility and did additional testing, but this could have been avoided. Poor planning and the need to quickly address system failures are a challenge for most. Typically, our organization gets the frantic phone calls to license software to analyze the scalability bottlenecks in highly complex production systems. The question for me is: when will organizations stop being reactive? Will it require a mindset change, a software change or infrastructure changes?
Feel free to comment here or via Twitter @jkowall, and thank you for reading!
Sorry for the clickbait opening! What I mean is that IoT is just a tiny fraction, and possibly the most obvious evolution, of the tools we need to remake humanity’s interaction with our world: how we consume, preserve and sustain our societies.
Computation and communications are cheap. Our things measure the world around us and tell us about their “lives.” They act as an extension of ourselves, feeling, tasting, sensing pressure, vibration, color, heat and a lot more. Modern internet and other communications technology allow us to transmit this data across the street or across the galaxy! Data science and applied mathematics techniques give us a way of learning about the relationships in this data. Big data technologies let us store and perform computations on this raw data, turning it from data to information and finally to knowledge. The internet of things is allowing us to discover the Universe through these things we’ve built.
Indulge me a bit.
Thousands of years ago, we solved everything empirically, but we had precious little data and only rudimentary interpretation techniques. The fire burned our hand, we stopped putting our hand in the fire … a Homo erectus genius is born.
A few centuries ago, we’d developed complex symbolic manipulation (algebra, the Calculus, etc.) that allowed us to develop analytical solutions; formulas that described often relatively simple phenomena … we plug in the inputs and we get the result. Sometimes innovation itself was the process of deriving formulas from trivial observations (e.g., coming up with Pythagoras’ Theorem relating the lengths of sides in a triangle). And before anyone defends the poor and maligned Pythagoras, by “trivial” I meant “simple” observations, not “unimportant” ones.
But we were still dealing with small data and phenomena that were relatively simple. Not so in today’s world. Between McKinsey, Gartner and others, you can get a sense of just how real the big data deluge is. IoT has created oceans of data (forgive the relapse into the water metaphor; I’ll spare you the data lake, even), and there’s been a positive feedback cycle whereby our species has developed incredible numerical techniques (as opposed to the symbolic ones of the previous paragraph) for describing the world around us, further justifying capturing more data, accelerating the advancement of the applied mathematics and … well, you get the point.
When I say numerical techniques, I mean that we aren’t using formulas to derive new formulas or algebra to derive new laws. Now we use the relationships hidden in IoT data to work backwards, in a sense. We start with the big data, and we may not even use that to develop a theoretical understanding of the world … we may simply assume a “black box”: the world turns, the sensors of our IoT capture the details, and we derive what’s going to happen next without understanding why (e.g., artificial neural networks in the field of AI). Lacking the fundamental why isn’t acceptable for a PhD dissertation — it’s no way to do pure research, but in the commercial world we can get by without knowing the why. In fact, freeing ourselves from the why question can permit all manner of business innovation even if it won’t win anyone a Nobel Prize. Think about this as being able to say when a particular component in a factory is going to fail, but not having a complete understanding of how this comes about.
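The black-box idea can be shown with a deliberately tiny example. Everything here is fabricated for illustration (the sensor features, the labels and the `predict_failure` helper); the point is only that past observations alone, with no physical model of the machine, can drive a prediction:

```python
import math

# Toy illustration of "prediction without the why": a 1-nearest-neighbor
# lookup flags likely component failure from vibration and temperature
# readings, using only past observations. No physics, no formula.

history = [  # (rms_vibration, temperature_C) -> failed within a week?
    ((0.2, 40.0), False), ((0.3, 42.0), False),
    ((1.1, 70.0), True),  ((1.3, 75.0), True),
]

def predict_failure(sample):
    """Return the label of the closest past observation (1-NN)."""
    _, label = min(history, key=lambda h: math.dist(h[0], sample))
    return label

print(predict_failure((1.2, 72.0)))  # -> True: resembles past failures
```

A production system would use far richer models on far more data, but the structure is the same: the answer comes from resemblance to history, not from an explanation of the mechanism.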
For those of you interested in the philosophy, you might even say that the symbolic techniques of academia are simply inadequate to the tasks of understanding the complex world, regardless of how far we advance (see Gödel’s Incompleteness Theorems).
So why is this important? Why is it exciting? Why does it matter? For the first time in the history of our species, we have a cost-effective way of learning about the “private lives” of our things, picking apart not how they’re supposed to work, but how they actually work. Not how they’re intended to be used, but how they’re actually used. Our things can learn and improve. They can work more effectively and efficiently, last longer and operate at a lower cost. Society gets more for less, and we all live better for it. Seen this way, IoT is transformative … now that’s exciting!