Here’s a puzzle to ponder: In today’s age of accelerating technology investment, when trends such as BYOD and IoT actively illustrate the compounding effects of Moore’s Law and Metcalfe’s Law on business, why does productivity growth across the major world economies seem stuck in low gear? If these investments in increasingly capable devices, things and solutions are powering the digital transformation of business itself, why aren’t the dividends of this transformation reflected in real productivity gains?
The quantitative metrics paint a concerning picture: In the United States, productivity grew just 0.2% in 2016 according to the U.S. Bureau of Labor Statistics, and averaged an anemic 0.5% annual growth over the past six years. IT spending, however, grew around 3 to 4% per year over the last five years and is expected to continue at this pace over the next five. This raises the question: Why is there a gap between technology investment and the resulting productivity gains? The general expectation is that investments made to digitally transform a business and enhance its processes and workflows should yield increased productivity. Yet this gap between technology spend and productivity gains has led to calculations that the United States alone has missed out on $2.7 trillion in unrealized gross domestic product growth. There are plenty of theories as to why, including that the metrics used to measure productivity are themselves suspect.
The role of complexity in digital transformation
I tend to side with the following theory: The rapid growth in technology investments, combined with the fact that many of these investments are in rapidly evolving transformative technologies (such as SaaS, cloud, IoT and analytics), is producing constant change and complexity for users. It is this complexity that is getting in the way of productivity, not the technology itself. This is a hard point to measure with quantitative statistics, but I witnessed many examples during my recent qualitative research and discussions with enterprises in the healthcare, higher education and technology verticals. These discussions focused on uncovering the challenges in current business workflows that prevent users from doing what they want and need to do, and on how those challenges serve as a barrier to productivity for the company as a whole. I approached these discussions with an interest in how the internet of things can potentially add value to the way users interact with technology and enhance user workflows. My intent was not to probe with leading IoT-related questions but to sit back, listen and learn about how users adapt to broad changes within the workplace.
One example is in higher education, where community colleges and universities are investing in online education as a way to reach more students or interact more effectively with on-site students. These investments often frustrate the educators themselves, who frequently view online education as a barrier to teaching. Probing further, it became clear that instead of focusing on teaching material, teachers felt burdened with learning the intricacies of all the technology components required for online education. These include having to operate, support and troubleshoot lecture capture solutions, audio and video hardware, and online streaming solutions; monitor social engagement; and manage the movement of digital content into and out of learning management systems. Many lack confidence in their technical ability to manage this orchestration. This uncertainty illustrates a rough edge of digital transformation. A community college administrator expressed his frustration to me when he said, “A big way to focus on student success factors is to let technology get out of the way,” and right now teachers often see the technology as a barrier to education, not an enabler.
Speaking to healthcare customers, many of the same sentiments rang true. The digitization of the patient record may be a regulatory requirement, but it is also seen as a way to improve care, since a consistent record can follow a patient across multiple providers. In talking with clinicians, however, frustrations surfaced in their belief that they interface too much with a computer and not enough with their patients. For many, this outweighs any potential benefits of digitizing the patient record. Clinicians often feel they spend more time orchestrating the flow of patient data than actually listening to the patient in front of them. A radiologist noted that “introducing technology that improves doctor productivity is a worthy goal if it means we can spend more time with our patients.” In his opinion, that is not happening, and less time is spent with patients. A healthcare partner shared that the advent of electronic medical records (EMRs) has slowed doctor/patient interactions to the point that a typical clinic doctor sees five to nine fewer patients per day than she did before the adoption of EMRs. They said the goal of digital transformation in healthcare should be “to make patient visits more personable with less time staring at a screen,” but the technology tools currently in place are not enhancing or speeding up the doctor/patient interaction.
Both cases show how digital transformation may have laudable goals, yet the complexities surrounding it have led to frustration among everyday users.
Digital transformation: The death of traditional apps
These examples illustrate that the very nature of the apps we use in our daily lives is changing. In the traditional model, an app sits in our data center (or the cloud) and presents information to us via a screen. But that is changing; apps are increasingly defined by the interaction of users, location, data and devices, with each app focused on a specific experience tied to a workflow or a workspace. Challenges arise where these interactions break down and manual intervention is required from users who are not accustomed to that role. Users want these interactions to be seamless and transparent, and they become easily frustrated when they must actively manage, view and participate in the interaction flow. This is another rough edge of digital transformation, and I believe IoT can play a role in smoothing it out.
The value of context in the enterprise
The value of IoT in the enterprise goes well beyond “things;” the value is in managing and minimizing complexity and enhancing business processes. That doesn’t diminish the value of new and interesting things, as IoT is being driven by the rapid growth in devices and things, and this rapid growth does play a role in the complexity mentioned above. But this growth is also yielding a massive increase in connections between users, devices and things, and there is tremendous value in the data generated by these connections. This data reveals context about users: how they interact with apps, devices and things within a workspace. This context provides tremendous potential to streamline how interactive workflows and apps exchange information transparently within a workspace, and without requiring user interaction. For example, imagine a healthcare scenario where a doctor enters a patient space that automatically detects the presence of the doctor, the patient and any smart devices in the room. This presence information is used to automatically spin up the appropriate resources, manage the flow of data and present the correct information to the doctor and patient — all with minimal user orchestration required. I know some may immediately jump to the potential security concerns with the movement of potentially sensitive data, but this context can enhance security and serve as an even stronger way to authenticate and authorize users, apps and data.
As noted above, the gap between technology investment and productivity is large. This gap exists during an unrelenting period of technology innovation that surprisingly is not resulting in any significant gain in productivity. Instead, this period of digital transformation has yielded examples of digital disruption serving as a drag on productivity. However, there is potential to view this digital disruption as an opportunity to use the data generated by the interaction of users, devices, things and apps within a workspace to benefit the workflows and users within that space. This contextual information can be used to make technology interactions more transparent and focused on simplifying how users interact with technology. And the internet of things plays an integral role in helping organizations orchestrate interactions and workflows within their business, so their users can spend more time working and being productive and less time managing complexity and technology.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
Security online continues to dominate the headlines. Only a few weeks ago, ransomware wreaked havoc on over 230,000 computers and spread its tentacles to over 150 countries. Prior to that, it was the Mirai botnet that spread fear. The list of malware goes on and on, and businesses are trying their best to stay one step ahead of cybercriminals. No wonder a recent study found that 80% of IT teams have had to increase the amount of time they spend dealing with security issues, and three out of four teams spend up to 10 hours a week on security threats. With around 80 new IoT things coming online every second, managing security will only become more complex, especially given the relatively “dumb” nature of some of the things that need protection from threats. One industry is on the frontline of the cybersecurity war: the mobile network operators (MNOs) whose cellular infrastructure IoT depends on.
Cellular networks employ one of the strongest forms of security: SIM cards. A SIM card acts as a hardware token that authenticates the device to the mobile network, and the algorithms and processes used across the lifecycle of these tokens make it extremely hard, if not impossible, to compromise that authentication. However, this layer of security only connects the device to the network. Applications that sit on these devices must authenticate separately to the application server in order to identify themselves. Authentication at the application level typically uses a username/password or client-ID/secret pair (credentials), or a certificate.
The weakest link: Securing the layers
You would be forgiven for thinking that two layers of authentication deliver strong security. Not so. There is hardly any coordination between the network and the application during authentication, and there has been no mechanism to link the two. Application-level security is the weak point. The existing username/password mechanisms were created to authenticate people to application servers, an acceptable approach at the time because people were expected to remember their identifiers. When the same mechanisms are extended to IoT devices authenticating themselves to application servers, however, they create a very weak link in end-to-end security, because these identifiers must be stored on the IoT devices, where they are susceptible to theft and difficult to update.
By extending network authentication to application authentication, the industry can create a highly secure end-to-end environment for IoT devices to connect and communicate with application servers. MNOs are in a unique position to extend network identity, or network-based authentication, to IoT application servers, creating two-factor authentication for IoT devices. Extending network identity to IoT applications also gives cellular IoT an edge over non-cellular IoT technologies.
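To make the two-factor idea concrete, here is a minimal Python sketch of how an application server might combine an app-layer credential check with a network-asserted identity. The registry structure, client IDs and network IDs are all hypothetical; a real MNO integration would use operator-provided APIs rather than a local dictionary.

```python
import hashlib
import hmac

# Hypothetical registry mapping a device's client ID to its app-layer
# secret and the network identity (e.g., a SIM-derived ID) the MNO asserts.
DEVICE_REGISTRY = {
    "sensor-001": {"secret": b"app-layer-secret", "network_id": "89441000300001234567"},
}

def verify_device(client_id: str, payload: bytes, signature: str,
                  asserted_network_id: str) -> bool:
    """Two-factor check: app-layer HMAC plus the network-asserted identity."""
    record = DEVICE_REGISTRY.get(client_id)
    if record is None:
        return False
    # Factor 1: the application credential (client ID / secret pair).
    expected = hmac.new(record["secret"], payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False
    # Factor 2: the identity the cellular network authenticated via the SIM.
    return record["network_id"] == asserted_network_id

# Usage: a device signs its payload with its secret; the server checks both factors.
payload = b'{"temp": 21.5}'
sig = hmac.new(b"app-layer-secret", payload, hashlib.sha256).hexdigest()
print(verify_device("sensor-001", payload, sig, "89441000300001234567"))  # True
print(verify_device("sensor-001", payload, sig, "unknown-sim"))           # False
```

The point of the sketch is the second return: even a stolen app-layer credential is useless unless the request also arrives over the connection the network itself authenticated.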
Playing it forward
IoT is a burgeoning industry, yet security fears worry consumers. A recent study found that one in two people distrust online security — and if consumers lack confidence, they won’t participate in the IoT ecosystem. To instill consumer confidence and secure IoT, the industry needs a new robust layer of end-to-end security for connected devices.
Some operators have taken the IoT bull by the horns, while others have had to sit on the sidelines. Ultimately, the decision to play a proactive role within the IoT ecosystem will come down to the strategy each operator wants to pursue in its market. MNOs can play a vital role in securing IoT devices and applications. It is a win-win-win for consumers, operators and the key players within the IoT ecosystem who require secure, reliable connectivity.
At the touch of a button, you can request a ride, get groceries or dinner delivered to your door, or have your Amazon order delivered in two hours or less. It’s safe to say a new era is here — we are living in an increasingly on-demand world.
With three out of every 10 American workers employed in the gig economy and new on-demand services appearing daily, there’s no denying the world is trending toward increased convenience, autonomy and flexibility. More than 22.4 million consumers use on-demand services, netting $57.6 billion in spending annually. Online shopping currently leads the on-demand revolution with $36 billion spent, with transportation coming in a not-so-close second at $5.6 billion, followed by food and grocery delivery at $4.6 billion. And the result of this impressive growth in spending? Researchers believe that by 2020, almost one in five U.S. workers — the equivalent of 31 million people — will rely on the gig economy for employment. Is the ecosystem ready to support life on-demand?
The future is here. Now what?
The gig economy, and every company born out of it, has propelled consumers to become accustomed to a vast array of services like ride-sharing and food service on-demand — but it doesn’t end there. In addition to what is already available, we can expect an explosion of additional services being offered on-demand.
Imagine a world where healthcare is available on-demand. Doctors and nurses will come to your doorstep (at home or your office), a veterinarian will come to your home to care for your beloved pet instead of having to take your dog to a scary office. In terms of beauty services, stylists will come to you for hair appointments and manicures instead of requiring you to go to the salon. By the way, on-demand doesn’t mean the services have to come to you. It could also mean that you could go to a service location but not have to wait in line because the schedule will be created automatically based on your movements. A nearly unimaginable on-demand world of convenience will be available at the touch of an app — until we hit an inevitable tipping point.
Hitting the on-demand tipping point
Entrepreneurs looking to enter the market and established companies will continue to take advantage of consumers’ infatuation with real-time on-demand services. However, the economy will eventually reach a point where it can no longer support the growing inefficiencies caused by an outdated and weak back-end infrastructure. The inefficiencies are created by companies and industries implementing services with the fastest and most convenient outcome to capture market share, as opposed to the most efficient and effective method to operate the service.
The cost of on-demand: Nearly $58 billion and inefficiency
As growth accelerates, we need to look at the cost. Society, including the consumers using these on-demand services and the companies providing them, needs to ensure this trend does not create a little-discussed but very impactful side effect: massive waste and inefficiency. The average consumer does not always think about what happens behind the scenes to deliver products and services on-demand. To provide a ride within minutes, a company that relies solely on contract drivers often has more vehicles idling around an area than necessary, causing congestion, pollution and overall inefficiency within the ecosystem.
If we are going to continue ordering rides on-demand, we need to cut the number of cars on the road and ensure they are working intelligently together. If we are buying in real time with a delivery to our front door, let’s make sure companies plan efficiently to decrease the number of delivery trucks on the road. The advantages and conveniences of the on-demand economy require companies to ensure back-end operations are just as efficient and convenient as the services they provide. Data does not show that happening.
When convenience outpaces infrastructure
In order to fully realize life on-demand, gig companies must optimize the systems currently in place instead of simply adding even more unnecessary capacity. Although it will be difficult at first, companies will need to shift their focus and put as much emphasis on back-end and operations execution as they currently do on the front-end, consumer-facing aspects of the business.
When planning or contemplating a new on-demand service, companies and governments must take into consideration the resources needed to fulfill that demand, and ensure capacity is not expanded solely to increase convenience at the cost of efficiency. The answer: restructure current “gig-based” models (i.e., contractors who choose their own jobs) into resource-directed models (i.e., controlled, actively scheduled resources that work from a schedule provided by the company). Contractor-based companies such as Uber, Lyft and Postmates are prime examples of achieving growth through overcapacity rather than operational efficiency.
So what’s the difference between these models? Resource-directed models ensure on-demand services operate efficiently through dynamic, optimized scheduling. When you tap your ride app, the single most suitable driver is notified to pick you up, instead of a crowd of drivers receiving your request (and knowing only your pickup location, not your destination). The system takes into account your preferences, the drivers’ preferences and the company’s goals, creating an efficient schedule for everyone in real time and with less waste.
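To illustrate the matching step, here is a minimal Python sketch of resource-directed assignment. The scoring is deliberately simplified and entirely illustrative: positions are one-dimensional, preferences reduce to a single flag, and the names are invented.

```python
from dataclasses import dataclass

@dataclass
class Driver:
    name: str
    position: float          # 1-D position along a route, for simplicity
    prefers_long_trips: bool

def score(driver: Driver, pickup: float, dropoff: float) -> float:
    """Lower is better: distance to the rider, adjusted for preferences."""
    distance_to_rider = abs(driver.position - pickup)
    trip_length = abs(dropoff - pickup)
    # A driver who prefers long trips gets a small bonus on long trips.
    preference_bonus = -1.0 if (driver.prefers_long_trips and trip_length > 5) else 0.0
    return distance_to_rider + preference_bonus

def assign(drivers, pickup, dropoff):
    """Notify only the single best-scoring driver, not everyone nearby."""
    return min(drivers, key=lambda d: score(d, pickup, dropoff))

drivers = [
    Driver("ana", position=2.0, prefers_long_trips=False),
    Driver("raj", position=3.0, prefers_long_trips=True),
]
best = assign(drivers, pickup=4.0, dropoff=12.0)
print(best.name)  # raj: slightly farther away, but the long trip matches his preference
```

A production scheduler would solve this as a global optimization over many riders and drivers at once, but the core idea is the same: one scored match per request instead of broadcasting to every idle driver.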
If you take that to the next step, you could actually optimize between networks, creating a coordinated efficient fabric of resources. While there may be more than a few companies that offer deliveries to your home or workplace, these services will be coordinated through a few efficient scheduling networks that make sure resources are used efficiently for the good of society and the consumer — not just the company operating them. This is not to make for a feel-good, “kumbaya” world, but one where we simply allow sophisticated technology to help us be intelligent in our decisions to the betterment of us all.
The future of on-demand is in your hands
In order to reach this on-demand utopia, companies, governments and cities will need to recognize and address the looming inherent risks in IoT, smart cities and on-demand services. However, it will be the will of the consumers that has the final say. The on-demand economy was created by consumers’ desire for convenience and control, and until they demand improvements, things will continue as is.
Take airlines for example. Consumers continue to complain about cancelled flights, poor service and other issues, but planes are still full. United will not be persuaded to provide better customer service unless people actually start boycotting its flights. Uber won’t improve its internal issues until consumers stop using it and the company feels it financially. We won’t see improvements to the on-demand model until consumers demand increased efficiency and accountability from their beloved Lyfts and Amazons. Actions speak louder than words, and $57.6 billion spent by consumers for on-demand services is saying a lot.
When it comes to IoT, there is a lot of complexity and fluidity in the systems. Suddenly, computers can be and do almost anything, including advanced learning. And, as with choosing between protocols like CoAP and MQTT, this can lead to confusion about the best method to solve a problem.
It’s not as simple as Browser -> Server -> Database (if it ever really was); nor is it Various Clients -> APIs -> Server(s) -> Data Store(s), as it has been for the last few years.
The explosion in IoT devices that need to be built (fitness trackers, thermostats, smart cars, smart lights, smart homes, etc.), combined with the proliferation of tools that can be used to build those things leads to skyrocketing possibilities of solutions to cutting-edge problems. Not only are developers having to deal with the challenges posed by connectivity (security, availability, etc.), but they have to build a cloud architecture that can handle the high-speed, on-demand nature of IoT.
This should lead to immense optimism around the future — both for jobs and for an economy (like the U.S.) driven by creativity.
But, man, it really takes some creativity to devise these solutions. Your bag of tricks needs updating to solve IoT problems on the back end. Once you’ve figured out the protocol, you need to decide:
- How many different kinds of data storage do I need?
- What will my device do that differentiates it from other devices?
- Do I need natural language processing? How will I do that?
- Do I need a domain-specific language?
- How will I process all the data and where will I store it?
The great news is that there are tools to solve all these problems today. The challenging part is finding the right way to bring all the right tools to bear on your problem.
Sometimes you may be using different programming languages for different parts of the system. Node.js here, Java there, Python for this, C# for that. Even if you have a unified architecture, the pieces aren’t all one flavor anymore.
So, being an architect isn’t about mastering the depths of a single language; it’s about knowing how the parts of the system fit together and being able to connect them creatively. And it’s about mastering several things deeply enough to put together a finished product with your team.
For IoT especially, no two projects are created the same. The form factors, data, sensors, actuators, networks, protocols and people are all different. More and more, this implies a need to creatively connect disparate pieces to fit a particular problem.
For instance, we have a client who is doing unique work with specialized actuators in its hardware device. Part of the solution is to provide a power user or administrator a visual tool to send information to the actuators on how they should behave.
In order to do this we’re using ANTLR to define a grammar and parse expressions for a custom language, Google Cloud (including Endpoints, Firebase, BigQuery and Cloud Datastore), Angular and Firebase Crash Reporting for mobile.
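To give a flavor of the grammar work involved, here is a toy Python stand-in for what an ANTLR-generated parser does: compiling a tiny actuator-control language of comparisons joined by "and" into a predicate. The syntax, metric names and values below are invented for illustration; the actual project uses an ANTLR grammar, not a hand-rolled parser like this.

```python
import re

# One clause of the toy language: a name, a comparison operator, a number.
COMPARISON = re.compile(r"(\w+)\s*([<>])\s*(\d+)")

def compile_rule(text: str):
    """Compile e.g. "speed > 40 and temp < 90" into a predicate over readings."""
    clauses = []
    for part in text.split(" and "):
        m = COMPARISON.fullmatch(part.strip())
        if m is None:
            raise SyntaxError(f"bad clause: {part!r}")
        name, op, value = m.group(1), m.group(2), int(m.group(3))
        clauses.append((name, op, value))

    def predicate(readings: dict) -> bool:
        # Every clause must hold for the rule to fire.
        return all(
            readings[name] > value if op == ">" else readings[name] < value
            for name, op, value in clauses
        )
    return predicate

rule = compile_rule("speed > 40 and temp < 90")
print(rule({"speed": 55, "temp": 70}))  # True
print(rule({"speed": 55, "temp": 95}))  # False
```

A generated parser earns its keep once the grammar grows operator precedence, nesting and error recovery, which is exactly the bookkeeping ANTLR automates.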
On another project we’re partnering with a hardware vendor to use the Azure IoT Hub to process transportation-related messages from a wide variety of sensors on connected vehicles. For this project, we had to define all the sensors and the specialized data payloads that come from each one into a cohesive messaging system.
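A minimal sketch of the kind of message routing just described, in plain Python rather than the Azure SDK: each sensor type gets its own handler for its specialized payload. The sensor names and payload shapes here are hypothetical stand-ins for the real vehicle sensors.

```python
import json

# Hypothetical handlers, one per sensor type; a real system would route
# messages like this after pulling them off a hub such as Azure IoT Hub.
def handle_gps(payload):
    return f"vehicle at ({payload['lat']}, {payload['lon']})"

def handle_tire_pressure(payload):
    return "LOW PRESSURE" if payload["psi"] < 30 else "ok"

HANDLERS = {
    "gps": handle_gps,
    "tire_pressure": handle_tire_pressure,
}

def dispatch(raw_message: str) -> str:
    """Route a raw JSON message to the handler for its sensor type."""
    message = json.loads(raw_message)
    handler = HANDLERS.get(message["sensor"])
    if handler is None:
        return f"unknown sensor type: {message['sensor']}"
    return handler(message["payload"])

print(dispatch('{"sensor": "gps", "payload": {"lat": 40.0, "lon": -83.0}}'))
print(dispatch('{"sensor": "tire_pressure", "payload": {"psi": 27}}'))  # LOW PRESSURE
```

The cohesion comes from the single envelope (sensor type plus payload); adding a new sensor means registering one handler rather than touching the pipeline.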
And this is before we even talk about hardware (which is facing a disruptive evolution in its own right)!
Being a cloud architect for IoT is fun and exciting, but don’t expect to rest on your laurels, and don’t expect to be doing just one thing or programming just one language. You will need multiple skills, in-depth understanding of the cloud tools and a great team around you to rely on when you run up against a tool, language or framework that isn’t in your wheelhouse.
This is an evolutionary process where cloud architecture has been going for some time — understanding the technologies at your fingertips and how to deploy them is the name of the game.
As technology continues to transform the customer experience and customer expectations consistently rise, enterprises must take necessary steps to differentiate their products and services to deliver on these expectations. In the age of the digital customer, enterprises are constantly moving from the web to mobile apps to a post-apps arena — and fast.
In this article I’ll focus on the post-app experience, which includes a variety of connected devices — IoT, virtual and augmented reality and voice-first assistants (such as Amazon Alexa, Google Home and Apple’s HomePod among others).
The use of IoT devices across industries such as healthcare and retail is exploding, for consumers and for the industries themselves. With this, organizations implementing IoT into their business strategy need to consider how the various channels work together to deliver a superior customer experience customized for the individual; an omnichannel approach built from disjointed experiences does not meet the expectations of the modern customer.
To deliver exceptional customer experiences across new and emerging connected devices such as IoT and voice-first assistants, enterprises need to collect, analyze and organize data in a timely and secure manner. Additionally, the need to stay flexible becomes increasingly apparent as more and more IoT devices enter the scene. The best way to do this is a two-part approach. First, build well-designed application programming interfaces (APIs) that focus on how the end user consumes the service, rather than on the technical underpinnings of the IoT device. Second, orchestrate data properly across APIs from multiple providers. These API solutions for IoT help break down connectivity barriers, anticipate user needs and deliver a highly customized experience, and these benefits have a great effect on the customer’s overall experience.
In fact, the benefits of APIs in IoT technology cannot be overstated. Here are three of the top benefits of API management when it comes to enhancing a customer’s IoT experience:
- Security — In my previous article I wrote about IoT security processes and the steps that businesses can take to better protect connected devices and customer data from attacks and breaches. That being said, in conjunction with these security practices, APIs can also enable devices to register securely, authorize or disconnect a compromised device via an API key control and enforce corporate security policies.
- Enforcing multifactor authentication with external identity provider systems via standardized APIs
- Rate limiting to prevent traffic spikes and service degradation that can be caused by malicious actors
- Device abstraction — IoT devices are constantly undergoing rapid change and along with this, the low-level interfaces to the device change too. With APIs, we can abstract this complexity and also use versioning to manage the changes in an effective and timely manner.
- A well-developed API for an IoT device can hide smaller differences between versions of the device firmware and control and provide a uniform interface to the developer to simplify service development and user experience
- API analytics — Being able to analyze data pulled from various IoT devices can help organizations better understand consumer behavior and improve the customer experience. By harnessing API analytics, organizations can gain insights into how their customers are actually using their devices and alter their digital strategy based on those findings. For example, a Fitbit API or Apple Watch API can enable a health insurance company to offer customers incentives or rewards for exercising regularly.
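As a concrete illustration of the rate-limiting point above, here is a minimal token-bucket limiter in Python. The rate and capacity values are arbitrary; an API gateway would typically provide this as per-client configuration rather than custom code.

```python
import time

class TokenBucket:
    """Simple token bucket: refills at `rate` tokens per second, holds at
    most `capacity` tokens, and each request consumes one token."""
    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)   # 5 req/s sustained, bursts of up to 10
results = [bucket.allow() for _ in range(12)]
print(results.count(True))   # the burst drains the bucket; roughly 10 are allowed
```

This shape is why rate limiting blunts traffic spikes: a burst is absorbed up to the bucket's capacity, after which requests are rejected until the refill rate catches up.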
In a world where customer experience is increasingly the differentiator for businesses’ success and where new IoT devices are popping up everywhere, thoughtfully creating and managing API-enabled services has never been more important. Businesses are under intense pressure to quickly roll out differentiating services and strengthen their IoT devices. They must focus on delivering a superior customer experience, which requires automating and anticipating changes across various networks and devices. For example, with IoT-enabled sensors combined with data analytics, companies can assess current performance and predict the future behavior of industrial equipment. API analytics can monitor issues in real time and, for example, shut down equipment that is out of spec, schedule maintenance, ensure required replacement parts are available and dispatch a service technician with appropriate priority. This can reduce unscheduled downtime, prevent expensive breakdowns and lower costs by basing maintenance on usage.
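The out-of-spec logic described above can be sketched as a simple rule check. The metric names and spec limits below are hypothetical; a real system would evaluate telemetry pulled through the device APIs.

```python
# Hypothetical spec limits for a piece of industrial equipment; a real
# deployment would pull these readings from device APIs via a hub.
SPEC_LIMITS = {"vibration_mm_s": 7.1, "bearing_temp_c": 85.0}

def evaluate(readings: dict) -> list:
    """Return the maintenance actions implied by out-of-spec telemetry."""
    actions = []
    violations = [k for k, v in readings.items() if v > SPEC_LIMITS[k]]
    if violations:
        actions.append(f"schedule maintenance for: {', '.join(violations)}")
    # Two or more simultaneous violations: treat as imminent failure.
    if len(violations) >= 2:
        actions.append("shut down equipment")
    return actions

print(evaluate({"vibration_mm_s": 4.2, "bearing_temp_c": 70.0}))  # []
print(evaluate({"vibration_mm_s": 9.0, "bearing_temp_c": 92.0}))
```

Production systems would layer statistical or learned models over raw thresholds, but the escalation pattern (flag, then act) is the same.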
All of this requires information to be securely gathered, integrated and orchestrated across an ecosystem. A unified, full-lifecycle API management solution for IoT allows businesses to deliver competitive services along with the benefits of greater security, effective version control and API data analytics. With API management for IoT, businesses can react quickly, anticipate customer needs and deliver a superior user experience.
When it comes to making cities smart, some cities are better positioned or equipped to make the leap than others. What makes a city more inclined to embrace smart technologies — such as sensor networks, data platforms and autonomous vehicles — than the rest? A range of factors, including local government’s willingness to embrace innovation and open data, as well as a vibrant private technology sector, all make an impact. Four cities that have already begun implementing smart city technologies feature strong support from the local governments and ample private sector support to see the initiatives to fruition.
These successes can be modelled by other cities hoping to make the transition.
Columbus is one city with exemplary characteristics, making it ripe for a smart city transformation. This is one of the reasons the city won the U.S. Department of Transportation’s (DOT) Smart City Challenge in 2015. In addition to the original $50 million grant given to fund the city’s plans, it has since raised over $500 million to support its smart city journey and recently hired chief innovation officer Mike Stevens to lead the effort. What made Columbus stand out from the other finalists — Austin, Denver, Kansas City, Pittsburgh, Portland and San Francisco — and claim the grand prize?
For starters, the city’s plan proposed a “first-of-its-kind modern transportation system” that was both climate-friendly and fueled by data. City officials also plan to implement a variety of smart technologies including streetlights that are also wireless internet hubs, a system allowing emergency vehicles to interact with traffic signals, common payment systems, smart mobility hubs and smart streetlighting.
While many of these technologies could be replicated in cities around the world, what sets Columbus apart is the local government’s commitment to implementing the policy. The city has hosted workshops for residents to explain the plan, highlighting another key tenet that helped it win the grant: inclusiveness and accessibility for all citizens.
A vibrant private sector is also playing a large role in Columbus’ transformation. Not only did more than half of the city’s funding come from private companies, but a Silicon Valley think tank, Singularity University, recently announced it would be opening a smart city accelerator in the city. It hopes to spur innovation, and was drawn to Columbus after it was awarded the DOT grant.
Kansas City, Mo.
Kansas City, although a runner-up to Columbus in the challenge, has also made strides toward its smart future. In May, the city was honored with an Edison Award for its data-collecting initiative, specifically along its downtown streetcar line, which gathers information to help businesses adjust to the ups and downs of foot traffic. The city also leverages Wi-Fi kiosks installed downtown to gather data about who is in the area, where they are from and whether they are new or returning visitors. All of this data is leveraged in real time to inform streetlight efficiency, alert police to send more patrols and help area businesses create better marketing strategies.
The data platform has become the foundation of Kansas City’s smart city strategy, and the city has grand plans for its future. The city’s CIO, Bob Bennett, has a lofty goal: to be the smartest city on Earth within five years. Whether this comes to fruition remains to be seen, but a strong commitment from city leaders to embrace smart technologies for the benefit of citizens has allowed the city to implement many of the game-changing technologies it has invested in.
Another city hoping to be a smart leader is Pittsburgh. Similar to Columbus and Kansas City, Pittsburgh has also put data at the center of its smart initiative. After forming partnerships with universities, such as Carnegie Mellon, the city created an open data platform to provide citizens with real-time data about crime, emergency calls, building permits and even a snowplow tracking app. Named “Burgh’s Eye View,” the open-data platform was built by the city’s analytics and strategy team and shows how city governments can leverage the data they have at their disposal to develop solutions that positively impact people’s lives.
The Carnegie Mellon partnership also turned the city into the university’s “urban lab,” and the city was awarded a $10.9 million DOT grant to expand the use of smart traffic signals developed by the university. This, along with the “green light governing” strategy Pittsburgh Mayor Bill Peduto embraces, has moved the city to the forefront of the smart city conversation. This style of governance follows the belief that innovation by private companies should come before regulation — one of the reasons Uber chose the city as the testing ground for its autonomous car pilot late last year.
Boston also hopes to make its smart city mark. The city has implemented an open data portal, Analyze Boston, granting citizens access to a range of data, including electricity usage, crime reports and traffic patterns. This open data enables private companies to innovate with the government’s data and create useful resources for residents, such as bus-tracking apps, and underpins the city’s data-sharing agreement with Waze to improve traffic flow throughout the city. The mobile-ticketing initiative on the MBTA commuter rail also enables data collection and improves riders’ daily lives by removing obstacles to seamless travel, such as long lines to purchase paper tickets.
The smart city has also used data to try to solve a big transportation issue that plagues many cities — the lack of adequate parking and the congestion that results from drivers searching for an open spot. The city has experimented with increasing prices in certain areas and installing smart sensors that adjust prices throughout the day based on demand. This willingness to pilot new solutions and embrace data puts Boston at the forefront of the smart technology movement.
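Demand-responsive parking pricing of the kind Boston has experimented with can be sketched in a few lines. The thresholds, multipliers and function name below are illustrative assumptions, not the city’s actual algorithm:

```python
# Hypothetical sketch of demand-responsive parking pricing. The occupancy
# thresholds and rate multipliers are illustrative assumptions only.

def adjust_price(base_rate: float, occupancy: float) -> float:
    """Return an hourly rate scaled by current occupancy (0.0 to 1.0).

    The goal is to keep roughly one open spot per block: raise prices
    when a block is nearly full, lower them when it is mostly empty.
    """
    if occupancy >= 0.90:          # nearly full: discourage circling
        return round(base_rate * 1.50, 2)
    if occupancy <= 0.50:          # mostly empty: attract drivers
        return round(base_rate * 0.75, 2)
    return base_rate               # healthy utilization: leave rate alone

print(adjust_price(2.00, 0.95))    # rate rises under high demand
print(adjust_price(2.00, 0.40))    # rate falls under low demand
```

A production system would tune these thresholds from historical sensor data rather than hardcoding them.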
A common theme
What do Columbus, Kansas City, Pittsburgh and Boston have in common, other than being smart technology leaders? A commitment from government leaders to embrace technologies like open data platforms and smart sensor networks, along with the willingness to foster innovation from the private sector. If cities want to make the journey to being smart, they should look to these leaders for examples of how to find success.
A computer more powerful than the one that helped send men to the moon resides in many of our pockets and handbags. Smartphones have become a commonplace tool at the center of a growing ecosystem of IoT-linked devices. They can host and analyze data streaming from virtually every appliance, tracker and piece of hardware with “smart” attached to its name. Smartphones also keep extensive records of the information people generate through applications and other interactions. Most importantly, they can be used to contextualize the human experience and personalize complex business data streams from even more complex systems connecting to IoT. The end goal every data-reliant business should strive for: deploying IoT technology that navigates across the front and back offices using a cloud platform and complementary application ecosystem.
After all, IoT is not just an expert technology anymore. There’s more access to the right tools than ever before, and enterprise IoT platforms democratize that access for small businesses and entrepreneurs. The constant data streams and connections between machines and people offer these small business owners and entrepreneurs opportunities to transform every employee, customer, supplier and partner into an active contributor to their business processes. However, many organizations still need an infrastructure that can move massive amounts of digital information and make sense of it for business operations. A mass enterprise migration to the cloud, the proliferation of AI and machine learning technology, and the democratization of complementary technologies can help companies of all sizes use the data accessible on connected devices to gain a competitive edge.
Connecting IoT to the cloud offers efficiency and expediency
Cloud-based software solutions can significantly cut costs, offer more flexibility and improve workflows across organizations of all sizes utilizing IoT. They can also offer small businesses and entrepreneurs reliable methods to access, monitor, receive and input data from virtually anywhere. Data streaming from IoT and the smartphone in your pocket can meet in the cloud, thus enabling new levels of access, transparency and security.
An important side effect of using the cloud to move IoT-linked data: better customer service and attention. Businesses can use the cloud to conduct almost every business operation that involves information exchange and analysis — from customer relationship management to payroll to accounting. They can also pull mission-critical data from IoT-linked devices and hardware in the field sending data through the cloud to employees and customers to inform current levels of service and future business transactions.
IoT-friendly automation actually enhances jobs
While there’s real concern that artificial intelligence will make jobs obsolete in the near future, AI can actually help businesses — especially startups and SMBs — free up valuable time spent on tedious administrative tasks. Customer service is a key area where AI technology is being deployed more frequently. Chatbots, automated conversational tools built upon artificial intelligence infrastructure, complement software-driven aspects of business operations vital to keeping customers happy and building businesses.
In practice, chatbots enable companies to respond quicker to customer feedback and resolve issues, while also storing detailed conversation logs for business owners to review later. Data collected and stored by AI-powered chatbots can help inform future decisions crucial to your business. Digitizing information at the point-of-capture eliminates the need to file receipts and expenses, eradicates the need for tedious human-driven data entry and replaces data collection with a much easier process. Automating these aspects of work also allows employees to more efficiently spend their time managing projects, monitoring IoT- and cloud-based operations and supporting customer relationships. Ultimately, IoT combined with AI makes these experiences smart and turns an IoT endpoint into an “intelligent” endpoint that businesses can pull data from that informs customer interactions.
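The “respond quickly, keep the transcript” pattern described above can be made concrete with a minimal sketch. The keyword rules, responses and log format here are invented for illustration; a production chatbot would sit on top of an AI/NLP service:

```python
# Minimal sketch of a rule-based support bot that keeps a conversation log.
# FAQ entries, keyword matching and the log schema are purely illustrative.
from datetime import datetime, timezone

FAQ = {
    "hours": "We're open 9am-5pm, Monday through Friday.",
    "refund": "Refunds are processed within 5 business days.",
}

conversation_log = []  # transcripts a business owner can review later

def respond(message: str) -> str:
    # Pick the first FAQ answer whose keyword appears in the message,
    # otherwise escalate to a person.
    reply = next(
        (answer for keyword, answer in FAQ.items() if keyword in message.lower()),
        "Let me connect you with a human agent.",
    )
    conversation_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": message,
        "bot": reply,
    })
    return reply

print(respond("What are your hours?"))
print(respond("How do I get a refund?"))
```

The point of the log is the second half of the pattern: every exchange is captured at the point of contact, ready to inform later business decisions without manual data entry.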
Seamless data integration leads to seamless business operations
Businesses that deploy IoT and connected devices should also look to employ platforms with open architectures that enable integration with other technologies critical to daily operations. When all of the operational systems can interact with each other, the amount of actionable data flowing into the network from IoT-linked sources increases significantly. By implementing solutions that allow them to better understand and communicate with the individuals critical to their operations, businesses can realize the power of personalization using IoT, automation and smartphone technology. In doing so, they increase meaningful touchpoints with the key stakeholders who can make or break a business. Deploying a network of platforms, technology and tools that can generate, analyze and receive data directly supports, and enhances, customer relationships. Businesses that embrace change understand the immense value of contextual data and know that linking their systems between the web, IoT and the cloud will help them succeed.
In the beginning, we had M2M. Then came IoT. With the industrial IoT and challenges around IT/OT integration, we see terms like cloud, edge and fog computing.
Now, the hot-topic term is interoperability. Interoperability appeals to a technology industry that routinely churns through new buzzwords and value propositions. There is also a willing audience among adopter organizations. Many of them are starting to see a diversity of application opportunities within their everyday operations. At the same time, they fear the economic and technical pitfalls of building a whole string of disconnected, siloed IoT applications.
In the context of IoT solutions, interoperability has little meaning without the use of a qualifier, such as the word “between.” Consider the phrase, “I have a solution that provides data interoperability.” This means little until one adds the words “between your sensor and my application,” for example.
Interoperability in horizontal and vertical dimensions
In practice, the complication with interoperability is that there are several permutations for its use in the IoT context. Just consider a block-diagram representation of a pair of IoT solution stacks: one application-to-platform-to-sensor chain (App#1, MP#1, Sensor#1) alongside a second, parallel chain (App#2, MP#2, Sensor#2).
One IoT application (App#1) can communicate with a related sensor (Sensor#1) via a middleware platform (MP#1). It might even act on sensor data to send a command to an actuator (Actuator#1). There is a similar arrangement for the second IoT solution stack.
Now let’s say there is a way to improve the performance of App#1 by using data from a sensor associated with App#2. This might be possible if there is interoperability between App#1 and App#2, either via an external data exchange (over-the-top interoperability) or through their respective middleware platforms. Alternatively, App#1 might be able to access Sensor#2 because of an interoperability capability between their respective middleware platforms. This scenario helps us think about horizontal interoperability in terms of applications being able to discover resources (e.g., other applications, other middleware platforms, other sensors/actuators, etc.), to recognize the services they offer (e.g., published data streams, remote actuation, etc.) and to make use of these services through transactions (e.g., publish-subscribe to solution-stack data, usage tracking for charging and settlement, etc.).
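This discover-recognize-transact pattern can be sketched with a toy in-memory broker. All names here (`MiddlewareBroker`, the topic strings) are invented for illustration; real deployments would use a standard protocol such as MQTT or a platform API:

```python
# Toy illustration of horizontal interoperability: App#1 discovers a data
# stream that Sensor#2 publishes through App#2's middleware, then
# subscribes to it. The broker, catalog and topic names are assumptions.
from collections import defaultdict

class MiddlewareBroker:
    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> callbacks
        self.catalog = {}                      # topic -> service description

    def register_stream(self, topic, description):
        """A sensor or platform advertises a data stream (discovery)."""
        self.catalog[topic] = description

    def discover(self, keyword):
        """An application looks up streams offering a given service."""
        return [t for t, d in self.catalog.items() if keyword in d]

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        for callback in self.subscribers[topic]:
            callback(payload)

broker = MiddlewareBroker()
broker.register_stream("sensor2/temp", "temperature readings, celsius")

# App#1 discovers Sensor#2's stream and subscribes to it
readings = []
for topic in broker.discover("temperature"):
    broker.subscribe(topic, readings.append)

broker.publish("sensor2/temp", 21.5)   # Sensor#2 publishes via its platform
print(readings)                         # App#1 now consumes cross-silo data
```

The catalog lookup stands in for resource discovery, the description for service recognition, and the publish-subscribe exchange for the transaction itself.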
In this scenario of any-to-any interactions, there will be situations where the solution owner finds a better sensor (higher performance) or one that delivers the same level of performance for lower cost or with greater reliability. The owner of App#1 might wish to switch vendors or operate a multivendor solution by replacing Vendor A’s Sensor#1 with a better offering from Vendor B. This example offers another perspective on technology and vendor interoperability, in a vertical sense up and down the value stack.
The discussion so far considers block-diagram constructs in a two-application scenario. The physical implementation of this arrangement involves hardware and software from different vendors. If a solution owner wishes to change out a gateway and use one from a different supplier, it would value gateway to middleware (and gateway to sensor) interoperability to minimize custom systems integration work. This kind of interoperability is akin to computers connecting to the internet or mobile phones being able to roam internationally.
In today’s cloud computing world, one example of supporting physical interoperability might involve an application (and its data) hosted on Amazon Web Services collaborating with another application hosted on Microsoft Azure. Does this arrangement deliver bidirectional interoperability (covering communications, service levels, data semantics, etc.) between two cloud infrastructure services or would it depend on an intermediate translator?
Strategic implications of IoT interoperability
Given these different perspectives, why is it important for organizations to think strategically about IoT interoperability? There are at least three reasons. Firstly, organizations need to decide whether to invest in single, siloed applications. Are they just looking to apply a condition monitoring application to a factory machine with a largely unchanging operational life of many years, for example? Or will they want to support other applications with the same connected device, and therefore find themselves supporting multiple, potentially cross-silo or cross-vendor applications in the future? This is a matter of product roadmap planning.
Secondly, are users of IoT technologies locking themselves into a single-technology or single-vendor solution set? There is a way around this dilemma. Many telecommunications operators, for example, deploy multivendor infrastructure in their networks by relying on standards-based interoperability. Other industries can learn from this strategy. Factories, offices and homes house machines and appliances from multiple suppliers, precisely the conditions that will call for interoperability between devices (endpoint actuators, sensors and gateways) and applications at some point in the future.
And thirdly, users of IoT technologies need to recognize the constant evolution in technology and its impact higher up the solution stack. While today’s interoperability debate focuses on communications and hardware compatibility, tomorrow’s solutions will shift the interoperability challenge higher up the value stack.
Consider a scenario involving multiple interacting systems which attribute the same meaning to an exchanged piece of data. This ensures consistency of the data across systems regardless of individual data format. In practical terms, this might be a stream of numbers from a temperature sensor supplying multiple IoT applications; each application would apply the correct meaning to the stream of numbers (i.e., recognizing them as temperature data, either in Celsius or Fahrenheit format without needing configuration parameters to be hardcoded in the application) because of semantic interoperability between the sensor and applications.
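A minimal sketch of what this could look like in code, assuming self-describing readings that carry quantity and unit metadata (the message format shown is an assumption, not a standard):

```python
# Sketch of semantic interoperability for the temperature example above:
# each reading carries its meaning (quantity and unit) as metadata, so a
# consuming application can interpret the raw numbers without hardcoded
# configuration parameters. The dict-based message format is an assumption.

def to_celsius(reading: dict) -> float:
    """Normalize a self-describing temperature reading to Celsius."""
    if reading["quantity"] != "temperature":
        raise ValueError("not a temperature reading")
    value, unit = reading["value"], reading["unit"]
    if unit == "celsius":
        return value
    if unit == "fahrenheit":
        return (value - 32) * 5 / 9
    raise ValueError(f"unknown unit: {unit}")

# Two sensors from different vendors, same semantics, different units:
print(to_celsius({"quantity": "temperature", "value": 20.0, "unit": "celsius"}))
print(to_celsius({"quantity": "temperature", "value": 68.0, "unit": "fahrenheit"}))
```

Because the meaning travels with the data, swapping in a Fahrenheit sensor from a different vendor requires no change to the consuming application.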
The deeper issue with technology evolution is that it has long-term timing and investment implications: today's decisions constrain the option value inherent in wanting to support new applications and business models in the future.
When discussing the internet of things in any situation, it is always beneficial to have a specific use case identified. Some examples seem a bit far-fetched or overthought for everyday technology situations, while others seem too futuristic to be practical. There is one situation, however, that is ripe for innovation and a natural fit for IoT: the industrial technology space.
The industrial technology space is an interesting one. Having spent a large part of my career working in industrial automation and supply chain systems, I can speak firsthand to the opportunity for innovation here. There is a general dichotomy in this segment: some parts are highly automated and high-tech, while others rely on completely manual processes. Remarkably, both can coexist in the same facility, for the same products and use cases!
Let’s put this in a practical example that meets the expectations of today: Businesses and consumers demand the ability to order a good from their phone or computer and have it show up that same day or the next day, with 100% accuracy and at a very competitive price. This is a reality for today. Yet many industrial and distribution systems don’t have the operational efficiency to meet this demand. This places many organizations at a competitive disadvantage — and you know what happens then. The moment a shopper sees that one vendor can’t meet the demand, they’ll simply proceed to the next one who can. This is the equivalent of the modern handshake, that first impression that will dictate how a business relationship will go.
To see the IoT opportunity for the industrial technology space, we must work backwards through this workflow. From the business or consumer perspective, it’s simply going to a website to place an order. From the industrial technology perspective, there are a lot of steps to make this occur seamlessly every time. The IoT opportunity comes in devices and systems that can make industrial technology more automated, more accurate and better utilized. Each of these top-level goals helps orders like the one in the example above be fulfilled more accurately, more quickly and with greater automation.
Underpinning this initiative is a basic requirement: the industrial technology must be available. In fact, it is the tale of two DCs: the data center and the distribution center. Historically at war with each other, these two personas and organizational groups must now unite to address the competitive pressures of the marketplace. For industrial and distribution systems specifically, this competition is intense and there is no margin for error, downtime or data loss. Both DCs are now bound together in meeting the goals of the organization.
This can happen several ways in the industrial technology space. It can be as simple as camera systems that provide more data points for better visibility into an industrial process, or as sweeping as a completely modern, top-to-bottom system that removes the barriers of previously partially automated or otherwise inadequate systems and processes unfit for the demands of today’s competitive marketplace.
The successful path will be about connecting information with the output of industrial technology. IoT brings the information: what, where, how many and when. Industrial technology is the vehicle that meets the demand, and from here on out the two are united in a common goal.
What is your weakest security link? I’m blown away when I talk to IoT professionals who can’t immediately answer this question.
The promise of IoT is that it has the potential to connect machines that generate data which can be used to provide us with insights that make the world a better place. Today, the focus of IoT has everything to do with enabling this promise, and little to do with security.
IoT security is life and death
In the IT world, a security breach can mean the misuse of data, which may result in fines or reputational damage. With IoT, a security breach can literally mean life or death.
Take the latest Netflix cybernightmare, where a hacker stole and released part of the fifth season of Orange is the New Black and later demanded to be paid to keep it off the internet ahead of its premiere. Not good for Netflix, but the problem was solved when Netflix decided to release the show early and refused to pay the culprits.
Now consider Stuxnet, a computer worm that infected at least 14 industrial sites in Iran, including a uranium plant. The worm’s authors could cause the fast-spinning centrifuges to tear themselves apart, unbeknownst to the human operators at the plant. Or what about when security researchers Charlie Miller and Chris Valasek proved to the world how easy it is to hack into a car and control everything remotely including the brakes, acceleration and cruise control?
Threats like these have made security one of the biggest barriers to meaningful IoT adoption in the enterprise.
Security by design
As an industry, we say “things” but we mean “everything.” From energy and drinking water to manufacturing and transportation, most physical things will get connected in the next few years. Various experts estimate that there are 25 billion connected things in the world today, and this number will hit 50 billion by the end of the decade.
Unlike IT, there is no “end user” in IoT. For the business owner or enterprise administrator, being able to monitor and control these connected devices means relying on technology to do the job for you.
To ensure our devices are secure, IoT implementations must be designed with security in mind from day one. Ask yourself: What kinds of networks are you connecting to? Which external users have access? What is the environment in which your system operates? What other systems are the devices interacting with?
The ecosystem and the value chain are very complex, and it’s nearly impossible to fix every single flaw in a system. However, if companies design their devices for extreme scenarios, they will be one step ahead of a potential attacker rather than playing catch-up with one who has already done damage. Having proper recovery policies, and updating those policies intelligently, is critical to ensuring continuity of operations after an attack.
Framework for protection: Start with the device lifecycle
When designing IoT security frameworks, I always advise customers to build protection into the device lifecycle. Here is my seven-step device lifecycle, including examples of the types of protections to consider:
- Registration: Installing security software, discovery, uniquely identifying connected devices, verifying the device can call home
- Provisioning: Secure credentials, exchange certificates, capturing registration info
- Commissioning: Installing the device in the field, initial configuration, finding status
- Configuration: Remote secure updates of a commissioned device, updating privileges
- Monitoring: Health, operational, security and connectivity status, alarms and alerts
- Control: Remote decisioning, over-the-air updates, performance, remote service
- De-Registration: Decommission, end of life
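One way to make the lifecycle concrete is to enforce it as a simple state machine, so a device cannot, for example, be configured before it has been provisioned and commissioned. The class below is an illustrative sketch that simplifies the lifecycle to strictly sequential transitions; real devices would loop between configuration, monitoring and control:

```python
# Illustrative sketch of the seven-step device lifecycle as a state machine
# that rejects out-of-order transitions. The enforcement logic (strictly
# sequential stages) is a simplifying assumption for illustration.

LIFECYCLE = [
    "registration", "provisioning", "commissioning",
    "configuration", "monitoring", "control", "deregistration",
]

class Device:
    def __init__(self, device_id: str):
        self.device_id = device_id
        self.stage = None   # not yet registered

    def advance(self, next_stage: str) -> None:
        # Only allow moving to the immediately following lifecycle stage.
        expected = 0 if self.stage is None else LIFECYCLE.index(self.stage) + 1
        if LIFECYCLE.index(next_stage) != expected:
            raise RuntimeError(
                f"{self.device_id}: cannot enter {next_stage!r} from {self.stage!r}"
            )
        self.stage = next_stage

d = Device("sensor-001")
d.advance("registration")
d.advance("provisioning")
print(d.stage)   # device is now provisioned but not yet commissioned
```

Encoding the lifecycle this way means a skipped protection step (say, configuration before provisioning has exchanged credentials) fails loudly instead of leaving an insecure device in the field.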
There is no doubt that IoT will transform the world in our lifetime. Protecting our data and personal safety will drive better adoption. And the secure management of these systems will play a critical role in whether or not we can trust IoT.
My advice to companies implementing IoT is don’t try to solve it all yourself. Play to your strengths and design what you’re good at. Partner with security providers for everything else.