IoT Agenda


June 7, 2016  10:15 AM

Developing digital trust in the IoT era

Eve Maler Profile: Eve Maler
Internet of Things, iot, trust

The API economy exposes new business value digitally

The API economy has turned the tables on the way businesses monetize their offerings. Application programming interfaces enable every type of industry to expose value digitally, reach out to bigger audiences and disrupt established industries.

Ridesharing is the classic example, upending traditional taxi and car services. Ridesharing apps are now creating sufficient industry disruption that new legal ordinances are being named after them: witness Illinois House Bill 4075, also known as the “Uber bill.”

Moving from web to mobile forced a digital transformation

The main push for APIs came from the smart mobile device revolution and the radical simplification of how users and customers interact with products and services; mobile app developers needed APIs to gain access to the functions of what were originally web applications. The resulting explosion of APIs means that organizations across all industries are pursuing “digital transformation” strategies, prioritizing new processes and workflows.

The new API economy controls smart things, but is it trustworthy?

Now a new type of API economy is growing that exposes not merely the value of applications in a new way, but also the features of smart devices themselves. An API call can control not just hailing a car, but driving a car, accessing the trunk and locking the doors.

The popularity of the internet of things is leading to a whole new set of business opportunities. However, providing safe, secure interactions and handling digital identities and authentication requires greater care in this relatively uncharted territory. For example, security researchers Scott Helme and Troy Hunt uncovered an API vulnerability that allowed features of the Nissan Leaf connected car to be controlled remotely over the internet — and not just the owner’s car, but other Leaf automobiles as well — through the API behind the NissanConnect EV app.

The state of data privacy regulation

According to a survey of nearly 300 IT professionals conducted earlier this year by TechValidate for ForgeRock, only 9% of respondents agreed that privacy/consent methods such as opt-in checkboxes and cookie acknowledgments are ready to adapt to the new digital economy. Clearly, IoT devices are a major source of tension when it comes to meeting user and business needs for consent. It is not practical to pull out a companion mobile app to consent or to configure your sharing preferences every time you need to interact with a smart “thing.” As the regulatory landscape of data protection and privacy shifts to give a bigger role to consent, developers will need to consider this reality.

In April 2016, the General Data Protection Regulation (GDPR), which is designed to give EU citizens better control of their personal data in the context of a more unified EU marketplace, was adopted, with enforcement set to begin in 2018. The regulation requires that privacy-friendly settings be the default for all apps and websites.

The point of the GDPR policy is that data privacy and protection should not just stop at the end user; for developers, it should begin at the API level — by design. Tech companies, especially those that create IoT devices and applications, are beginning to understand the importance of closing all the doors to their APIs. APIs have recently received a bad rap — horror stories about how hackers can tap into the Nissan Leaf’s API for access and control put added pressure on developers to tighten up security during the design process.

Developing digital trust

The bar companies must clear to meet consumer needs and government regulations is rising. Legacy technologies and existing tactical compliance efforts can offer only a temporary solution at best. New data privacy methods and technologies will soon need to be deployed widely in the U.S. and EU.

In response to evolving consumer demands and government regulations, companies are exploring ways to incorporate user consent into their applications. For instance, Apple CareKit and ResearchKit integrate with 23andMe: a user’s 23andMe DNA test results, intended to help determine ancestral history, can be integrated with CareKit to help researchers better understand the individual through his or her ancestral background and health activity. However, researchers can only access data when the user provides specific consent. CareKit allows users to share data with other parties; however, the data is proprietary, and only users in the Apple ecosystem can access it. Users also have the ability to block data being sent from 23andMe. While Apple CareKit is a good example of leveraging user consent in ways that go above and beyond regulatory dictates, a consent standard across industries is the more sustainable, long-term solution for API ecosystems that do not involve Apple.

User-Managed Access (UMA) is a key standard in this area; it gives business benefits to those who are not the “800-pound gorilla” in any one field. UMA gives individuals a unified control point for authorizing who and what can access a variety of cloud, mobile and IoT data sources. Users can share data and API access selectively with other parties; withdraw consent for that sharing in a finer-grained fashion so that other data feeds can remain unperturbed; and manage delegation, consent and withdrawal more conveniently from a central sharing hub.
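
For a sense of how this plays out in practice, here is a minimal sketch of a UMA-style exchange from a client application’s point of view. The endpoint URLs, header parsing and response shapes below are simplified assumptions for illustration only; the actual message formats are defined by the UMA specifications.

```python
# Hedged sketch of a UMA-style flow from the client side. Endpoint URLs,
# parameter names and JSON shapes are illustrative assumptions.
import requests

AS_TOKEN_ENDPOINT = "https://as.example.com/token"   # hypothetical

# 1. Client tries the API; the resource server answers 401 with a
#    permission ticket registered at the user's authorization server.
resp = requests.get("https://api.example.com/health/steps")
ticket = resp.headers.get("WWW-Authenticate", "").split('ticket="')[-1].rstrip('"')

# 2. Client presents the ticket at the authorization server, which checks
#    the user's sharing policies (the "central sharing hub") and, if they
#    allow it, issues a requesting party token (RPT).
token_resp = requests.post(AS_TOKEN_ENDPOINT, data={
    "grant_type": "urn:ietf:params:oauth:grant-type:uma-ticket",
    "ticket": ticket,
})
rpt = token_resp.json().get("access_token")

# 3. Retry the API call with the RPT. If the user later withdraws consent
#    at the hub, only this grant is revoked; other data feeds continue.
resp = requests.get("https://api.example.com/health/steps",
                    headers={"Authorization": f"Bearer {rpt}"})
```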

With the rise of cloud-based data, health and wellness apps, and consumer sensors, companies such as Philips have identified the importance of enabling consumers and patients to share all available sources of data with family members, health professionals and others. However, this sharing must be done under close personal control. The company is looking to incorporate standards-based solutions into its HealthSuite Digital Platform to make it possible to foster patient trust. ARM is also doing great work on developing trust models for sensor-to-device-to-service security.

For companies like Philips, whose core competency is something other than just consent mechanisms, ensuring digital trust is particularly critical. By definition, the API economy and the shift toward the IoT necessitate complex technology partnerships for joining applications and sources of personal data within new ecosystems.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

June 3, 2016  1:12 PM

Driving forces accelerating and decelerating connected car security

Jessica Groopman Profile: Jessica Groopman
Cars, Connected vehicles, Internet of Things, iot, iot security

Although the automotive industry isn’t particularly new to data or telematics, competitive forces are placing huge pressure on manufacturers to figure out how to actually use all this data. What’s new is the imperative for manufacturers to analyze data in real time — even predictively — and to leverage it meaningfully, in ways that drive value.

Driving force #1: The pace of innovation

The pace of innovation is moving faster than the automotive industry can handle.
The reality is that vehicle manufacturers are caught in a conundrum of lagging innovation and never-ending disruption. Automotive manufacturers have traditionally operated on a five-year innovation cycle, but in the world of networked technology and services, five years is a lifetime. The result is that the pace of technological innovation is barreling along while auto manufacturers struggle to stay competitive today, never mind tomorrow. In-car infotainment systems are one area of intense competition: where manufacturers once saw this real estate as differentiating, the speed of innovation and the ability to push out software updates and new features, driven by mobile giants, now far outpace most auto OEMs.

Driving force #2: Technological innovation begets more innovation … and risk

Forces driving secure and effective adoption transcend the OEMs themselves. Standards play a central role in the ability of cars to “talk” to each other (never mind to outside service providers), whether to exchange position, location data, speed or other information in real time. While technological standards remain fragmented, many governments (particularly in Europe) are beginning to embrace the potential safety benefits of connected cars. Already subject to heightened regulatory requirements, this industry will either benefit or suffer from governments’ and related agencies’ alignment with, openness to or rejection of the rules, investment and communications infrastructure necessary for a connected or autonomous car environment. This, together with expanding LTE, wireless and low-power connectivity infrastructures, will help create adoption-friendly environments.

Meanwhile, existing and emerging technology companies — namely Google, Apple and Tesla — represent traditional automotive manufacturers’ greatest competitive threats. These companies are leveraging world-class mobile, hardware-software design and more rapid R&D sensibilities to [potentially] leapfrog traditional auto OEMs. Who will dominate the in-car app ecosystem? Manufacturers, ISVs, third parties? While partnerships and alliances in other venues suggest the need for such collaboration in automotive, can each of these entities ensure adequate security across the ecosystem?

 

Introducing new technology often begets more technology. Such is the case for identity management within a vehicle. Connecting components, software, applications and other services doesn’t just create [the potential for] value and risk; actualizing and securing such a network requires that each element take on its own identity. The identity of the user, of the car, of the connectivity mechanisms, of the apps and other devices becomes central to security, so that only those authenticated can control the car, communicate with it or from it, and make changes to any part of the system. Herein emerges more friction. For instance, cars often have more than one driver. Passengers (e.g., a child interacting with the infotainment system) may require their own identity management mechanisms. The second-hand market brings a distinct set of identity needs (e.g., wiping user identity but not car component identity). And so we see the emergence of identity management platforms to address a poorly understood but critical requirement in the connected car security story.
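
To illustrate the point, here is an invented sketch — not any vendor’s platform — of a vehicle identity store that separates component identities from user identities, so a resale can wipe one without touching the other:

```python
# Illustrative only: names, roles and structure are invented.
from dataclasses import dataclass, field

@dataclass
class VehicleIdentityStore:
    vin: str
    component_ids: dict = field(default_factory=dict)  # ECU serials, radios
    user_ids: dict = field(default_factory=dict)       # drivers, passengers

    def authorize(self, user_id: str, action: str) -> bool:
        """Only authenticated identities may control or change the system."""
        roles = self.user_ids.get(user_id, {}).get("roles", set())
        return action in roles

    def prepare_for_resale(self):
        """Second-hand market: wipe user identity, keep component identity."""
        self.user_ids.clear()

car = VehicleIdentityStore("1HGCM82633A004352")
car.user_ids["alice"] = {"roles": {"drive", "unlock", "infotainment"}}
car.user_ids["kid-profile"] = {"roles": {"infotainment"}}

# A child passenger may use infotainment, but not unlock the doors.
assert car.authorize("kid-profile", "infotainment")
assert not car.authorize("kid-profile", "unlock")

car.prepare_for_resale()   # component_ids survive for service history
```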

The roles each of these forces plays across connected car security, privacy, and safety are critical — not only for improving the functionality and driving experience, but to address adoption concerns as well.

Driving force #3: The driver’s experience

Across all industries, the human-machine relationship is evolving — this complex, cultural and curious behavioral adaptation is only magnified in the automotive sector. While consumers have embraced mobile technology rapidly and pervasively, data suggests we may value different elements in a driving context. A recent Telefonica study found that consumers are less interested in typical personal computing activities such as social networking and downloading applications while driving, and place greater value on safety and utilitarian features such as accident avoidance, navigation, diagnostic testing and maintenance alerts. But while convenience appeals, half of consumers surveyed by Veracode still express significant concerns related to the security of driver-aided applications like adaptive cruise control, self-parking and cars sharing data with other cars.

A second area of interest is which party takes responsibility when security-related issues arise. Enter the question of liability — a question of uncharted legal territory and precedent not only because data ownership norms are poorly understood, developed or standardized, but because — regardless of legality — security introduces a wide range of potential nightmares from a brand and PR standpoint. Today, 30% of drivers feel that if they download an app that poses a security threat to the vehicle system, they should not be liable (i.e., it should either be the manufacturer, app developer or app store who is liable).

Driving force # 4: Interoperability

Another important dynamic in this story is that of interoperability, and how end-user experience design introduces new security risks. The most seamless connected experiences are those which are integrated and interoperable across multiple devices, apps, environments and scenarios — from parking the car to automating home lighting to leveraging a mobile device to send an alert to work, back to the car and so on.

OEMs like Ford and Mercedes are partnering with connected home device manufacturers and platforms to measure energy efficiency by integrating appliances with electric vehicles. A company called Aricent offers drivers the ability to control their homes remotely from the car by connecting the car’s network to the home network via LTE. The very utility of emerging “value-added apps” relies on interoperability, from smart parking to real-time/geo-based services to personal or municipal notifications.

End user experience, and thus adoption, relies on this integrated structure to extract the greatest value and convenience. Put simply, a seamless user experience is a function of a seamless data transfer requiring as few apps as possible. Interoperability is important for OEMs and technology suppliers to understand in a security context for a number of reasons:

  1. Interoperability expands the system: Suddenly the “system” of the car expands — into the home, across devices, work and municipal systems, never mind third-party service providers. For manufacturers, these non-proprietary apps spell increased vulnerability and potential liability
  2. Interoperability introduces strange bedfellows: The imperative for interoperability means automotive OEMs must forge partnerships, alliances and/or collaborations across previously irrelevant ecosystem constituencies
  3. Interoperability often requires new business models: Interoperability means data flows freely across previously “closed-off” boundaries. Ecosystem interactions based on such data exchange require that OEMs shift from a purely product-centric business model to one driven by ongoing service enhancements and new value creation leveraging new stakeholders

Although on their face these impacts may seem irrelevant to the security, privacy and safety of a connected car, the reality is that each translates to a wider security threat surface and thus more security threats, actors, stakeholders and risks.

Driving force #5: Security solution landscape is highly fragmented

Today, security solutions providers vary widely in approach, coverage and size. For widespread adoption of “smart” vehicles to occur, connected technologies require security solutions that simultaneously and cooperatively address the broad and diverse layers of any IoT architecture. Across vehicles, devices, networks and applications, solutions will take different forms, and different companies take different approaches. Argus Cybersecurity, for instance, addresses automobile security with a suite of solutions for new and after-market needs. Towersec exemplifies another approach, providing firewall protection per system, with a focus on mission-critical, telematics and infotainment systems. Meanwhile, a host of security- or safety-focused startups like SmartDrive, Lytx and Navdy add a new angle to the competitive story, not only because they address driving safety, but because some sell directly to drivers, not manufacturers.

Security management is becoming increasingly important for connected cars to minimize potential hacking threats. According to Harbor Research, ADAS suppliers are embedding security measures and driving overall security management application revenue from $2.0 billion in 2015 to a projected $8.8 billion in 2020.

Regardless, the obligation rests on OEMs to arm vehicles and devices with the proper technology to safeguard all endpoints as much as possible. This includes ensuring proper authentication and authorization (for networks, devices, etc.), as well as transferring security parameters to allow for trusted operation. Network security mechanisms are central to ensuring trusted operations and preventing attackers from endangering or modifying the expected operation of networked things or accessing secure data. Beyond the architecture for connectivity, the dizzying array of “value-added applications” also requires security, through tools that guarantee only trusted instances of applications are running and communicating with each other. Across all of this, security solutions must coordinate with each other in order to constantly mitigate risk and vulnerability across the entire system.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


June 2, 2016  9:59 AM

Four times when the internet of things isn’t the answer

Derek Peterson Profile: Derek Peterson
Internet of Things, iot

The internet of things is a seductive world. Once product designers imagine the possibilities of making ordinary objects into internet-connected objects, it can suddenly seem that there’s no product that couldn’t be improved by becoming an IoT device.

Yet it can be easy for companies to get caught up in developing IoT products just because they can without first asking honestly whether they should. While there are certainly a host of situations in which creating an IoT device does make good business sense, there are also several reasons to put the brakes on an IoT project and reexamine whether making a device connected is really the right path to pursue.

Here are four common business situations when the internet of things isn’t the answer to your design and business challenges. See if you recognize your business in any of them — and get ready to ask the right questions to find out whether adding IoT capabilities to your next product is the right next step for your company.

#1: When products lack compelling use cases or business cases

Today, almost any device or product could potentially be a connected, IoT device. A car? Of course. A lawnmower? Sure. A cat litter box? Why not?

Yet the “Why not?” question is an important one to take seriously — and many businesses get so caught up in the possibilities of IoT devices that they fail to take a step back and honestly address whether making a product into a connected product creates sufficient value to justify the cost of development, troubleshooting, prototyping and bringing it to market.

An important question to ask is, “What additional value does being an IoT device deliver to the user?” In the case of the connected cat litter box, perhaps the box could send users an email when a cleaning is due … but that’s probably also an “alert” that a user would be aware of using the good old sense of smell.

The point is, do an honest assessment up front to make sure the payoffs of developing a connected device are there for both the user and the business.

#2: When you’re unsure about access to resources for the long haul

In some ways, connected devices are like the orchids of the technology world. They are not products that you can put out into the world and forget about — they require constant care, updating and support to truly thrive. Make no mistake — your smart watch is no hardy cactus that will do well despite any amount of neglect.

That’s why developing and launching an IoT product may be a bad idea when you’re unsure about whether you’ll have access to the business resources to support the product long-term. There’s often plenty of excitement and funding available for new product ideas during the development and even testing phases; yet once they’re out in the marketplace, business enthusiasm often turns to the next big thing.

Make sure that you have a plan in place to support your IoT product and its users over the long haul to ensure its success, from customer support to technical upgrades to troubleshooting to marketing and more.

#3: Lack of readiness to capitalize on the new business model

While IoT products present lots of new opportunities, they also require ongoing execution on a completely new type of business model. Is your company ready to support the infrastructure, revenue models and other aspects of the new business model that an IoT device will require?

Take our connected cat litter box example again. Instead of simply producing and selling the best cat litter box on the market, is your company suddenly ready to be in the business of selling monthly subscriptions for cat litter refills? How about shipping those refills out, or dealing with customers cancelling subscriptions or wanting to change quantities or frequency of deliveries? Will these new business model requirements take away from your primary focus of designing cat litter boxes with industry-leading features that set you apart in the marketplace?

Be ready to ask the tough questions about how the new IoT business model will affect everything from cash flow to staffing. How will sales teams’ compensation change? What new customer support staff and mechanisms will need to be added? How will you provide technical support on an ongoing basis for the new device? Does the cost/benefit ratio of the new business model actually work out in your favor over the long term?

The answer may be a resounding “yes” — but it’s important to be realistic about how an IoT device fundamentally changes your business model so that there are no surprise hits to your bottom line once the device hits the market.

#4: When there’s wavering corporate commitment

Many, many IoT product development projects get the blessing of corporate leaders thanks to the excitement surrounding the possibilities that connected devices can deliver. But just as many leaders fail to realize that unlike more traditional products, IoT products require a different kind of long-term commitment to really go the distance and be successful.

Be realistic about the level of corporate commitment you can expect to sustain over the lifecycle of your proposed IoT product, from concepting all the way through the support and maintenance phases post-launch. Will there be corporate support for the funding, team members, time commitment and other key factors you need to ensure the success of your product down the road?

Because so many companies are developing IoT products now, it can seem deceptively simple just because it looks like “everyone is doing it.” Yet it remains difficult to do well. Make sure that there’s a strong corporate commitment to your product at the outset — and that that unwavering commitment will still be there when the sizzle-appeal of the initial development and launch phase has passed.

While I’ve just described key situations in which IoT doesn’t make sense, there are certainly a host of situations in which IoT can deliver outstanding additional value and functionality for consumers and companies. The key is determining when it makes sense to make a product connected — and when it makes sense to pass on that technology in favor of a more traditional solution.

Arming yourself with the right questions to ask can help you make a “save” when it really counts for your company — or know when to go big and take a calculated product design risk that can truly pay off.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


June 2, 2016  9:10 AM

Complexities of connecting everything: Teams, devices and code

Andreas Dharmawan Profile: Andreas Dharmawan
Internet of Things, iot, Software deployment, Software development

In my previous post I discussed the challenges of IoT software delivery as they relate to managing the complexities of integrating three distinct development pipelines comprising the different components of the IoT software. These are:

  1. The embedded software in the device itself — for example, software embedded in a car
  2. The big-data backend application used to store and analyze the real-time data accumulated from the different devices
  3. The mobile app — used by end users to control the device

Each one of these software components is being developed and delivered by a separate team (or in many cases, several teams). In this article, I’d like to take a closer look at some of the distinct characteristics of each of the teams involved in the production of IoT products, and the strains they put on the IoT delivery chain.

Similar, but different

“There’s no ‘I’ in team.” Sure, we’re all in this together, working hard to get the next big thing in the IoT space to market. And we know we depend on each other for our stuff to work … But streamlining your IoT development processes across different teams is no easy task.

We need to be aware of the fact that each part of the development team (embedded, backend and mobile) uses different technologies, tools, stacks, deployment patterns and delivery practices in their work. Their day-to-day tasks and workflows are different.

Let’s take a closer look at what it means to be an engineer in each one of those teams:

1. Embedded software teams

The embedded software must take inputs from sensors to understand the conditions of the physical world environment. Based on the computations of this input, the program then performs certain tasks — for example, activating or modulating a set of actuators to produce the desired behavior.

Consider the simple scenario of pressing the gas pedal on a hybrid car. What seems like a very simple operation in fact involves thousands of decisions made in less than a second. The engine’s electronic control unit (ECU) receives the request to accelerate. It then collects various data, such as how hard the accelerator is pressed, the ambient temperature, the current speed, the current gear that the car is in, whether the car is on an incline or decline, whether the car is in eco mode or sport mode, whether the battery level is high or low, whether there is any slippage in any wheels, and many, many more. After collecting the data, the program performs calculations to send signals to the transmission ECU to adjust the gear if necessary, to the gasoline engine ECU to start the engine if needed, to the hybrid engine ECU to coordinate the power delivery from the electric motor and the gasoline engine efficiently and so on. After all of this, the car speeds up, but the computation continues. The engine’s ECU measures any change in the pressure applied on the gas pedal. If the driver eases off the accelerator, the whole process of collecting data, making decisions, and sending signals to different ECUs and actuators happens again, to adjust speed, energy use and performance.
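
As a loose illustration of that decision loop — real ECUs run compiled code on a real-time operating system, and every name and threshold below is invented — consider this sketch:

```python
# Drastically simplified hybrid power-split decision; purely illustrative.
from dataclasses import dataclass

@dataclass
class SensorFrame:
    pedal_position: float   # 0.0 (released) .. 1.0 (floored)
    speed_kph: float
    battery_level: float    # 0.0 .. 1.0
    wheel_slip: bool

def decide_power_split(f: SensorFrame) -> dict:
    """Map one frame of sensor inputs to actuator commands."""
    demand = f.pedal_position
    if f.wheel_slip:
        demand *= 0.5                       # traction control cuts torque
    use_engine = demand > 0.6 or f.battery_level < 0.2
    return {
        "electric_torque": demand * (0.4 if use_engine else 1.0),
        "engine_torque": demand * 0.6 if use_engine else 0.0,
        "start_engine": use_engine,         # spin up the gasoline engine
    }

# The loop re-runs every few milliseconds as pedal pressure changes.
commands = decide_power_split(SensorFrame(0.8, 42.0, 0.15, False))
```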

The tools of the trade:
This team works predominantly with a real-time operating system (RTOS) and works collaboratively with the mechatronics (mechanical and electronics) team. The most common programming language for this team is C/C++.

Given the nature of what the program must do, the software team uses model-based system development, supported by specialized tools (from companies such as MathWorks, Vector and dSPACE). Meanwhile, on the mechatronics side, the hardware team must work on the mechanical and electrical design, using specialized tools from Siemens, PTC, Dassault, IBM and the like. In model-based system development, the software team can initially work independently of the hardware by using emulation and simulation. This kind of test is called Simulation in the Loop. Once the hardware module is ready, the software is loaded onto it and a real integration test is performed — called Hardware in the Loop.
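
A toy Simulation-in-the-Loop harness might look like the following, with an invented first-order thermal plant model standing in for the hardware:

```python
# Illustrative SiL sketch: the controller under test runs against a
# software plant model before any hardware exists. Gains are invented.
def controller(temp_c, setpoint_c):
    """Proportional controller under test: returns heater duty 0..1."""
    return max(0.0, min(1.0, 0.5 * (setpoint_c - temp_c)))

def plant(temp_c, duty, dt=0.1):
    """Toy thermal model standing in for the real hardware module."""
    return temp_c + dt * (5.0 * duty - 0.05 * (temp_c - 20.0))

temp = 20.0
for _ in range(500):                 # simulate 50 seconds
    temp = plant(temp, controller(temp, setpoint_c=90.0))

# The automated SiL check; the tolerance allows for the steady-state
# offset inherent to a purely proportional controller.
assert abs(temp - 90.0) < 5.0
```

Once the hardware module exists, the same controller code is exercised against the physical plant instead of `plant()` — the Hardware-in-the-Loop stage.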

The mechatronics team uses a product lifecycle management tool to manage the mechatronics development, test, and verification. The embedded software team uses its own application lifecycle management (ALM) and continuous delivery tools to manage the software development, test and release.

Keep in mind, the different processes must interact with each other at certain points, and both teams must do their best to reduce the friction between the two completely different processes for the hardware and software development lifecycles.

2. Big data software teams

This team is tasked with processing massive amounts of real-time data, and is also mostly concerned with horizontal scalability — being able to support data throughput as more devices are sold. The software produced by this team is typically deployed in a data center and is often replicated in several locations for redundancy and low latency. As the variety of IoT devices being sold increases, the backend software must handle all of the variations in the flavors of the products and services offered to customers, and their corresponding SLAs.

The tools of the trade:
This team often uses agile development methodologies to support the need for frequent feature enhancements and software updates. Instead of batching all changes into one “mega” release, the development work is divided between small teams who work in short sprints, and incremental changes to the code and new features are deployed fairly frequently. This team, too, often uses ALM or agile tools to support development.

To enable the cadence of deployments and streamline their delivery process, this team uses a software delivery pipeline solution that covers accelerated builds, preflight, continuous integration, continuous test and smart deploy. This tool also manages the provisioning, configuration and management of the IT infrastructure across the different development, QA, staging and production environments.

The backend software is the “brain” of the IoT service — and IoT devices are always connected. This mandates that the deployments to production are done in an extremely reliable manner, to ensure there’s no service interruption. To ensure smooth operations, this team often uses DevOps automation and continuous delivery solutions to facilitate the critical deployment process.

3. Mobile app team

The culture of mobile app development started out with startups and individual developers, who often perfected their code in the “hip” coffee shops of San Francisco, Palo Alto, San Diego, Austin and Manhattan. These programmers often do not require office space, and seldom do they have an IT team to support them. Instead, they are more inclined to use modern SaaS solutions designed to facilitate the development, continuous integration, testing and release of their apps. These days, due to the strong trend in IoT, many large enterprises (such as automotive companies) hire these “coffee shop” programmers. Still, despite the fact that they now work in more formal environments, their work habits and their tool-chain preferences have not changed.

The tools of the trade:
The most popular programming languages among mobile app programmers are Java and Swift. Mobile developers are comfortable integrating several SaaS-based mobile app development tools to build out their development environment and processes. They can store their source code in GitHub, use Sauce Labs for mobile app testing and AppDynamics for mobile app performance testing and monitoring.

The frequency of updates for mobile apps is also extremely high, driven by the competitive nature of the app landscape and the ability to optimize the application based on the available real-time feedback on the app’s usage.

4. Connecting it all

As we’ve seen, each team required to deliver on an IoT product is inherently different from the others. Because of this, friction naturally occurs when the three teams must coordinate their integration and system testing. If this friction — and the possible failure points between the separate processes — is not minimized, releases will inevitably be delayed and the product’s quality will suffer.

In addition to common agile development practices and continuous delivery and DevOps platform requirements, there are unique requirements from a tooling perspective to enable efficient and streamlined IoT app delivery. What is needed is a single platform that can address the three different domains: a multi-domain continuous delivery platform capable of eliminating the friction and streamlining the end-to-end process.

This solution must be able to integrate and orchestrate the work transitions and handoffs between teams throughout the product’s lifecycle. In addition, it must be able to track each artifact as it moves in and out of the different domains, recording the outcome of each processing step and who performs the work at each stage.
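
As a minimal sketch of what such artifact tracking might record — the field names are assumptions, not any specific vendor’s schema:

```python
# Invented record structure for cross-domain artifact tracking.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Transition:
    domain: str          # "embedded" | "backend" | "mobile"
    stage: str           # e.g., "build", "integration-test", "release"
    actor: str           # team or system that performed the work
    outcome: str         # "passed" | "failed"
    at: datetime

@dataclass
class Artifact:
    artifact_id: str
    history: list = field(default_factory=list)

    def record(self, domain, stage, actor, outcome):
        self.history.append(
            Transition(domain, stage, actor, outcome,
                       datetime.now(timezone.utc)))

fw = Artifact("ecu-firmware-1.4.2")
fw.record("embedded", "hardware-in-the-loop", "hil-rig-03", "passed")
fw.record("backend", "integration-test", "ci-backend", "passed")
```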

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


June 1, 2016  10:04 AM

Why you need to provide availability at the IoT back-end

Mike Resseler Profile: Mike Resseler
back end, data availability, Data Management, Internet of Things, iot

Some say the internet of things (IoT) is the most transformative IT initiative since cloud computing and virtualization, enabling companies of all types to achieve unprecedented efficiencies and create brand new revenue streams. Others say it is just another overhyped IT trend that will fade away sooner or later.

Personally, I don’t believe IoT is overhyped. On the contrary, for me, IoT is something that has already been a part of our lives for many years, though it hasn’t typically been recognized as IoT. The technician who comes to your home to read your electricity meter and records that data on some sort of tablet device? That’s IoT. Your favorite delivery brand (UPS, FedEx, etc.) that allows you to follow and track a package as it journeys to your house? That’s IoT. The point is, IoT has been with us in a more primitive form for a long time. Today, IoT is being rapidly incorporated into consumer products, and the response from consumers has been enthusiastic, giving this technology the attention it needs to fuel rapid growth in services, devices and, perhaps most importantly, data.

Today, many express concern about the security of that data (and their privacy). It’s certainly an important issue, but I would like to address something that’s perhaps a bit more urgent: the availability of that data.

Let me give a quick example: A few years ago an electricity meter technician came to my house to measure my usage. But when he went to read the data, he found he could not access my old data or even the serial number of my device, because the data was not available. He made a call to his office, where he was assured that the data would be back online in about an hour. He could either go back to his car and sit and wait for an hour, or take me up on my invitation to have a coffee while waiting (he chose the latter).

If you zoom out from this individual incident to look at the big picture — while my technician was enjoying a cup of coffee — you can envision how many of his colleagues were also on the road trying to gather data, all having to wait an hour before they could continue their work, simply because a database was down and needed to be restored. This created a massive cost for the power company that could have been avoided had the data been available. And besides the cost, there was also the damage done to its reputation, as homeowners saw the technicians appear a second time to attempt a second reading.

Data must be available at all times so people can do their work. And in this example, it is certainly important, as my final electricity bill is created from that information. Not having that information — or having the wrong information — would be costly for the electricity supplier.

If we look at it from the consumer point of view, then we know that devices will get damaged, stolen or lost. Ideally, when consumers buy or receive a new device, their old data is streamed again to the device without their needing to do any work. For example, a health tracker that gets replaced should automatically download the historical data again so that people can continue measuring their steps, heartbeat, sleep pattern and all other data they want to track.

But let’s take another example and look at a smart fridge. Over the course of months and years, the data gathered from that fridge ensures that the owners are alerted to missing items (or that items are even purchased automatically from a store) based on their preferences. If those owners replace that fridge because it is damaged, or simply buy a new one, they don’t want to go through the learning cycle again.

With the above examples, it should be clear that the IoT data gathered should be available at the back-end of the service, which could be a workload in your private cloud or a service running in a public cloud. While we can debate at another time whether that data should also be available offline on the device, it is important to realize that the service and its data should always be available. So if you plan on deploying an IoT solution, give some thought to how you will make sure that the data remains available 24×7.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


May 31, 2016  1:27 PM

How to stop our smart homes from turning against us

Art Swift Profile: Art Swift
Internet of Things, iot, iot security, smart home

The Internet of Things is transforming our homes into data centers before our very eyes. Yet unlike in the data center, we don’t have IT professionals on call to manage, patch and secure these systems. Already in 2016, there are reports of LG Smart TVs being targeted by scareware. And just recently, new research has highlighted serious flaws in the Samsung SmartThings platform which could allow remote hackers to unlock doors, trigger false fire alarms and reprogram security settings inside our smart homes. Many home users are unfortunately unprepared to deal with such events. They need to wake up to the fact their innocuous-looking domestic IT could be hijacked by cybercriminals or government spooks, with potentially serious consequences. But industry and regulators also need to take action — to force manufacturers to improve the security of consumer electronics products.

The vision set out by the prpl Foundation in a new guidance document is all about building security into the silicon — with secure boot establishing a root of trust, and hardware virtualization restricting lateral movement.

Complexity breeds insecurity

IoT innovation is everywhere, and it’s becoming ever more pervasive. Time was when we had one dumb feature phone in our pocket, a single shared PC in the home, an analog TV and a collection of unconnected home appliances. Today we have at least one smartphone alongside a tablet and maybe some other smart peripherals. In the house there could be a smart TV, home entertainment hub, smart router, connected fridge, smart toaster, IoT kettle, Wi-Fi washer/dryer and so on. Even the garage doors, lightbulbs and burglar alarms in our homes increasingly feature embedded, Internet-connected computing systems. That’s not to mention our automobiles — where everything from vehicle emissions to the on-board entertainment system, and even steering and braking is increasingly controlled by tiny sensors, software and silicon.

This new wave of IoT products might be highly intuitive on the surface, but it’s not as easy to manage as many people think. In fact, even the most tech-savvy consumers would have problems identifying and patching the growing collection of smart products in their homes. What makes matters more complex is that many manufacturers don’t release timely updates for their products, if at all. Worse, many products are designed without security in mind. The firmware is left unsigned, which means that if attackers can reverse engineer the code, they can remotely modify, reflash and reboot the device to execute arbitrary code. And too often lateral movement is allowed, meaning hackers can pivot inside a targeted system until they find what they’re looking for.

Even with an IT administrator on hand in the home, we would struggle to lock down this kind of risk. So what could an attacker actually do by exploiting these firmware ‘design flaws’?

  • The so-called “SYNful Knock” attacks discovered in 2015 showed how nation-state actors likely managed to modify the firmware image of Cisco routers to achieve persistence inside victims’ networks. Compromising such a device at the gateway to the home network could give attackers a perfect opportunity to steal data, monitor communications and install malware on parallel systems.
  • Remote control of a smart device or embedded computer could allow an attacker to turn that device into a bot to launch DDoS, click fraud, information-stealing attacks and much more. One IoT device on its own is about as powerful as a BB gun. But imagine what you could do with a million BBs, all focused on one target. Such botnet armies are well-known in security circles, but traditionally are composed of compromised computers. Yet IoT devices are perfect for this purpose: always on, always internet-connected and with fatally flawed architectures that can be exploited. We know of several cases already where IoT devices have been taken over en masse to build botnets. As far back as January 2014, a global phishing and spam attack was traced back to a compromised network of smart household devices. And cybersecurity firms are predicting things will get worse over the coming year.

As seen with SYNful Knock, it’s not just criminal gangs that have the capabilities and motives to find vulnerabilities in IoT systems. Governments and the defense contractors they employ are actively looking for such weaknesses. Even if they claim this is done for national security reasons, we all know that once a vulnerability has been exploited in a system, it will eventually find its way onto the cybercrime underground forums and websites that crisscross the darknet. Do the intelligence agencies work with major technology vendors to engineer backdoors? We don’t know for certain. But at the very least, their efforts to discover and use such flaws are a security threat to us all.

Time for change

So what do we do about this? I propose the following:

  • Good security is at least half about good management of the product. Yet the consumer technology industry prioritizes the user experience over everything else. If a more secure product requires one more page of the user manual to read or 30 seconds more brain power, it is dismissed. Regulators must understand this. And they must impose a bare minimum standard for security updates — forcing manufacturers to administer these, so devices are not left unpatched for too long.
  • The recently discovered Samsung SmartThings flaws raise some important questions about smart home security. Do these systems really need a mobile app? Does the app need to connect to a central server in the cloud? And, most importantly, is it right to have a smartphone control anything that is critical to you? In many cases the app itself is developed not by the smart device OEM but by a third party over which the OEM might have little control or visibility. OEMs should implement open and interoperable standards in their devices, and home IoT architectures should rely only on a local, secured hub.
  • If you’re going to shift responsibility from the end user to the vendor, you need a secure infrastructure extended into the device itself. As outlined in the prpl Foundation document, we need:
    • Secure boot — ensure IoT systems will only boot up if the first piece of software to execute is cryptographically signed by a trusted entity. The signature must match a public key or certificate that is hard-coded into the device, anchoring the “Root of Trust” in the hardware to make it tamper proof. This would have prevented the attacks on Cisco and others (a sketch follows this list).
    • Hardware virtualization — this enables separation of each software element: a system can be designed that keeps critical components in secure isolation from the rest, preventing lateral movement. This can allow consumers to enhance and modify their products whilst crucially allowing regulators to prohibit and lock down modification of any function deemed too dangerous.
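
To make the secure boot idea concrete, here is a rough Python sketch of the signature check. This is not prpl’s implementation — a real implementation lives in boot ROM, with the public key fused into hardware; here we generate a throwaway key pair so the sketch runs end to end:

```python
# Rough illustration using the third-party 'cryptography' package.
# Key provisioning, boot ROM and anti-rollback logic are all omitted.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.exceptions import InvalidSignature

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ROOT_OF_TRUST = private_key.public_key()   # stands in for the fused key

firmware = b"v1.3.0 firmware image bytes"
signature = private_key.sign(firmware, padding.PKCS1v15(), hashes.SHA256())

def secure_boot(image: bytes, sig: bytes) -> bool:
    """Boot only if the image was signed by the trusted entity."""
    try:
        ROOT_OF_TRUST.verify(sig, image, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

assert secure_boot(firmware, signature)
assert not secure_boot(firmware + b"tampered", signature)
```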

How many smart devices are there in your home? When was the last time you checked the firmware to make sure it was updated? The answer for most people will be “not sure.” Yet these embedded computers are connected to the Internet and each other — in vast numbers — all over the world, and contain fundamental flaws which can be exploited by anyone with the right know-how. This can’t be allowed to continue.

As the Internet of Things and connected embedded computing begin to permeate every part of our lives, we need to come together as an industry and rethink our approach to securing and managing these devices.

Click here to read more about prpl Foundation’s blueprint for a hardware-led approach to IoT security: Security Guidance for Critical Areas of Embedded Computing.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


May 25, 2016  8:43 PM

Wall Street blockchain success shows promise for IoT

Earl Crane Profile: Earl Crane
Blockchain, Internet of Things, iot

The cryptographic wonder behind Bitcoin and other virtual currencies is about to bring its power to the Internet of Things. That technology — called blockchain — enables a non-repudiable transfer of value between individuals or organizations through the use of the currency’s public ledger. This use of a public ledger is valuable beyond just providing a transfer of value. It provides a common, shared, irrefutable mechanism to transfer information, with a public record.

On April 6, 2016, seven firms revealed the results of a multi-month test to use a public ledger to manage the record-keeping and lifecycle of certain financial trades. This is significant not just for market efficiency and resiliency, but as a demonstration of a more broadly successful application of blockchain technology. This is a glimpse into the future of other areas where virtual currency technologies may bear fruit for distributed information exchange.

The application to the Internet of Things seems clear: make use of a distributed ledger to establish, manage, maintain and eventually dispose of information. The domain of IoT is nascent, emergent and frequently unscripted. It (and specifically securing IoT infrastructure) requires a decentralized, distributed mechanism to exchange irrefutable cybersecurity information that is broadly communicable with various recipient technologies.

The catch is that a standard blockchain public ledger technology must first be agreed to by all parties. Though this standardization process could be housed at the National Institute of Standards and Technology (NIST), the market is already ahead of the game: it has identified the need and created multiple information sharing capabilities at companies like Apple, Google and Samsung. To really make this valuable, a clear market standard needs to be established.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


May 25, 2016  8:39 PM

Flash makes IoT analytics descriptive, predictive, prescriptive

Chris Marsh Profile: Chris Marsh
Data Analytics, Flash, flash storage, Internet of Things, iot

The Internet of Things has brought promises of industry transformation through the collection and analysis of vast quantities of data from interrelated computing devices, machines, objects, animals and people. To undertake this grand connectivity scheme, cloud-based services are starting to mature, reducing skepticism that IoT can flourish in the cloud.

Cloud must be the backbone for IoT. Cloud service providers (CSPs) are positioned to operationalize IoT, helping foster innovative, ISV-based solutions and providing the storage, analysis and security of IoT data.

Today’s cloud architects face new challenges to design robust infrastructures that are built to withstand the persistent processing, bandwidth and storage demands of IoT in action. This requires end-to-end solutions that deliver efficient and reliable results, at scale.

The need for flash storage for IoT

This is where the promise of flash enters the equation. Flash memory can be found at many of the critical touch points of IoT. Without flash on the edge, mobile devices and other IoT end points wouldn’t be able to achieve nearly the same computing performance nor store the volumes of data essential to the analytics process. But the impact of flash doesn’t stop there. Flash inserts itself again at edge data processing locations where analog-to-digital conversion is performed. Flash is also a critical fixture in the centralized datacenters that support cloud infrastructures.

In a recently released report focused on how flash memory is driving new cloud-based applications, IDC, in conjunction with SanDisk, discussed the complexities of the IoT ecosystem and in particular how these compute- and data-processing-intensive systems must support storing and analyzing large volumes of data. The report calls out the importance of flash memory in IoT endpoints for “longer life cycles, temperature variations and the support [of] field programmability.” To provide maximum value, “flash memory must be industrial grade, reliable and secure, and provide as much endurance as possible” and “afford immediate insight and control of IoT devices and processes.”

IoT data analytics require centralized datacenter infrastructures, such as NoSQL databases for persistent data, data lakes for archival IoT data and event stream processing. These infrastructures will be configured in distributed, cluster-based application architectures where compute and storage resources may need to scale independently; a cloud service provider stands to gain CAPEX and OPEX efficiency by taking a disaggregated rack approach for IoT services.

Flash has been identified as best at the performance tier of these datacenters where it can help deliver analytics on the volumes of IoT-generated data. Flash storage is necessary when accounting for the massive concurrent small file reads on a NoSQL database persistent store, along with the archival storage of historical IoT data in a data lake. Flash storage helps enable cloud-based IoT services to perform at scale and still meet the performance requirements of NoSQL read and write I/O along with timely Hadoop batch job processing.

What’s next?

Undoubtedly, the IoT space will become even noisier in the years ahead. In its 2015 third platform predictions, IDC estimates that there will be 200,000 new IoT applications and services, and 30 billion devices by 2020. As more and more “things” are brought online, businesses will continue to find unique ways to capitalize on new service models and benefit from real-time customer insights.

Though storage and analytics will, in some cases, move more to the edge devices when necessary for data to be closer to the point of collection, the deeper analysis and computation involved with more complex IoT applications will continue to task CSPs in meaningful ways.

The game’s afoot for cloud architects. Those who want to stay competitive in this burgeoning and combative market will need to provide customers with the ability to use cloud infrastructure for descriptive, predictive and prescriptive IoT analytics. Otherwise, they’re at risk of being sidelined.

This article expresses the views of the author and not necessarily that of his/her employer.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


May 24, 2016  12:22 PM

Blockchain: Defender of the ‘Internet of Things’

Tiana Laurence Profile: Tiana Laurence
Blockchain, Internet of Things, iot, iot security

The “Internet of Things” is here and has more data on you than you may realize. The deep embedding of connected devices into our lives, bodies, homes and almost everything else we touch is a significant cultural and technological shift, one that has brought efficiency, flexibility and convenience to our day-to-day lives. That connectivity is an incredible thing, but one major question remains within the burgeoning IoT industry: how do companies secure the data collected on you?

Consider the information at stake. Your Wi-Fi-enabled security cameras can give real-time information about when and if you’re home. Same with your internet-connected alarm system. Even a smart TV has valuable information; it’s connected to your Netflix or Amazon account, and any of that account information can lead to credit card or identity details. Of course, the mother of all identity concerns comes from the smartphone: it’s a centralized resource of account information that can connect with almost all smart devices, your smart home and even your car — something that becomes even more vulnerable as the age of self-driving cars approaches.

Recently, a CBS 60 Minutes story demonstrated how much a hacker can do with nothing more than a person’s phone number.

It’s clear that the IoT age presents security concerns in ways that seemed unthinkable just a decade ago. The solution, though, may stem from one of the most unique innovations of the digital era: the blockchain.

Originally developed as part of the Bitcoin digital currency platform, the open blockchain model has inherent transparency and permanence. These are essential to creating a secure means of direct authentication between smart devices. The model currently used for Bitcoin can be propagated into other applications — any industry that requires archival integrity can adopt the blockchain. For the IoT industry, a blockchain can be created to manage device identity, preventing spoofing attacks in which a malicious party impersonates another device to steal data or cause other mayhem. Blockchain identity chains will enable two or more devices to communicate directly without going through a third-party intermediary, in effect making spoofing cost-prohibitive.

Regarding this type of authentication, the model allows users to synchronize multiple devices against a single system of authority that is distributed and censorship-resistant. This would apply to an open blockchain, not a permissioned or private one. The identity chain created for each device is a permanent record. Through cryptography, only validated devices receive access. As new devices are added, their identity records become part of the blockchain for permanent reference. Any change to a device configuration will be registered and authenticated in the context of the blockchain validation model, ensuring that any falsified records can be caught and ignored.
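
As a toy illustration of an identity chain — no consensus, networking or signatures here, purely to show how hash-linking makes records tamper-evident:

```python
# Illustrative sketch only, not a real blockchain implementation.
import hashlib, json, time

def _digest(record: dict) -> str:
    return hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()

class IdentityChain:
    def __init__(self, device_id: str):
        self.blocks = [{"device": device_id, "event": "registered",
                        "ts": time.time(), "prev": "0" * 64}]

    def append(self, event: str):
        self.blocks.append({"device": self.blocks[0]["device"],
                            "event": event, "ts": time.time(),
                            "prev": _digest(self.blocks[-1])})

    def verify(self) -> bool:
        """Any tampering with an earlier record breaks every later link."""
        return all(b["prev"] == _digest(self.blocks[i])
                   for i, b in enumerate(self.blocks[1:]))

lock = IdentityChain("front-door-lock-0042")
lock.append("config-change: remote-unlock enabled")
assert lock.verify()
```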

This is a new technology and will take some time to move from testing into our everyday lives. Many industry leaders and governments will begin testing this year. Beyond whether or not the tech works, many stakeholders will need to get on board. An industry consortium that agrees on a blockchain design would be helpful. Having all IoT devices write to the same source, or having systems that are interoperable, will be critical. It’s not necessary that every IoT device manufacturer or software developer write data to the same blockchain; instead, agreement could go further upstream, between OEM manufacturers of the essential components used in the authentication process flow.

In addition to baseline authentication (device model, serial number, etc.), the blockchain can create records of any data a device generates — for example, a smart front door lock can keep a transaction log of video activation whenever someone exits or enters the home or unlocks it remotely. Each item in the history creates another link in the device’s identity chain, providing further data to use for authentication matching. If someone with malicious intent were to try to change the protocol of the door lock without the correct credentials, or if the configuration changed unexpectedly, the blockchain validation model would not allow the door lock to be changed.

An important component of the blockchain’s effectiveness comes from its standing as a public record, with user nodes all auditing the same record. Of course, with a public record, there will always be privacy concerns over sensitive data. However, the blockchain protects against this through the use of one-way hashes. A cryptographic hash function is a mathematical algorithm that maps data of arbitrary size to a fixed-size bit string (a “hash”) and is designed to be one-way and infeasible to invert. This means it is practically impossible to recover the source data from the hash alone.
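
Python’s standard library makes the one-way property easy to see (the strings below are invented):

```python
import hashlib

# The digest reveals nothing usable about the input, and even a
# one-character change produces a completely different digest.
print(hashlib.sha256(b"serial=SN-1029, owner=alice").hexdigest())
print(hashlib.sha256(b"serial=SN-1029, owner=alicf").hexdigest())
# Two unrelated-looking 64-hex-character strings; inverting either
# back to the source data is computationally infeasible.
```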

The Internet of Things is still a new industry, one that will become more pervasive and significant as our technological innovations turn science fiction into our everyday lives. At this early stage, it’s critical to establish a scalable solution that will push the industry forward as the volume of connected devices grows exponentially. The blockchain represents a unique type of solution, one that is established as a secure means of protecting financial data but flexible enough to be applied to any high-stakes record keeping. With the IoT age demonstrating the ability to connect just about every aspect of a person’s life, it truly doesn’t get any more high stakes than that.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


May 19, 2016  10:52 AM

Smarter IoT applications incorporate machine learning

Deepak Puri Profile: Deepak Puri
AWS, Azure, IBM Watson, iot, Machine learning, prototype, Splunk

Data overload! Many IoT applications generate so much data that it isn’t humanly possible to analyze and act upon it in time.

Data scientists analyze mountains of data to identify patterns and define rules for how an IoT system should respond. Things change, though, and new factors emerge that influence what the right action is. How do you make sure that your IoT system evolves in a changing environment and picks the optimal response?

Machine learning, which Arthur Samuel described as the field of study that gives computers “the ability to learn without being explicitly programmed,” may be the answer: algorithms that can learn from and make predictions on data.

Defining the rule for a simple IoT application — such as turning off a motor when it’s too hot — is fairly straightforward. Identifying correlations between dozens of sensor inputs and external factors is much harder. Consider a use case where you have to decide when to dispatch a truck to replenish vending machines based on sensor data from the vending machines reporting sales, inventory levels, the local weather forecast, local events and promotional advertising campaigns. Guess wrong or send the wrong supplies and you lose sales by not having enough of the right supplies for the vending machines to sell.

Most of the leading IoT platforms (including Azure, IBM Watson, Splunk, AWS and Google) now offer machine learning capabilities. This enables the IoT system to analyze sensor data, look for correlations and determine the best response to take. The system continuously checks to see how well its predictions are working and keeps refining its own algorithm. There are two major types of machine learning:

  1. Supervised learning refers to developing an algorithm based on a set of labeled examples. For instance, a simple use case may be a record of sales by product per day; the algorithm learns to predict how much of each product is likely to be sold per day (see the sketch after this list). This information helps determine when to send the truck to replenish that vending machine.
  2. Unsupervised learning does not provide the system with labels (such as sales per day) to analyze. Instead it presents all the data to analyze, which lets the system identify correlations that are not so obvious; for example, price discounts, local events and weather all influence sales at the vending machine and need to be taken into account to determine the replenishment schedule.
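
As a hedged sketch of the supervised case, here scikit-learn — one of many possible libraries; the data, features and threshold are all invented — learns daily vending machine sales from a few inputs:

```python
# Illustrative only: a toy supervised model for the vending machine example.
from sklearn.linear_model import LinearRegression

# Each row: [temperature_c, promotion_running, local_event]
X = [[18, 0, 0], [25, 1, 0], [30, 1, 1], [22, 0, 1], [15, 0, 0]]
y = [120, 260, 400, 210, 95]          # labels: units sold that day

model = LinearRegression().fit(X, y)

# Predict tomorrow's sales to decide whether to dispatch the truck.
predicted = model.predict([[28, 1, 0]])[0]
if predicted > 250:                    # invented replenishment threshold
    print(f"Dispatch truck: expecting ~{predicted:.0f} units sold")
```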

Many firms start off by manually defining the business rules for their IoT system to follow. They then start adding machine learning based rules as they collect more data and information on other external influencing factors.

If you think applying machine learning to IoT is advanced, check out Kaytranada’s wonderful new video to see what machines might eventually learn to do one day!

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

