IoT Agenda


October 11, 2016  11:43 AM

Understanding the evolution of the internet of things

Steven Woo Profile: Steven Woo
connected, Data, Internet of Things, iot, Moore's Law, privacy, Security

The internet of things has certainly come a long way since Kevin Ashton coined the term way back in 1999. Quite a lot has been written about IoT over the past 17 years, with many analysts and journalists correctly predicting an almost exponential increase in connected devices. While most people focus these days on where IoT is going, I think it’s just as important to talk about the past in order to understand the future.

Just a few decades ago, most Americans were “connected” to the world outside their homes in very limited ways, primarily with landline telephones, radios and televisions. Aside from the telephone, home radios and televisions were a one-way experience. You could watch or listen, but not talk or interact with a radio or TV. All of this began to slowly change with the advent of home computers, such as those made by Atari and Commodore in the 1980s and later the IBM PC. Suddenly, entire families could interact with a machine in a more meaningful way, and more importantly, the outside world as well.

The notion of connectivity was still very much in its infancy, as slow dial-up speeds and sparse infrastructure hampered online activity. But aided by Moore’s Law, this was technology that was available to the masses and becoming more affordable every year. During this time, however, concerns over online security and privacy began to work their way into the public consciousness, with movies like WarGames illustrating the pitfalls of this new digital Wild West. By the late 1990s, landlines had given way to mobile phones, DSL had replaced dial-up, and clunky desktops were fast being supplanted by sleek laptops and, later, by tablets. Online commerce boomed (along with fraud) as most people became connected to the internet one way or another.

And then the inevitable happened. Devices which were not originally designed for internet connectivity began to go online, such as vehicles, watches, refrigerators, washing machines and medical monitoring systems. And so, in many ways, we are still grappling with the rapid changes and technological advancements that began in the middle of the 20th century. For instance, there were no privacy or security concerns over a watch worn in 1932. Although a watch could certainly be physically stolen, no one really had to worry about the watch disclosing personal information that could enable identity theft or fraud. Similarly, aside from its mechanical components, the modern vehicle can be thought of as a network of networks which can be attacked, hacked and even driven off the road. Obviously this wasn’t an issue for a 1972 Volkswagen Beetle.

As the internet of things evolves, the industry must develop new ways to tackle important issues like privacy, security and how to deal with the enormous amount of data generated by connected devices. In addition to developing technology to address these issues is the added difficulty of developing solutions that fit the business needs of the markets being served. The internet of things crosses boundaries between enterprises and consumers, further complicating the technical and business needs of solutions that will be accepted in the market. In future posts, I’ll be exploring some of these ideas and key challenges further.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

October 10, 2016  2:47 PM

IoT and software monetization: Getting started

Eric Free Profile: Eric Free
Business model, Data monetization, Internet of Things, iot, manufacturers, Manufacturing, monetisation, Supply chain

It’s been a long time since the manufacturing industry has seen something as powerfully transformative as the internet of things. According to Gartner, by 2020 there will be about 20.8 billion IoT devices in circulation. IoT is catching on fast and businesses all want a piece of the pie. However, manufacturers and IoT companies need to shift their business models in order to truly reap the benefits of the IoT craze. This involves adopting a more software-centric approach, which means manufacturers must redesign products from fixed-function, disconnected devices to flexible, seamlessly connected systems.

What is a software-centric business model?

When it comes down to it, simply selling more devices will not produce the profit spikes companies are looking for when entering the internet of things market. Manufacturers must start thinking and acting like software companies to drive significant growth. A software-centric business model is all about leveraging the software applications built into the products to reduce manufacturing costs, increase product innovation and capture new revenue streams. Device manufacturers should adopt licensing and entitlement management solutions to control the product functionality, features and capacities of these internet-connected devices.

For example, say a telecommunications company develops a connected video surveillance camera with 10 particular features. Using software licensing, the company only needs to manufacture one model: it can turn on features one, two and three and sell that as the basic model, then turn on features four, five and six and sell that as the premium model, and so forth. Before, this would have required the company to maintain multiple manufacturing supply chains, but with IoT and software licensing, it can be reduced to a single supply chain.
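
A minimal sketch of what such entitlement-driven feature gating could look like (a toy model in Python; the feature names and tiers are hypothetical, not any particular vendor’s licensing product):

    # Hypothetical sketch: one camera SKU, multiple market tiers via entitlements.
    CAMERA_FEATURES = ["motion_detect", "night_vision", "local_storage",
                       "cloud_storage", "face_detect", "activity_zones",
                       "multi_camera", "api_access", "4k_stream", "analytics"]

    ENTITLEMENTS = {  # tier -> features the license server unlocks
        "basic":   set(CAMERA_FEATURES[:3]),   # features one, two and three
        "premium": set(CAMERA_FEATURES[:6]),   # adds four, five and six
        "pro":     set(CAMERA_FEATURES),       # everything enabled
    }

    class Camera:
        def __init__(self, license_tier):
            self.enabled = ENTITLEMENTS[license_tier]

        def has_feature(self, name):
            return name in self.enabled

    # The same physical device ships as two different products:
    print(Camera("basic").has_feature("face_detect"))    # False
    print(Camera("premium").has_feature("face_detect"))  # True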

This will help IoT companies better compete in new and current markets, while speeding up time to market for new products, feature combinations and product enhancements.

How does software monetization improve the bottom line?

Using automated licensing and entitlement management systems to monetize IoT devices can improve profits in a number of ways. For starters, it can reduce manufacturing and supply chain costs. As noted in the example above, companies cut down on the number of models they need to produce by controlling features, capacity, configurations and throughput via software licensing and entitlement management. This flexibility in manufacturing means producers, distributors and resellers need less inventory, further streamlining the supply chain.

Software monetization also lets device makers uncover new markets and revenue streams. Manufacturers can offer product enhancements through software updates and charge for the enhancement based on a software maintenance and update model. This opens up opportunities to charge for new levels of software support while also creating a better customer experience. Since software allows for product flexibility, IoT companies can easily and quickly package and price their devices to address new, emerging or niche markets that would have previously been impractical to target due to costs. IoT devices also have the ability to gather massive amounts of data, which can be analyzed and used to identify further opportunities.

Lastly, software monetization can extend product life. Since device functionality is managed and controlled using software, rather than being hard-coded into the device’s physical components, it’s easy to upgrade and enhance products using software commands via the internet. This allows customers to derive more value from devices over time with minimal disruption.

Business considerations when shifting models

When moving to a software-centric business model, IoT device manufacturers should take several factors into consideration. First and foremost, there needs to be business buy-in for the transformation. This needs to take place across the business, not just within engineering and product management. In order to get buy-in across the company, there should be an understanding of the traditional software licensing methodology and its proven approaches. Only then can employees envision how this can be leveraged in the IoT industry.

Companies must also take steps to determine the appropriate software license compliance policies and enforcement mechanisms among the spectrum of available options, and anticipate the flexibility needed to make changes as business conditions change. Additionally, there needs to be an understanding of the differences between delivering hardware and digital goods, and of the software value lifecycle. The software value lifecycle is vastly different from that of one-off hardware transactions, as it’s an ongoing process and is increasingly subscription based.

IoT companies then need to create the business processes needed to support the value lifecycle. How can software be leveraged to continuously offer new and improved features and how can the business support continuous innovation? Companies should continuously fine-tune their strategies for product development, delivery and execution to optimize revenue and margins.

By leveraging software licensing and entitlement management, manufacturers can take advantage of opportunities for faster innovation, improved and more personalized offerings and new revenue streams. A software-centric business model allows for the flexibility and agility needed to thrive in the fast-moving IoT market.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 10, 2016  2:08 PM

The internet of things is in your future – the law says so!

Chris Witeck Profile: Chris Witeck
devices, Internet of Things, iot, Moore's Law, Sensors

I wrote earlier about how, in many ways, the impact of IoT is being underestimated. Yet more and more you see opinions that perhaps IoT is in reality overhyped and growth is stalling. Does the lack of standards or growing security concerns mean that IoT may just not meet many of these grandiose expectations? Or, to borrow terms from the Gartner Hype Cycle for emerging technologies report, are we moving through the natural evolution of technology, past the peak of inflated expectations, and diving right into the trough of disillusionment? Are we waiting for the slope of enlightenment to hit, where everyone goes back to having those IoT aha! moments?

I agree that with any new technology, the initial hype often doesn’t match the substance behind it. As I have said before, I believe much of the initial hype was misplaced. It was myopically focused on devices and failed to consider the interaction between devices, data and applications, and how we integrate all of these devices and systems together. So if there was too much hype on devices, how can I now claim that devices will be what really drives IoT forward? Isn’t that a contradiction? Not really. The growth in the number of devices, combined with the introduction of new device form factors, will serve as a forcing function for organizations to improve how they manage the data flow between devices and applications. The whole notion of how we define an application will change — moving away from a model where information is presented to a user from a cloud system or a back-end system, towards one defined by the interaction of data, devices, apps and users. And this is not just my opinion; this belief is backed by the law! More specifically, Moore’s Law and Metcalfe’s Law.

Moore’s Law and IoT

I’m sure most people know what Moore’s Law is: it states that the number of transistors in a dense integrated circuit doubles approximately every two years. Moore’s Law is basically saying that, on a regular basis, devices are getting more powerful, smaller and cheaper because the brains within them are getting more powerful, smaller and cheaper. Anyone old enough to remember what a PC looked like 30 years ago, compared to what a smartphone is capable of today, understands the point. It is pretty amazing to me that this law, postulated in 1965, is still in effect today, although many predict we are fast approaching the end of Moore’s Law (we will get to that later).

This law is easy enough to understand; however, I don’t think most people appreciate just how revolutionary the impact of Moore’s Law has been. The Wikipedia entry on Moore’s Law presents the graph below, depicting the law from its early days to recent history. What this graph illustrates is the amazing consistency of Moore’s Law, which for the most part has stayed true to the original cadence predicted all of those decades ago.

[Chart: microprocessor transistor counts over time, plotted on a logarithmic scale, illustrating Moore’s Law. Source: Wikipedia]

However, this graph doesn’t show the true exponential growth curve of the law. Instead, it presents the growth on a logarithmic vertical scale, which makes it easier to show all of the data points. If you were to plot the same growth on a linear scale, starting in 1971 and culminating in 2011 (where the above chart ends), you would end up with something that looks more like the chart below. Take note of how much of the growth has occurred in the last ten years of the chart. The data is the same as in the Wikipedia chart; it is just expressed on a scale that makes the exponential growth visible.

Moore's Law exponential

The end of Moore’s Law is the subject of much debate. Some forecasts have it hitting its peak in 2021. That’s not to say that innovation won’t continue; we just won’t necessarily be beholden to the rule that transistor capacity will continue to grow at the same rate it has for the previous 50 years.

But what if it does continue at the same growth, at least for a few more years? What happens to our chart above if we extend the timeline from 2011 to 2021? I have plotted that growth below, following the same exponential growth curve. In the updated chart, you can see that the 2011 spot on the curve is pushed towards the bottom with tremendous growth in processing power just from 2011 to 2016. Has that growth held? There have been chips announced this year with over 12 billion transistors, so the answer so far is yes. If we continue Moore’s Law from 2016 to 2021 on the same growth curve, we will see more growth in processing power in the next 5 years than we saw in the previous 45 years. This leaves plenty of room for innovation in the coming years.

Moore's Law exponential 2

How does IoT relate to Moore’s Law? With the growth of processing power, is there a direct impact on the growth of IoT devices? Some indicators from recent years point us towards the answer — yes. In 2015, 97 million wearable devices were in use and 604 million M2M connections were made. As the guts of computing devices have become more powerful, the flexibility to create different, cheaper and more pervasive devices with form factors for new and interesting use cases has arrived, and device makers are taking advantage of it.

Metcalfe’s Law and IoT

With Moore’s Law leading to all kinds of devices in use, what is the impact of all of these devices? Let’s measure that impact in terms of another law: Metcalfe’s Law. For those not familiar with it, Metcalfe’s Law states that the value of a telecommunications network is proportional to the square of the number of users connected to the system. For example, if you only have two phones in a network, you can only make one connection. If you have 5 phones, you can make 10 connections. If you have 12 phones, you can make 66 connections. As you add more users and devices, the number of connections continues to scale out.
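
The arithmetic behind those examples is the pairwise-connection formula n(n-1)/2:

    def connections(n):
        # Unique pairwise links among n endpoints in a network
        return n * (n - 1) // 2

    for n in (2, 5, 12):
        print(n, "phones ->", connections(n), "connections")  # 1, 10, 66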

This law came about in 1993, in the somewhat early days of cellular networks. This law has evolved as the internet has evolved, with recent examples illustrating how Metcalfe’s Law can help establish the value of social networks — the more users using a social network, the better the experience becomes for everyone using that network.

How does IoT relate to Metcalfe’s Law? The impact relates back to the previous discussion, with Moore’s Law leading to new kinds of devices, form factors and use cases. Many of the previous examples for Metcalfe’s Law focused on the connections between the users attached to a device on a network. With IoT, networks contain not just devices associated with a user, but also devices associated with other devices. This opens new ways to derive value from the network. The more interactions I have with data, devices and apps within my personal network, the more value I can derive from the context captured by those interactions. A business can then capture more inherent value from the data generated by these interactions by casting a wider data net. The value of big data and analytics in an IoT-influenced network comes from mining these interactions for information that improves an organization’s security posture, helps it make more informed decisions regarding its customers’ behaviors, or helps make the business more effective and efficient.

Summary

Moore’s Law and Metcalfe’s Law planted the seeds for the internet of things decades ago, illustrating and helping to predict IoT’s potential. These laws do more than simply set the stage for the billions of devices coming with IoT; they also set the stage for digital transformation in general. This brings us back full circle to my earlier notion that the definition of an application is changing. Applications will increasingly be defined by the interactions we just described. In essence, Moore’s Law and Metcalfe’s Law directly impact how we think of devices, applications and networks, and also directly impact how digital transformation will reshape the enterprise.

Perhaps it is time then for a new law, the Law of Digital Workspaces? For this law I postulate that business user productivity within a digital workspace is driven by the number of devices, sensors, things and apps that user is connected to. What do you think? Share your comments below. Minority Report, here we come!

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 7, 2016  10:51 AM

Deep data: The solution to the IoT information explosion

Badri Raghavan Profile: Badri Raghavan
Data Management, Data monetization, Internet of Things, iot

Today, there are 15 billion connected devices worldwide, and with those connected devices come massive amounts of data. Businesses are leveraging this wealth of data to gain insights into prospects and customers to steer engagement strategies, identify pain points in marketing and sales performance, and unleash intelligence about where organizations can best allocate resources. In essence, data is the key to accelerating business performance and gaining competitive advantage.

The problem with data, however, is that there is now so much of it — with more becoming accessible every day — that businesses are struggling to make sense of it all and use it to its full potential. And with the internet of things market expected to grow by leaps and bounds in the next few years (the number of internet-connected devices is projected to reach 50 to 200 billion by 2020), businesses need to quickly figure out how to manage and optimize their ever-increasing amounts of data. And while data storage is increasingly cheap, the rate of growth is such that the cost of storing, processing, auditing, securing and exploiting the data is growing much faster than any savings accruing from falling storage costs.

To make the best use of their data, businesses need to understand the most effective way to leverage the data while keeping costs and complexities under control. When it comes to monetizing data, less is often more. Big data in and of itself will not drive businesses forward; rather, the key is what we call “deep data.” Deep data is an approach through which businesses identify and aggregate the most meaningful data streams and model them in a way that delivers insights into their most pressing business challenges. It’s based on the notion that rather than hoarding irrelevant or less useful data, businesses should focus on the data streams that are rich with valuable information.

To determine which data streams will provide the most value, companies must start by focusing on the business challenge they are trying to solve by leveraging data. From there, they can apply advanced analytics to small but information-rich data versus sifting through piles of information, which can often feel like looking for a needle in a haystack.
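
As a toy illustration of that winnowing step, one might score candidate streams by how strongly they relate to the business metric in question and keep only those that clear a bar (a deliberately simple, correlation-based sketch with synthetic data; real deployments would use richer relevance measures):

    import numpy as np

    rng = np.random.default_rng(0)
    target = rng.normal(size=500)                # the business metric to explain
    streams = {                                  # hypothetical candidate streams
        "meter_reading": 0.8 * target + 0.2 * rng.normal(size=500),
        "ambient_temp":  0.3 * target + rng.normal(size=500),
        "social_buzz":   rng.normal(size=500),   # high volume, no signal
    }

    # "Deep data": keep only streams rich with information about the target.
    deep = {name: s for name, s in streams.items()
            if abs(np.corrcoef(s, target)[0, 1]) > 0.5}
    print(sorted(deep))  # ['meter_reading'] -- the information-rich stream survives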

In the energy space, we have found the deep data framework to be hugely successful. Looking at data from the utility meter, arguably the oldest and most prevalent IoT device around, we can identify ways in which utility customers have historically used energy. That intelligence, coupled with data pertaining to weather and geographic factors, enables us to draw conclusions about how those customers may use energy in the future. Armed with these insights, utilities are empowered to prepare for shifting energy usage patterns, educate consumers about their energy spend and engage customers with offerings that meet their individual energy needs.

As more and more “things” are connected to the internet, businesses will find that the deep data approach will not only alleviate stress related to the explosive growth of data, but deliver welcome insights that can help decision makers choose the best path forward. That’s something we can all feel good about.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 7, 2016  10:48 AM

Monetizing customer usage data

Michael Beamer Profile: Michael Beamer
Customer data, Data, Internet of Things, iot

The internet of things provides businesses with an incredible opportunity when it comes to analyzing customer data. With the right technological infrastructures, businesses can now capture customer usage data at levels previously unimaginable, helping to inform new market strategies and product innovation.

While being able to collect immense quantities of customer data is proven to help companies facilitate the buyer journey and improve customer experiences, it shouldn’t stop there. Instead, companies should find ways to monetize it as well.

Customer data is among the most important assets of any enterprise, if not the most important. The best customers will be identified by data; the customer experience will be enhanced by data; and new products and services will be developed based on data.

That probably explains why industry analysts have labeled enterprise data everything from “the new cash cow” to “the new gold rush.” IoT companies, in particular, are constantly under pressure to turn usage data into new sources of revenue, and that pressure is only poised to intensify in the years ahead.

But how do businesses transition from simply collecting data to monetizing it too?

It starts by looking at usage trends. With usage trends, businesses can identify the best customers to target for cross-selling of additional products or services. They can additionally predict, with a high degree of accuracy, which customers are likely to accept an upsell to a higher level of service.

Usage data also can reveal customers with patterns of low usage who may be unsatisfied or who may not be fully utilizing a particular product. By making use of this type of data, companies can reduce customer churn and ultimately improve the average lifetime value of their customers.

In order to fully embrace the value of data at the enterprise level, companies need a data monetization platform that can benefit the entire organization and marry front- and back-office infrastructures. The system should be able to collect, monitor, measure and record real-time digital customer activities, such as application usage or consumption of power, data, storage and bandwidth. Businesses should also be able to easily bill customers based on those activities.

In IoT, this is particularly important. The amount of granular data generated by IoT deployments creates a demand for dynamic billing solutions that can turn connected products directly into profit from the get-go.

As businesses increasingly compete on their ability to leverage customer data, here are some factors to keep in mind when it comes to deploying a data monetization engine:

  • Flexibility and scalability: As customer preferences change, so will the way businesses want to use and monetize their customer data. Having a platform that’s flexible and scalable with fluctuating customer and market demands is key to monetizing changing quantities of customer data.
  • Usage preprocessing: As the amount of usable data explodes, it can be easy to overwhelm back-office software systems and bog down servers with data that is not needed to monetize products and services. That’s why monetization platforms should use a usage rating engine to preprocess usage data externally, sending only the relevant data to a company’s system of record.
  • Rules-based rating: The usage rating engine should use rules that can be configured easily to control entitlements, services and containers. Being able to leverage a powerful, agile rules-based rating engine enables businesses to streamline billing operations and keep costs under control (see the sketch after this list).
  • High-value real-time usage: As stated previously, collecting usage data at a high degree of quality and granular detail is key to success. The right monetization engine will leverage this real-time usage data to automate critical events within a bill cycle — that could include anything from sending upgrade offers to charging for overages.
  • Service identifier mapping: Tracking the physical components assigned to or used by a customer is a crucial step in leveraging usage data. Cloud application and infrastructure providers gain particularly good visibility into customer usage when multiple device identifiers can be applied to client accounts and used to customize billing.
  • Flexible pricing paradigms: Most ERP systems are limited by the pricing paradigms they can support, but a multidimensional data monetization platform has almost unlimited flexibility in packaging and pricing. Building rules logic in a monetization platform should be as easy as building a financial model with formulas in a spreadsheet.
  • Product bundling: Many systems are limited by the product bundles and packaging strategies they support. This inevitably leads to product catalog creep, sometimes exploding to hundreds of thousands of needless variations created because of system limitations. The right monetization solution should allow businesses to bundle products or services together and create sub-allocations for cost basis. The monetization platform should also be able to manage discounts and pricing tiers, accurately and at scale.
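
To make the rating idea concrete, here is a minimal, hypothetical sketch of a rules-based rating engine: rules are plain, reconfigurable data; raw usage is preprocessed into charges; and only the rated summary would be passed to the system of record:

    # Hypothetical rules-based rating: rules are data, so they are easy to change.
    RATING_RULES = [
        {"service": "storage_gb", "included": 100,   "overage_price": 0.10},
        {"service": "api_calls",  "included": 10000, "overage_price": 0.001},
    ]

    def rate(usage):
        """Turn raw usage into charges; only this summary goes downstream."""
        charges = {}
        for rule in RATING_RULES:
            used = usage.get(rule["service"], 0)
            overage = max(0, used - rule["included"])
            charges[rule["service"]] = round(overage * rule["overage_price"], 2)
        return charges

    print(rate({"storage_gb": 180, "api_calls": 9000}))
    # {'storage_gb': 8.0, 'api_calls': 0.0} -- only storage exceeded its entitlement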

As the internet of things continues to grow and reach new heights, the need to analyze customer data and bill for new products quickly and easily is more important than ever before. To do this, companies should leverage the customer data they’re already gathering — or should be gathering — and implement a flexible, agile and scalable data monetization platform that can support their broader business strategy. Then, they’ll be empowered to turn their customer data directly into an enterprise-level cash cow.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 6, 2016  2:02 PM

Eight commandments of IoT integration

Aaron Allsbrook Profile: Aaron Allsbrook
Application integration, cloud, Cloud integration, Digitalisation, Embedded software, Enterprise, enterprise information integration, Enterprise IT, Enterprise Security, Enterprise software, IIoT, Industrial IT, Integration, Internet of Things, iot, iot security

Launching your first enterprise integrated IoT solution is a monumental moment for any organization. The initiative represents a first step towards an unprecedented level of automated industrial and integrated business processes. We can now create tailored user experiences based on real-time information leveraging the emerging technologies of edge processing and machine learning, along with enhanced strategies for product design and IT digitalization.

IoT integration required

No matter how you run your first IoT project (relying on a service provider, leveraging a software platform or hacking away on your own), the key enterprise success criteria will depend on the ability for your IoT application to integrate with existing business systems. Like a mobile app that doesn’t engage users with push notifications or an e-commerce site that doesn’t have a shopping cart, IoT cannot deliver its promises without integrating with existing and legacy systems.

So how do you start on this IoT integration challenge?

Here are the eight commandments of IoT integration we use to get our hands around the problem and build something amazing!

IoT integration: Thou shalt…

I. Secure everything
It’s tempting to build IoT rapidly and start moving data between systems. A beta demo showing a connected factory floor immediately sends your boss, marketing team and executives into an excited frenzy over a Jetsons-like future. The last thing anyone wants to think about is the consequences of those devices being compromised, remotely monitored or even captured as part of a botnet. Sadly, this outcome is very likely, and it is proven to be happening all around us. When doing your IoT integrations, always ask yourself:

  • Do I have authenticated trust with this device or user?
  • Should the information being transmitted be encrypted?
  • Should this trusted device have authority to do what it’s doing?
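
In practice, those three questions translate into gate checks like the following sketch, which vets every inbound device message for authentication, transport encryption and per-device authorization (the device IDs, keys and policy store are hypothetical):

    import hmac

    DEVICE_KEYS = {"sensor-42": b"provisioned-device-secret"}    # from provisioning
    DEVICE_PERMISSIONS = {"sensor-42": {"publish:temperature"}}  # least privilege

    def accept(device_id, action, payload, signature, transport_encrypted):
        key = DEVICE_KEYS.get(device_id)
        if key is None:                                   # 1. authenticated trust?
            return False
        expected = hmac.new(key, payload, "sha256").hexdigest()
        if not hmac.compare_digest(expected, signature):
            return False
        if not transport_encrypted:                       # 2. encrypted in transit?
            return False
        return action in DEVICE_PERMISSIONS.get(device_id, set())  # 3. authorized?

    # Example: a message from an unknown device is rejected outright.
    print(accept("sensor-99", "publish:temperature", b"21.5", "sig", True))  # False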

II. Pick up where the existing system left off
With the mobile app wave, many companies spent years getting mobile-ready before attempting to build an app. This was a massive inhibitor, and it falsely implied that old systems needed to be replaced. Instead, recognize that the vast majority of existing systems already have integration points built into them. There is a method for accessing them even if the method isn’t currently in “tech vogue.” SOAP is still a viable integration option even if Reddit dismisses the idea. Whether you are integrating with a mainframe database, a Java archive, a massive service bus or a DLL, understanding the architected communication protocol for your legacy systems is an important place to start.

III. Speak standards
Standards-based protocols aren’t a magic bullet, but they represent an advancement when it comes to giving yourself a leg up on integrating and future-proofing your enterprise. Choosing MQTT or REST as your IoT standard may still impose challenges as you communicate with MTConnect or OPC-UA, but by picking a standard, you make a whole community of libraries and resources available. Ultimately, the benefit of selecting an open standard for your solution is the ability to leverage the vast number of existing open projects.

IV. Adapt and advance
The need to use legacy protocols alongside open standards creates competing software designs. To overcome this conflict, an adapter layer is critical. This adapter will perform the task of relaying data from one protocol to another as efficiently as possible. This means the adapter shouldn’t perform any complex logic or refactoring. Instead, it should be as lightweight as possible and simply act as a cross channel for communication. There are connector software vendors that can help with this task, especially if your solution is cloud based. Alternatively, if you have a subject matter expert, the adapter can quickly be built in-house.
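
As an illustration, a thin adapter relaying MQTT telemetry into a legacy REST endpoint can be little more than the sketch below (the broker, topic and URL are hypothetical; it assumes the paho-mqtt and requests libraries):

    import paho.mqtt.client as mqtt
    import requests

    REST_ENDPOINT = "https://erp.example.com/api/telemetry"  # hypothetical legacy API

    def on_message(client, userdata, msg):
        # Relay only: no business logic, no refactoring -- just cross the channel.
        requests.post(REST_ENDPOINT,
                      json={"topic": msg.topic, "payload": msg.payload.decode()},
                      timeout=5)

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("broker.example.com", 1883)
    client.subscribe("factory/+/telemetry")
    client.loop_forever()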

V. Not duplicate data
Over the years, many enterprise integrations have been built by simply copying the database nightly. DO NOT MAKE COPIES OF YOUR DATA TABLES. Excuses for this terrible practice are based around security, performance or a general lack of skills to do the actual integration. As far as IoT integration goes, this is one of the worst things you can do, for several reasons: it immediately creates questions about which system is the source of truth, it doubles the work effort to constantly sync conflicting rows, and it creates developer confusion. When you’re tempted to copy data, it’s time to review your architecture design and the technical debt you will incur.

VI. Not duplicate logic
Another temptation every integration author feels is the desire to duplicate business logic. For the enterprise this is terribly risky, as business rules become ingrained in many different code bases and impossible to understand coherently. Many companies struggle for years to understand how their own processes work and how they can modify them safely going forward. Protect yourself now and approach your IoT with a reusable API focus. A single point of interface into your systems of record will let you be sure you know where and what is running your business.

VII. Cache where you can
The biggest argument for duplicating data is performance, and it is a fact that many data stores of the past just aren’t fast enough to handle the IoT workload. Think caching rather than falling prey to duplicating. Caching is the best of both worlds: it allows you to put information in a location that is highly available and scalable while, at the same time, never treating that copy as the system of truth. By using middleware like Redis or etcd, you can take the read load off the enterprise system and then use well-established queuing systems to keep your cache in sync.
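
A minimal read-through cache along those lines, using the redis-py client (the host, key scheme and the slow back-end lookup are hypothetical):

    import json
    import redis

    cache = redis.Redis(host="cache.example.com", port=6379)

    def query_enterprise_system(device_id):
        # Stand-in for the real (slow) enterprise lookup -- hypothetical.
        return {"device_id": device_id, "status": "active"}

    def get_device_record(device_id, ttl_seconds=60):
        key = f"device:{device_id}"
        cached = cache.get(key)
        if cached is not None:
            return json.loads(cached)  # fast path: no load on the system of record
        record = query_enterprise_system(device_id)
        # Expiring entries keep the cache honest: it accelerates reads,
        # but the enterprise system remains the single source of truth.
        cache.setex(key, ttl_seconds, json.dumps(record))
        return record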

VIII. Hold the refactor
Often, an IoT integration task turns into a system-wide refactoring initiative as developers get lost in the rabbit hole of the legacy system. Strategically, if the system had been important enough to refactor, it would have been refactored before your IoT initiative; the burden of updating it shouldn’t become part of your IoT project. Instead of attempting to fix the vast number of other things that are wrong with that system, hold your nose and just get at the information you need. The technical debt of the old system should not impede the success of your IoT project.

Summary

The internet of things is clearly the most exciting opportunity for enterprises to optimize and grow their business since the inception of the internet itself. To create actionable insights and achieve the expected ROI from IoT, integration will be critical to connect user, machine and device information into the systems our enterprises use every day.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 6, 2016  1:58 PM

IoT Village DEFCON 24 results: Connected devices still vulnerable

Ted Harrington Profile: Ted Harrington
Internet of Things, iot, iot security, zero-day

We like to hack stuff. So much so, that we organize events to galvanize the security research community to hack stuff right alongside us. Stretching back to 2013, when we published a piece of security research showing that all of the 13 most popular routers were vulnerable to remote attack, local attack or both, we’ve organized well over a dozen hacking events all over the United States. The purpose of these events is to shine spotlights on areas that may need security improvement, and organize a volunteer army of some of the brightest minds in the security industry to collaborate on addressing these many, often complex, security challenges.

Two such events we organize are IoT Village and SOHOpelessly Broken. IoT Village is a security research community featuring talks, workshops, hacking contests and press events. SOHOpelessly Broken is a hacking contest that started as the first-ever router hacking contest at the esteemed security conference DEFCON and has since expanded in scope to include all connected devices.

One of the many great benefits of organizing hacking events is that we get a first-hand glimpse into themes across some of the most salient security issues of our time. One such issue pertains to the security considerations introduced by connected devices. During DEFCON 24, which took place in Las Vegas over August 4-7, 2016, we hosted both of these events, which produced some fairly eye-opening results, including a new wave of security findings: 47 new zero-day vulnerabilities across 23 different device types and 21 different manufacturers.

Abstracting from those success metrics, we observed several pronounced themes:

  1. Fundamental issues persist. During its inaugural run last year at DEFCON 23, IoT Village uncovered 66 new zero-day vulnerabilities across 27 device types and 18 different manufacturers. Many of those vulnerabilities were design-level violations of well-(mis)understood security principles, leading to issues with privilege escalation, remote code execution, backdoors, services running as root, lack of encryption, key exposure and many more. Fast forward to this year, and many of the same basic design flaws persisted, including use of plaintext passwords, buffer overflows, command injection flaws, hardcoded passwords, session management flaws and many more. These were all found on a new crop of devices beyond those investigated last year, suggesting that the scope of the issue not only continues to be systemic, but is expanding as IoT adoption accelerates.
  2. The scope of IoT is expanding. Last year the emphasis of research was focused heavily on the smart home. This continued to be an area of importance this year; however, we also saw similar issues across connected transportation and even the energy grid. In one harrowing example, a security researcher showed how an attacker could shut down the equivalent of a small- to mid-sized power generation facility by exploiting a flaw in solar panels manufactured by Tigo Energy.
  3. Interest in IoT security is increasing. IoT Village doubled its overall floor space, and yet was still standing-room-only for all of the talks. The CTF track of the hacking contests grew from 11 competing teams to 51. DEFCON awarded a coveted “black badge” to the contest winners, an exclusive designation given out only on extremely rare occasions and DEFCON’s version of the Hall of Fame.
  4. Manufacturers are starting to get more proactive. This year, two different manufacturers (Fitbit and Canary) got involved with IoT Village, donating devices for researchers to investigate. Both Fitbit and Canary hoped to engage the community in helping make their products more resilient against attack. Another manufacturer (Trane) created a new vulnerability disclosure process across its enterprise as the result of research into one of its thermostat products. Trane is trying to make it easier for researchers to report security flaws, so issues can be remediated more quickly.
  5. The government is starting to take notice. Top executives from both the Federal Communications Commission (FCC) and the Federal Trade Commission (FTC) delivered speeches on the IoT Village stage. Rear Admiral (ret.) David Simpson, the bureau chief of the FCC, spoke to a packed audience about how security research in general — and events like IoT Village in particular — are doing a good job of “making things harder” for malicious hackers. Terrell McSweeny, commissioner of the FTC, presented about the law enforcement actions that her organization is pursuing related to IoT.

It is our hope that by fostering a community of research, we can be a catalyst for change in ways that benefit consumers, business and entire industries. If you are a security researcher, a device manufacturer, a member of the law enforcement community or anyone else with even a passing interest in addressing these challenges, please contact us to get involved.

Happy hunting!

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 5, 2016  1:16 PM

Commercial IoT development: Agile or waterfall? Or both?

TJ Butler Profile: TJ Butler
Agile, Agile software development, Internet of Things, iot, Waterfall, Waterfall development

It doesn’t take long working in the world of IoT to feel the pulse of the clock. From idea inception to implementation, there is a sense of urgency to move fast. Each tick is a reminder that the competition is working diligently to secure a patent first, get to market first or be the first to scale.

Fast-paced development is nothing new to software development teams big or small. Quick and nimble development — often called “agile” — is at the heart of the startup culture that has driven the IoT boom, and it has created an expectation that these products can be delivered fast, especially if you work with a small team. On the other hand, the typical process that ensures enterprises deliver a controlled, predictable and consistent product — often called “waterfall” — can get in the way of fast innovation. As a result, many enterprises work with small teams or create internal innovation teams to spearhead research and proof-of-concept efforts. Enterprises leverage smaller teams because they aren’t tightly coupled to the extended business processes that support everyday business, like regulations, quality assurance, billing and customer service. The smaller, focused team aims to fail fast, iterate, improve and repeat, quickly homing in on a minimum viable product.

In the case of IoT especially, it is extremely helpful, I’d even say critical, to get out in the field and learn what you couldn’t experience in a lab before getting too far down the design road. IoT, unlike software alone, includes a hardware element that may not be nearly as flexible, because manufacturing a physical product is part of the core solution. You may not have the luxury of making an iterative improvement to a sensor once it’s been manufactured and is sitting in a warehouse.

Certainly, this is the nature of innovation in general, but IoT introduces a software component and culture which is often new to traditional product manufacturers. In many cases, IoT is digitally transforming products that have had no technical dependencies other than parts procurement or resource planning. The transformative combination of hardware and software creates new value beyond the widget itself, but it also creates dependencies which disrupt long-standing workflows in manufacturing and product lifecycles. It’s often at odds with the enterprise processes supporting the product itself outside the innovation garage.

Blending the manufacturing processes of things with software

Even enterprise software development teams familiar with agile may have a hard time with IoT projects, because an IoT solution isn’t a single app. IoT solutions are truly distributed applications spanning several layers of hardware, software and transport borders. These layers have different lifecycles but tight dependencies on each other, and it can be difficult to make changes to one layer without having to consider the full stack. Here, for example, are some high-level layers you can expect in an IoT solution, all relying on firmware and software to some extent:

  • Device and sensors
  • Local area transport
  • Gateways/concentrators
  • Wide area transport
  • Data ingestion and back-end integration services
  • Dashboard/reporting interfaces

Ideally, there are clear delineations between dev, QA and production, but early on it may not be practical or cost-effective to have completely segregated environments across all of these layers. Some low-power, low-cost devices may sacrifice over-the-air update functionality. This means they may be permanently tied to a specific build pointing to a specific environment, limiting the iterative improvements that are core to the agile process. This can have a significant impact on feature validation and testing timelines for early iterations. Transport layers such as cellular or LPWAN may have to be shared early on, precluding testing that may risk stability. Some of the back-end services or integrations may be too complex or time-consuming to set up in parallel for each party involved. While these aren’t best practices, they are the practical reality that enables IoT projects to get off the ground. That’s not to say the resulting product is reliable or, more importantly, safe. The checks and balances expected in manufacturing automation, financial transactions and medical devices require due process controls, but I’d argue getting off the ground is just as important early on.

Embrace agile failure in pilot and respect waterfall quality for commercial rollout

The transition from pilot to production is where attention to detail becomes top priority. No matter how flexible or progressive an enterprise may be, some level of process and control will become necessary. Arguably as important as innovation is the enterprise’s underlying core responsibility to brand reputation. Failure, which is a core tenet of the agile process, is counter to brand reputation. That’s not to say enterprises can’t be innovative; it’s simply a salient reality that everyone involved in commercializing a product must appreciate. Enterprises passionately want to avoid failure once the product hits customers’ hands. This is why waterfall still plays an important role. It’s most apparent when an IoT product moves beyond version 1.0; this is when agile starts to take a back seat to waterfall and the integration into all of the supporting business systems begins. All of those integrations collectively make up the end customer experience and the profitability of the business. Therefore, the pace slows, scrutiny increases and the infamous red tape goes up. It’s at this point that agile teams feel the jarring effect, because they’re still under pressure to commercialize fast in an environment that expects hard delivery dates.

The fact is, it’s hard enough to manage resource planning for manufacturing alone. Adding dependencies on a software layer, or several software layers, can be absolutely jarring, especially for teams unfamiliar with software development styles like agile. It’s natural for software teams to update the product as soon as an improvement is available, but it would be unthinkable to pull a completed product sitting in a warehouse to change the physical shape of a widget. Enterprises leverage rigid, gated processes because that’s what ensures they get it right the first time and deliver a quality product, on time and at scale. Orchestrating and planning across the entire spectrum is just plain hard when things are changing on the fly … but changing on the fly is what makes software great.

Getting agile and waterfall cultures to work well together takes an extra level of commitment to communication and an appreciation for each team’s responsibilities. Ironically, one of the most popular agile software development methodologies, Kanban, came from the large-scale manufacturing world, a testament to the fact that common ground exists. The key is to acknowledge there will be a transition as the solution matures and to compromise wherever possible, ultimately leveraging the strengths of both approaches. My recommendation is to stop thinking agile versus waterfall and embrace agile and waterfall.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 5, 2016  10:01 AM

Why democratizing IoT is essential for its survival

Ryan Lester Profile: Ryan Lester
connected, Internet of Things, iot, Vendor, Vendors

Every day, companies are hearing about the internet of things — maybe it’s inquiries from their customers, their board, their investors or the like. These groups are hearing the market projections and seeing examples of how IoT is changing business for the better, and they want to get in on it. The problem is that, for the most part, IoT feels achievable only to those companies with unlimited resources to make it happen: unlimited research and development budgets, unlimited resources, unlimited ability to make mistakes and try again. These requirements leave out 90% of the companies out there. And even the remaining 10% that are ambitious enough to embark on an IoT journey quickly encounter technology and development hurdles that stall or even cancel their IoT projects.

This is a big problem for our industry. IoT has been a futuristic concept for years. We’ve been dreaming about a connected world ever since George Jetson pulled up that first video chat with Mr. Spacely. Nearly half a century later, we are finally at a point where some form of the Jetson reality can become ours. We have the know-how to create a connected world, but adoption is growing slower than any of us would like. And there is a good reason for that: It doesn’t feel accessible for companies looking to start connected product projects, and it doesn’t yet feel like a necessity for the people buying them.

Let’s dig in on that a bit deeper.

Because of the small number of companies that have resources to put toward IoT projects, there are only a few examples making their way onto store shelves. We see smart products — like the Nest thermostat or the Tesla car — and think “that’s cool,” because connected products aren’t everywhere yet. Right now, there are plenty of “gadgets” making their way to market, but they don’t all work together. And because companies are willing to connect anything, whether the product adds value or not, IoT is still viewed as a novelty rather than a way of life.

In order to deliver on the true promise of IoT, more companies need to be able to see their IoT visions come to life. There are still way too many companies sitting on the sidelines looking at IoT as an insurmountable challenge. It’s up to the vendor community to democratize IoT and make it more available and accessible to companies of all sizes. The more brainpower we have out there connecting products, working with standards and showing the real value of IoT, the quicker we’ll see real mainstream adoption from consumers and businesses alike. Then, larger concepts like connected cities can become reality.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 4, 2016  12:19 PM

How IoT is making distributed computing cool again

Adam Wray Profile: Adam Wray
Distributed computing, Internet of Things, iot, Sensors

IoT is making distributed computing cool again. The distributed computing lexicon has historically been relegated to conversations within the walls of military organizations, tech enterprises and the halls of academia. ARPANET technology in the 1960s begot the internet. Salesforce helped make “software as a service” a household term in 2000. Researchers have talked about distributed computing for years. Today, those distributed computing concepts will be critical to the success of internet of things initiatives. Bets like Ford Motor Company’s $182.2 million investment in Pivotal, a cloud-based software and services company, signal distributed computing’s migration from the halls of academia to the boardroom.

Enterprises are starting to place their bets on how they will capitalize on the significant IoT opportunities that are starting to emerge. These investments will have ramifications for a company’s ability to function and deliver the experience its customers demand. The applications that result from these multimillion-dollar bets need to provide an always-on, reliable, accurate and cost-effective service. In order to do this, it will be essential that the C-suite understand the distributed computing lexicon.

If you’re not yet familiar with terms like “eventual consistency,” “vector clocks,” “immutable data,” “CRDTs” or “active anti-entropy” (all terms familiar to those involved in the science of distributed systems), you should ask yourself the following questions to ensure you’re approaching distributed data properly. This two-part series will examine the answers to these questions and help illuminate how organizations can develop cost-effective distributed architectures that ensure resiliency, usability and accuracy.
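
To give one of those terms some texture, here is a sketch of the simplest CRDT, a grow-only counter. It shows why replicas can accept writes independently and still converge without coordination:

    class GCounter:
        """Grow-only counter CRDT: each replica increments only its own slot;
        merging takes the per-replica maximum, so merges commute and converge."""
        def __init__(self, replica_id):
            self.replica_id = replica_id
            self.counts = {}

        def increment(self, n=1):
            self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

        def merge(self, other):
            for rid, c in other.counts.items():
                self.counts[rid] = max(self.counts.get(rid, 0), c)

        def value(self):
            return sum(self.counts.values())

    a, b = GCounter("node-a"), GCounter("node-b")
    a.increment(3); b.increment(2)       # independent writes, no coordination
    a.merge(b); b.merge(a)               # exchange state in either order
    print(a.value(), b.value())          # 5 5 -- both replicas agree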

How can you architect to ensure your data is available?

The distributed world’s guiding principle is the CAP Theorem of Eric Brewer, a tenured professor of computer science at UC Berkeley: Consistency, high Availability and tolerance to network Partitions. The CAP Theorem suggests that a distributed computer system can have, at most, two of those three properties. In a distributed system, availability refers to the idea of independent failure: when one or more nodes fail, the rest of the system continues to function so that the information the system processes is always available to the user. Though it predates the CAP Theorem, ARPANET is an example of a distributed system architected for availability. It was constructed to link smaller networks of computers to one another to create a larger, latticework network that researchers and scientists could access even if they were not located near a mainframe or network hub. If one of the network computers went down, researchers would still be able to access the data crisscrossing the network. Availability has been thrust to the forefront in the internet age. Highly trafficked sites such as Facebook and Amazon have favored availability over consistency. After all, it’s not as if you’ll get annoyed with Amazon if the latest product review takes a moment to appear. You are likely to be annoyed if you can’t log onto the site, however.

In today’s customer-centric business world, IoT initiatives are bringing back the idea of high availability and architectures built to withstand failure. A city government may choose to implement an IoT-enabled traffic grid. Each traffic light (equipped with a number of sensors) must communicate with the other traffic lights around it, smart vehicles in the vicinity and a local computing node that processes or reroutes the sensor data depending on its use. The system will likely employ a number of nodes throughout the traffic grid to collect the data and make it available to the applications. If one node fails, however, the data it collects and processes must still be available to the rest of the system and possibly to other central applications. Boardrooms typically assume their data will always be available to the application that needs that data, even in a complex distributed architecture. If they wish to implement IoT-enabled systems, they must understand those systems have to be built with failure in mind.

How do you minimize latency and performance degradation to achieve usability?

Distributed systems fight physics. A system can only move so much data before it slows down and latency grows to an untenable point. E-commerce websites were some of the first to use distributed architectures to achieve usability. They keep product information for each item in their inventory in centralized data stores. They’ll also take the most-used portion of their product assortment — the top 25% best-selling items, for instance — and cache that information in the cloud at the edges of the network. Replicating and storing the most-accessed data in a distributed location helps keep website transactions from overwhelming the central database and helps ensure users get fast response times. Distributed e-commerce websites are designed with end users in mind: if the central database becomes overwhelmed and the site slows down, customers will leave before making their purchases.

Today’s IoT initiatives have adopted distributed computing concepts to ensure the data they generate and analyze remains usable, even when the data must traverse large geographic distances. Companies must also design their IoT initiatives with the end user in mind. Consider a weather company whose sensor network takes frequent readings: some of that data must be analyzed and sent in real time to the weather application on users’ mobile devices. The company sends some of the data back to the core for analysis, but must process some of the high-frequency readings near the sensor. These are the readings that look for conditions, like sudden barometric pressure drops, that warrant weather alerts. To ensure usability, weather companies institute a distributed infrastructure with nodes that facilitate data analysis for a cluster of sensors. They also perform edge analytics to determine which data is worth sending back for further analysis.
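
A sketch of what that edge-side check might look like, assuming readings arrive as timestamped pressure values; the three-hour window and alert threshold are hypothetical tuning choices:

    from collections import deque

    WINDOW_SECONDS = 3 * 3600     # look back three hours
    ALERT_DROP_HPA = 4.0          # hypothetical storm-warning threshold

    readings = deque()            # (timestamp, pressure_hpa), kept at the edge node

    def send_alert_to_core(drop):
        # Stand-in for the real uplink to the core -- hypothetical.
        print(f"ALERT: pressure fell {drop:.1f} hPa inside the window")

    def ingest(timestamp, pressure_hpa):
        """Process high-frequency readings locally; escalate only what matters."""
        readings.append((timestamp, pressure_hpa))
        while readings and readings[0][0] < timestamp - WINDOW_SECONDS:
            readings.popleft()    # discard readings older than the window
        drop = readings[0][1] - pressure_hpa
        if drop >= ALERT_DROP_HPA:
            send_alert_to_core(drop)  # everything else stays at the edge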

Your data is available and usable. Now what?

Organizations must architect their systems under the assumption of failure to achieve availability. They must architect their systems under the assumption that data analysis in one centralized location could render data unusable for distributed end users. Even if organizations are able to architect for availability and usability, other issues remain.

With so many different applications pouring data into, and pulling data from, distributed infrastructures, accuracy will be an issue. How do you know that the data you use to generate predictive insights is giving you a useful picture of the future? How do you know all of your applications are running smoothly?

The next part of this series will discuss how to architect for accuracy. And, most importantly, it will examine how to develop a distributed data system that is cost-effective. Boardrooms are making multimillion-dollar investments in today’s infrastructure tools because IoT is making distributed computing cool again; those tools must assure a strong ROI if a modern infrastructure is ever going to receive approval.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


