IoT Agenda


May 9, 2017  12:26 PM

Five of the biggest trends in IoT this year

Gavin Whitechurch Profile: Gavin Whitechurch
Consumer IoT, Enterprise IoT, Internet of Things, iot, IoT analytics, IoT data, iot security, Machine learning, Partnerships, Smart cities

2016 was the year that IoT went mainstream. Connected devices are no longer the stuff of industry expert conversations or surprising national news stories; consumers have begun to welcome them into their homes in the form of devices like Amazon’s Alexa.

2017 promises to be the year that we see this acceptance and appetite drive significant evolution, with the smart home market alone predicted to grow to more than 1.4 billion units by 2021, up from 224 million in 2016.

I look forward to many of these innovations being launched at this year’s Internet of Things World later this month. Having worked closely with market leaders and innovators in the IoT ecosystem to bring together this year’s agenda, I wanted to share my thoughts on the five biggest trends in IoT we should expect to come to fruition over the remainder of the year.

1. Security will be of paramount importance — With so many IoT-focused DDoS attacks hitting the headlines over the last 12 months, the vulnerability of a broad, distributed and heterogeneous network of connected devices has become apparent. It’s forcing vendors and leading service providers to join forces to address issues and breaches. Highlighting security capabilities is likely to become a more prominent selling point, especially for those purporting to offer end-to-end IoT solutions.

2. The formation of more strategic partnerships — The end of last year saw several big names announce strategic partnerships to generate new value for their customers and, in turn, themselves. Bosch and SAP announced they’d be aligning their respective cloud and software expertise around IoT, while Cisco said it would be building an intelligent network tailor-made for the IoT market and then allow all of its channel partners to tie in to it. We’ll certainly be seeing more announcements like these over the course of 2017 as organizations team up to open up more opportunities to learn and earn.

3. Better use of big data and machine learning to unlock new opportunities — The real value of IoT is, of course, in the data produced. According to McKinsey, IoT has a total potential economic impact of $3.9 trillion to $11.1 trillion a year by 2025 — if analyzed and used correctly. Over the forthcoming months we’ll start to see further integration of IoT data streams with AI and machine learning engines in order to unlock that value. We’ll also see a shift towards moving processing and analytics to the IoT network edge, minimizing the need to transport large amounts of data back to the network core before triggering an action or alert.

4. Public sector services start to make IoT waves — Last September, the White House announced a $160 million Smart Cities Initiative funding pot to help communities tackle challenges and improve services through IoT. This is the year that we’ll start to see public sector services make use of that money and make IoT waves. The City of Chicago is already starting to make positive progress with its Array of Things project, whereby it is using connected sensors to measure data on air quality, climate and traffic to act as a “fitness tracker” for the city. Similarly, the District of Columbia has multiple IoT-based projects underway. The most public of these projects are participating in the Global Cities Team Challenge sponsored by US Ignite and the NIST Cyber-Physical Systems Group. Both organizations will be talking about their latest work at this year’s IoT World in May.

5. Leaning on developers to drive innovation — With so much potential to be gleaned from connected devices, enterprises are naturally quick to want to develop an innovative strategy. But as always with technology, their success depends hugely on their understanding and ability to implement. Enterprises will start to realize the value of involving developers more strategically in these efforts, making concerted efforts to reach out to them and make more of their expertise and “startup spirit.” We’re already seeing regular IoT hackathons from established players like Intel and Google, but this will start to infiltrate non-tech sectors.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

May 8, 2017  1:00 PM

The pros and cons of pull and push models for processing IoT data

James Branigan Profile: James Branigan
data flow, Data Management, data pull, Enterprise IoT, Internet of Things, iot, IoT applications, IoT data

Architects of industrial IoT systems face a crucial decision when choosing which model of data flow to use in their design. Their first option, the “push” model, can be simpler to build, reducing the amount of time to release an initial system on which to build enterprise IoT applications. The alternative, a “pull” model, adopts a more centralized approach to data management, reducing the amount of logic required in each application making use of the IoT data.

Advantages of push

In the “push” model, incoming data is streamed to all users and integrated systems. Each application and downstream system receiving the data has its own rules around what data is valid or not and how to clean the incoming flow for use. For simpler systems whose core purpose is around real-time alerts, the “push” model can be the right choice. Each application can look for a specific type of data and trigger actions based on set limits. Rules can be added to each application to transform or ignore data from certain sensors if problems are found, none of which will change the data flowing to any other application.
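
To make the “push” pattern concrete, here is a minimal sketch of one consuming application, written against the paho-mqtt client. The broker address, topic names, thresholds and the locally ignored sensor are all hypothetical:

```python
# One "push" consumer among many: it receives the shared stream and applies
# its own local rules. Other applications see the same raw data unchanged.
import json
import paho.mqtt.client as mqtt

FREEZER_LIMIT_C = -15.0          # this app's own definition of "too warm"
IGNORED_SENSORS = {"sensor-07"}  # a known-bad unit, patched around here only

def on_connect(client, userdata, flags, rc):
    client.subscribe("plant/+/temperature")

def on_message(client, userdata, msg):
    reading = json.loads(msg.payload)
    if reading["sensor_id"] in IGNORED_SENSORS:
        return  # this app's rule; every other consumer still receives the data
    if reading["temp_c"] > FREEZER_LIMIT_C:
        print(f"ALERT: {reading['sensor_id']} reads {reading['temp_c']} C")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.example.internal", 1883)
client.loop_forever()
```

Note that the ignore list and the threshold live inside this one application; that independence is exactly what makes push simple to start with, and harder to govern later.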

The power of pull

In the “pull” model, downstream systems using incoming IoT data must ask for what they want, and pull it from a single source: the system of record. Rules about data cleaning and transformation are operationalized at the system level rather than within each application. This system of record provides a single source for everything that happens in the system — including what events have resulted in flagging data as clean or dirty. Rather than stream all incoming data to all users and applications, each consumer only receives the data they specifically request.
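
Here is the same consumer sketched in a “pull” design, assuming a hypothetical HTTP query API on the system of record; the clean/dirty decision lives server-side rather than in the application:

```python
# A "pull" consumer: ask the system of record for exactly the data needed.
# Cleaning and validation already happened centrally, so no local rules.
import requests

resp = requests.get(
    "https://recordsystem.example.internal/api/readings",
    params={
        "sensor_type": "temperature",
        "site": "plant-3",
        "status": "clean",                # flag maintained by the system of record
        "from": "2017-05-01T00:00:00Z",
        "to": "2017-05-02T00:00:00Z",
    },
    timeout=10,
)
resp.raise_for_status()
for reading in resp.json()["readings"]:
    print(reading["sensor_id"], reading["temp_c"], reading["recorded_at"])
```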

For IoT systems whose purpose is to go beyond simple alerts and become learning systems, where operations are not just monitored but also optimized and mined for additional revenue opportunities, data quality is critical. For these systems, the “pull” model for data sharing makes it easier to keep dirty data from reaching downstream applications and jeopardizing data quality across your enterprise environment.

Maintaining control

Clean, trusted data from a central source is important for systems with integrated analytics and machine learning tools. In highly regulated industries like food and healthcare, certain data is required to be collected and reported to government agencies. Especially in IoT, hardware, firmware and software bugs will cause errors in incoming data. In the “push” model, once you push dirty data to all users, you’ve lost operational control of it. You can’t retract it and you can’t clean it. If you have a chain of custody you may have virtual breadcrumbs that allow you to find and clean data once you realize it’s dirty, but you’d have to find and clean it everywhere it went. That’s difficult because a “push” model also encourages you to save copies of the data across your system. How do you clean data that has been distributed “shotgun-style” across multiple data stores? It’s not easy. Finding and cleaning dirty data across your IoT system will be virtually impossible. Fixing a problem in one data store doesn’t fix the same problem in another.

For example, a company operating a fleet of refrigerated tractor-trailers may need to track cargo temperature on a continual basis along its journey. A unified system of record ensures a definitive source of truth for answering these regulatory questions. With a “pull” model, a fleet manager can report where the truck has stopped, what time the doors were opened and at what time the temperature in the cargo hold went out of range of the regulatory standard. When reports show unexpected results, or a device malfunctions, the manager can also track down when a sensor failed, what data from it was used by downstream systems (throwing off calculations), and then clean this data retroactively. In a system based on a “push” model, alerts will be sent when specific triggers are activated (e.g., temperature too high), though the chain of events leading to the situation may be difficult to determine and any false alarms will be problematic to explain to regulators with access to the pushed, dirty data.
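
As a rough illustration of the retroactive cleanup the pull model enables, the sketch below flags a failed sensor’s readings as dirty in one place; the endpoint and schema are invented for the example:

```python
# Once maintenance confirms when the cargo-temperature sensor failed, mark
# its readings dirty in the system of record. Consumers that pull with
# status="clean" stop seeing them immediately; no per-app cleanup required.
import requests

def flag_sensor_dirty(sensor_id, failed_at, reason):
    return requests.patch(
        f"https://recordsystem.example.internal/api/sensors/{sensor_id}/readings",
        json={"status": "dirty", "from": failed_at, "reason": reason},
        timeout=10,
    )

flag_sensor_dirty(
    "trailer-42-cargo-temp",
    "2017-04-30T08:15:00Z",
    "sensor failure confirmed by maintenance",
)
```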

A design to match your goals

When evaluating which model of IoT data flow — “pull” or “push” — is more appropriate for your production system, it is most important to match the pros and cons of each to your overall goals. A focus on real-time alerts and independent applications without aspirations for machine learning can be brought online more quickly through a simple “push” design. For systems expected to generate increasing value over time and enable deeper insights into the enterprise, are integrated with back-end CRM, ERP and other critical systems, or operate in highly regulated environments, a “pull” model is likely to provide the most flexible, compliant and long-lived system.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


May 5, 2017  3:43 PM

How to select the right IoT platform

Francisco Maroto Profile: Francisco Maroto
Enterprise IoT, Internet of Things, iot, IoT platform, Open source, platform, Vendor, Vendor selection

When in late 2013 I decided to launch OIES Consulting, I thought the selection of IoT platforms would be one of the most useful services we could offer and certainly one that would bring the most benefit to clients wishing to accelerate their adoption of IoT. At that time I had identified about 60 IoT platform vendors and, despite some analyst reports specialized in this subject, the confusion was brutal. Today it is worse: there are more than 700 platform vendors.

I was tempted to maintain, classify and publish my own list of IoT platform vendors, but it looks like an almost impossible task these days. There are other bloggers and reputed industry analyst firms that also try to maintain an updated list. For now, I include below some useful links and sources:

Nobody doubts that the IoT platform market needs quick and urgent consolidation.

What is an IoT platform?

ABI Research in “M2M Software Platforms” differentiates between connected device platform (CDP) players, application enablement platform layers and IoT middleware.

Beecham Research has long studied platforms, focusing on the services they enable. Beecham refers to this business and technology area as service enablement services (SES).

Machina Research, now part of Gartner, discussed “The critical role of connectivity platforms in M2M and IoT application enablement.”

MachNation believes that communication service providers can triple their IoT/M2M revenues with an IoT application enablement platform. The company offers a research article with five best practices describing how carriers can most effectively leverage their relationships with IoT AEP vendors.

It is not strange to be confused when defining what an IoT platform is. Luc Perard from ThingWorx, in “Are you confused about IoT platforms?”, warns us against comparing apples with oranges.

We find that there are a large number of companies offering IoT platforms: in the cloud or on-premises, horizontal or vertical, for embedded software development or industrial application development, with real-time data capture and analytics, able to manage all types of devices and protocols, with connectivity to any network, and platforms for developing applications for the smart home, smart cities, the connected car, wearables…

The current generation of IoT platforms represents the second iteration in this space, but we can already see marked differences between different types of platforms. For an organization looking to embrace an IoT platform, this diversity can be very confusing.

Buying vs. building an IoT Platform: How to make the right decision

The eternal dilemma of whether to build from scratch or buy an off-the-shelf IoT platform to fulfill the needs of an enterprise will continue for a while. Here’s what you need to know about both approaches before making this critical project decision.

  • Step 1: Validate the need for an IoT platform — Focus on validating that a business need exists prior to deciding, and estimate the return on investment (ROI) or added value.
  • Step 2: Identify core business requirements — Involving the right business people will determine the success of the process.
  • Step 3: Identify architectural requirements — It is extremely important to identify any architectural requirements and follow the status of the confusing IoT standards world before determining if an off-the-shelf or custom solution is the best choice.
  • Step 4: Examine existing IoT platforms — At this point, a business need has been pinpointed, ROI has been estimated, and both core business and architectural restrictions have been identified. You should now take a good look at existing IoT vendors (a short list of IoT platforms, to be more concrete).
  • Step 5: Do you have in-house skills to support a custom IoT platform? — It takes many skills to design and deploy a successful IoT platform that is both scalable and extensible.
  • Step 6: Does an off-the-shelf IoT platform fit your needs? — If your organization does not include a development group composed of personnel experienced in designing IoT solutions to support your enterprise-wide business solutions, an off-the-shelf IoT platform will probably provide the best long-term ROI.

Open source IoT platform vs. proprietary IoT platform

There are many people that believe IoT needs open source to be successful. The rate of innovation is supposed to be faster with open source IoT platforms.

The IoT Data Management (IoTDM) project is an open source middleware solution started at the Linux Foundation under the auspices of the OpenDaylight project. IoTDM is compliant with the oneM2M effort which provides an architectural framework for connecting disparate devices via a common service layer where a given application can be associated with a dynamic network of users, devices and sensor data.

Eclipse Open Source IoT platforms project is another source of information to look at.

Regarding proprietary IoT platforms, the landscape is a nightmare, with some tech giants leading the market and many startups adapting quickly.

Vertical vs. horizontal IoT platforms

My prediction is that only tech giants or industrial giants will be able to maintain horizontal IoT platforms; for the medium-size companies the best approach is to differentiate in verticals.

Criteria for choosing an IoT platform vendor

Below are some must-consider criteria for selecting an IoT platform vendor.

  • Business stability – Ask a few questions related to the corporate background and stability of the IoT provider.
  • IoT standards and consortiums – Examine which technology standards the IoT provider has adopted and whether it uses proprietary technology.
  • Hosting model – How it provisions environments for customers and which providers it leverages for this.
  • Integration – The ability to develop on top of a platform is important for customization. Ask how extensive its API coverage is and to what extent it is standardized.
  • Analytics/edge analytics – Again, flexibility is the key here. Look at how data is stored and how flexible the storage model is, in addition to the extraction and reporting tools that might be available.
  • Edge computing – Quicker response times, unencumbered by network latency, as well as reduced traffic, selectively relaying the appropriate data to the cloud.
  • Security and trust – Ask a few questions about end-to-end security, device security, device-to-cloud security, cloud security and application security. Also ask about policies and track record for security and privacy of user data.
  • Device communication – How it supports connections and communications to IoT devices, both in the cloud and locally.
  • Device management – Does its platform and hardware modules (if available) make it easy to support and maintain IoT devices remotely, including over-the-air updates?

Final thoughts

The internet of things is going to transform the way we live, work and interact with each other, and is going to transform the way the global economy functions. But to succeed, we need secure, scalable, robust, easy-to-integrate IoT platforms.

As with any technology decision, make sure you have a full understanding of business and technical constraints and requirements and feed those into your evaluation of IoT providers. This important step will inform the relative importance you place on different criteria and therefore help to focus your efforts leading to a more targeted decision.

Thanks in advance for your likes and shares!

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


May 5, 2017  2:26 PM

Where do the smarts live in an IoT architecture?

Gordon Haff Profile: Gordon Haff
"ARM architecture", Data Management, Internet of Things, iot, IoT analytics, IoT data, IOT Network, Network architecture

There’s been an ebb and flow to where intelligence and data reside throughout the history of computing. Sometimes the dominant architecture pushes smarts (processing power and applications) and state (data) to the network edge. Other times they migrate to the core.

Once, everything sat in the glass room except for “dumb” terminals and keypunch machines. Early sensor and control networks were similarly centralized (even if some local processing was delegated to intermediate nodes in a network).

Fast forward to the PC revolution and a lot of smarts and state moved out to the client. Then it seemed as if the cloud would pull the smarts back to the data center. Web browsers on the edge would handle the presentation of applications. The logic, processing power and data would be off in a data center someplace.

But computing has not fully recentralized. Industrial IoT, for example, often uses an architecture that’s a combination of edge devices, intelligent gateways and back-end servers. All these layers can have smarts and state.

There are reasons for this. Consider (among other factors) device management, networks and processing horsepower trends.

Device management

There can be good reasons for a centralized architecture. Perhaps you need to aggregate information before you can do something useful with it. Or you worry about the security implications of locking down distributed devices.

But if we consider the driving forces behind approaches like thin clients and software as a service, it’s often been application and device management. It’s a lot easier to just fire up a browser than it is to be a system admin for a personal computer with lots of apps installed. Running applications in a browser often isn’t better in terms of performance and functionality, but it is simpler.

Given modern networks and software technology, cloud-based apps do work well a lot of the time. However, app stores and container technology have also made it much easier to provision and manage applications locally. When it makes sense to distribute smarts and state, there’s less reason to do so merely to simplify the management of applications on remote devices.

As a result, today we’re in a better position to design architectures that optimize for performance, security and cost rather than just ease of management.

Networks

There are a variety of definitions for IoT. But pretty much everyone agrees that data is an important part of IoT. And not just some data. Big data.

And big data doesn’t like to move around much. Dave McCrory first coined the term “data gravity” in 2010 to describe how “data if large enough can be virtually impossible to move.” He’s since expanded on the concept in various ways.

The implication for IoT is that if you collect a lot of data on the edge, you may not want to ship it all back home. For one thing, network bandwidth can get expensive. Furthermore, to the degree that sensor data is used to control physical devices like switches, there may simply not be enough time to wait for a remote computer to make a decision. (Network outages can also be an issue in this case.)

Intelligent gateways are one architectural approach to providing autonomous responses to local events. They can also filter and aggregate data so that only that data useful for predictive analytics and trend analysis need be sent all the way back to a centralized repository. (There is a general trend toward saving more and more data, but the details depend in part on how easily and cheaply all the sensor data can be transmitted.)
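
A minimal sketch of that gateway role — act locally on events, forward only aggregates — with invented thresholds and stubbed-out I/O:

```python
# An intelligent gateway in miniature: respond autonomously to local events
# and ship periodic summaries, not raw samples, back to the core.
import statistics
import time

PRESSURE_LIMIT = 8.5   # act immediately, no round-trip to the data center
buffer = []

def close_valve():
    print("valve closed")        # stand-in for a real actuator command

def send_to_core(summary):
    print("uplink:", summary)    # stand-in for a real uplink (e.g., MQTT)

def on_local_reading(value):
    if value > PRESSURE_LIMIT:
        close_valve()            # autonomous response to a local event
    buffer.append(value)

def flush_summary():
    """Called once a minute: one compact message instead of thousands of samples."""
    if buffer:
        send_to_core({"window_end": time.time(), "count": len(buffer),
                      "mean": statistics.mean(buffer), "max": max(buffer)})
        buffer.clear()
```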

Processing horsepower

A third reason why intelligence gets distributed throughout the network boils down to “because we can.” With small, cheap and low-power processors and sensors, there is often no good reason not to move the processing out to where the data is collected and acted upon.

Some tradeoffs remain, especially in situations where sensors can’t be connected to a power source. Furthermore, endpoint devices that are full-fledged computers running an operating system present security challenges that dumber devices may not.

Nonetheless, low-cost ARM and other microprocessors mean that we can treat the centralization of intelligence and data, should we choose to do so, as an architectural decision. Not one that’s imposed on us by economics or physical limitations.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


May 5, 2017  12:37 PM

Predicting the not-so-distant future of connected cities

Guy Courtin Profile: Guy Courtin
Connected car, Internet of Things, iot, smart city

Be careful, that lamppost is listening to your conversation.

Well, not yet at least. However, one of the great hopes for the internet of things is to be able to connect infrastructure — water and sewage systems, the power grid, roadways, buildings and trash bins, just to name a few. National and local governments salivate at the possibility that a connected city will allow for a more efficient city. With greater visibility and insights into the functions of a municipality, governments could aspire to greater control and management of their day-to-day responsibilities — identifying areas of waste, anticipating outages before they happen and gaining insights into operations.

Improving upon existing operations makes sense, but what are some more futuristic areas of disruption that could happen if your city was connected?

  • No more cars — better urban planning? A recent Economist article examined the amount of city real estate dedicated to cars. It’s not just roads. Consider all the space we devote to parking our cars — cars that are only used 5% of the time! And when those cars are in use, think of the possibilities when information from the vehicles and the roads themselves combine to streamline the flow of traffic. All of a sudden we reduce emissions from idling vehicles (and frustration from idling drivers), improve police and emergency workers’ response times, and find and fix dangerous roads and intersections. Would this level of connectivity be the tipping point to the adoption of fully autonomous vehicles? Rather than have cars sitting idle 95% of the time, could you reduce the inventory of cars and have them connected to the city grid where they are dispatched on an “on demand” basis? With more efficient usage of cars, how much space could a city free up? Real estate developers would certainly jump at the possibility of all the new space freed from parking garages.
  • Consumers no longer have to carry money or their IDs. This might be a leap, but if your infrastructure and city is all connected, why do we need to carry devices or even money around? Could our data simply follow us on the grid? If the connected grid leveraged facial recognition, touch ID or even retinal scanning technologies, couldn’t we then validate our identity and ability to pay for goods and services anywhere? Imagine walking into your local Starbucks and the facial recognition identifies you as soon as you enter the store. All your information, preferences and payment methods are immediately fed into the POS system, and the barista even knows how to spell your name — and how to pronounce it. Companies such as Panasonic are already working on leveraging facial recognition in their POS systems to pull up your loyalty information as you stand in front of the system. How much more could be done once we have greater connectivity?
  • Airbnb on steroids. If our personal data can follow us around, what about data related to services such as lodging? Airbnb works on the model that willing participants put their excess inventory on its network, which is then open to other participants who can search for that inventory. Could a fully connected city mean real-time access to excess lodging? Not only could this feed more private homes into the Airbnb inventory, but it could also completely transform how hotels and motels are run. It might even change how housing is managed and viewed in general and bring down the barriers to accessing public or private meeting spaces, or even offices.
  • Hyper-efficiency for fulfillment. One of the biggest challenges for retailers is consumers demanding hyper-granular fulfillment. They expect and demand getting their products when, where and how they want them. It’s a challenge for any brand to meet these expectations. Even some of the most technologically savvy retailers can only serve a customer within a certain time window, can only deliver to specific places, and are limited to people-powered last-mile delivery methods. One of the major hurdles is the lack of visibility into customer availability, along with traffic and access issues within an urban setting. What if there was a real-time, constantly updated view of what was happening in the city? Retailers could have instant updates on the best route to fulfill orders and, if they had access to the autonomous cars mentioned above, could use these to deliver items. Could cities also start leveraging connected lockers? These could serve as flexible delivery drop points for retailers and consumers. Some are already experimenting with one-hour delivery and the use of drones. But for most urban-dwelling customers, deliveries still rely largely on when they (or a doorman) are available, with only a few limited choices related to shipping.
  • Increased safety. With a fully connected city, would crime go away? That might be a stretch. However, cities such as San Diego are leveraging technologies such as ShotSpotter to detect gunshots, allowing the police department to get a jumpstart on what could be firearm-driven crime. Not far behind will be the day when infrastructure will warn the fire department that an electrical panel is being overburdened or that the grease in a restaurant’s kitchen ducts is getting too thick. Connected infrastructure of the future will warn the fire inspector before it becomes an issue. Additionally, greater connectivity will warn medical resources if a patient suffers a fall or is involved in an accident. The smart grid could be expected to sense and provide warnings to the authorities when and where these occurrences take place.

While it seems the only limit to what IoT can deliver is our imagination, we’ve only just scratched the surface of what’s possible as more cities, customers and businesses become connected. Of course there’s also a host of underlying issues and concerns that must be addressed — privacy being one of the main ones. What would be the rights and limitations of data sharing and collection? How will we ensure the security of the data?

When it comes to new urban planning the possibilities provided by connected cities are endless: better usage of space, the free flow of data and information, increased safety and improved lives. The reality is the initial impacts we will experience are incremental improvements to existing processes. What the future holds remains to be seen, but we certainly can dream.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


May 4, 2017  3:27 PM

Minimizing IoT security casualties through API management

Suraj Kumar Profile: Suraj Kumar
API, API management, Internet of Things, iot, iot security, Version Management

It’s no secret that the adoption of IoT is rapidly growing across industries. In my last post, I discussed IoT security strategies rooted in blockchain, an evolving topic I find important and oftentimes overlooked. In the grander scheme of things, the emergence of blockchain technology is just one of the many ways that IoT devices can be protected. Successful IoT security architecture will require multiple control layers. In this post, I’d like to take a look at IoT security processes and home in on another step that businesses can take to better protect connected devices and customer data from attacks and breaches.

This may come as a shock to some people, but the epicenter of some of the largest recent IoT security blunders was a poorly secured application programming interface (API). An API is defined as a set of routines, protocols and tools for building software applications. Here’s a quick example of how poorly secured APIs can negatively impact an organization, through the eyes of Nissan. It was recently discovered that the Nissan Leaf, the world’s best-selling electric car, was vulnerable to hackers who could obtain private information about a vehicle’s operations and travels and even control key vehicle functions. This info was uncovered when a Nissan Leaf owner noticed that the request to the API endpoint didn’t require any authentication other than the easily hacker-accessible vehicle identification number (VIN), giving an attacker the ability to control functions of someone else’s car extremely easily. That’s not a very exciting feature for one’s new car.

In situations like this, something as crucial as securing an API endpoint was overlooked and in turn put drivers at risk of getting their vehicles hacked. With the proliferation of devices and the emergence of new device types across industries, it is far too easy for businesses to overlook API security when implementing IoT devices, but the reality is that an improperly secured API can cause major headaches and can introduce serious vulnerabilities into the products. With the growth of connected devices, device interaction through APIs is one of the primary use cases for APIs — and one that will grow exponentially over the coming years according to IBM. Further, 44% of API providers in 2016 said IoT would drive the most API growth in the next two years. With the continued rise of IoT devices in use, business leaders must make secure API management a central part of their security strategies. Otherwise, they risk the safety and security of their organization’s connected devices and customers’ data.
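
To make the lesson concrete, here is a generic sketch of the control the Leaf endpoint lacked — authenticating the caller rather than trusting a guessable VIN. This is an illustrative Flask handler with invented routes and credentials, not Nissan’s actual API:

```python
# Require a per-owner bearer token; knowing the VIN alone proves nothing.
import hmac
from flask import Flask, request, abort

app = Flask(__name__)

# In practice these would live in a credential store, not in code.
OWNER_TOKENS = {"1HGCM82633A004352": "example-owner-token"}

@app.route("/vehicle/<vin>/climate", methods=["POST"])
def set_climate(vin):
    supplied = request.headers.get("Authorization", "").removeprefix("Bearer ")
    expected = OWNER_TOKENS.get(vin)
    # Constant-time comparison to avoid timing side channels.
    if expected is None or not hmac.compare_digest(supplied, expected):
        abort(401)
    return {"status": "accepted"}
```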

This threat spans B2C industries and B2B enterprises as we move into a future where EVERYTHING will be connected. In this overly connected world, a unified approach to IoT device security is essential. But there are many disjointed or incomplete API management solutions in the market that are contributing to security issues with APIs and connected devices.

Yet, with all these risks and looming IoT security hurdles, there are several steps that organizations’ decision-makers can follow in order to maintain device security and device data security through APIs:

  • Integrate a full API lifecycle management tool. Thinking about APIs as part of a digital security strategy is a newer concept for some organizations (case in point, companies like Nissan), and for others it might not be. But too many companies overlook the very simple, critical first step to API security: managing the full API lifecycle. The API development process — from API design to creation to runtime to product management to API governance — must be approached in a holistic manner with a security mindset. Rather than letting each developer, department or solution create its own API governance and security strategy, corporate API security policies and best practices must be enforced, including strong authentication and authorization for access to connected devices.
  • Implement broad security policies. IoT software architectures, protocols and standards vary based on use cases and devices. Ensure the API management solution supports the required variety of architectures, from on-premises to cloud to hybrid, and treats IoT protocols as first-class citizens. IoT data in motion from disparate sets of devices must be protected via secured APIs.
  • Monitor for proper API version management. With the proliferation of IoT devices and different firmware versions, there is an inherent risk that multiple versions of APIs will proliferate. Best practice requires all IoT devices to be upgraded to the most recent firmware, and a single or highly limited number of API versions should be utilized. New versions or equivalent APIs with similar capabilities can lead to an explosion in the number of APIs and to API aging. Evaluating the available APIs and retiring old or duplicative versions needs to be implemented via an enterprise API audit process.

Having a strong hold on the full API lifecycle has many positive impacts on a business implementing IoT technologies. For example, customers aren’t vulnerable and their data is safe when using connected devices or business services. Or consider the ability to scale and improve device functionality based on how customers are using connected devices (something organizations sometimes overlook). Organizations that want to focus their efforts on the IoT market can’t afford to overlook the importance of API management and security, especially as IoT evolves toward greater autonomy.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


May 4, 2017  1:24 PM

Three reasons why edge architectures are critical for IIoT

Tony Paine Profile: Tony Paine
Bandwidth limits, Cloud Computing, Connectivity, Edge computing, IIoT, Industrial IoT, Internet of Things, IOT Network, iot security

One of the many promises of the industrial internet of things is that it can help companies generate massive amounts of data. However, this data is only valuable if it can be accessed and acted upon quickly, efficiently and safely. Effectively accessing data can be especially challenging when you have “things” — such as sensors, devices, flow computers and more — that live in remote areas of the network. Often referred to as the “edge,” these remote areas can host trillions of machines that contain important industrial data.

The edge could be remote tools in the field, machinery on a plant floor across the world, or any other asset that provides data in a location far from where data is acted upon. The data from these remote sites has the potential to generate valuable business, but is often too far away, too expensive or too insecure to transmit for time-critical operations. Paradoxically, the importance of processing edge data grows as that edge gets farther away and harder to access — such as on an offshore oil platform, where accessible data can help proactively address high-risk safety issues and high-cost maintenance concerns.

Edge computing devices can solve the challenge of making this data available in real time. Here are three reasons why edge computing should be a key component of your IIoT plan:

1. Edge computing is the next evolution of cloud computing

Companies often look to the IIoT to bridge the gap between information technology and operational technology. Many data-rich resources live in the cloud, but are not directly accessible between these two vital departments. Edge computing is the key to the proverbial data kingdom that exists within the cloud.

Edge computing pushes the intelligence, processing power and communication capabilities of a gateway or appliance directly into devices. These devices can then use edge computing capabilities to determine what data should be stored locally and what data should be sent to the cloud for further analysis. As the IIoT grows in capability and connectivity, there will be a move away from cloud computing and a move towards edge computing. Increasingly, edge devices will begin to handle their own processing and storage, while the cloud will morph into the strategic “brains” behind it all. These devices will send only the most important data to the cloud and the cloud will analyze and then share what it learned with all the devices.
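
Here is a sketch of that local-versus-cloud split on a single device; the storage path, thresholds and uplink are placeholders:

```python
# Keep every raw sample on the device; send only anomalies to the cloud.
import json
import sqlite3
import time

db = sqlite3.connect("readings.db")
db.execute("CREATE TABLE IF NOT EXISTS raw (ts REAL, value REAL)")

def publish_to_cloud(event):
    print("would uplink:", json.dumps(event))   # stand-in for a real transport

def handle_sample(value, baseline=20.0, tolerance=5.0):
    ts = time.time()
    db.execute("INSERT INTO raw VALUES (?, ?)", (ts, value))  # always stored locally
    db.commit()
    if abs(value - baseline) > tolerance:       # only the interesting data leaves
        publish_to_cloud({"ts": ts, "value": value, "kind": "anomaly"})
```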

2. Edge computing alleviates network bandwidth limitations and cuts costs

Transmitting large data sets over a wide area network has a high financial cost. The common solution is to store the same data twice — locally and at the enterprise data center. But this often requires expensive new levels of bandwidth, resulting in service degradation, data latency and security concerns — all of which add further cost.

Edge computing eliminates the need for costly bandwidth additions. Low-cost edge gateways can keep computing and data storage on the edge and host localized, task-specific actions to analyze edge data in near real time. This means much less data needs to be transmitted back to the core server — where enterprise-level applications reside — saving on bandwidth requirements.

3. Edge computing addresses security concerns

As discussed in my previous TechTarget post, data in transit is data at risk. Sensors and things on the edge can be especially vulnerable to security threats. Because industrial equipment is typically designed to last for decades, the majority of connected devices today — and for the foreseeable future — will be legacy equipment already operating in the plant or field. Many of the industrial communication protocols this equipment uses are not secure by today’s standards; some were specifically stripped down and designed for low-bandwidth networks back in a time of simpler security threats.

Gateways designed to work with the edge can alleviate security concerns by keeping sensitive data within a local network and analyzing it within a secure system. Edge computing also accelerates awareness and response to possible security threats by eliminating a round-trip to the cloud for analysis. And instead of using unsecure legacy device protocols for communications across a wide area network, companies with edge computing can use more modern communication techniques — such as MQTT, OPC UA, AMQP and CoAP — designed for secure and efficient network communications.
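
As one concrete example of such a modern technique, a gateway might publish its filtered summaries over TLS-secured MQTT. This is a sketch using the paho-mqtt client; the broker, credentials, CA file and topic are illustrative:

```python
# Publish a gateway summary over MQTT with TLS instead of a plaintext
# legacy protocol.
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.tls_set(ca_certs="plant-ca.pem")                  # verify the broker's cert
client.username_pw_set("gateway-17", "example-password") # or use client certificates
client.connect("broker.plant.example", 8883)             # 8883 = MQTT over TLS
client.loop_start()
info = client.publish("site/plant-3/line-2/summary", '{"mean": 7.9}', qos=1)
info.wait_for_publish()                                  # block until delivered
client.loop_stop()
client.disconnect()
```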

Getting an “edge” on connectivity

Edge computing devices can improve safety, efficiency and productivity for companies looking to seamlessly analyze and act on data from the edge of their network. Collecting and processing data closer to where it is produced can help optimize IIoT initiatives to realize unforeseen benefits. As the industry moves more towards enterprise-wide connectivity, edge computing capabilities are key for success in creating a technology architecture built towards the future.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


May 4, 2017  11:28 AM

Why are IoT developers confused by MQTT and CoAP?

Jonathan Fries Profile: Jonathan Fries
CoAP, Enterprise IoT, Internet of Things, iot, IOT Network, MQTT, Protocols, wireless communication

Recently, at Exadel, we encountered an interesting challenge for IoT developers. Because IoT apps have gained so much momentum, there is more and more choice in how to develop them. For device communication, two specialized, competing protocols stand out: Message Queue Telemetry Transport (MQTT) and Constrained Application Protocol (CoAP). They’re both designed to be lightweight and to make careful use of scarce network resources. Both have uses, in the correct setting, but the problem is that, due to the relative infancy of IoT development, people don’t know exactly what these protocols are or when to use them.

These are not standard web protocols that everyone uses.

In light of our own internal conversations, I decided I’d like to help demystify these a bit. First, let’s look at what these protocols actually are.

What is MQTT?

To the layperson, MQTT is a lot like Twitter. It is a “publish and subscribe” protocol. You can subscribe to some topics and publish on others. You’ll receive messages on topics you subscribe to, and those who subscribe to the topics you publish on will receive those messages. There are differences, of course. For instance, you can configure the protocol to be more reliable by guaranteeing delivery. The publish/subscribe system utilizes a broker, which, to further draw out the analogy, would be the Twitter platform itself — filtering the messages based on your subscription preferences.
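
A minimal sketch of that publish/subscribe flow using the paho-mqtt client; the topic scheme is invented, and test.mosquitto.org is a public test broker:

```python
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    client.subscribe("home/+/temperature", qos=1)  # qos=1 asks for guaranteed delivery
    client.publish("home/kitchen/temperature", "21.5", qos=1)

def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload.decode()}")  # we receive our own message back

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("test.mosquitto.org", 1883)
client.loop_forever()
```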

What is CoAP?

CoAP is more like going to a traditional website-based business, like Amazon. You request resources (pages and search results in the Amazon example) and occasionally also submit your own data (make a purchase). CoAP was designed to look like and be compatible with HTTP, which powers most of the internet as we currently know it. CoAP can either utilize proxy servers and be translated into HTTP, or communicate directly with a special server designed to use CoAP, depending on the environment constraints.
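
And the CoAP equivalent of a simple resource request, sketched with the aiocoap library; the device URI is a placeholder:

```python
import asyncio
from aiocoap import Context, Message, GET

async def main():
    ctx = await Context.create_client_context()
    request = Message(code=GET, uri="coap://device.local/sensors/temperature")
    response = await ctx.request(request).response   # request/response, HTTP-style
    print(response.code, response.payload.decode())

asyncio.run(main())
```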

When do you use them?

The question you’re probably all asking is, “if they’re so similar, when should I use one versus the other?”

MQTT is ideal for communicating between devices on a wide area network (WAN, internet) because of the publish/subscribe architecture with the broker in the middle. It is most useful in situations where bandwidth is limited, such as remote field sites or other areas lacking a robust network. MQTT is a part of Azure and Amazon service offerings, so it has a lot of established architecture, making it easily adapted for current developers.

In the case of CoAP, the strongest use case is its compatibility with HTTP. If you have an existing system that is web service-based, then adding in CoAP is a good option. It is built on the User Datagram Protocol (UDP), which can be useful in some resource-constrained environments. Because UDP allows broadcast and multicast, you can potentially transmit to multiple hosts using less bandwidth. This makes it good for local network environments where devices need to speak with each other quickly, as is traditional in some M2M settings.

If an IoT developer is working with a device that is going to leverage an existing web server architecture, the developer will use CoAP. But if the developer is building something where a device is really “report only” — that is, it is dropped on the network and just needs to report data back to a server — CoAP will be better for that. Other uses, such as cloud architecture, will probably best be done with MQTT.

The future of MQTT and CoAP

Over time, for other protocols, usage or industry adoption has tended to migrate toward the more free and inclusive platform, unless the non-inclusive one is much better. Both MQTT and CoAP are open standards which anyone can implement. CoAP was started by a standards body, as opposed to MQTT, which was originally designed by private companies, including IBM. CoAP has been designed to handle resource-constrained environments, and it may be that it becomes the winner, but for the time being MQTT seems to be in the lead. There is significant momentum behind MQTT — the big cloud players have picked it, or at least picked it initially. Additionally, many commercial use cases need the features of MQTT (store and forward, centralized host). However, one possibility is that some software development that has standardized around HTTP (mobile app development, for instance) could start to leverage CoAP both for working with peripherals and for communicating with the back end to help reduce bandwidth on bad connections.

Ultimately, these protocols can be effectively deployed in different applications throughout the internet of things. We know that there are specific use cases in which each is best served, but we also know that IoT and IoT devices will continue to grow in complexity and ubiquity. For developers, understanding the key differences in application can not only enable a better initial deployment, but also build a strong foundation onto which future development can be executed.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


May 3, 2017  3:10 PM

Open data, the underpinning of the internet of things

Brian Zanghi Profile: Brian Zanghi
City, Data Management, Internet of Things, iot, IoT data, Open data, smart city

Although the concept of smart cities is relatively new, it has jumped into the forefront of the conversation about future urban environments. Last year, the United Nations predicted that two-thirds of the world’s population will live in a city by 2030. With this growth, and with innovation across all sectors constantly expanding, it is critical that cities stay attuned to society’s evolving needs. It’s no longer common in many places to carry a paper map or even pick up a newspaper that was left at the bottom of the driveway. Instead, urban residents expect to be linked to their cities and fellow citizens through convenient applications, technological innovation and the connectivity of IoT in order to accomplish their daily routines.

Promoting innovation and idea development is key to becoming a smart city, but it needs to begin with opening up data for public access. For example, opening up current bus location data would allow developers to create apps that alert users when buses are nearby. With an open data policy, cities are able to join the smart city movement that integrates technology and information into the heart of urban development.
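
For a sense of what opening up bus location data enables, here is a sketch of such an alert app. The feed URL and JSON layout are hypothetical stand-ins for a real open-data feed (many cities publish GTFS-realtime):

```python
# Poll an open-data feed for live vehicle positions; alert when one is near.
import math
import time
import requests

STOP = (42.3601, -71.0589)      # the rider's stop (lat, lon)
ALERT_METERS = 500

def distance_m(a, b):
    """Equirectangular approximation; accurate enough at city scales."""
    dx = math.radians(b[1] - a[1]) * math.cos(math.radians(a[0]))
    dy = math.radians(b[0] - a[0])
    return 6371000 * math.hypot(dx, dy)

while True:
    buses = requests.get("https://opendata.example.gov/buses/route/39",
                         timeout=10).json()["vehicles"]
    for bus in buses:
        if distance_m(STOP, (bus["lat"], bus["lon"])) < ALERT_METERS:
            print(f"Bus {bus['id']} is approaching your stop")
    time.sleep(30)
```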

This data-first approach has been the defining factor driving cities to become smarter and more innovative environments. According to the Sunlight Foundation, the five largest American cities — Chicago, New York City, Los Angeles, Houston and Philadelphia — allow public access to data and have only continued to grow as exemplary smart cities.

New York City achieves this by encouraging organizations to innovate new ideas, ultimately contributing to the city’s evolution into a smart city. Projects like the Displacement Alert Project by the Association for Neighborhood and Housing Development use open data to create a web visualization of neighborhood and residential building conditions, to increase awareness about the affordable housing crisis and to locate areas of severe displacement pressure. As shown by this app, open data provides New York with the capability to address issues like threats against the well-being of its residents and helps to simplify solutions, furthering the smart city effort.

Hoping to capitalize on the benefits of open data in a similar way, Boston used universal access to data to develop the BOS:311 app, which allows residents to report non-emergencies to the Constituent Service Center, which then dispatches the appropriate agencies to the issue.

As New York and Boston both prove, connecting citizens to a smart city requires access across all sectors — and can only be accomplished with a data-first approach.

The internet of things further enables open data initiatives by providing granular and real-time data for innovations like air quality sensors, public transit location devices and disaster warning signals. Bridging the gap between the people and the city with comprehensive data allows for better monitoring of the behaviors and needs of the city’s citizens, and permits solutions that improve urban conditions and alleviate inconveniences.

Boston’s beta test of a new data portal that will trial a more user-friendly display of available data proves that making data comprehensive and easy to interpret is essential. In addition to creating a portal, cities also need to encourage agencies to leverage and share data. This involves ensuring data is available in a universally understood format, as Boston hopes to do with its new overhauled data system.

Promoting innovation is only made easier with usable data. For example, in 2013 New York’s Department of Transportation rolled out a new mode of transit with Citi Bike, a bike sharing system, just after New York joined the open data movement. Since then, several private sector companies have been using New York’s open data, hoping to develop an idea that would improve the Citi Bike model. Opening data on popular bike commutes has underscored the gaps in the Citi Bike system and is allowing innovators to fill those gaps. Companies like Spin and Mobike are weighing in with their own solutions and ideas for bike sharing, such as eliminating docking stations for an even easier commute.

Recently, the Department of Transportation emphasized the importance of urban technology by organizing the Smart City Challenge, in which cities were asked to propose plans that combined innovation and connectivity to win the funding necessary to execute them. The winner, Columbus, Ohio, took on a large project proposing a new transportation system that included an autonomous shuttle system, a universal app for all transit modes and a data analytics plan. Giving the city’s tech scene a confidence boost, Columbus hopes that the grant will encourage businesses to innovate and contribute to the development of technology that solves some imperfections in Columbus’ urban environment, as illustrated through the public data.

In order to accommodate their citizens, cities must provide a technological ecosystem by capitalizing on the capabilities of IoT and innovation. Opening data up to the public promotes development in technology and problem solving by the public and private sectors, because it fosters idea development based on measurable problems in society. This will allow the process of becoming a smart city to take place organically and seamlessly.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


May 3, 2017  1:29 PM

Revisiting the ‘internet of nothings’ — where we are today

Geoff Webb Profile: Geoff Webb
Internet of Things, iot, iot security, privacy, Protocols

Picking a fight with someone else’s past predictions seems like a crappy thing to do. Especially for someone who also makes lots of forward-looking statements that could come back to haunt him. Nevertheless, that’s what I want to do.

Back in 2014, The Economist published an article called “The internet of nothings.” In it, the writer took on what they felt were the breathless and overblown predictions on the impact of IoT. And much of what they offered up as evidence was exactly right.

For example, the article pointed out the lack of standards and security for moving data around:

“These unglamorous middleware issues of standards, interoperability, integration and data management — especially privacy and protection from malicious attack, along with product liability, intellectual-property rights and regulatory compliance — are going to take years to resolve.”

Yet, while those points are correct, I differed with the conclusions at the time, and I still differ with them now.

“Only when they are will IoT have any chance of transforming society in a meaningful way. That day is a long way off.”

It’s hard to imagine one would argue today that IoT has had little effect on society to date. If nothing else, the explosion of smart devices hitting the consumer market reflects a growing awareness that “smart things” are going to be the new frontier of devices. The escalating battle for the home hub between Amazon and Google is a clear indicator that owning the heart of the IoT interface will put any vendor in a dominant position.

The (completely understandable) misconception about the impact of IoT on society is that we’ll be able to point to one thing, to one event, and say “there it is.” But we won’t. The impact of IoT will be like a potter, gradually shaping clay, not like a hammer hitting a vase. Society will conform slowly to the opportunities and pressures of IoT, and manufacturers from home automation, medical device, automotive, and sports and leisure, in addition to nearly every other stripe of enterprise, are starting to try to own and mold that new shape.

While the potential use cases for smart devices exist, it seems we’re well into the “years to resolve” with IoT middleware issues. I’d love to live in a world where a lack of standards for security and insufficient capability to defend against attack somehow hold back the adoption of technology. But I don’t — neither do you — and no other technological development has made this fact quite as clear as IoT has.

Let’s be honest here — poor security and privacy controls haven’t slowed us down in the past, and I really can’t find it in my perennially optimistic heart to think that they will now. The pressure to IoT-enable stuff, all kinds of stuff, is just going to get more and more urgent. Let’s reference the adoption of cloud delivery to help illustrate how this compares to IoT. If you attended a technology trade show five or six years ago, you’d be intimately familiar with the phenomenon of “cloud washing” — in which every conceivable product or service became attached to the concept of cloud delivery.

IoT-washing is going to make cloud-washing look like a quick spritz with water. The competitive pressure to take any number of normal objects and attach sensors to them is going to drive all kinds of odd product launches. Simply being able to claim that a device is “smart” enables companies to establish a differentiator, regardless of the actual value of that capability. As a society, we are so comfortable with the expectation that “tech” equates to “improved” that it’s not a hard sell to make people believe that a smart toaster is better than a dumb one, even if the resulting toast is no better. Making your product “smart” is going to define the new Wild West for all kinds of markets. Unsurprisingly, we’re not waiting for products to hit the market. Instead we are facing a potential deluge of every bizarrely connected product imaginable. And these things will have an impact on the way we think about and interact with technology.

And, as we’ve already seen, flooding the market with millions of poorly secured connected devices will have significant impact — just not the kind we want.

This is where I find myself agreeing with The Economist article’s point about the lack of IoT standards and security. In the past, we’ve seen security improvements driven either by pressure from the public when a breach occurs or by legislative pressure to meet compliance and regulatory standards. It’s hard to know what will help with IoT. It’s so diverse, covering so many markets, products and capabilities. In individual cases we can help protect the infrastructure itself (for example, by enforcing standards on something like smart grid technology), but the scale and complexity of IoT make taking a holistic approach daunting in the extreme. It’s simply too easy to launch a connected product, and then the rest of society foots the bill for poor security and privacy controls. I wish there was an easy answer. (Actually, at this point, I’d settle for a difficult answer, if it was feasible.)

The internet of things is very, very real. And the impact will be immense. We won’t connect to the internet with a terminal, a laptop or even a smartphone. We’ll live inside it. IoT is already forming around us, and is growing at an incredible, unpredictable and uncontrolled rate. No one is in charge, of course, and no one has a plan beyond individual products. It’s just happening and happening much, much faster than we could have ever expected.

And if there is a “nothing” in IoT — it’s likely to be quaint concepts like “security” and “privacy.”

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


