IoT Agenda


November 16, 2017  2:34 PM

The rise of general-purpose over single-purpose IoT

Tom Maher Profile: Tom Maher
Enterprise IoT, IIoT, Industrial IoT, Infrastructure, Internet of Things, iot, IoT platform, IT, platforms

When we look at the industrial IoT landscape, much of the early industry attention has centered on single-purpose use cases. However, we’re now beginning to see the various components of the IoT ecosystem evolve to enable general-purpose use cases.

In reality, it’s the general-purpose use cases that underpin the potential for digital transformation, which is where the real long-term business value lies. If companies fail to adopt a general-purpose approach to IoT, they will fail to evolve the digital capability in their enterprise, which ultimately enables the innovation and business model optimization that digital transformation promises.

Single purpose for a single problem

Single-purpose (or at least limited-purpose) thinking yields highly vertically integrated platforms which solve very specific problems at scale. The purchasing decision in enterprises for such technologies has tended to be mainly the domain of the operational technology (OT) folks. Many segments start out single purpose for obvious reasons: there’s a very specific return on investment argument (i.e., “if we can do this, it’s worth that”), and being specific reduces the scope to something achievable.

However, over time, enterprise IoT architects will encounter multiple single- or limited-purpose use cases and will need to assemble a robust architecture that addresses use cases where the specifics aren’t known, i.e., general purpose or programmable. Over time, the general-purpose architecture becomes more attractive and the “legacy” single-purpose use cases get ported to it. When this happens, there is also a natural handing over of responsibility to the IT teams within the enterprise. Increasingly, we are witnessing this shift of responsibility for IoT projects from OT to IT.

Keep it Software, Stupid

We can think of single purpose and general purpose in terms of Hardware and Software. Almost everything now is driven by software (small “s”); however, not everything is designed to be programmed for the applications we haven’t thought of yet. Software (big “S”) as an approach or mindset comes with a philosophy of being able to create the next thing or improve existing things over time. When software is “frozen,” it fundamentally loses that essential attribute and is hardware-ified (aka firmware).

With our Software mindset, we can therefore distinguish between what is programmable and what “was programmed” but is no longer essentially “Software.”

The majority of today’s end-to-end IoT platforms have a vertical focus and, as a result, are single or limited purpose. While they’re made mostly of software (programmed), they aren’t general purpose (programmable).

General-purpose IoT: Solving problems today and tomorrow

General-purpose industrial IoT is about using the advantages of Software over Hardware, i.e., programmability. With general-purpose industrial IoT, a business can invest not in a specific purpose, but instead in the ability to create and evolve a digital capability which enables the digital transformation that is unique to their business.

The key enablers of general-purpose industrial IoT are the general-purpose cloud platforms, like AWS IoT and Azure IoT and all the infrastructure behind that, including analytics, machine learning and so forth. These “cloud” elements are already in place and are proven as the “general components” of a range of specific use cases.

Other key enablers are sensors and the myriad of local area networking technologies (wired and wireless, powered and low-power, etc.), coupled with intelligent IoT gateways that have at least the capability to fulfill the promise of “Software”.

What’s needed now is a general-purpose edge network. This edge network, to be general purpose, needs to be both programmable (software defined) and connected to the general-purpose cloud. It goes without saying that it needs to also be secure and trustworthy. Most importantly, it must be delivered as a service.

IoT needs a software-defined edge network

A general-purpose edge network needs a software-defined access network securely connecting the physically remote assets to whatever compute resources are determined by the application.

In the same way that the cloud needed a software-defined network to underpin the connectivity requirement of the software which is its essence, general-purpose industrial IoT needs a software-defined access network. The edge network needs to be virtual in the sense that the edge network figures out how to overlay the software-defined topology onto the underlying physical network pieces. The edge network needs to be intelligent, which is to say, it can just work given the constraints.

When the access network is designed or purchased for a specific use case, the imposed constraints, for example, access technology or access costs (MB tariff plans), remove the ability to execute general-purpose use cases over the network.

General-purpose industrial IoT includes the ability to deploy new software, either new use cases or improving existing use cases.

For CIOs, this means looking beyond single purpose and instead investing in creating the platform for continuous digital transformation. Deploy intelligent gateways, for example, Dell Edge, which is a PC in a box, rather than “dumb” or “appliance” gateways. Assume the access network (first mile) is heterogeneous. Purchase data tariffs which are in line with general purpose, i.e., you don’t know how much data you’ll use, but over time it’s likely to increase, and factor in not just the application data usage, but also the usage to continuously deploy new software.

For enterprises to successfully embrace digital transformation, they will need to consider how they can decouple the access network from the cloud and enable the virtual intelligent edge. They will need to look at connectivity management platforms that support secure, intelligent over-the-top, edge-to-cloud and cloud-to-edge application connectivity. Ideally, these intelligent network connectivity services will be truly virtual, be available on demand and scale with the business’s requirements.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

November 16, 2017  1:04 PM

View from the edge: The action is out on the edge

Dave Laurello Profile: Dave Laurello
Edge computing, IIoT, Industrial IoT, Internet of Things, iot, IoT data, IoT devices, IOT Network, Predictive maintenance

In all the excitement over IIoT, the domain of computing activity that may do most to translate IIoT technology into lasting business value has been somewhat overlooked. Yes, we’re talking about edge computing.

Of course, computing on the edge — technology infrastructure that’s located on or near production operations for data collection, data analysis and data storage — has been going on for decades. Processes like keeping an assembly line running smoothly, delivering clean water continuously and making trains run on time have long depended on edge data being gathered efficiently, with only limited connectivity to data centers. But from a computing standpoint, the edge has often been seen as something of a sleepy backwater.

All that has changed recently thanks to secular (industry-agnostic) trends that have driven dramatically more investment in computing infrastructure at the edge, followed by increased reliance on edge-gathered data for cutting-edge applications. These trends include the criticality of data to business success; the demand for real-time analysis of data in order to make better business decisions; and the increasing interconnection of “things” of all kinds in order to gather ever more and higher-quality data.

As a result, analysts estimate that 5.6 billion IoT devices owned by enterprises and governments will utilize edge computing for data collection and processing in 2020 — up from 1.6 billion in 2017. And by 2019, 40% of all IoT-collected data is expected to be stored, processed, analyzed and acted upon close to or at the edge of the network.

Real benefits, real opportunities

These trends present the opportunity to reap significant benefits for those organizations that can take advantage of them.

Consider the case of a manufacturer looking to improve decision-making and overall productivity. Most manufacturers are already operating at the edge. While their plant operations may be centralized, the data gathered by unmanned machinery or unattended workstations may be only minimally connected to their data centers and business networks. As a result, the time it takes to gather, process and analyze data on machine performance makes it difficult to identify problems, diagnose them and respond to them promptly.

With today’s edge computing infrastructure, in contrast, manufacturers can now automate the collection of large volumes of machine data (available from IoT sensors), compare it to their own historic performance or industry-wide standards, and derive usable analysis right on the shop floor. This approach drives predictive maintenance to maximize machine uptime, streamline production processes and reduce costs.
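The shop-floor pattern described above, collecting machine data locally and comparing it to historic behavior, can be sketched in a few lines. This is an illustrative toy, not anything from a real deployment: the class name, window size and 3-sigma threshold are my own choices.

```python
from collections import deque

class EdgeAnomalyDetector:
    """Keeps a rolling window of sensor readings on the gateway and flags
    readings that drift far from the recent baseline."""

    def __init__(self, window=50, threshold=3.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold  # allowed standard deviations from the mean

    def observe(self, value):
        """Record a reading; return True if it is anomalous vs. the window."""
        if len(self.window) >= 10:  # need a minimal baseline first
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = var ** 0.5 or 1e-9
            anomalous = abs(value - mean) / std > self.threshold
        else:
            anomalous = False
        self.window.append(value)
        return anomalous

detector = EdgeAnomalyDetector()
readings = [10.0 + 0.1 * (i % 5) for i in range(40)]  # healthy vibration levels
alerts = [detector.observe(r) for r in readings]
spike = detector.observe(25.0)  # sudden change in machine behaviour
```

A real gateway would feed this from IoT sensor streams and forward only the alerts, not the raw readings, upstream.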

Keeping pace with the evolving edge

Meanwhile, still other business and technology trends are reshaping how computing happens at the edge and driving the need for a new edge infrastructure. Key requirements include:

  • To meet the demand for real-time analysis and support more rigorous decision-making, much more computing power at the edge is a necessity.
  • To cope with the exponential growth of data, smarter networking and data-storage processes are a must.
  • To guard against all the new intrusion points and attack vectors that are created by IIoT interconnection, highly secure connected-edge environments will be required.

IT and operations technology professionals — both of whose domains include some responsibility for computing on the edge — will have their hands full, evaluating a variety of “next-generation” technologies that are proposed as the edge infrastructure of the near future. Among these are micro-data centers, ruggedized edge servers, edge gateways and edge analytic devices. One useful screen to help triage them may be to ask: Can they — under the harshest of conditions, and often with life-or-death consequences — “self-manage”?



November 16, 2017  12:16 PM

The internet of things, database systems and data distribution, part two

Steven Graves Profile: Steven Graves
architecture, Data Management, Database, Database Systems, Databases, DDS, Distributed computing, Edge computing, Internet of Things, iot, IoT data

In part one of this two-part series, I covered where, in the internet of things, data needs to be collected: on edge devices, gateways and servers in public or private clouds. And I discussed the characteristics of these systems, as well as the implications for choosing appropriate database management system technology.

In this installment, I’ll talk about legacy data distribution patterns and how the internet of things introduces a new pattern that you, or your database system vendor of choice, need to accommodate.

Historically, there are some common patterns of data distribution in the context of database systems: high availability, database clusters and sharding.

I touched on sharding in part one. It is the distribution of a logical database’s contents across two or more physical databases. Logically, it is still one database, and it is the responsibility of the database system to maintain the integrity and consistency of the logical database as a unit. Exactly how the data is distributed varies widely between database systems. Some systems delegate (or at least allow delegation of) the responsibility to the application (for example, “put this data on shard three”). Other systems are at the other end of the spectrum, deploying intelligent agents that monitor how the data is queried and by which clients, moving data between shards to colocate data that is queried together and/or to move data to a shard closer to the client(s) most frequently using it. The database system should isolate applications from the physical implementation. See Figure 2.
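As a toy illustration of the routing just described, here is a minimal hash-based sharding sketch. The class and method names are invented for this example; real systems add rebalancing, replication and query planning on top.

```python
import hashlib

class ShardedStore:
    """Routes each key to one of N physical shards by hashing, so callers
    see a single logical database while data is spread across shards."""

    def __init__(self, num_shards=3):
        self.shards = [{} for _ in range(num_shards)]

    def _shard_for(self, key):
        # Stable hash so the same key always lands on the same shard
        digest = hashlib.sha256(key.encode()).hexdigest()
        return int(digest, 16) % len(self.shards)

    def put(self, key, value):
        self.shards[self._shard_for(key)][key] = value

    def get(self, key):
        return self.shards[self._shard_for(key)].get(key)

store = ShardedStore()
for device in ("sensor-1", "sensor-2", "sensor-3", "sensor-4"):
    store.put(device, {"status": "ok"})
```

The caller never names a shard; the `_shard_for` step is the piece a database system hides behind the logical database.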

The purpose of high availability is as its name implies: to create redundancy and provide resilience against the loss of a system that stores a database. In a high availability system, there is a master/primary database and one or more replica/standby databases to which the overall system can fail over in the event of a failure of the master system. A database system providing high availability as a service needs to replicate changes (for example, insert, update and delete operations) from the master to the replica(s). In the event of the master’s failure, a means is provided to promote a replica to master. The specific mechanisms by which these features are carried out are different for every database system. See Figures 3a and 3b.
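The master-to-replica replication and promotion mechanics described here can be sketched as follows. This is a deliberately simplified, synchronous, in-memory model; real systems ship change logs asynchronously and must handle partial failures.

```python
class HADatabase:
    """Master applies each write locally, then replicates it to every
    standby; on master failure a standby is promoted."""

    def __init__(self, num_replicas=2):
        self.master = {}
        self.replicas = [{} for _ in range(num_replicas)]

    def write(self, key, value):
        self.master[key] = value           # commit on the master first
        for replica in self.replicas:      # then ship the change downstream
            replica[key] = value

    def delete(self, key):
        self.master.pop(key, None)
        for replica in self.replicas:
            replica.pop(key, None)

    def promote(self):
        """Fail over: the first standby becomes the new master."""
        self.master = self.replicas.pop(0)

db = HADatabase()
db.write("pump-7", "running")
db.write("valve-2", "closed")
db.promote()  # simulate loss of the original master
```

Because every change was replicated before the failure, the promoted standby already holds the full database.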

The purpose of database clusters is to facilitate distributed computing and/or scalability. Clusters don’t use the concept of master and standby; each instance of a database system in a cluster is a peer to every other instance and works cooperatively on the content of the database. There are two main architectures of database system clusters. See Figures 4a and 4b.


As illustrated in Figure 4a, each database system instance in a cluster maintains a copy of the entire database (unlike sharding, where each shard contains only a fraction of the logical database). Local reads are extremely fast. Insert, update and delete operations must be replicated to every other node in the cluster, which is a drag on performance, but overall system performance (i.e., the performance of the cluster in the aggregate) is better because there are N nodes executing those operations. Nevertheless, this architecture is best suited to read-heavy usage patterns rather than write-heavy ones.
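The full-copy cluster behavior of Figure 4a, local reads with writes fanned out to every peer, might look like this in miniature. The names are illustrative; a real cluster replicates over the network and coordinates consistency.

```python
class ClusterNode:
    """Each peer holds a full copy of the database: reads are served
    locally, writes are broadcast to every node in the cluster."""

    def __init__(self):
        self.data = {}
        self.peers = []  # every other node in the cluster

    def read(self, key):
        return self.data.get(key)  # no network hop needed

    def write(self, key, value):
        self.data[key] = value
        for peer in self.peers:    # replicate to all peers
            peer.data[key] = value

nodes = [ClusterNode() for _ in range(3)]
for node in nodes:
    node.peers = [p for p in nodes if p is not node]

nodes[0].write("line-speed", 42)  # a write lands on one node...
```

The fan-out loop is exactly the write-amplification cost the article notes, which is why this layout favors read-heavy workloads.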

High availability is sometimes combined with sharding, wherein each shard has a master and a standby database and database system. Because all of the shards represent a single logical database, the failure of a node hosting a shard would make the entire logical database unavailable. Adding high availability to a sharded database improves availability of the logical database, insulating it from the failure of nodes.

Replication of databases on edge devices adds a new wrinkle to data distribution. Recall that with high availability (and, depending on the architecture, with clusters), the contents of the entire database are replicated between nodes and each database is a mirror image of the other. However, IoT cloud server database systems need to receive replicated data from multiple edge devices. A single logical cloud database needs to hold the content of multiple edge device databases. In other words, the cloud database server’s database is not a mirror image of any one edge device database; it is an aggregation of many. See Figure 5.

Further, for replication in a high availability context, the sending node is always the master and the receiving node is always the replica. In the IoT context, the receiver is most definitely not a replica of the edge device.
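The aggregation relationship, many edge databases feeding one cloud database that mirrors none of them, can be sketched like this. It is an invented toy model in which each incoming row is tagged with its originating device id.

```python
class CloudAggregator:
    """The cloud database is not a mirror of any one edge device: rows
    arriving from each device are tagged with the device id and merged
    into a single aggregate table."""

    def __init__(self):
        self.rows = []

    def receive(self, device_id, records):
        """Accept a batch shipped from one edge device's local database."""
        for record in records:
            self.rows.append({"device": device_id, **record})

cloud = CloudAggregator()
cloud.receive("edge-A", [{"temp": 21.5}, {"temp": 21.7}])
cloud.receive("edge-B", [{"temp": 19.9}])
```

No single device could reconstruct the cloud table from its own database, which is precisely why the master/replica model doesn't fit here.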

Also, for replication in a cluster environment, as depicted in Figure 4a, the database system must ensure the consistency of every database in the cluster. This implies a two-phase commit and synchronous replication. In other words, a guarantee that a transaction succeeds on every node in the cluster or on none of the nodes in the cluster. However, synchronous replication is neither desirable nor necessary for replication of data from edge to cloud.

So, the relationship between the sender and receiver of replicated data in an IoT system is different from the relationship between primary and standby, and from that between node peers in a database cluster; the database system must support this unique relationship.

In part one of this series, I led with the assertion: “If you’re going to collect data, you need to collect it somewhere.” But, it’s not always necessary to actually collect data at the edge. Sometimes, an edge device is just a producer of data and there is no requirement for local storage of the data, so a database is unnecessary. In this case, you can consider another alternative for moving data around: data distribution service (DDS). This is a standard defined by the Object Management Group to “enable scalable, real-time, dependable, high-performance and interoperable data exchanges.” There are commercial and open source implementations of DDS available. In layman’s terms, DDS is publish-subscribe middleware that does the heavy lifting of transporting data from publishers (for example, an edge IoT device) to subscribers (for example, gateways and/or servers).
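The publish-subscribe pattern that DDS embodies can be illustrated with a minimal in-process bus. To be clear, this is not the DDS API, just a sketch of the idea: publishers hold no local storage and know nothing about their subscribers.

```python
from collections import defaultdict

class PubSubBus:
    """Minimal publish-subscribe middleware in the spirit of DDS: an edge
    device publishes samples to a topic, and any number of gateway or
    cloud subscribers receive each sample."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, sample):
        # Deliver the sample to every subscriber on the topic
        for callback in self.subscribers[topic]:
            callback(sample)

bus = PubSubBus()
gateway_log, cloud_log = [], []
bus.subscribe("machine/vibration", gateway_log.append)
bus.subscribe("machine/vibration", cloud_log.append)
bus.publish("machine/vibration", {"rms": 0.42})
```

Real DDS implementations add the heavy lifting this sketch omits: discovery, quality-of-service policies and network transport.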

DDS isn’t limited to use cases in which there is no local storage at an edge device. Another use case would be to replicate data between two dissimilar database systems: for example, between an embedded database used on edge devices and SAP HANA, or between a NoSQL database and a conventional relational database management system.

In conclusion, designing the architecture for an internet of things technology includes consideration of the characteristics and capabilities of the various hardware components and the implications for selecting appropriate database system technology. A designer also needs to decide where to collect data, where data needs to be processed (e.g., to provide command and control of an industrial environment or to conduct analytics to find actionable information and so on), and what data needs to be moved around and when. These considerations will, in turn, inform the choice of both database systems and replication/distribution solutions.



November 15, 2017  3:46 PM

Edge computing and AI: From theory to implementation

Ken Figueredo Profile: Ken Figueredo
ai, Artificial intelligence, Edge computing, Internet of Things, iot

The huge coverage devoted to the topics of AI and edge computing sparked an idea when I recently visited JFK Airport. My journey coincided with a severe weather storm that disrupted travel along the East Coast. This situation illustrates how customer service agents assist passengers (at the edge) when dealing with uncertainty and changing circumstances (relying on predictive analytics and intelligent decision-making under uncertainty).

In broad-brush terms, edge computing moves intensive computing for decision-making into the field (i.e., closer to the edge). It reduces the need to transfer copious amounts of data to a cloud-hosted application. It also reduces the impact of transmission delays and cost of data transport. An example might be a video processing unit that uses a CCTV feed to detect anomalies (for example, intruder sensing). Edge processing aims to extract features from a video stream and trigger an action locally (e.g., sound an alarm) and to communicate metadata to a cloud application.
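The CCTV example, extracting features locally, acting locally and shipping only metadata upstream, might be sketched as follows. The frame is reduced to a list of pixel brightness values and the threshold is arbitrary, purely for illustration.

```python
def process_frame(frame, baseline, threshold=30):
    """Edge-side sketch: compare a frame's mean brightness against a
    baseline; raise a local alarm and emit compact metadata instead of
    shipping the raw frame to the cloud."""
    mean = sum(frame) / len(frame)
    anomaly = abs(mean - baseline) > threshold
    metadata = {"mean_brightness": mean, "anomaly": anomaly}
    if anomaly:
        metadata["action"] = "local-alarm"  # trigger on the spot
    return metadata  # only this small record goes upstream

baseline = 100  # learned "normal" brightness for this camera
quiet = process_frame([98, 101, 99, 102], baseline)
intruder = process_frame([180, 190, 175, 185], baseline)
```

The payload sent to the cloud is a few bytes of metadata per frame rather than the video stream itself, which is the bandwidth saving the article describes.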

In simplified terms, AI covers a wide range of activities which involve the process of analyzing data to find patterns. Common techniques include deep and/or machine learning. AI also encompasses the application of rules, some of which may be multilayered depending on the complexity of individual situations, to trigger an action.

In the case of predictive maintenance, sensor data from a machine feeds a learning system in order to detect anomalies. A change in noise frequency, for example, might point to excessive wear or an absence of lubricant. After comparing these anomalies against “healthy” behavior as predicted by a reference model or digital twin, a rule-based system would alert the machine’s operator. It might automatically schedule a repair after checking on the availability of suitable facilities in a workshop where there are qualified technicians and available spare parts.

From theory to implementation practice

While all of this sounds fine in theory, let’s consider some of the real-world implementation issues in such a scenario. Coming back to JFK Airport, think about a situation where your flight has been cancelled or rescheduled. Focus on your experience with a customer service agent at an airline’s desk. You will be speaking to an airline employee who is processing your instructions to reschedule a flight for you. This involves a lot of real-time visual and auditory processing at the edge of a wider ticketing system.

In this scenario, there is only so much an agent can do. Yes, there is a lot of edge processing to interpret your annoyance and frustration. You might get an empathetic hearing, but there’s no guarantee that you will get home in any less time. This kind of edge processing is of limited benefit when operating conditions depend on external and uncontrollable factors. In reality, this is where service providers need real value delivered to justify their investment in edge processing devices.

In practice, the agent works within a constrained set of rules based on the type of booking you hold (e.g., premium, flexible, no-changes and so forth), your status and available airline capacity. Almost certainly, he is working with partial information. How often have you heard an agent say that the airline’s booking system (in the cloud) is not showing any explanatory flight delay information?

Reasoning and decision-making under uncertainty are core AI capabilities. They need edge and centralized systems to collaborate. Sometimes, the edge agent has a degree of autonomy to make rebooking decisions, and some agents may have more autonomy than others. However, these decisions cannot occur in isolation. It’s essential to update central capacity management systems so that the airline has a clear picture of its overall capacity to optimize load-balancing decisions. This is a dynamic process which may involve multiple parallel iterations as several agents deal with many irate travelers. That’s a lot of multi-edge to cloud communication.

What this example illustrates is that single-point technologies, like AI and edge computing, deliver value in relatively straightforward use cases. These involve deterministic situations where the consequences of a problem and prescribed course of action justify the cost of dedicated hardware and software.

Airports, buildings and cities, to list a few examples, represent more complicated operating environments. These typify the applications that are more likely to appear on solution providers’ agendas. Normal operations and problem situations for these use cases depend on collaboration and frequent communications across technical and commercial boundaries. Single-point technologies are one element in a larger system.

If you are a customer, beware of simple technology recommendations for complex application scenarios. If you are a seller, have a ready response to explain how your technology fits into the architecture of an overall system. And, everybody should spare a thought for the customer service agents who are masking a great deal of underlying complexity from you.



November 15, 2017  1:38 PM

New technology in sports: Stadiums of the future, today

Adam Young Profile: Adam Young
Internet of Things, iot, IOT Network, sports

Since opening in 2017, Mercedes-Benz Stadium in Atlanta has set the standard for what a stadium experience in the 21st century should look like. It is one of the most technologically advanced stadiums in the world, tricked out with a fully integrated Wi-Fi network and a revolutionary screen on the roof — but it’s not the only one making waves. Stadiums and arenas around the country are jumping on the future-tech bandwagon, offering dazzling, one-of-a-kind experiences to fans. Here are five more you should know about:

  1. City of Champions Stadium: Inglewood, Calif.
    When the new home for the Los Angeles Rams and Los Angeles Chargers is completed in 2020, it’s projected to be the world’s most expensive stadium. A whopping $2.6 billion later, the arena will be built with more than 36,000 perforated aluminum panels that are specially designed to work with the Southern California climate, creating the effect of being outside. It also aims to have a one-of-a-kind video system in the NFL. Nicknamed “Oculus,” an oval-shaped, two-sided board will hang from the roof of the facility, and it will be 50 feet tall and 120 yards long. This multipurpose complex will also feature a performing arts center, a hotel and even a lake.
  2. Golden 1 Center: Sacramento, Calif. 
    When this indoor arena in downtown Sacramento opened in 2016, it was touted as “the world’s most connected indoor sports and entertainment venue.” The free Wi-Fi connections at the arena are said to be 17,000 times faster than the average home network, allowing for instant uploading and sharing. It also includes the largest screens in the NBA, delivering a “4K Ultra HD” broadcast on massive 44-by-24-foot boards.
  3. David Beckham’s Major League Soccer Stadium: Miami, Fla.
    This South Florida super venue won’t have a parking lot. Yes, you read that right. This 25,000-seat Major League Soccer stadium is coming at a time when eco-friendly technology (and being a die-hard MLS fan) is more than just a trend in the U.S. The stadium is expected to open in 2021, and you can get there via Metromover, Metrorail, water taxi or maybe even a dinner-cruise boat down the Miami River.
  4. Raiders Stadium: Las Vegas, Nev.
    The Oakland Raiders have been approved for a new home on the Las Vegas strip, making it Sin City’s first NFL team. Its new stadium is a work in progress, and it’s attracting a lot of positive attention for bringing a futuristic feel to the strip. There will be a massive retractable wall of windows overlooking the city, as well as a giant video screen on one side of the stadium for tailgaters to get in on the fun. And keeping in theme with the town, fans will be able to legally place bets on their phone from the comfort of their seats.
  5. Levi’s Stadium: Santa Clara, Calif.
    Levi’s Stadium isn’t a new venue, but it’s taken steps to make a technological name for itself. The home of the San Francisco 49ers is in the heart of Silicon Valley — the mecca of the modern tech world — so it’s only appropriate that the team would have one of the most high-tech sports venues around. One of its biggest draws is allowing 70,000-plus fans to connect to Wi-Fi and 4G, giving them the chance to have an integrated and personalized in-stadium experience in the palm of their hands. Designers laid out over 400 miles of cabling to connect Wi-Fi routers and 1,200 antennas, amounting to 40 times more internet bandwidth capacity than any other known U.S. stadium. Spectators can download an app to watch replays during downtimes, guide them to the bathroom with the shortest line and even order food and drinks right to their seats.


While these venues are setting the stage for a new era in stadium technology and advancements, this is hardly a comprehensive list. New plans for multimillion-dollar building projects are appearing all the time. Be sure to keep an eager eye out for high-profile renovations of arenas looking to dive into the future.



November 15, 2017  11:21 AM

How IoT is affecting mobile app development

Ritesh Mehta Profile: Ritesh Mehta
Applications, apps, Internet of Things, iot, IoT applications, IoT devices, Mobile app, mobile app development, mobile apps

The internet of things is impacting mobile app development in a huge way. With the growing trend of connecting all kinds of physical devices to the web, controlling them through a smartphone is gaining fast momentum — and for a good reason.

The benefits of connected things are fairly obvious and plentiful. By connecting devices to a smartphone, one can have complete control of the features provided by the different gadgets and machines. Running devices through the internet means that applications can push notifications straight to the phone and allow parameters to be updated as well, thus making it possible to switch systems on and off remotely.

Mobile phones and mobile app programming provide scope to access IoT-enabled devices. Various sectors, such as healthcare, education, retail, travel and more, are using mobile connectivity and applications to access IoT ecosystems. There are multiple benefits of a mobile app approach:

  • Mobile apps boost the involvement of customers as they can easily use applications and can provide valuable opinions to help a business grow.
  • Creating mobile applications is very affordable; they reduce the costs of sending newsletters and SMSes. Applications can integrate communication directly to customers via messages.
  • Apps are a faster option than web browsing; accessing an application takes seconds, as customers do not have to wait for a site to load.
  • Mobile app development companies try their best to create mobile apps that are IoT-friendly, reshaping the world of mobile app development and related services.

In addition, there are many reasons why a business should opt for an IoT mobile app:

  • Customers today use their smartphones to acquire information on just about everything. Unlike traditional websites, mobile apps offer quality browsing and buying options, and they’re now considered more important for business growth than websites.
  • Companies can reach their target audience through a mobile app, spreading information to all customers in seconds. By using features such as push notifications, one can connect to customers and remind them of updates.
  • Mobile apps can promote the brand and win over customers. Furthermore, applications that persuade customers to buy products and services can earn a business a great deal of new custom.
  • Mobile apps prove to be excellent social platforms. This is because they come with features, such as likes, comments and so forth, which can boost social media presence.
  • The soaring demand for mobile apps is due to their flexible accessibility. Mobile apps can be accessed on a smartphone 24/7, from any part of the world.

The evolving IoT trend and mobile app development

The IoT trend is evolving with each passing day. However, it’s also true that the internet of things is a nascent industry and will take some time to prosper. Improvements in mobile and IoT apps can boost the connection between people all over the world. A lot of businesses are already running numerous devices online, adding another security layer as well as improving productivity and response times. Security is a key pivot point of any enterprise app, and IoT could help improve overall defense barriers by allowing physical devices to be the first entry point.

More than anything, the fast rise of IoT will drive an explosion in mobile application development. Without a doubt, IoT devices will soon be everywhere. Hardware spending by consumers and businesses on IoT devices is projected to total almost $3 trillion by 2020. All of this investment in IoT will drive the development of mobile applications. Among the most popular IoT devices are smartphones, which not only let people access apps and the internet, but also provide a huge amount of data that can be tapped by businesses all over the world.

In fact, IoT has already changed mobile app development. In another 10 years, expect hundreds of thousands of jobs in this field. Nonetheless, to achieve the ultimate goal of making people’s lives easier, developers must first undergo the pains of building infrastructure and platforms from the ground up. Developing a mobile application is similar to developing a web application and has its roots in more traditional software development. Nevertheless, there is one major difference: Mobile apps are often written specifically to take advantage of the unique features mobile devices offer.

Creating IoT-friendly mobile apps

The time is now for app development companies and businesses to come together to build mobile apps that are IoT-friendly. For example, an IoT-friendly mobile app could help build a mechanism in which information transmitted by objects through integrated sensors is received by the app in real time.

Mobile applications have revolutionized the mobile field, putting more power in the hands of customers and generating more business for organizations. In the dynamic technology field, the mobile application development market is at its peak. It is the need of the hour and the latest trend in business.

The future of mobile applications looks both challenging and bright with enticing and innovative possibilities. Remember that quality will always remain a constant and a major component in the development of mobile applications that are futuristic for both mass and niche markets.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


November 15, 2017  10:48 AM

Why manufacturers make insecure IoT devices and how you can protect them

Pedro Abreu Profile: Pedro Abreu
Compliance, Internet of Things, iot, IoT devices, IOT Network, iot security, manufacturers, Manufacturing, Network monitoring, Regulations, security in IOT, Threat detection, visibility

Forty-eight percent of U.S. companies with IoT devices on their network have been breached, according to a recent study.

All industries, including healthcare, retail and finance, rely on IoT for its efficiency and productivity benefits to beat out competitors. As a result, IoT manufacturers are working diligently to keep up with the supply and demand logistics of this increased adoption. Unfortunately, more often than not, these devices lack proper security, ultimately creating an opportunity for an adversary to hack through the device and infiltrate the broader network.

A recently discovered vulnerability dubbed Devil’s Ivy showcases exactly how security flaws impact IoT devices. The Devil’s Ivy vulnerability was found in a toolkit called gSOAP, which is a bundle of reusable code that software engineers or device manufacturers use so devices can talk to the internet, and was located deep within the communication system of Axis smart cameras. Researchers discovered that the gSOAP toolkit has been used by many big name manufacturers; there are currently one million devices using gSOAP that carry the Devil’s Ivy vulnerability.

So, what’s the solution? While it might seem simple — to stop using vulnerable development toolkits and create stronger security systems — unfortunately, manufacturers face many challenges when developing these devices. Let’s take a look.

Four reasons why IoT devices are insecure

Lack of experience
Manufacturing organizations are not in the business of cybersecurity. As such, they are unknowingly making it easier for cybercriminals to breach a network. We saw this first with the PC industry. PCs have been manufactured for over 25 years by engineers who are experienced in hardware and software development, and while they might attempt to build them with proper security, they have ultimately been unsuccessful. Now, businesses operate in digital and physical environments that continue to grow as new technologies, including IoT, are added to the network, and the complexity of those environments increases accordingly. IoT manufacturers thus face the same challenge PC manufacturers did: they might attempt to make their devices secure, but since this is not their area of expertise, they are failing to do so.

Margins
Organizations are financially motivated, and the manufacturing industry is no exception. While some businesses are able to scrape together funding to back a security division, most manufacturers don’t prioritize it and cannot finance the effort. Given the sheer number of IoT devices connected today (roughly 8.4 billion, according to Gartner) and the increasing demand, manufacturers are incentivized to bring devices to market as quickly and cost-effectively as possible. Security is therefore an afterthought, if it is thought of at all.

Keeping in line with compliances and regulations
One industry where regulations can hinder IoT security is healthcare. For example, the FDA requires continuous communication with manufacturers so they can be alerted when a new vulnerability is discovered. Then, the manufacturer must make an update and patch the device. However, this can be an incredibly slow process and may take up to 60 days, leaving the devices open to attack.

Innovation
Manufacturers are constantly trying to keep up with competition by producing new IoT devices to address growing interest. In turn, businesses are drawn to these new flashy devices to reap their benefits. The competition to develop the latest and greatest technology prevents manufacturers from slowing down to ensure that security is embedded properly from step number one. Building security from scratch takes time, and in the eyes of the manufacturers, slows them down from developing the next big thing.

How you can protect your IoT network

All this gloom and doom aside, there are some simple and efficient processes you can start to prevent IoT breaches in your enterprise environment. To begin, you’ll need visibility to see what devices are connecting to your network. Next is the ability to manage those devices — i.e., restrict access to a non-compliant device, block internet access, quarantine any device based upon anomalous behavior, and/or notify its owner of a security concern. Finally, you’ll want to implement a mitigation plan when malicious behavior is detected. Once those processes are in place, dedication via continuous and thorough monitoring will be the most effective way to keep your organization’s IoT devices, and entire networks, safe on an ongoing basis.
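The visibility, management and mitigation steps above can be sketched as a simple policy loop. This is a minimal illustration, not any specific network access control product’s API; the device fields, policy actions and sample inventory are all hypothetical.

```python
# Sketch of the visibility -> manage -> mitigate loop for IoT devices.
# All field names, actions and thresholds are hypothetical illustrations.

from dataclasses import dataclass, field

@dataclass
class Device:
    mac: str
    compliant: bool          # e.g., passed a posture/compliance check
    anomalous: bool = False  # e.g., flagged by behavior monitoring
    actions: list = field(default_factory=list)

def enforce_policy(devices):
    """Quarantine anomalous devices; restrict non-compliant ones."""
    for d in devices:
        if d.anomalous:
            d.actions += ["quarantine", "notify_owner"]
        elif not d.compliant:
            d.actions += ["restrict_access", "block_internet"]
    return devices

inventory = [
    Device(mac="aa:bb:cc:00:00:01", compliant=True),
    Device(mac="aa:bb:cc:00:00:02", compliant=False),
    Device(mac="aa:bb:cc:00:00:03", compliant=True, anomalous=True),
]
for d in enforce_policy(inventory):
    print(d.mac, d.actions or ["allow"])
```

In practice, this loop runs continuously against a live device inventory, which is what makes ongoing monitoring the final, and most important, step.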



November 14, 2017  4:25 PM

Winning the industrial AI game: Why labeled failure data, not algorithms, is key

Rick Harlow Profile: Rick Harlow
ai, Artificial intelligence, IIoT, iot, IoT analytics, IoT applications, IoT data

Artificial intelligence is slowly but steadily embedding itself into the core processes of multiple industries and changing the industrial landscape in so many ways — be it deep learning-powered autonomous cars or bot-powered medical diagnostic processes. The industrial and energy sectors are not immune to the disruption that comes with embracing AI. As upstream and downstream companies gear up for AI, there is one important lesson I want to share that might seem counterintuitive. For the successful execution of an AI project, the data matters more than the algorithm. Seems odd, right?

Let me start by sharing a recent experience. Flutura was working with a leading heavy equipment manufacturer based in Houston that has numerous industrial assets deployed on rigs globally. These rotary assets were quite densely instrumented; they have great digital fabric consisting of pressure sensors, flow meters, temperature sensors and rpm sensors all continuously streaming data to a centralized data lake. The problem the manufacturer was trying to solve was how to “see” typically unseen early warning signals of failure modes in order to reduce multimillion-dollar downtimes.

In order to do this, every time a piece of upstream equipment went down, we needed to label the reason why it went down. It might have been motor overheating, bearing failures or low lube oil pressure, but until we know the specific reason why equipment goes down, it’s difficult to extract the sequence of anomalies leading to the failure modes. While this company had a massive sensor data lake, running into terabytes, the information was useless until the failure labels were embedded within the assets’ timeline. In order to tag all “failure mode” label blind spots, we configured an app that helped institutionalize this process. Every time a maintenance ticket was generated for unplanned equipment downtime, the app would step through a workflow at the end of which the failure mode for the asset was tagged onto the timeline.
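The tagging workflow described above can be sketched in a few lines: each unplanned-downtime ticket gets a human-supplied failure-mode label attached to the asset’s timeline, so the sensor anomalies preceding each labeled failure can later be extracted for modeling. The function names, label values and lookback window below are hypothetical illustrations, not the actual app described in the text.

```python
# Sketch of a failure-mode labeling workflow: validate the label,
# attach it to the asset's timeline, then derive lookback windows of
# sensor data to inspect before each labeled failure.

from collections import defaultdict

FAILURE_MODES = {"motor_overheating", "bearing_failure", "low_lube_oil_pressure"}

# asset_id -> list of (unix_timestamp, failure_mode) labels
asset_timelines = defaultdict(list)

def tag_failure(asset_id, timestamp, failure_mode):
    """Attach a validated failure-mode label to the asset's timeline."""
    if failure_mode not in FAILURE_MODES:
        raise ValueError(f"unknown failure mode: {failure_mode}")
    asset_timelines[asset_id].append((timestamp, failure_mode))

def labeled_windows(asset_id, lookback_hours=24):
    """Yield (start, end, label) sensor-data windows preceding each failure."""
    for ts, label in asset_timelines[asset_id]:
        yield (ts - lookback_hours * 3600, ts, label)

tag_failure("pump-007", 1_700_000_000, "bearing_failure")
for start, end, label in labeled_windows("pump-007"):
    print(label, start, end)
```

The point of the validation step is exactly the one made above: a terabyte-scale sensor lake is only useful for training once every downtime event carries a consistent, machine-readable failure-mode label.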

So, here are three questions to ask your team before you embark on an AI project:

  1. Top three failures: Which are the top three high-value failure modes that are most economically significant?
    Rationale: All failure modes are not the same. Isolating and prioritizing the vital few failure modes from the trivial many saves money.
  2. Tagging process: When equipment goes down, is the failure mode automatically generated by the asset or does it need a “human in the loop” to tag failures?
    Rationale: Some machines are programmed to record the failure mode event as a historian tag, others need an external process.
  3. Breadth and depth: What is the breadth and depth of equipment data available in the data lake?
    Rationale: In order to model the entire set of data, one needs to have maintenance tickets, sensor streams and ambient context. In order to “see” sufficient instances of a failure, the sensor data lake needs to have at least one to two years of operational data.

To conclude, it’s easy to get carried away by the hype surrounding AI and algorithms. But the key to winning the game is finding the answer to the above three data-tagging questions. Good luck as you introduce AI to unlock gold in your data.



November 14, 2017  2:26 PM

Do IoT awards help discover IoT innovators?

Francisco Maroto Profile: Francisco Maroto
award, awards, Internet of Things, iot, IoT devices, start-ups, Startups

Many IoT startups apply to win prizes that event organizers or IT industry giants have created to reward the best or most innovative IoT products of the year in different categories.

Let’s take a look backstage of the IoT awards:

The organizers and the nomination process
For most of us, the nomination process is not well-known, and reading all of the published information related to it is tedious work. Let’s be honest: when we are allowed to participate in nominations, we usually give our vote to our own company, a friend or colleague, or the most popular nominee.

A characteristic of some nomination processes is that they allow you to check how the voting is going. Is it good or bad to know in advance whether our vote will count for something? Do we lose interest if we were wrong? Do we try to influence others to follow us in our mistake? There is no such thing as a “perfect nomination process,” but those processes that increase transparency, fight voting manipulation, solve technical issues quickly and have measurable selection criteria get my vote.

The awards committee
The jury system originated in 11th-century England. The concept was that people were entitled to a jury of their peers. Somehow, over the centuries, we have turned that upside down.

Nominees expect their submitted work to be judged by an ethical committee. Choosing a committee is not as easy as movies or TV series make it look; award organizers have to convince national or international experts in the fields of M2M communications and IoT to serve, while avoiding employees of nominated companies and people the community knows for their dubious impartiality. Many times, organizers offer paid travel and expenses for the work. The most difficult task? The organizers must ensure that the members of the committee remain confidential and are not influenced by others.

We all expect an awards committee to engage in some pretty heated deliberation before announcing the winners that deserve the award. And yet, I do not remember a single award that has not ended in disappointment for someone.

The heaven of winners
If you are familiar with IoT awards, you will agree with me about the different degrees of happiness and behavior on display when we hear or read the names of the winners.

If the winner is a mega-vendor or works for a mega-vendor, we feel that the organizers and the award committee were taking the easy way out. These winners sometimes accept the award with indifference or little emotion.

When the winner is an entrepreneur or startup, we all turn our attention to them; we want to know more about them. These winners touch the heavens during their speech, and new opportunities and doors immediately open to them.

The hell of losers
Speaking of losers is always very subjective and controversial. For many new nominees, just being nominated is a win. But some companies or individuals expect to win by a landslide, and when that does not happen, the disappointment is all the greater. Such is life: even a fair and transparent process with an ethical jury can spring a surprise.

There is life after the awards
What happens after the awards? The world goes on, life goes on, and both winners and losers have big challenges ahead. Winners and losers alike should avoid complacency, keep their focus on the core business and prove they can scale.

Do awards help discover IoT innovators?
There are many reasons for startups to participate in IoT award programs. The jury and the organizers receive hundreds of applications, and their criteria filter the field until only the best remain. Being part of this select group is a great recognition. But we must not expect every startup to present an innovative product; sometimes it is enough that they solve a business need in a different way or discover a new untapped opportunity.

Maybe the most important outcome for startups is that they move from relatively unknown companies to a bigger audience, including investors and multinationals that will help grow their ideas. For multinationals, IoT awards are an excellent opportunity to invest in startups, discover new talents and capture new ideas that move dinosaur organizations in the right direction in the digital transformation of the company.




November 14, 2017  1:08 PM

‘I can’t sleep without IoT!’: IoT in healthcare

Medhat Mahmoud Profile: Medhat Mahmoud
Connected Health, Consumer IoT, Healthcare, Internet of Things, iot, IoT devices, medical IoT

I wasn’t really sure how fast and how deep IoT technology was penetrating our daily lives until my friend told me about his snoring and his sleepless nights.

My friend, like many other millions, has been suffering from sleep disorder problems for the last couple of years. But in addition, he has also been noticing low-energy levels during the day, causing him fatigue, excessive sleepiness and lack of focus. Just recently, he also realized he has been snoring; he surprisingly found out when his family dared to tell him that he loudly snores every night.

Troubled by the news, he immediately started to look for solutions to his snoring, sleep disorder and daily low energy levels. He didn’t know they were all related.

My friend decided to seek medical advice, where he was asked to undergo a sleep test. A few days later, he was diagnosed with sleep apnea, a common sleep disorder that occurs when breathing is briefly interrupted during sleep, in most cases due to blocked airways in the throat area. The person with sleep apnea is generally not aware of these short breathing pauses, which can occur hundreds of times a night. One consequence is that the brain and the rest of the body may not get enough oxygen during sleep. If left untreated, sleep apnea can result in a number of health issues, sometimes serious heart problems and a deterioration in the person’s quality of life, producing excessive daytime sleepiness and morning headaches, as well as a lack of energy and focus during the day. While loud snoring is often a symptom, not everyone who snores has sleep apnea, and not everyone who has sleep apnea snores.

My friend told me he was lucky to be diagnosed with a “moderate” level of sleep apnea, for which a few good remedies are available, one being a CPAP machine, a small, advanced apparatus that can be plugged in and hidden next to the bed. It has a very quiet air pump and a nose mask worn while sleeping to ensure the user’s airways do not become blocked. The machine reads and registers daily usage data, such as sleep interruptions, air blockage, leakage, the rate of short breathing pauses and other details, to measure the machine’s effectiveness.

Advanced machines also have a built-in wireless connection, so the data can be sent wirelessly in real time to a central healthcare system that monitors the progress of the treatment and reports any abnormalities. In general, these machines are covered by insurance, on the condition that the patient uses the machine for at least four hours per night on at least 70% of nights during the user’s normal sleeping hours. This condition ensures that the machine is used effectively for the person’s treatment and well-being. For the insurance company, it helps optimize growing machine costs and usage effectiveness among patients while reducing unnecessary administrative follow-up costs.
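The usage condition reported to the device is easy to check programmatically. The sketch below assumes one common reading of such a condition, at least four hours of use on at least 70% of the nights in the monitoring window; the thresholds and the sample data are illustrative assumptions, not any insurer’s actual rule.

```python
# Sketch of a CPAP usage-compliance check: did the patient use the
# machine >= min_hours on >= min_fraction of nights in the window?
# Thresholds and sample data are illustrative assumptions.

def is_compliant(nightly_hours, min_hours=4.0, min_fraction=0.70):
    """Return True if enough nights meet the minimum usage threshold."""
    if not nightly_hours:
        return False
    good_nights = sum(1 for h in nightly_hours if h >= min_hours)
    return good_nights / len(nightly_hours) >= min_fraction

week = [6.5, 0.0, 5.2, 4.1, 7.0, 4.5, 0.0]  # hours of use per night
print(is_compliant(week))  # 5 of 7 nights >= 4h (~71%) -> True
```

A monitoring center running a check like this over each rolling window is what triggers the automated reminder call described below in the story.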

After receiving the machine, my friend used it for a couple of days. He was then disappointed that it brought no noticeable improvement to his sleep disorder, nor did it boost his energy level or focus during the day as he had expected. He became even more frustrated because he couldn’t manage to sleep well with the mask attached and the pump next to him. The only good news was that his family didn’t hear him snoring anymore!

Although the usage condition of four hours per night on 70% of nights was stated in the machine’s instructions, my friend didn’t take it seriously. He also didn’t realize how reliably the usage data was transmitted automatically to the monitoring center. After the first couple of days of disappointing results, with no noticeable improvement to his sleepless nights, he turned the machine off and eventually put it away, fully disconnected.

The same week, he received an automated call warning that he wasn’t using the device properly! He was reminded to use it per the instructions to achieve the effectiveness of the remedy. He was also reminded that the machine wouldn’t be covered by insurance if he didn’t adhere to the usage condition.

This effectively pushed my friend to retry the machine, following the detailed instructions. After a week of trying, he quickly learned to modify his sleeping behavior and adapted to using the machine every single night.

A couple of weeks later, he started to feel a positive difference, experiencing much deeper sleep at night followed by days full of energy and focus, as well as a fresh feeling every morning that lasted throughout the work day. Since then, his sleeping behavior has changed for the better. The machine also reported improvements in the rate of short breathing pauses.

“Now, with these amazing results, I can’t sleep without the machine anymore,” my friend said with a big smile on his face.

Happy for him, I started to explain to my friend that this automated process is the basis of IoT technology. It helped him accurately track his daily usage, guiding him to modify his treatment behavior while satisfying the insurance requirements.

“Without this wireless automatic reporting technology, the machine could have stayed on the shelf and you would have lost interest in getting the right treatment and continued your suffering,” I told him.

My friend didn’t really seem to care about understanding what IoT is or how it works, but he smiled and said, “OK, anyway. I can’t sleep without this IoT anymore!”

My friend’s story reminded me that we, as individuals and as a society, must start harvesting the benefits of IoT technology, letting it have a positive impact on our daily lives in one way or another. Behavior modification is just one outcome of many.

In recent years, the value of IoT in the health sector has been successfully validated in major business areas such as remote patient monitoring, wellness and healthcare operations. In the very near future, when devices are all connected and generating explosive amounts of data, combined with more advanced technologies such as artificial intelligence, individuals and societies will have better visibility into and control of their health and well-being.

Early adopters from healthcare providers and industry players are investing in IoT initiatives and seeing real value emerging, including improvements in consumer experience and operational cost-savings.

The healthcare IoT industry is projected to be the third most advanced in IoT implementations, with more than 75% of healthcare organizations worldwide introducing IoT into their operating models by 2019. Major growth of the IoT healthcare market will be driven by the evolution of artificial intelligence technology, the rise in investments and the increasing penetration of connected devices in healthcare settings.

Accelerating innovation in this sector will definitely create step-change improvements in patients’ lives while introducing new business models for the entire healthcare industry.

The important lesson I got from my friend’s story is that most people don’t really care what IoT is. What really matters is how it improves a person’s quality of life with a positive outcome being noticeably achieved.

My friend is happy; I’m glad to hear him say, “Yes, I can’t sleep without IoT anymore!”


