IoT Agenda


April 24, 2017  3:54 PM

NFC-enabled technology paves the way for award-winning IIoT solutions

Paula Hunter Profile: Paula Hunter
Consumer IoT, Enterprise IoT, IIoT, Industrial IoT, Internet of Things, iot, Manufacturing, NFC, smartphone, Wireless

If you are reading these words on IoT Agenda, then I’m almost 100% certain you are familiar with the internet of things. What you might not be as familiar with, however, is IIoT — or the industrial internet of things.

The IIoT opportunity is enormous and many companies are using NFC-enabled devices to create solutions currently being deployed in IIoT environments. Some of the numbers and predictions recently published by Juniper Research and Accenture show the IIoT opportunity:

  • IIoT will add a total of $14.2 trillion to the global economy by 2030, because of its potential to drive growth and productivity;
  • An estimated 38.5 billion connected devices by 2020 (a 285% increase from 2015);
  • Between 50% and 75% of the world’s legacy industrial systems are not yet network-attached or IIoT capable.

Connecting the unconnected

Still, despite the fact that billions of machines are now network-connected, there remains a sizable IIoT gap. As stated above, most of the world’s legacy industrial systems — 50% to 75% — are not yet network-attached, meaning that they are unable to reap any benefit from IIoT. This problem is compounded by the longevity — usually measured in decades — of the typical industrial system. Adding connectivity to a machine may require a complete redesign of the industrial system, which can be costly, complex and inconvenient.

These challenges presented by legacy industrial systems compelled the engineering team at KEOLABS to search for a better way of providing network connectivity with a customizable user interface — while ensuring data security. Based in France, KEOLABS is a member of the NFC Forum and is a manufacturer and supplier of application development and testing tools.

The KEOLABS team solved many of these challenges by developing and patenting IoTize™, a solution that enables IIoT by adding NFC connectivity to any existing electronic system — without modifying its initial design — and can be used with NFC-enabled smartphones. IoTize has been commercially available since early 2016 and has proven so successful that KEOLABS recently spun IoTize off into its own company.

IoT solution: An NFC-enabled plug-in module

The main component of IoTize is the connectivity module, which is about the size of an American quarter. Using NFC, it provides one-tap connectivity between the IoTize module and the user’s smartphone, collecting machine data and uploading it to the cloud. Because the connection functions only when the user actively initiates it, the solution also offers good security for the user.

IoT award-winning innovation

In February 2016, the solution won an award for innovation in hardware at Embedded World. KEOLABS continues to ramp up mass production of the module and is developing models that highlight the other capabilities of IoTize.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

April 21, 2017  1:45 PM

Best practices in software development for IoT solutions

Danny Aponte Profile: Danny Aponte
Enterprise IoT, Internet of Things, iot, Software design, Software developer, Software development

IoT solutions present many challenges for software development, with connectivity impacting the many layers of the software stack. Ideally, the best approach results in software that drives IoT solutions for the future, with the users’ evolving needs in mind, while delivering on the promise and benefits of connectivity. From our many years of designing software that drives IoT solutions, we wanted to share some of our best practices in software development for IoT.

Team of software collaborators

A process that leverages a team of software collaborators puts speed and structure around software development for IoT solutions.

Testing and gathering critical feedback is what makes product development a success. Ideally, you want to get a version of the software into as many users’ hands as soon as possible and be prepared to address their feedback. Traditional linear development methodology, known as waterfall, often falls short in this area. The formal process and change controls used with waterfall fail to meet the reality of rapidly changing user requirements. The drawback of waterfall projects lies in the lack of ongoing customer feedback. If your requirements aren’t on point, expectations will not be met.

Iterative, cross-platform software collaboration is more advantageous due to its flexibility and direct connection to the end user. An iterative approach is micro-goal driven, with an easy process in place to adjust to changing requirements. This improves your team’s ability to accurately and efficiently manage costs, and it provides the quickest route to market.

How is this accomplished?

Born out of the agile methodology, iterating through vertical slices in the software stack is not a sequential, straight-line process. Every component in the stack is addressed in a series of continuous, quick-moving sprints, each of which delivers a market-ready product. The project moves in manageable pieces. There are no surprises.

Communication is vital to collaboration. Developers are in ongoing contact with the customer through daily scrums, biweekly planning meetings and biweekly reviews. This accelerates decisions, including changes, throughout the process. There is no opportunity to go down deep rabbit holes or misstep on the scope implementation. This level of governance keeps the variance narrow.

The stack revisited: Vertical-slice software collaboration

Take another look at the software development stack for IoT solutions. The ideas hold true, but by slicing vertically you will enrich the software creation process. For example, we’ve already discussed the importance of scalability. What happens if your database needs to be changed during deployment? When you develop in slices, variances are negligible and the client gets exactly what they need.

Here’s how slicing works with the software stack you’re already familiar with (a minimal sketch of one slice follows the list):

  • User interface: Wire framing should be used to model the basic structures of the program. With each sprint, usability requirements can be fleshed out for the specific functionality assigned to the slice.
  • Client platform: A slice calls for a specific feature on a particular device, an incremental addition to the existing capabilities of the application.
  • Communication: Connectivity to the back end is an essential part of the slice, enabling a full round trip to the server and data storage to empower the client front end. New IoT product complexities further intensify the importance of security testing.
  • Servers: Server interfaces are built according to the requirements of the client, using scalable and flexible frameworks, conserving costs and avoiding missteps in scope. This is achieved with the user always at the center of changes.
  • Databases: Data models required by the front end are developed as needed, but require an architectural vision to prevent the buildup of technical debt.
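
To make the vertical-slice idea concrete, here is a minimal sketch in Python of a single slice: one feature that runs from a client call through a server-side handler down to storage. The feature, table and function names are hypothetical, chosen only for illustration; a real slice would plug into your own client framework, transport and database.

```python
import sqlite3

# One vertical slice: "record a device reading and fetch the latest value."
# The slice touches every layer -- a client-facing call, a server-side handler
# and the data model it needs -- and nothing more than this feature requires.

db = sqlite3.connect(":memory:")  # database layer: only the table this slice needs
db.execute("CREATE TABLE readings (device_id TEXT, value REAL, ts TEXT)")

def post_reading(device_id: str, value: float, ts: str) -> None:
    """Server layer: handle the round trip for this feature and persist the data."""
    db.execute("INSERT INTO readings VALUES (?, ?, ?)", (device_id, value, ts))
    db.commit()

def latest_reading(device_id: str):
    """Server layer: return the data the client front end will render."""
    row = db.execute(
        "SELECT value, ts FROM readings WHERE device_id = ? ORDER BY ts DESC LIMIT 1",
        (device_id,),
    ).fetchone()
    return {"device_id": device_id, "value": row[0], "ts": row[1]} if row else None

# Client layer: the click-through prototype for this sprint only needs these two calls.
post_reading("sensor-42", 21.5, "2017-04-21T13:45:00Z")
print(latest_reading("sensor-42"))
```

Each subsequent sprint would add another slice of the same shape, so the application stays shippable after every increment.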

For each functionality set, the user interface team will build a click-through prototype that you can touch, feel and test early in the process. Then, as the two-week milestones are delivered and approved, the application becomes shippable as a revenue-generating product.

With a process focused on a team of software collaborators combined with a vertical slice approach, the speed and structure around software development for IoT solutions is greatly improved, resulting in successful final products.



April 21, 2017  11:33 AM

The internet of trustworthy things: Can we trust emerging technologies?

Lucus Darnell Profile: Lucus Darnell
Consumer IoT, Enterprise IoT, Internet of Things, iot, iot security, trust, Trust relationship

With the expansion of the internet of things, objects that were once “dumb” now have the capacity for “intelligence” and their capabilities are becoming integral in our everyday lives whether we like it or not. But what happens if those things can’t be trusted?

In January 2016, Google-owned Nest Labs fell under scrutiny when its self-learning thermostats stopped working during one of the coldest times of the year, leaving thousands of users without heat in their own homes. Then, in July of the same year, Nest thermostats stopped working again, this time during a widespread heatwave across the United States.

In December of 2016, police in Bentonville, Ark., issued a warrant demanding that Amazon hand over any audio recordings from an Echo customer when they believed it was possible that the Echo might have overheard and recorded evidence of a homicide (or at least foul play leading up to the death of the homeowner’s friend).

Despite the growing number of devices now available for in-home use, the internet of things doesn’t end when we leave our homes. In fact, more and more objects around us will continue to be embedded with tiny sensors and radios that allow them to connect to the internet and to one another. For example, the cars we drive today are joining the IoT movement and are beginning to communicate with the cloud, other cars and even objects in the environment such as street signs, traffic lights and the roads themselves.

Most of us are likely familiar with the smart car company Tesla Motors and its fleet of all-electric vehicles. What many people aren’t familiar with is the autopilot feature now available in many of its models. This feature became available when Tesla remotely issued an over-the-air update that was downloaded and installed by the cars directly. Autopilot allows drivers to basically sit back and relax while their cars drive themselves; similar technology is also being developed by BMW, Audi, Volvo and Google, among others.

Traditionally, computers and other electronics must follow a predefined set of instructions provided by their creators. If a certain function or task isn’t provided by programmers in a predefined list of instructions, then the “thing” isn’t aware of that function and therefore can’t perform that task. This makes it possible for us to trust that our electronics will do only what they have been programmed to do and nothing more. After all, computers can’t lie — or can they?

For things to truly become “smart,” technologies (including those mentioned above) are now utilizing a type of artificial intelligence called machine learning. This allows them to adapt to unforeseen circumstances and to more or less evolve beyond their initially programmed capabilities. For example, there is no way for auto manufacturers to identify and program for every possible scenario a self-driving car might experience. Instead, auto manufacturers utilize machine learning by training algorithmic models using data they do have (such as what a street sign looks like compared to what a person on a bicycle might look like). They then feed these algorithms as much known data as possible and test their models using more data (which the algorithms were not trained with) to determine how well they can perform with unfamiliar input, such as that found in the real world.
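
As a rough illustration of that train-and-test workflow, the sketch below fits a classifier on labeled synthetic data and then measures how well it handles examples held out of training. The two-feature “street sign” and “cyclist” clusters and the choice of scikit-learn are mine, used purely for illustration; real perception systems train on far richer sensor data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)

# Made-up training data: two clusters standing in for "street sign" vs. "cyclist".
signs = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))     # label 0
cyclists = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(200, 2))  # label 1
X = np.vstack([signs, cyclists])
y = np.array([0] * 200 + [1] * 200)

# Hold back a quarter of the data that the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Evaluate on the held-out examples -- a stand-in for "unfamiliar input."
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```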

This doesn’t mean that computers and other electronics now have the ability to rise up and take over the world like pop culture wants us to believe. But it does mean that those things are now capable of doing more than they can under traditional programming paradigms. It also means that those things are now capable of doing something completely unexpected and unintended — something programmers haven’t planned for.

As an example, what happens if a self-driving car learns to be aggressive and to take necessary actions to avoid damage to itself? Since there are no hardcoded rules in self-driving cars that specifically define what a street sign looks like versus a person on a bicycle, it is up to the car itself to learn the difference and to treat them accordingly (i.e., the street sign is not expected to move but the person on the bicycle is expected to move). What happens if the car learns that damage to itself is minimized by hitting the person on the bicycle in place of the sign if there are no other possible actions to take? Or what happens if the car cannot distinguish between the two at all and doesn’t stop or swerve to avoid hitting a moving bicyclist or the sign? Who would be held accountable in this situation? The bicyclist? The car manufacturer? The cloud service provider who developed and trained the algorithmic models? All of the above? Typically, the driver of the car could be held accountable, but there is no human driver in this picture. And what if the human passenger doesn’t own the vehicle (such as with a taxi or Uber)? What if there are no passengers in the car at all?

Going back to the cases where IoT thermostats failed during harsh weather, who should be held accountable if such failures lead to injury or even death? And since many manufacturers now build devices with required internet-connection dependencies, who bears responsibility for malfunction? If the internet connection fails, rendering the device incapable of performing, should the internet service provider be held accountable?

As exciting as emerging technologies appear on paper, there is still an underlying concern about whether we can trust them. For example, during commercial flights, the pilots will sometimes engage the autopilot system and it never concerns us for two reasons: first, we don’t know when the autopilot has been engaged since we can no longer see into the flight deck, and second, we don’t concern ourselves because we feel comfortable knowing and trusting the fact that there is a (hopefully) well-trained human pilot there to take over in the event something goes awry. In the Tesla autopilot example, the driver can also take over control of the car at any time if he feels like the car is performing erroneously. However, not all scenarios will include human failsafe measures.

Both the autopilot-enabled aircraft and autonomous cars are required to go through rigorous development and testing procedures to meet certain requirements before they can be trusted for safe and guaranteed operation by the public. Nevertheless, systems can and do fail. As more things become dependent on the cloud and are equipped with the ability to think and plan for themselves, we must continue to question whether they can be trusted. Therefore, it is up to us to put pressure on product manufacturers and service providers to make sure the “things” we are bringing into our homes and trusting with our lives will do the right thing, or at least continue to remember what the right thing is. Questions such as who should be held responsible when something goes wrong are continuously up for debate. We all need to voice our concerns, demand transparency and require accountability so that every iteration is better than the last.

Whether it is thermostats that learn the patterns of their owners and know when to turn on/off and at what temperatures, or home assistants such as the Amazon Echo or Google Assistant that literally listen to every word we say, the things that we assume will simplify our lives are becoming smart and therefore a bit creepy. With respect to privacy, security and dependability, manufacturers of these devices tell us there is nothing to worry about and that we should “just trust them.” But does it really have to be that way? Should we just take the word of the device manufacturers and service providers at face value? Should we “just trust” that the things are actually only doing what the manufacturers are telling us? Should we “just trust” that our cars won’t learn that it is better to hit a pedestrian than to cause self-harm to the vehicle instead? Should we “just trust” that our private data won’t be somehow used against us? Food for thought.

Even though technologies will continue to improve as time progresses, it is still up to us to provide feedback, hold people accountable and contribute to the safe operation of those technologies. It is also up to us to help answer the ethical and legal questions with respect to their use, misuse and abuse. It is up to us to make sure the “machine uprising” doesn’t happen and that our things continue to enrich our lives — and not destroy them.



April 20, 2017  5:09 PM

Bridging the gap between innovation and regulation in IoT

Art Swift Profile: Art Swift
Embedded Systems, Internet of Things, iot, IoT hardware, iot security, Open source, Regulations, regulatory compliance, Virtualization

The internet of things is rapidly turning a new generation of products “smart” by adding computing power, network connectivity and sophisticated software. From cars to routers and drug infusion pumps to drones, they now offer a wealth of possibilities for tech-savvy owners keen to push their device capabilities to the limits. But at the same time there are logical reasons why lawmakers and regulators need to lock down certain functionality — for the safety and well-being of their citizens.

Joseph Steinberg’s recent assessment that IoT security will be one of the biggest tech battles fought in the year ahead is very astute, and it is an issue the prpl Foundation has been helping to settle by working with manufacturers, developers and regulators, and by educating the public. The rules laid out by regulators effectively work to lock down the firmware on consumer devices so it can’t be altered, putting regulators on a collision course with consumers, and yet there has been little in the way of technology innovation to address this conundrum.

But there doesn’t have to be this divide. Regulators can get what they want to be able to control safety aspects and, equally, consumers should be able to tweak and customize technology that they buy to get what they want. And it can be done securely. The problem at the moment is that current IoT systems simply aren’t architected in a way that allows for this kind of granularity. Open source development, secure boot based on a root of trust anchored in the silicon, and hardware virtualization — all laid out in the prpl Security framework — can keep both regulators and consumers happy.

The framework covers three major areas:

Open source: Too many proprietary systems rely on “security by obscurity.” But this concept simply doesn’t work any longer. Firmware binary code can often be found online, or reverse engineered with debugging tools like JTAG and interactive disassemblers like IDA. Given the increasing complexity of code, we need to get as many eyeballs on it as possible. The focus should be on creating a top quality, highly usable, secure and robust end product.

Secure boot: The method of updating firmware in embedded systems is fundamentally flawed because this software is typically not cryptographically signed. This means an attacker could reverse engineer the code, modify it, reflash the firmware and reboot to execute arbitrary code. We must ensure IoT systems only boot up if the first piece of software to execute is cryptographically signed by a trusted entity. It needs to match on the other side with a public key or certificate which is hard-coded into the device. Anchoring the “root of trust” into the silicon in this way will make it tamper-proof.
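
One way to picture that signing-and-verification step is the sketch below, which uses Ed25519 via the Python cryptography package (my choice for illustration; the article does not specify an algorithm). It simulates both sides: the build system signing a firmware image, and the boot code refusing to run an image whose signature does not verify against the trusted public key. Real secure boot runs in ROM or a hardware root of trust, not in Python.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Build side (vendor): sign the firmware image with the private key ---
signing_key = Ed25519PrivateKey.generate()
firmware = b"\x7fELF...hypothetical firmware image bytes..."
signature = signing_key.sign(firmware)

# --- Device side: the matching public key is baked into silicon or boot ROM ---
trusted_public_key = signing_key.public_key()

def boot(image: bytes, sig: bytes) -> None:
    """Refuse to execute any image that is not signed by the trusted key."""
    try:
        trusted_public_key.verify(sig, image)
    except InvalidSignature:
        print("signature check failed -- halting boot")
        return
    print("signature verified -- handing control to firmware")

boot(firmware, signature)                # boots
boot(firmware + b"tampered", signature)  # halted: the image was modified
```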

Hardware-assisted virtualization: Security by separation is one of the fundamental rules of IT security. Yet lateral movement within the hardware is possible on most IoT systems, opening up yet more vulnerabilities to exploit. Hardware-level virtualization will prevent this lateral movement and preserve security by separation. With the help of a secure hypervisor it can provide a foundation to containerize each software element, keeping critical components secure and isolated from the rest. Secure inter-process communication allows instructions to travel across this secure separation in a strictly controlled mode.

Building security into the hardware of embedded systems in this way will help regulators lock down specific harmful functions whilst allowing consumers free rein to tweak other parts of their product. Technology advances only if innovation is allowed to thrive. And with a blueprint for an open, hardware-led approach to securing embedded computing, we can finally achieve it.

It’s a win-win for innovation and regulation.



April 20, 2017  3:27 PM

Five free marketing tips for emerging IoT companies

Kevin Kostiner Profile: Kevin Kostiner
advertising, Internet of Things, iot, Marketing, promotion

IoT-dedicated events are the hottest trend in the trade show world. If you are reading this, it’s likely you have exhibited at one of these many events or are budgeting to do so in the very near future. Being a veteran of these events, I can tell you one obvious fact about these shows: they are populated by a lot of what I call “10 by 10s.” This is my designation for the standard 10′ x 10′ booth that is the entry point for exhibiting at any trade show. Probably no other industry has a higher concentration of these “beginner” booths than the internet of things.

Most people would shrug this off as a statistic only important to those in the trade show business. But it’s actually a clear indication of the state of the internet of things: it’s made up of lots and lots of very small companies trying to gain attention for their “thing” in the vast ecosystem of innovation that is IoT. A fair number of these 10′ x 10′ exhibitor companies are made up of no more than two or three people with very limited budgets. Many will exhibit at one show and fade away before the next. That is the IoT world today, an innovation freight train that’s hard to stay on and easy to fall off!

IoT marketing tips

So you’re a 10 by 10 and know you have something great to share with the industry and world. But, like most 10 x 10s, your talents are technical — engineering, software, cloud, machine learning, etc. You have no solid idea of, or experience in, how to market and sell yourself correctly outside of a website and basic social media efforts. And of course your finances, whether funded or not, are limited.

So when it comes to marketing and promoting your “thing” efficiently, effectively, clearly and with limited to no budget, what do you do? How do you do it? When you do try something, do you end up spending money only to watch as your budget evaporates away with no tangible results? Do you wonder how your competitors break into new markets and build traction?

Here are five tips to help your 10 x 10 company promote, market and drive awareness of your “thing” without breaking the bank and help you grow out of that beginner booth.

1. Find a role model

If you want to achieve something in life, you could blindly go into it without taking any form of advice. We all want our ideas to evolve into successful products and companies. One of the greatest challenges of IoT is overcoming the embedded complexity of this ecosystem to convey your message in a fashion that anyone, especially the non-technical, will understand.

The likelihood is that you will make several mistakes unless you find someone who has done it before successfully and implement some of their tools and techniques. This does not mean copying, but analyzing what made something work and seeing if it can be utilized in your business sector. It matters not if your role model is Cisco, IBM, Verizon or other 10 x 10-ers that have moved up to 10 x 20s.

For example, take a look at the campaign that Bosch ran that exhibited many benefits of IoT in an entertaining and easy-to-understand way. This ad represents why Bosch has become one of the leading companies in IoT. You could learn from Bosch and make a fun, creative video that engages your audience and leaves them wanting more.

Not sure how to find a role model? Do a search on YouTube as to how other companies have marketed their products. Why not start with Bosch to really get your “outside of the box” juices flowing?

2. Networking

Through successful networking, companies can rapidly make contact with other organizations, groups and individuals who share similar interests to themselves. I’m sure you already use sites such as Facebook and LinkedIn, which offer great opportunities to get you started at no cost whatsoever.

But do you know how to really use these and other tools in the most effective way?

Why not start your own group on LinkedIn to attract other members who can offer solid advice and who share similar interests? Or join some of the IoT-focused groups that already exist and become an active member to meet others and share a steady stream of content while seeking guidance from the group. However, if you are new to all of this, as some entrepreneurs still are, why not follow tip number one and find out how other people network and use these sites effectively?

3. Use Copybook

A little-known platform, but one that I have used with great results, is Copybook. This is a global business network that allows any company to add unlimited information, pictures, videos and links to its social media streams. You can even add yourself to real trade shows and events worldwide — and it is all for FREE. The moment we added our company, our business profile was shared across Copybook’s network and we instantly had over 500 connections, some of which led to inquiries and new business.

This has been the most exciting discovery (which came to me through practicing #1, finding a role model). Where LinkedIn connects business people globally, Copybook is about connecting companies globally. Check it out. Signing up and creating your company profile is easy and quick. Then learn how to get the most out of this valuable tool to drive global awareness for your company.

4. Ask your prospects and customers

All too often, people are reluctant to ask their prospects and customers for input on their offerings and what else they would like to see provided. This is crazy, as it is precisely these valued clients whom you are trying to satisfy.

Learn about their business and level of understanding about IoT. Remember, IoT is not a place where you just build a “thing” and expect a ready market to say thank you by buying everything you produce. You must first understand your target market, their needs and how you can best fill them.

Ask them how you are doing and what else your company could offer that would meet additional needs they may have. Be a consultant first and you will be amazed at the guidance your prospects and customers will give you!

5. Learn from what happens

Frequently, companies try different methods but do not keep an accurate record of what works. Then, in a few months’ time, they find themselves in a similar situation and often make the same mistakes all over again.

Each time you try any form of marketing or advertising strategy, keep an accurate record of what you did and what the results were. Note in detail everything about your efforts, including timing (day, week, time of day, etc.), audience (who was your campaign aimed at), size (how large an audience) and an exact description of what you did. Remember, “the devil is in the details” — and that is so true of all marketing campaigns.

Even those 10′ x 10′ trade show booths are expensive. Would you exhibit at an IoT trade show and NOT track every possible metric from the show? If you answered yes to this, you need to hire me right now as your IoT consultant so I can get you on track and keep you from wasting money. The metrics you must generate from every trade show represent the same type of effort you should put forth with every marketing, sales and promotional program used by your company.

Your time is valuable. Your money is finite. Use both wisely as the success, or failure, of your business depends on you doing so. Remember, being a 10 by 10 is just the starting point. Good luck, here’s to your success!



April 20, 2017  11:53 AM

The three tiers of a successful enterprise IoT system

James Kirkland Profile: James Kirkland
architecture, cloud, Data Center, Enterprise IoT, GATEWAY, Internet of Things, iot, IoT devices, Open source

I once heard the internet of things referred to as a cyber-physical system. This really hit home for me, as it encapsulated everything I understood it to be but had not yet found a way to articulate. It described the system as one which relies on actuators and sensors to automatically collect data from the physical world and synthesize it digitally — faster and more accurately than humanly possible. Describing it this way emphasizes the fact that an IoT solution is indeed a system — a set of components that all need to work together to provide any value. It captures how everything is physically connected and needs to be secured, monitored and controlled as data is sensed and processed 24 hours a day, 365 days a year.

One could compare IoT to the human body’s nervous system which collects information from the senses via nerves, processes the information in the brain and then tells the muscles what to do. Different tiers in the IoT system similarly have specific functions, such as data collection, information processing and action management. Yet each layer or tier of the system’s architecture needs to integrate and communicate with the others, something that can be facilitated through the use of open source software.

What are the layers of an IoT architecture?

The Eclipse Foundation’s IoT Working Group recently discussed this at length in the white paper, “The Three Software Stacks Required for IoT Architectures.” The IoT technology stack consists of three tiers: sensor devices, gateways, and the data center or cloud IoT platform. As explained in the paper, “a typical IoT solution is characterized by many devices (i.e., things) that may use some form of gateway to communicate through a network to an enterprise back-end server that is running an IoT platform that helps integrate the IoT information into the existing enterprise.”

The device tier focuses on information gathering via sensors. Because sensors are so tiny and inexpensive, they can be embedded in many different types of devices, including mobile computing devices, wearable technology, and autonomous machines and appliances. They capture information about the physical environment, such as humidity, light, pressure, vibration and chemistry. Standards-based wired and wireless networking protocols are used to transmit the telemetry data northbound from the device to the gateway. Northbound data, if you remember from my previous post, is data going from the device through the gateway up to the cloud. It is typically telemetry data, but can be command and control requests. Southbound data, on the other hand, is generally command-and-control data that goes from the cloud to the gateway or from the cloud, through the gateway, to the device.

IoT system tiers

The gateway, sometimes referred to as the control tier, acts as an intermediary that facilitates communications, offloads processing functions and drives action. Because some sensors generate tens of thousands of data points per second, the gateway provides a place to preprocess the data locally before sending on to the data center/cloud tier. When data is aggregated at the gateway, summarized and tactically analyzed, it can minimize the volume of unnecessary data forwarded on. Minimizing the amount of data can have a big impact on network transmission costs, especially over cellular networks. It also allows for critical business rules to be applied based on data coming in. The control tier is bidirectional. It can issue control information southbound, such as configuration changes. At the same time, it can respond to northbound device command-and-control requests, such as a security request for authentication.
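
As a rough sketch of that gateway-side preprocessing, the Python snippet below buffers high-frequency sensor samples locally and forwards only a compact summary northbound, plus an immediate event when a business rule is violated. The reporting interval, temperature threshold and the send_northbound stand-in are hypothetical illustrations, not part of any particular product.

```python
import statistics
import time

SUMMARY_INTERVAL_S = 60   # forward one summary per minute instead of raw samples
TEMP_ALERT_C = 85.0       # example business rule applied at the edge

def send_northbound(payload: dict) -> None:
    """Stand-in for the real uplink (MQTT, HTTPS, cellular and so on)."""
    print("-> cloud:", payload)

def run_gateway(sample_stream):
    """Aggregate raw temperature samples locally; send summaries and alerts only."""
    buffer, window_start = [], time.time()
    for temperature in sample_stream:
        buffer.append(temperature)
        if temperature > TEMP_ALERT_C:            # tactical, local decision
            send_northbound({"event": "over-temp", "value": temperature})
        if time.time() - window_start >= SUMMARY_INTERVAL_S and buffer:
            send_northbound({
                "count": len(buffer),
                "min": min(buffer),
                "max": max(buffer),
                "mean": round(statistics.mean(buffer), 2),
            })
            buffer, window_start = [], time.time()
```

The point of the design is that only a few hundred bytes per minute cross the (possibly cellular) uplink, while the raw per-second data never leaves the edge.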

The data center/cloud tier performs large-scale data computation to produce insights that generate business value. It offers the back-end business analytics to execute complex event processing, such as analyzing the data to create and adapt business rules based on historical trends, and then disseminates the business rules downstream (southbound). It needs to scale both horizontally (to support an ever growing number of connected devices) as well as vertically (to address a variety of different IoT solutions). Core functions of an IoT data center/cloud platform include connectivity and message routing, device management, data storage, event processing and analysis, and application integration and enablement.
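
To show the southbound half of that loop, here is a minimal, hypothetical event-processing rule on the cloud side: it folds each gateway summary into a per-device history, derives an adapted threshold from the recent baseline and disseminates the new configuration back down. The rule, names and numbers are illustrative only.

```python
from statistics import mean

history = {}  # device_id -> list of mean temperatures reported by gateways

def send_southbound(device_id: str, config: dict) -> None:
    """Stand-in for pushing configuration back down through the gateway."""
    print(f"<- {device_id}:", config)

def on_summary(device_id: str, summary: dict) -> None:
    """Adapt a business rule from historical trends and push it southbound."""
    history.setdefault(device_id, []).append(summary["mean"])
    recent = history[device_id][-100:]
    new_threshold = round(mean(recent) + 10.0, 1)   # example adaptive rule
    send_southbound(device_id, {"temp_alert_c": new_threshold})

on_summary("gateway-7", {"count": 60, "min": 20.1, "max": 23.4, "mean": 21.7})
```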

Just like any system, it all needs to work together

Think of an IoT implementation as a living, breathing entity that uses its senses to see, hear and feel the environment. If information derived from a finger says something is too hot, it needs that information to be delivered quickly to the brain so the hand can be pulled back. In order for an IoT solution to work, all the software, hardware and networking components need to interoperate and communicate seamlessly. As pointed out in the IoT working group’s white paper, “communication between the stacks should be based on open standards to ensure interoperability.”

It can be very difficult to make an IoT solution work when dealing with independent, siloed applications designed by a single vendor. The vendor may not have all the components required, and its products may not interoperate with a required piece of the puzzle. The Eclipse IoT community offers open source solutions that provide the capabilities each tier in the architecture requires: a complete stack for constrained devices, gateways and IoT cloud platforms. They even offer cross-stack functionality. Red Hat is pleased to be working with this community to bring these open source IoT technologies to fruition.



April 19, 2017  3:08 PM

IoT and regulatory compliance: The value in a contextual perimeter

Chris Witeck Profile: Chris Witeck
Compliance, Data privacy, GDPR, Internet of Things, iot, iot security, privacy, Regulations, regulatory compliance

Next year, the European Union’s General Data Protection Regulation (GDPR) takes effect. While recent research from the Ponemon Institute indicates that 67% of organizations are aware of GDPR, there is a lot of worry among organizations that they are not prepared. In that same research, 74% of organizations indicate that GDPR will have a significant negative impact on them. And it is not just GDPR: 74% of organizations also indicate that any type of compliance mandate on critical infrastructure protection will have a significant negative impact.

Part of this worry is attributed to the changing ways in which users expect access. Fifty-five percent of organizations say that of all the age groups, Millennials represent the biggest risk to sensitive and confidential data in the workplace. When asked why, the most common answer was usage of unapproved apps and devices in the workplace. All of this leads to potential difficulties in ensuring not only compliance with GDPR, but really any type of regulatory compliance measure that focuses on data privacy and data security. Security and privacy requirements may differ depending on local regulations. This creates challenges for large global organizations in crafting security and access policies that span any region they operate in. But there are commonalities that can be leveraged in how sensitive data can be protected. Whether we are talking GDPR (which focuses on EU residents’ personal data), or HIPAA (which focuses on securing patient data in the United States), a theme common amongst the variety of regional compliance regulations is a focus on who can see data, where the data can be seen, and where the data can and should go.

Is this something that the internet of things can assist with? Often people frame the conversation of IoT and compliance more along the lines of the security risks of IoT, whether it is the risk of new device types accessing and sharing data in a non-compliant manner or the potential for IoT devices to introduce new backdoors to your network to allow theft of compliance-governed data. These are legitimate concerns, something I have talked about before in terms of the evolving options for securing IoT in the enterprise.

However, IoT can also serve as a strong component to your overall strategy for ensuring compliance with privacy regulations. It can do this by providing a mechanism for collecting more contextual information about users and what users are accessing. This is not a new requirement, but fits into the evolution of network security. For example, think back to the days of network firewalls and fixed network perimeters. Access then was mainly focused on the “who” (user identity) and that was about it.

With BYOD and with remote access needs to accommodate things like telecommuting, the fixed network perimeter needed to evolve to become more flexible. This led to the access model evolving to be a combination of the “who” and then adding in the “what” (what device are you using for access). From there, access models built with risk profiles were formed, which essentially asked “why” does the user need access based on the “who” and the “what.” See Google’s BeyondCorp for a good description of this approach. What this model does is start to insert context into the access equation by asking for more information than just who the user is, but adding in how much we trust the device they are using.

With IoT, the security perimeter is pushing much more closely to the applications themselves, as the rapid growth of things and devices accessing and exchanging information makes it even more challenging to defend a fixed network perimeter. But at the same time, these same things and devices can also provide useful mechanisms for capturing valuable contextual information about access. This information can be very useful in ensuring access happens only for authorized users based on who the user is, while expanding the concept of authorization to cover user location and user activities. This information can also be helpful in classifying data according to compliance requirements, better segmenting which data should be considered personal and protected data. What you start to see is the concept of a “contextual perimeter” protecting your apps and data, with access across that perimeter based on a trust model expanded to the five W’s: “who” needs access, “what” devices are they using and “what” applications and data are they accessing, “where” are they when they attempt access, “when” do they need access and then, based on their role, “why” do they need access.
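
One way to picture that five-W trust model is as a policy function that evaluates every contextual signal together rather than identity alone. The sketch below is purely illustrative: the attributes, roles, locations, hours and rules are hypothetical, and a real deployment would pull these signals from identity, device management and location systems.

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class AccessRequest:
    user_role: str        # who
    device_trusted: bool  # what device
    resource: str         # what data or application
    location: str         # where
    at: time              # when
    purpose: str          # why (derived from role and workflow)

def allow(req: AccessRequest) -> bool:
    """Grant access only when the combined context satisfies policy."""
    if not req.device_trusted:
        return False
    if req.resource == "patient_record":
        on_site = req.location == "ward-3"
        on_shift = time(7, 0) <= req.at <= time(19, 0)
        clinical = req.user_role in {"doctor", "nurse"} and req.purpose == "treatment"
        return on_site and on_shift and clinical
    return req.user_role == "employee"

print(allow(AccessRequest("doctor", True, "patient_record", "ward-3", time(9, 30), "treatment")))  # True
print(allow(AccessRequest("doctor", True, "patient_record", "cafe", time(9, 30), "treatment")))    # False
```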

Just think of the value of this contextual information with regulatory compliance. Imagine a medical doctor entering a patient space. You know they are entering the space because of a combination of user authentication along with the doctor’s phone interacting with smart devices. You know the patient is in the room because the phone queried the doctor’s schedule in order to preload the patient record to a secure terminal in the patient space, or the patient wristband contained information that identified the patient to the room. You know when the doctor leaves the room and when a nurse enters the room, automatically changing the view of the patient data to the intended audience based on workflows that integrate with the medical record and smart devices in the room.

By controlling access beyond just user identity, you can help keep users from inadvertently violating compliance rules and help ensure that data moving throughout the organization does so in a compliant and secure manner. In the example above, you can help ensure that the patient’s record, protected by a variety of different compliance regulations around the world, is controlled based on who can see it and when and where they need to see it.

Additionally, and possibly more importantly, there is an associated big data opportunity tied to this. Capturing user context can yield a wealth of information for organizations conducting compliance-based risk analysis. This information can be used to better evaluate how sensitive data flows throughout the organization and to design workflows with compliance and security in mind. Correlating access events across multiple applications and physical systems like badge readers, as well as knowing user location, may help identify unknown compliance or privacy issues that can be easily rectified, perhaps through automation or user education, or both.

Summary

GDPR has the potential to be very disruptive to organizations, and the data reinforces that. According to the Ponemon Institute study, 74% of respondents say GDPR will have a significant and negative impact on business operations, and 65% are worried about new penalties of up to 100 million euros or 2-4% of annual worldwide revenue. Yet in many ways we are better prepared than ever before to protect sensitive data because of the contextual information available about users, devices and the data itself. While BYOD and IoT have contributed to the overall negative hype around regulatory compliance — because in many ways they have made data available anywhere — they also provide the means to better protect sensitive data, because they can collect information about the who, what, where, when and why related to data access. That information can be instrumental in creating a contextual perimeter for your data, and for your organization.



April 19, 2017  12:39 PM

Embedded device analytics: Benefits and use cases

Alan Clark Profile: Alan Clark
Big Data analytics, Data Analytics, Data Management, Embedded devices, Internet of Things, iot, IoT analytics, IoT data

Many internet-connected devices send metrics to the cloud, where analytics software is used to extract meaningful data for trend analysis, display and alerting. If data is time-varying then it must be sampled and reported more frequently, which increases network traffic and cloud storage. As IoT deployments increase in scale, this becomes more costly and less practical. For example, an IoT deployment with 10 million devices each reporting 2 kilobytes of data every 30 seconds would generate 60 terabytes of data per day and a database load of over 600,000 IOPS.
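
The back-of-the-envelope arithmetic behind those figures is easy to reproduce. In the sketch below, the device count, report size and interval come straight from the example; the assumption that each report costs roughly two physical I/O operations (the data write plus an index update) is mine, added only to show how a figure in the 600,000 IOPS range arises.

```python
DEVICES = 10_000_000
REPORT_BYTES = 2_000       # 2 kilobytes per report
INTERVAL_S = 30            # one report every 30 seconds
IO_OPS_PER_REPORT = 2      # assumption: data write + index update

reports_per_day = DEVICES * (86_400 // INTERVAL_S)
bytes_per_day = reports_per_day * REPORT_BYTES
reports_per_second = DEVICES / INTERVAL_S

print(f"data per day : {bytes_per_day / 1e12:.1f} TB")                    # ~57.6 TB, i.e. ~60 TB
print(f"reports/sec  : {reports_per_second:,.0f}")                        # ~333,333
print(f"IOPS         : {reports_per_second * IO_OPS_PER_REPORT:,.0f}")    # ~666,667
```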

The concept behind embedded device analytics is quite simple: divide analytics functionality into that which analyzes the data provided only by the single device and that which takes a global view across devices, and embed the first of these directly into the device. This has numerous benefits:

  • Time-varying data can be sampled frequently without requiring a high volume of reports to be sent to the cloud, reducing network load and storage requirements
  • A set of related metrics can be analyzed together in real time, which requires very little computational load for a single device but enables more sophisticated metrics to be sent to the cloud
  • These more sophisticated analytics can be used within the device to provide meaningful local feedback, which makes devices more intelligent

There are a number of real world examples of embedded device analytics at work.

Smart AMI electricity meters sample usage data every five minutes but report every hour, and also support “last gasp” outage reporting. An electrical utility with 2.5 million customers collects about 4 terabytes of usage data per month; however, if we consider reported outages, only about 1 gigabyte of this data is actually used.

Voice over IP devices use embedded agents, such as VQmon, that use multistate Markov models to learn about the distribution of lost and discarded IP packets, along with sophisticated analytics to correlate this with models of the codec and playout buffer in order to report accurate quality of experience scores. A report sent at the end of each VoIP call distills the entire call into a set of metrics that reflect the user experience and everything affecting it, enabling large deployments of VoIP devices to be cost-effectively managed. This VoIP embedded device analytics model is widely deployed in over 500 million IP phones, residential gateways and other devices.

The trend is to increase both the scale of deployment and the frequency with which data is sent. For example, smart meters are already sampling usage every five minutes, however there are proposals to reduce this to six seconds, the rationale being that this would allow individual appliance-level usage to be tracked. Reducing usage sampling intervals from five minutes to six seconds will increase the amount of data stored by a factor of 50. If, however, a Markov model (as used in VoIP analytics) was used to track usage then usage could be sampled every second and the resulting metrics could be sent less frequently and would be smaller in size but would contain much more detail on usage over time.
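
A minimal sketch of that idea: sample usage every second, map each sample to a small set of discrete usage states and keep only the state-transition counts, then emit the compact transition summary on the normal reporting interval. The state boundaries and reporting period below are arbitrary illustrations, not the VQmon model or any real meter’s scheme.

```python
from collections import Counter

# Hypothetical usage states, keyed by an upper bound in kW.
STATES = [(0.2, "idle"), (1.0, "baseload"), (3.0, "appliance"), (float("inf"), "heavy")]

def to_state(kw: float) -> str:
    """Map an instantaneous usage sample (kW) to a discrete state."""
    return next(name for limit, name in STATES if kw < limit)

def summarize(samples_kw):
    """Per-second samples in, compact Markov-style transition counts out."""
    transitions = Counter()
    prev = None
    for kw in samples_kw:
        state = to_state(kw)
        if prev is not None:
            transitions[(prev, state)] += 1
        prev = state
    return dict(transitions)

# One hour of per-second samples (3,600 floats) collapses into a handful of counts.
hour = [0.1] * 1800 + [0.8] * 900 + [2.5] * 600 + [0.8] * 300
print(summarize(hour))
```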

Currently the planet Earth stores about 2,500,000 terabytes of data per day, which equates to 300 megabytes of data per person per day — we are storing much more data than we can possibly comprehend. While IoT represents a small proportion of this today, growth in IoT and the desire for more detailed metrics will soon make IoT a major contributor. The use of embedded device analytics can help to reduce data volume, improve the quality and resolution of reported data and economize on storage.



April 18, 2017  10:40 AM

The top five IIoT challenges facing industrial organizations

Jason Cline Profile: Jason Cline
cybersecurity, IIoT, Industrial IoT, Internet of Things, iot, IoT data, iot security, IT, Operational technology

The physical world is being digitized. There has been an explosion of smart devices that are in constant communication with one another and churning out large volumes of data. This data is changing the way businesses are run today. The real-time data analytics at the heart of the industrial internet of things is creating both new opportunities and challenges for business leaders. In one report, half of the executives surveyed across the industrial and healthcare sectors said they lack the talent required to consolidate and interpret the massive volume of disparate data that exists across their facilities. Yet within the next year, 72% of those companies fear they will lose market share if they are unable to implement their big data strategy. So what’s holding them back? Below are the top five challenges currently facing organizations in the age of IIoT.

1. Asset-level visibility

Improved capacity is one of the benefits of state-of-the-art information systems. To achieve production targets, operators need to be able to monitor assets in real time and ensure those assets are performing at an optimal level. Operators also need increased visibility and better insights into the health of the machine so they can detect anomalies and fix issues before failures occur. Asset performance management can provide operators with answers to critical questions, including how often equipment fails so it can be prioritized, how equipment should be maintained and how unexpected failures and downtime can be avoided.

2. Technology integration

Traditionally, management of industrial technology has been split between information technology (IT) and operational technology (OT). IT works from top down, deploying and maintaining data-driven infrastructure, whereas OT is built from ground up, starting with equipment and assets, and moving up to monitoring and industrial control systems. With smarter machines and the pervasiveness of IIoT, the worlds of IT and OT have converged. IT and OT, developed separately with independent systems architectures, need to securely integrate without data loss or the introduction of vulnerabilities.

3. Aging workforce

According to the Bureau of Labor Statistics, by 2024 the median age of U.S. workers is expected to be 42.4 years old, so it comes as no surprise that this aging workforce will impact a number of industries. Retirement of experienced workers is expected to create a skills gap, and while younger generations will bring new skills, it is crucial that the knowledge accumulated by more senior employees is captured and made accessible to the new workforce before retirement. Organizations must prepare for this impending change, and can do so by using digital technologies to help ease the transition.

Advanced cloud computing and software technology is transforming the data management process and adapting to a younger, more digital-savvy generation. Data management and analytics technology with a simple, mobile-enabled interface dramatically increases productivity across the organization and reduces the costs required for manual data organization and review. Further, the ability to better predict maintenance issues and eliminate unexpected equipment issues could save industries billions of dollars per year.

4. Data islands

Keeping up with a flood of information is difficult for any organization. Most companies struggle with data deluge driven by lower-cost storage, sensing and communications technologies, and few have figured out how to properly leverage data. Big data that is neither structured nor contextualized is difficult to store and analyze in its entirety through traditional computing approaches in a cost-effective way — and can lead to data islands.

Data islands are a byproduct either of operational decisions being made without the context of a larger data strategy, or of layering legacy systems with newer technologies without a data governance system in place. Data then gets siloed, and this fragmentation presents complex technical and organizational challenges. When the data is scattered throughout the plant and the enterprise, for example, integrating and analyzing it manually becomes resource-intensive and time-consuming. By the time the data is actually organized, its value may have already been lost.

5. Cybersecurity

As billions of assets get smarter and are networked to store information on the cloud, they become exposed to digital privacy risks. Cyberattacks pose a range of threats — from personal devices to corporate IT systems — making both individuals and institutions vulnerable to financial and operational damage. There is growing awareness among business leaders of the need to mitigate these risks. Vendors are now deploying solutions to prevent cyber events, but as industrial organizations continue to invest in digital technologies, security capability must be considered in the selection criteria.

IIoT challenges facing industrial companies today may seem overwhelming. These challenges, however, offer game-changing business opportunities to improve productivity and growth for those companies willing to embrace a systematic approach of applying contemporary intelligent data management and analytics systems.



April 17, 2017  2:10 PM

Catching the great wave: Business transformation and IoT

Tejas Vashi Profile: Tejas Vashi
Digital transformation, Internet of Things, iot, IT workforce, Reskilling

Digital transformation is moving fast and furious, and shows no signs of stopping. It’s a sink-or-swim situation for organizations and individuals in today’s economy. To succeed requires riding the digital transformation wave full on.

But how can this actually be accomplished? Well, it requires the successful balance of three key elements.

Customer experience is one of them. Customers have come to expect always-on, personalized service and will not tolerate slowness or indifference. An Accenture survey found that two-thirds of respondents switched companies due to poor customer service experiences.

The second element is innovation. Gartner’s 2016 CEO Survey found that half of 396 leaders in 30 countries expect digitization will soon make their industries fully or mostly unrecognizable. A survey from The Global Center for Digital Transformation found certain companies are at a higher risk of going out of business due to digital disruption. These companies are in industries like travel, media, manufacturing, technology and healthcare.

The third element is workforce experience, which has become every bit as important as the customer experience and keeping up with the rapid technology evolution. Businesses that get it wrong lose their productivity. A Gallup report found that 87% of employees in 142 countries are disengaged. And one disengaged employee costs an organization $3,400 for every $10,000 spent in salary. Yet the same study found that just a 10% hike in worker satisfaction boosts earnings per share by 50%.

To succeed in digital transformation, all three elements must line up. This can only be accomplished by recognizing the equal importance of talent, technology and teamwork.

IT’s roles are expanding

IT is the linchpin of digitization, but it cannot work by itself. As IoT evolves, IT will reach into all aspects of digital organizations to impact current and emerging business models, customer engagement and insight, products and services, end user processes, the supply chain and partners. IT must fit in everywhere.

The challenge for today’s IT professionals is to branch out beyond traditional roles to help drive better business outcomes. And non-technical business people will interact more with IoT-based IT. Network control engineers, for example, will be part of operations. Software programmers will collaborate with business development teams. Business analysts will drive software requirements.

IT professionals must look beyond merely making the technology work and broaden their base of skills to drive business outcomes. They must strive to become more articulate communicators, business consultants, cloud specialists, data scientists, design leads, enterprise architects, expert collaborators, program managers, software programmers, security practitioners, systems analysts, systems integrators and technology futurists.

New approaches to learning and development are vital to this transformation of skills, and organizations will need new approaches to hiring as they bring in new talent.

The critical talent factor

As user and connected device numbers are exploding — and as security threats are expanding — so are traffic and transaction volumes. Digital business applications are more demanding. Direct customer interactions are rising. Demand for data collection and distribution and networked resources is off the charts. Team collaboration is happening more and more. These rapid shifts in technology create the need for top talent, fed by continuous learning.

It’s not easy to find IT professionals with digital-ready skill sets. According to the 18th annual Global CEO Survey from PricewaterhouseCoopers, the lack of key digital skills is one of the biggest concerns of 73% of business leaders. McKinsey’s Cracking the Digital Code report found that a lack of talent was respondents’ top challenge in meeting priorities for digital projects. The same report concluded that managing talent precisely is one of the keys to digital success.

Employees understand that in order to flourish in today’s environment, they need to keep current on their skills. In its Being Digital report in 2015, Accenture found that 64% of employees surveyed are proactively learning new skills to prepare for digital changes. Eighty-one percent saw digitization transforming the way they work in three years. And 40% said that shift would be significant.

One of the avenues by which organizations will be able to take ownership of the talent pool they need is by providing the right training for employees to acquire the right digital skills quickly. The best learning experiences are current and relevant. They are convenient and practical; their focus is collaborative. They are also standardized and, most importantly, continuous.

Because skills will keep diverging across vertical industries, geographic locations and systems, organizations and IT professionals also need a credentialing system to validate new job-related skills and training that focuses on specific skills. And, with an increasingly diverse workforce, instruction formats are quickly evolving. They are moving toward video-based, gaming-like formats that offer flexible learning options. Instruction can be accessed as needed, any time, any place, using any smart device.

When it comes to digital transformation, the future is clear. Companies who want to compete will embrace the change, and individuals, particularly in IT, will need to adapt to new needs. As IT job roles change, the right skills training and credentials become more important than ever before. For successful transformation in the digital age, businesses must put equal emphasis on developing technology, talent and teamwork.



