IoT Agenda


May 4, 2017  11:28 AM

Why are IoT developers confused by MQTT and CoAP?

Jonathan Fries Profile: Jonathan Fries
CoAP, Enterprise IoT, Internet of Things, iot, IOT Network, MQTT, Protocols, wireless communication

Recently, at Exadel, we encountered an interesting challenge for IoT developers. Because IoT apps have gained so much momentum, there is more and more choice in how to develop them. For device communication, two specialized, competing protocols stand out: MQ Telemetry Transport (MQTT) and the Constrained Application Protocol (CoAP). Both are designed to be lightweight and to make careful use of scarce network resources, and each has its place in the right setting. The problem is that, because IoT development is still relatively young, many developers don't know exactly what these protocols are or when to use them.

These are not standard web protocols that everyone uses.

In light of our own internal conversations, I decided I’d like to help demystify these a bit. First, let’s look at what these protocols actually are.

What is MQTT?

To the layperson, MQTT is a lot like Twitter. It is a "publish and subscribe" protocol. You can subscribe to some topics and publish on others. You'll receive messages on the topics you subscribe to, and whoever subscribes to the topics you publish on will receive your messages. There are differences, of course. For instance, you can configure the protocol to be more reliable by guaranteeing delivery (MQTT's quality-of-service levels). The publish/subscribe system relies on a broker, which, to stretch the analogy, would be the Twitter platform itself, filtering messages based on your subscription preferences.
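The publish/subscribe flow the broker mediates can be sketched in a few lines of Python. This is an illustrative toy, not a real broker: the name `TinyBroker` is invented for this sketch, and real brokers such as Mosquitto also handle quality of service, retained messages, sessions and topic wildcards.

```python
# Minimal sketch of an MQTT-style broker's publish/subscribe core.
from collections import defaultdict

class TinyBroker:
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        # The broker, not the publisher, decides who receives the message.
        for callback in self.subscribers.get(topic, []):
            callback(topic, payload)

# Usage: a sensor publishes; only subscribers to that topic hear it.
broker = TinyBroker()
received = []
broker.subscribe("sensors/temperature", lambda t, p: received.append(p))
broker.publish("sensors/temperature", "21.5")
broker.publish("sensors/humidity", "40")   # no subscribers: dropped
```

The key design point is the decoupling: the publisher never learns who, if anyone, is listening.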

What is CoAP?

CoAP is more like visiting a traditional website-based business, such as Amazon. You request resources (pages and search results, in the Amazon example) and occasionally submit your own data (make a purchase). CoAP was designed to look like, and be compatible with, HTTP, which powers most of the internet as we know it. Depending on the environment's constraints, CoAP can either be translated into HTTP by a proxy server or communicate directly with a server designed to speak CoAP.
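Because CoAP was designed to mirror HTTP's methods and status codes, a proxy's translation job is largely a mapping exercise. The sketch below pairs a few CoAP response codes (written class.detail, per RFC 7252) with their nearest HTTP analogues; a real proxy handles many more codes, and the one-to-one table shown here is a simplification.

```python
# Sketch of how a CoAP-to-HTTP proxy might map response codes.
COAP_TO_HTTP = {
    "2.01": 201,  # Created
    "2.05": 200,  # Content -> OK
    "4.00": 400,  # Bad Request
    "4.04": 404,  # Not Found
    "5.00": 500,  # Internal Server Error
}

def to_http_status(coap_code: str) -> int:
    """Translate a CoAP response code to its closest HTTP status."""
    return COAP_TO_HTTP.get(coap_code, 502)  # unknown -> Bad Gateway

print(to_http_status("2.05"))  # 200
```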

When do you use them?

The question you're probably all asking is, "If they're so similar, when should I use one versus the other?"

MQTT is ideal for communication between devices on a wide area network (WAN, i.e., the internet) because of its publish/subscribe architecture with the broker in the middle. It is most useful where bandwidth is limited, such as remote field sites or other areas lacking a robust network. MQTT is part of both the Azure and Amazon service offerings, so it has a lot of established architecture behind it, making it easy for developers to adopt today.

In the case of CoAP, the strongest argument is its compatibility with HTTP. If you have an existing web service-based system, adding CoAP is a good option. It is built on the User Datagram Protocol (UDP), which can be useful in resource-constrained environments. Because UDP allows broadcast and multicast, you can potentially transmit to multiple hosts using less bandwidth. This makes CoAP a good fit for local network environments where devices need to talk to each other quickly, which is typical of some machine-to-machine (M2M) settings.
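CoAP's UDP underpinning means a request and a response can each be a single datagram, with no connection handshake. A minimal sketch of that pattern over the loopback interface (the payloads are invented and this is not a real CoAP implementation, which adds binary headers, message IDs and retransmission):

```python
# Connectionless request/response over UDP loopback -- the transport
# pattern CoAP builds on.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))             # OS picks a free port
addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"GET /temperature", addr)  # one datagram, no handshake

request, client_addr = server.recvfrom(1024)
server.sendto(b"2.05 21.5C", client_addr) # reply in a single datagram
reply, _ = client.recvfrom(1024)

server.close()
client.close()
print(reply)  # b'2.05 21.5C'
```

Compare this with TCP, where the three-way handshake alone costs more packets than this entire exchange.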

If an IoT developer is working with a device that will leverage an existing web server architecture, CoAP is the natural fit. CoAP also suits a device that is really "report only": dropped on the network, it just needs to report data back to a server. Other uses, such as cloud-centric architectures, will probably be best served by MQTT.

The future of MQTT and CoAP

Over time, with other protocols, usage and industry adoption have tended to migrate toward the freer, more inclusive platform, unless the closed one is much better. Both MQTT and CoAP are open standards which anyone can implement. CoAP was created by a standards body (the IETF), whereas MQTT was originally designed by private companies, including IBM. CoAP was designed for resource-constrained environments, and it may yet become the winner, but for the time being MQTT seems to be in the lead. There is significant momentum behind MQTT: the big cloud players have picked it, or at least picked it first. Additionally, many commercial use cases need MQTT's features (store and forward, a centralized host). However, one possibility is that software development that has standardized around HTTP (mobile app development, for instance) could start to leverage CoAP, both for working with peripherals and for communicating with the back end to reduce bandwidth on poor connections.

Ultimately, these protocols can be deployed effectively in different applications throughout the internet of things. We know there are specific use cases in which each is best served, but we also know that IoT and IoT devices will continue to grow in complexity and ubiquity. For developers, understanding the key differences in application not only enables a better initial deployment, but builds a strong foundation on which future development can be executed.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

May 3, 2017  3:10 PM

Open data, the underpinning of the internet of things

Brian Zanghi Profile: Brian Zanghi
City, Data Management, Internet of Things, iot, IoT data, Open data, smart city

Although the concept of smart cities is relatively new, it has jumped to the forefront of the conversation about future urban environments. Last year, the United Nations predicted that two-thirds of the world's population will live in a city by 2030. With this growth, and with innovation across all sectors constantly expanding, it is critical that cities stay attuned to society's evolving needs. In many places it's no longer common to carry a paper map or even pick up a newspaper left at the bottom of the driveway. Instead, urban residents expect to be linked to their cities and fellow citizens through convenient applications, technological innovation and the connectivity of IoT in order to accomplish their daily routines.

Promoting innovation and idea development is key to becoming a smart city but needs to begin with opening up data for public access. For example, allowing access to current bus location data would allow developers to create apps that alert users when buses are nearby. With an open data policy, cities are able to join the smart city movement that integrates technology and information into the heart of urban development.
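As a hypothetical illustration of the bus example: once a city publishes real-time bus positions, a "bus nearby" alert reduces to a distance check. The feed format and coordinates below are made up for this sketch; real open transit feeds are commonly published as GTFS-realtime.

```python
# Sketch: alerting when a bus is near, given open real-time positions.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def buses_nearby(user, buses, radius_m=500):
    """Return ids of buses within radius_m of the user's position."""
    return [b["id"] for b in buses
            if haversine_m(user[0], user[1], b["lat"], b["lon"]) <= radius_m]

# Made-up feed data for illustration.
feed = [{"id": "bus-12", "lat": 42.3601, "lon": -71.0589},
        {"id": "bus-7",  "lat": 42.4000, "lon": -71.2000}]
print(buses_nearby((42.3600, -71.0585), feed))  # ['bus-12']
```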

This data-first approach has been the defining factor driving cities to become smarter and more innovative environments. According to the Sunlight Foundation, the five largest American cities — Chicago, New York City, Los Angeles, Houston and Philadelphia — allow public access to data and have only continued to grow as exemplary smart cities.

New York City achieves this by encouraging organizations to innovate, ultimately contributing to the city's evolution into a smart city. Projects like the Displacement Alert Project by the Association for Neighborhood and Housing Development use open data to create a web visualization of neighborhood and residential building conditions, raising awareness about the affordable housing crisis and locating areas of severe displacement pressure. As this app shows, open data gives New York the capability to address threats to the well-being of its residents and helps simplify solutions, furthering the smart city effort.

Boston has capitalized on the benefits of open data in a similar way: universal access to data aided the development of the BOS:311 app, which allows residents to report non-emergencies to the Constituent Service Center, which then dispatches the appropriate agencies.

As New York and Boston both prove, connecting citizens to a smart city requires access across all sectors — and can only be accomplished with a data-first approach.

The internet of things further enables open data initiatives by providing granular and real-time data for innovations like air quality sensors, public transit location devices and disaster warning signals. Bridging the gap between the people and the city with comprehensive data allows for better monitoring of the behaviors and needs of the city's citizens, and permits solutions that improve urban conditions and alleviate inconveniences.

Boston's beta test of a new data portal, which will trial a more user-friendly display of available data, shows that making data comprehensible and easy to interpret is essential. In addition to creating a portal, cities also need to encourage agencies to leverage and share data. This involves ensuring data is available in a universally understood format, as Boston hopes to do with its newly overhauled data system.

Promoting innovation is only made easier with usable data. For example, in 2013, just after New York joined the open data movement, the city's Department of Transportation rolled out a new mode of transit with Citi Bike, a bike-sharing system. Since then, several private-sector companies have been using New York's open data in hopes of improving on the Citi Bike model. Opening up data on popular bike commutes has underscored the gaps in the Citi Bike system and is allowing innovators to fill them. Companies like Spin and Mobike are weighing in with their own bike-sharing solutions, such as eliminating docking stations for an even easier commute.

Recently, the Department of Transportation emphasized the importance of urban technology by organizing the Smart City Challenge, in which cities were asked to propose plans that combined innovation and connectivity to win the funding necessary to execute them. The winner, Columbus, Ohio, took on a large project proposing a new transportation system that included an autonomous shuttle system, a universal app for all transit modes and a data analytics plan. Giving the city's tech scene a confidence boost, Columbus hopes the grant will encourage businesses to innovate and contribute to the development of technology that addresses some of the shortcomings in Columbus's urban landscape that the public data reveals.

In order to accommodate their citizens, cities must provide a technological ecosystem by capitalizing on the capabilities of IoT and innovation. Opening data up to the public promotes development in technology and problem solving by the public and private sectors, because it fosters idea development based on measurable problems in society. This will allow the process of becoming a smart city to take place organically and seamlessly.


May 3, 2017  1:29 PM

Revisiting the ‘internet of nothings’ — where we are today

Geoff Webb Profile: Geoff Webb
Internet of Things, iot, iot security, privacy, Protocols

Picking a fight with someone else’s past predictions seems like a crappy thing to do. Especially for someone who also makes lots of forward-looking statements that could come back to haunt him. Nevertheless, that’s what I want to do.

Back in 2014, The Economist published an article called “The internet of nothings.” In it, the writer took on what they felt were the breathless and overblown predictions on the impact of IoT. And much of what they offered up as evidence was exactly right.

For example, the article pointed out the lack of standards and security for moving data around:

“These unglamorous middleware issues of standards, interoperability, integration and data management — especially privacy and protection from malicious attack, along with product liability, intellectual-property rights and regulatory compliance — are going to take years to resolve.”

Yet, while those points are correct, I differed at the time, and still differ now, with the conclusion:

“Only when they are will IoT have any chance of transforming society in a meaningful way. That day is a long way off.”

It’s hard to imagine one would argue today that IoT has had little effect on society to date. If nothing else, the explosion of smart devices hitting the consumer market reflects a growing awareness that “smart things” are going to be the new frontier of devices. The escalating battle for the home hub between Amazon and Google is a clear indicator that owning the heart of the IoT interface will put any vendor in a dominant position.

The (completely understandable) misconception about the impact of IoT on society is that we’ll be able to point to one thing, to one event, and say “there it is.” But we won’t. The impact of IoT will be like a potter, gradually shaping clay, not like a hammer hitting a vase. Society will conform slowly to the opportunities and pressures of IoT, and manufacturers from home automation, medical device, automotive, and sports and leisure, in addition to nearly every other stripe of enterprise, are starting to try to own and mold that new shape.

While the potential use cases for smart devices exist, it seems we’re well into the “years to resolve” with IoT middleware issues. I’d love to live in a world where a lack of standards for security and insufficient capability to defend against attack somehow hold back the adoption of technology. But I don’t — neither do you — and no other technological development has made this fact quite as clear as IoT has.

Let's be honest: poor security and privacy controls haven't slowed us down in the past, and I really can't find it in my perennially optimistic heart to think that they will now. The pressure to IoT-enable stuff, all kinds of stuff, is only going to get more urgent. The adoption of cloud delivery helps illustrate how this will play out. If you attended a technology trade show five or six years ago, you'd be intimately familiar with the phenomenon of "cloud washing," in which every conceivable product or service became attached to the concept of cloud delivery.

IoT-washing is going to make cloud-washing look like a quick spritz of water. The competitive pressure to take any number of ordinary objects and attach sensors to them is going to drive all kinds of odd product launches. Simply being able to claim that a device is "smart" gives companies a differentiator, regardless of the actual value of that capability. As a society, we are so comfortable with the expectation that "tech" equates to "improved" that it's not a hard sell to make people believe a smart toaster is better than a dumb one, even if the resulting toast is no better. Making your product "smart" is going to define the new Wild West for all kinds of markets. Unsurprisingly, we're not waiting for products to hit the market; instead we are facing a potential deluge of every bizarrely connected product imaginable. And these things will have an impact on the way we think about and interact with technology.

And, as we’ve already seen, flooding the market with millions of poorly secured connected devices will have significant impact — just not the kind we want.

This is where I find myself agreeing with The Economist's point about the lack of IoT standards and security. In the past, we've seen security improvements driven either by public pressure after a breach or by legislative pressure to meet compliance and regulatory standards. It's hard to know what will help with IoT: it is so diverse, covering so many markets, products and capabilities. In individual cases we can help protect the infrastructure itself (for example, by enforcing standards on something like smart grid technology), but the scale and complexity of IoT make a holistic approach daunting in the extreme. It's simply too easy to launch a connected product and let the rest of society foot the bill for poor security and privacy controls. I wish there were an easy answer. (Actually, at this point, I'd settle for a difficult answer, if it were feasible.)

The internet of things is very, very real. And the impact will be immense. We won’t connect to the internet with a terminal, a laptop or even a smartphone. We’ll live inside it. IoT is already forming around us, and is growing at an incredible, unpredictable and uncontrolled rate. No one is in charge, of course, and no one has a plan beyond individual products. It’s just happening and happening much, much faster than we could have ever expected.

And if there is a “nothing” in IoT — it’s likely to be quaint concepts like “security” and “privacy.”


May 3, 2017  12:15 PM

Securing the IoT revolution

Ger Daly Profile: Ger Daly
cybersecurity, Enterprise IoT, Internet of Things, iot, IoT devices, iot security, security in IOT, trust

The internet of things has arrived — everyday objects are getting smarter and internet connected, enabling rich information streams that can be shared seamlessly between devices, networks, industries, organizations and users. Today, billions of connected, smart devices are active around the world — and this number will only continue to rise. Gartner predicts 8.4 billion connected devices (“things”) will be in use this year, and almost 21 billion by 2020.

The rapid adoption of IoT-enabled devices brings with it a new set of challenges, which raise questions about where and how these devices should be used. The security implications of IoT for both government agencies and public services are particularly interesting, as governments grapple with how to manage and deploy this technology to become more efficient and innovative. Traditional security and IoT security are very different. For example, traditional IT security processes can assume that systems or devices can be taken off air or reset at short notice to apply security patches. This may not be possible with an IoT device, where in general high availability is assumed, and resetting a device at short notice may have safety or financial consequences. As such, securing the IoT requires new ways of thinking and new end-to-end planning measures.

For government organizations, the IoT revolution carries two significant security considerations. First, when deploying IoT technologies to enhance public services, government agencies must not only understand the benefits of such technology, but also the security risks that such networked technologies bring with them. Second, as IoT (and industrial IoT) become an integral part of critical national infrastructure (CNI), government must develop defense and security measures and procedures to address threats to the CNI — as well as society more broadly — that may emanate from this technology. IoT is not just an extension of an existing infrastructure to be managed, but rather incorporates radically different protocols that must be planned for. Ultimately, the IoT market will only reach its full potential once these security challenges have been addressed, and governments have established actionable recommendations for securing the digital economy.

Security is not keeping up with innovation

Controlling cyberthreats is a critical concern for citizens, businesses and governments alike. However, not all organizations have the right tools or systems in place to protect sensitive data. Even though most IoT connections rely on secure wireless networks, data can still be vulnerable. For example, unlike data from smartphones — which are part of IoT — information from other IoT devices doesn’t always start and end its journey with a human who can make decisions about access. Increasingly, devices are sharing data directly, without first evaluating the data for quality, integrity and security. Ultimately, IoT data needs to be secured on the connected device, on servers in the cloud, when shared between IoT devices and at every point in its journey.

For these reasons, there is rightly an increasing concern about the security of IoT-enabled devices and their ability to provide reliable, trustworthy data for decision-making, for example in the areas of healthcare, policing, justice and revenue. Not all cyberattacks aim to steal or destroy data; some seek only to manipulate data — often with equal, if not worse, consequences. The need to protect data from malicious or accidental manipulation is and must remain a priority for organizations that use connected devices to support decision-making.
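One common building block for detecting this kind of manipulation is a keyed message authentication code: the server recomputes a tag over each reading and rejects anything that doesn't match. A minimal sketch follows; the key and reading format are invented for illustration, and key management and replay protection are separate, harder problems.

```python
# Sketch: detecting tampering of a device reading with an HMAC.
import hashlib
import hmac

DEVICE_KEY = b"per-device-secret"  # illustrative only; never hard-code keys

def sign(reading: bytes) -> bytes:
    """Compute an authentication tag over a raw reading."""
    return hmac.new(DEVICE_KEY, reading, hashlib.sha256).digest()

def verify(reading: bytes, tag: bytes) -> bool:
    """Constant-time check that the reading was not altered in transit."""
    return hmac.compare_digest(sign(reading), tag)

reading = b"meter-42:usage=13.7kWh"
tag = sign(reading)

print(verify(reading, tag))                   # True
print(verify(b"meter-42:usage=3.7kWh", tag))  # False: value was manipulated
```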

Over the coming years as sensors, biometrics, healthcare monitors and autonomous vehicles become far more prevalent, they will bring with them an array of new challenges for governments. The impact of data breaches on these devices could be dire for citizens and governments alike, so steps must be taken to ensure device and data security now and in the future.

Steps to manage IoT security

  • Engineer trust and understand the threat landscape — The security challenge for hardware manufacturers and service providers that specialize in machine-to-machine connectivity is significant. These types of IoT devices are usually easily hackable because they are designed to be accessed over a local network and often come with unsecured, hard-coded default passwords. While the adoption of IoT in the home and workplace is inevitable, device manufacturers must build security into products and solutions to provide added security and resiliency. Accenture research has found that by addressing cybersecurity proactively, an organization’s ability to thwart cyberattacks increases by an average of 53%. Organizations that use threat-assessment models that are tailored to their specific digital posture will improve their ability to detect security breaches and limit damage. Companies can enhance existing security by using IoT devices for authentication, or by allowing security teams to monitor employees’ digital behavior for potentially harmful deviations, whether intentional or not.
  • Government and industry collaboration — The private and public sectors must work together to develop and implement universal standards for application development that place security, privacy and trust at the center of new product design and deployment. IoT devices would benefit from security protocols that offer enhanced authentication requirements and increased supervisory control. Data capturing technology and embedded analytics are also important to extract the full value from data shared across IoT devices, whether the data is travelling to the cloud for processing or remaining on the device where analytics tools can be applied. Together, industry and government can address end-to-end security requirements for the IoT market, including application development, device and application testing, embedded hardware and software, and connected products and platforms.
  • Secure critical infrastructure — Governments should devise a national strategy or stance on managing security risk around critical infrastructure, recognizing that malicious actors are seeking to exploit vulnerabilities. This will be especially true in the IoT age, as connected devices become a core part of critical infrastructure (e.g., smart meters in homes). Some public-sector agencies are already taking steps in this direction. The U.K. Centre for Defence Enterprise recently expressed concern about IoT security challenges, especially as these relate to the protection of critical national infrastructure such as hospitals, power networks and telecommunications systems. Meanwhile, the U.S. government has made significant progress in developing policies, programs and technologies that help protect North America’s critical infrastructure. According to a U.S. Government Accountability Office Report, the Department of Energy, the Department of Homeland Security and the Federal Energy Regulatory Commission have implemented 27 electrical grid resiliency programs since 2013, which are designed to address a variety of security concerns.
  • Educate citizens and employees — In today’s connected world, citizens and employees must understand the inherent online risks and take steps to strengthen their defenses. Effective cybersecurity depends on citizen and workforce awareness, education and an ability to understand, prevent and respond to increasingly sophisticated cyberthreats. Employers must also understand the implications of BYOD for internal security, personnel privacy and data protection, and work to develop policies that balance the reality of the digital device age with necessary restrictions on using personal devices and accounts in the workplace. Citizens must also take steps to protect their personal or organizational data to ensure they don’t fall victim to social engineering hacking and ransomware incidents which have become an unfortunate daily occurrence today. Government also has a responsibility to inform and help educate citizens about trusted sources of information online, as well as where to go to for help should their personal data be accessed or stolen. Equally, law enforcement and public services agencies must understand how to handle the reporting of crimes, fraud or mismanagement of IoT devices and their data.


IoT is creating new and exciting opportunities for businesses, consumers and service providers alike, but at the same time is also introducing significant challenges. The internet is not a secure environment, and any device connected to it is a potential target for cyberattack. We know from the number of successful cyberattacks that conventional cyberdefenses are no longer sufficient to keep determined cyberattackers at bay. With the IoT revolution comes great opportunities but also increased security risks, which must be addressed by using new, creative and agile thinking to defeat the cyberattackers, and to ensure IoT technology can deliver on its potential.

This article was co-written by Kevin O’Brien, Security & Intelligence Lead for Accenture Health & Public Service at Accenture.

Dr. Kevin A. O’Brien is the senior principal leading security and intelligence efforts within Accenture’s Global Health and Public Services practice. Prior to joining Accenture in February 2015, he served as an intelligence program manager, as well as a senior advisor on analytic and operational transformation, in the Canadian Department of Public Safety. Among other roles, he led projects and programs on cyberthreats, digital intelligence exploitation, counterterrorism, and protective and preventive security. Prior to joining the Canadian government, he served as Director of Alesia PSI Consultants Ltd from 2005-2009, which provided security and intelligence advisory services to the U.K., U.S., Australian and Canadian governments; Deputy Director of the Defence and Security Programme in RAND Europe from 2001-2005, with responsibility for its public security and intelligence work; and Deputy Director of the International Centre for Security Analysis and Visiting Lecturer in the Department of War Studies at King’s College London from 1997-2001. From 1997-2006, he was a Special Correspondent and Contributing Editor on Information Operations and Cybersecurity for Jane’s Intelligence Review.


May 2, 2017  3:53 PM

Industrial IoT revolutionizing embedded development

Bill Krakar Profile: Bill Krakar
Embedded devices, IIoT, Industrial IoT, Internet of Things, iot, iot security

As IoT devices continue to gain traction among consumers, more and more companies are recognizing the need to connect their products to the internet and other applications in a secure manner.

Up until now, applications typically haven't been connected to each other, and when they were, they were usually hard-wired, a much easier task than establishing wireless connectivity. The demand for wirelessly connecting products to the cloud has emerged almost overnight, and it is posing serious challenges for device OEMs and their industrial and consumer customers. Many of the organizations dealing with this challenge have long-standing products on the market that never had to be connected to the internet before.

For example, industrial companies are seeing newly emerging demands such as users requesting instant remote access to fielded devices via their smartphones. These types of industrial large node network systems include smart meters, smart energy, agriculture, commercial building automation and lighting. Most notably, there has been significant interest for robust wireless remote device access and control from commercial building operators and medical providers, who often need to remotely monitor buildings and manage patients.

The application environments of these products often contain highly sensitive and valuable data, making security components inherently important. Upgrading industrial environments with IoT capabilities has unique development challenges based on the security implications and wireless complexity associated with large node networks.

Surmounting security and connectivity challenges

What becomes glaringly obvious once companies embark on the integration phase of a large node network implementation is that connectivity and security are the most complex portions of system development, and they require a highly specialized skill set.

Typically, it takes three to five years for a person to achieve full proficiency in wireless and secure-connectivity development. Combine this training challenge with a global shortage of security and wireless developers, and it becomes clear that hiring an internal team to manage connectivity and security development and integration is extremely costly. Even if a company has the in-house expertise to develop the connectivity and security aspects of a project, the majority of its product development time will be spent on these two utilitarian elements rather than on the differentiated product functionality itself.

Wireless technologies are always tough to get working, especially the robust large node networks required in industrial settings. Integrating new wireless technologies, or new radios with new MCUs/MPUs, is a significant integration and system-testing exercise that adds no inherent value beyond the fact that data is connected. In many applications, getting basic wireless connectivity working robustly takes longer than creating the rest of the application.

Another challenge is finding a robust and flexible wireless standard. While BLE mesh might gain traction, it is new and unproven in industrial settings. Networks built on the 802.15.4 radio standard are in use today and will most likely be the backbone of new low-power wireless mesh networks in industrial environments. One such 802.15.4-based protocol is ZigBee, which will most likely share the market with Thread, a newer protocol also built on 802.15.4.

Some increasingly popular non-mesh protocols to watch include the sub-GHz protocols Sigfox, LoRa and wireless M-Bus. Cellular (GSM) has also been widely used for many years, and newer variants such as LTE Cat 1, LTE Cat M1 and NB-IoT will become increasingly attractive for industrial low-power, low-bandwidth IoT use cases where connecting to the cloud by Wi-Fi or Ethernet isn't practical.

Building from the bottom up

Development teams that build unconnected embedded devices today and simply want to connect them will need to either start with a production-ready secure connected platform or add deep security and wireless connectivity before they can even start real development of a connected system.

The security and robust connectivity of devices and applications need to be the underlying foundation of the product, versus an added layer after the functionality of the product has already been built. Security must be robust and embedded. Faking security isn’t a good idea and faking solid wireless connections won’t survive a day fielded in the real world. There is no avoiding the need to get these must-have platform components into place. If not approached correctly from a development perspective, these two items will define the critical path and result in the nearly always fatal project dynamic of “can’t get there from here,” where lack of technical sturdiness derails the project very late in the development/integration cycle.

Developers who evaluate, develop, prototype, iterate, field test, install and maintain commercial large node networks based on an out-of-the-box commercial secure connected platform almost certainly will achieve quicker and more deterministic deployment versus starting from scratch.

Each development project is a unique undertaking based on the environment and application, as well as the internal processes within the company footing the bill. Consequently, there are numerous customized ways to approach the problem, yet for many of these companies, a comprehensive out-of-the-box development platform is key.

Regardless of the solution set you identify for your situation, there are a few key elements you should consider. An open software framework that supports new, emerging and legacy network protocols is imperative. A solution should also have proven multiprotocol interoperability, which gives designers the flexibility to incorporate a wide array of wireless protocols that can work together or independently. Multiprotocol interoperability also enables end-to-end wireless communications in heterogeneous large node networks.

Secure connections from end node to the cloud

As noted earlier, embedded security is a major challenge that changes and evolves on a daily basis. As we all know, there is no silver bullet in security. Regular security updates to any and all connected devices are required — no matter how many embedded security layers are within a device. We and other market leaders are constantly innovating new ways to stay ahead of security threats.

Companies should choose a solution that addresses the latest network security requirements, protecting user and system data through encrypted wireless communications that prevent unauthorized access as well as interception, man-in-the-middle and replay attacks. Using proper authentication and encryption measures is also critical.
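As a concrete baseline, "proper encryption" on the cloud side of the link starts with a TLS configuration that actually verifies its peer. The sketch below uses Python's standard ssl module to show the settings that defeat the interception and man-in-the-middle attacks mentioned above; the TLS version floor is an assumption on my part, not something the article specifies.

```python
import ssl

# Require authenticated, encrypted channels for any device-to-cloud link.
# ssl.create_default_context() already enables certificate verification
# and hostname checking; we additionally pin a minimum protocol version.
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
context.minimum_version = ssl.TLSVersion.TLSv1_2

assert context.verify_mode == ssl.CERT_REQUIRED  # reject unauthenticated peers
assert context.check_hostname                    # bind the cert to the server name
print("TLS context enforces peer verification")
```

Constrained devices often use DTLS or pre-shared-key TLS profiles instead of full certificate chains, but the verification requirement is the same in spirit.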

Most companies cannot afford to spend the time and money or acquire the expertise required to build a secure connected wireless system to the cloud with a la carte development boards, which require connectivity and security integration between the components themselves. The system level development platform approach is in its early stages, and we anticipate many more exciting changes in the future.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

May 2, 2017  11:24 AM

Bricker bot: A silver lining to force accountability for IoT security?

Douglas Santos Profile: Douglas Santos
Bot, Botnet, Brute force attack, DDOS, Denial of Service, iot security, security in IOT

The Bricker bot made the news a couple of weeks ago for knocking unsecured IoT devices offline rather than hijacking them into botnets and using them for a DDoS attack like the massive event we saw last year against Dyn. It is the third botnet to target insecure IoT devices, but the only destructive one. The second, dubbed Hajime, breaks into IoT devices, but instead of bricking them, it makes them more secure by disabling remote access to the device from the internet. Mirai, of course, was the first, but it has the same purpose as other botnets: to enslave IoT devices and use the computing power of its collection of bots for the purposes of the threat actor behind it.

While the Bricker bot may not yet be a worm with mass adoption, it could be a precursor of things to come. It has all the early indications of potentially being very dangerous (even more than it is today) as it gains greater appeal.

There are millions of unsecured devices just waiting for someone to hijack them, with hundreds of thousands more of them coming online every single day. Because so many of these devices have little to no security, they pose a serious risk to the digital economy. As we have seen, because of their pervasive deployment, marshaling them to engage in attacks like the massive DDoS attack last fall would almost certainly bring a considerable segment of the internet to a grinding halt, disrupting business, affecting services and potentially impacting critical infrastructure.

The Bricker bot is different, as it simply disables the internet connectivity of IoT devices. The alleged reason for the Bricker bot, according to its author, is to highlight the vulnerability of IoT devices. The argument goes that if vendors are not keen about making sure they ship devices that are secure by default, and if the owners aren’t concerned about security either, then it is just a matter of time before these devices are breached and become part of a botnet. So, to warn the market about this problem, the Bricker bot author chose to simply knock them offline.

More info about Bricker

The Bricker bot is fairly straightforward. It operates through a couple of Tor exit nodes, continuously scanning the internet for devices with open Telnet and SSH services — more specifically, the Dropbear SSH server, which usually ships with tiny Linux distributions built around BusyBox (a single binary bundling common Unix-style utilities for embedded Linux).

When it finds such a service, it attempts a brute-force attack, sweeping through the known default passwords for these devices — oftentimes hardcoded directly into the firmware. Once the worm gains access, it runs a few crippling commands on the box to render it inoperable — in some cases, permanently. Even though it is very simple, it can currently identify and destroy more than 80 types of devices. This form of attack even has its own name now: "permanent denial of service," or PDoS.


Figure 1: From March to April 2017, a significant increase in attempted brute-force attacks against Telnet/SSH

Other applications of such an attack are very likely. Imagine receiving a message from an attacker that says, "Pay us or we will permanently kill your TV (gaming system, printer, router, smart appliance, internet service or other connected device)." This sort of ransomware attack could easily be performed instead of simply bricking the device.



Figure 2: Yearly data on attempted breaches per country

The purported reason behind Bricker, per the author's own words (taken from a hacker forum — identity unconfirmed), is to force a change in the security of IoT devices, which security experts have long known are almost as insecure as leaving your front door open.

This lapse in security encompasses everything, from weak hard-coded passwords to lack of testing for security issues to poor implementation of the networking stack to constant reuse of insecure code across and between vendors of completely different devices. This goes back to Fortinet’s predictions for 2017 that vendors need to accept responsibility for the security of their products, especially as they are becoming more ubiquitous in our lives and the infrastructure that we rely on.

Right now, the ones who are suffering are the individuals and carriers that use these insecure products. But soon enough, this trend is likely to piggyback onto the vendors selling devices built around junk code. And maybe it will be enough to force them to start thinking a little more about security and not just profiteering.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

May 2, 2017  11:08 AM

Get more meaningful data back from your IoT devices, and pay less for it

CJ Boguszewski Profile: CJ Boguszewski
Cellular, cloud, Data Analytics, Data Management, Edge computing, FOG, fog computing, Internet of Things, iot, IoT analytics, IoT applications, IoT data, Opex

The internet of things depends on data. It seems like something that needn’t be said any longer, but it bears repeating as it’s one of the biggest barriers to IoT use cases heading to scale deployment. The things sense and act, the cloud stores and computes, and the intelligence applies insight and logic to drive action.

We’ve had machine-to-machine communications for a long time, and much of the prevailing mindset of IoT in the early going has been a SCADA-type mentality of command and control from the central system, with plenty of data in round-trips related to the server controlling and the client obeying. Very little of the cloud and intelligence end of IoT has been fully leveraged thus far.

It’s time to revise that approach

With more computing storage and processing horsepower at the edge of the network in today’s IoT devices, use cases have started to incorporate processing out there (with many paradigm names, inevitably: mist, fog, edge and more). It’s becoming more common, for sure. However, while that does bring benefits to certain use cases, the central tenet of IoT remains — plenty of data making its way upstream from devices, sensors and actuators will be the foundation of reaping ongoing benefits.

After all, with the compute, storage and economics of the cloud, why wouldn't you want to get as much meaningful data as possible upon which the intelligence end of IoT can act?

One big barrier to getting more data is paying the price of carrying it

However, there’s still a big barrier in the way: the cellular Opex meters running on the telecommunications core networks which, to date, still carry the vast majority of IoT data.

When the MB meter runs, the toll on data collection remains in place. But is some data in the transmission from each IoT device/sensor/actuator more meaningful than others?

Clearly, we know that IoT generates a lot of data. Just to give you a mental refresher, here’s a quick back-of-the-envelope:

  • Data packet size: 1 KB
  • Number of sensors: 1,000,000
  • Signal frequency: 1x/minute
  • Events per sensor per day: 1,440 (minutes in a day)
  • Total events per day: 1.44 billion (a million sensors)
  • Events per second: 16,667 (86,400 seconds in a day)
  • Total data size: 1.44 TB per day … for the 1 KB packet.

What that means is if you’re paying $1/MB, you’re handing $1,440,000 per day to the cell networks. That’s a daunting number when you hit scale! And most IoT product lines are still in their early stages, so they haven’t really had to pencil out the economics of their solution at scale yet.
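The back-of-the-envelope above can be checked in a few lines. All figures are the article's assumptions (1 KB packets, a million sensors, one reading per minute, $1/MB carrier pricing), using decimal units throughout:

```python
# Back-of-the-envelope IoT data volume and cost model.
SENSORS = 1_000_000
EVENTS_PER_SENSOR_PER_DAY = 24 * 60   # one reading per minute
PACKET_BYTES = 1_000                  # 1 KB per transmission
COST_PER_MB = 1.00                    # illustrative $/MB carrier rate

events_per_day = SENSORS * EVENTS_PER_SENSOR_PER_DAY
events_per_second = events_per_day / (24 * 3600)
bytes_per_day = events_per_day * PACKET_BYTES
cost_per_day = bytes_per_day / 1_000_000 * COST_PER_MB

print(f"{events_per_day:,} events/day")      # 1,440,000,000 events/day
print(f"{events_per_second:,.0f} events/s")  # 16,667 events/s
print(f"{bytes_per_day / 1e12:.2f} TB/day")  # 1.44 TB/day
print(f"${cost_per_day:,.0f}/day")           # $1,440,000/day
```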

Big for the IoT developer, tiny for the carrier

Believe it or not, a 1 million SIM deployment generating $1.44 million/day is very small for the mobile network operators, whose focus remains users of expensive smartphones with high ARPUs. And so they frequently treat IoT as a secondary source of traffic and revenue on their core networks.

Also, as it turns out, transmitting that data over the cellular network, due to security requirements, means surrounding the "useful" data with plenty of extraneous bits that aren't useful later for the kinds of things we'd like to do with the data in the cloud. While it's necessary for the sensors to report at the signal frequency the use case demands, it isn't necessary to transmit the full 1 KB of data in each packet.

Why not?

Well, if the sensors were secured on a virtual private connection, for example, where the device is unreachable from the public internet, it would be possible to eliminate a fair portion of that 1 KB. Extraneous packet header and security information can be stripped from the transmission, as those duties can be handled by cloud-side adapters tasked with managing those attributes. Every bit eliminated makes each dollar spent getting data into the IoT application's data store go further in analytics, machine learning, business intelligence and other upstream applications.
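To make the saving concrete, here is an illustrative split of a 1 KB transmission into payload and overhead. The header and security byte counts below are invented for the sketch, not measured values:

```python
# Hypothetical composition of one 1 KB transmission.
PACKET_BYTES = 1_000    # total bytes sent per reading (article's assumption)
HEADER_BYTES = 80       # transport/framing overhead (assumed)
SECURITY_BYTES = 120    # per-message security envelope (assumed)

payload_bytes = PACKET_BYTES - HEADER_BYTES - SECURITY_BYTES
overhead_fraction = (HEADER_BYTES + SECURITY_BYTES) / PACKET_BYTES

# If cloud-side adapters absorb those duties, each transmission could
# shrink toward payload_bytes alone.
savings_mb_per_million_events = (PACKET_BYTES - payload_bytes) * 1_000_000 / 1e6

print(f"Payload: {payload_bytes} B, overhead: {overhead_fraction:.0%}")
print(f"Savings: {savings_mb_per_million_events:.0f} MB per million events")
```

Under these assumed numbers, 20% of every metered megabyte is bits the cloud application never uses.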

Strip out the repetitive, extraneous bits — but also leverage edge compute

So, building on the point that removing communication overhead is key when collecting lots of minimized data transmissions, it's also important to add some capability for filtering data at the edge, in the fog or in the mist before sending it to the cloud. For some use cases it is more important to reduce the data itself at the edge — for example, a surveillance camera sending an alert when a person passes in front of it, while cats are recognized, filtered and ignored. Combining the two approaches — reducing events through edge processing and sending only the important data with minimal overhead — is the way to go.
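The camera example can be sketched as a tiny edge filter. The labels, confidence threshold and event shape here are all hypothetical stand-ins for whatever the device's classifier produces:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # classifier output on the camera, e.g. "person"
    confidence: float  # classifier confidence, 0.0 to 1.0

def should_forward(event: Detection) -> bool:
    """Edge-side filter: only alert-worthy events cross the metered uplink.
    Rule matching the article's example: forward people, drop cats (and
    anything the classifier is unsure about). Threshold is assumed."""
    return event.label == "person" and event.confidence >= 0.8

events = [
    Detection("cat", 0.95),      # recognized and ignored at the edge
    Detection("person", 0.91),   # forwarded as an alert
    Detection("person", 0.42),   # too uncertain; dropped
]
forwarded = [e for e in events if should_forward(e)]
print(f"{len(forwarded)} of {len(events)} events forwarded")  # 1 of 3
```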

In summary, let the cellular Opex meter collect its (reduced) toll on meaningful data transmits and wring the repetitive data out of the stream. Architecting with an eye on virtual privacy for the network of devices, sensors and actuators that form the foundation of IoT is crucial to achieving precisely that.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

May 1, 2017  4:40 PM

Incorporating location into the IoT experience

Sudheer Matta Profile: Sudheer Matta
BLE, Bluetooth, devices, Internet of Things, iot, IoT applications, Wireless Access Points

The internet of things is all about connecting people and things and enabling objects to be sensed and/or controlled remotely across any network infrastructure. In a business environment, that often means smarter control of heating, ventilation and air conditioning systems, lights, security cameras, etc. — resulting in huge efficiencies and cost savings.

Bluetooth Low Energy (BLE) is a relatively new technology used in enterprises, hospitals, stores, hotels, etc. for indoor location services. BLE enables contextual engagement with mobile users, such as turn-by-turn directions across a campus and proximity-based messages. It can also be used to track strategic assets (for example, wheelchairs, pallets and forklifts) and people (e.g., children and patients).

BLE is becoming very prevalent in IoT to create some amazing experiences. Below are three real-world examples:

  • Motion sensors can be used in hospitals with BLE-enabled infusion pumps to determine with a high degree of certainty whether those devices are located inside (or outside) clean rooms, indicating whether they are clean or dirty.
  • BLE-enabled defibrillators can be tracked throughout a mall, triggering local security cameras to monitor the situation and dispatch emergency responders as needed.
  • Thermostats in a conference room can be adjusted based on the temperature preferences of the attendees in the room.

How does it work?

Modern wireless access points (APs) are equipped with BLE antenna arrays that can receive signals (i.e., beacons) from BLE-enabled devices, such as smartwatches, Fitbits, headsets, badges or tags. On the client side, Google and Apple introduced Eddystone and iBeacon, respectively, as open protocols that use BLE for engagement, enabling Android and iOS devices to scan for BLE signals. The APs then leverage machine learning in the cloud to determine the location of those devices based on path loss formulas and probability surfaces, and can deliver contextual services and information to those client devices.
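Production systems use machine-learned probability surfaces as described above, but the path loss formula underneath is typically the log-distance model, which a short sketch can show. The calibration constant and exponent below are typical illustrative values, not vendor figures:

```python
def estimate_distance_m(rssi_dbm: float, tx_power_dbm: float = -59.0,
                        path_loss_exponent: float = 2.0) -> float:
    """Estimate beacon distance from received signal strength using the
    log-distance path loss model. tx_power_dbm is the calibrated RSSI at
    1 m (around -59 dBm is common for BLE beacons); the exponent is ~2.0
    in free space and higher indoors, where walls attenuate the signal."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

print(round(estimate_distance_m(-59.0), 1))  # 1.0 m at the calibration point
print(round(estimate_distance_m(-79.0), 1))  # 10.0 m in free space
```

Because indoor RSSI is noisy, real deployments fuse many such estimates from multiple APs rather than trusting any single reading.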

These APs also have a port that can be used to send/receive inputs to/from IoT devices. This creates a common point of convergence for both mobile devices and IoT objects, enabling BLE location data to be used for smarter IoT event handling. This, of course, assumes that intelligent workflows are built on top of the infrastructure to apply the right actions to the information received.

(It is worthwhile to point out that modern access points also support Wi-Fi. In other words, Wi-Fi connectivity, BLE location and IoT can all be integrated together into a common network infrastructure for even greater functionality and cost savings. But for the purpose of this article, we are focusing just on the advantages of using BLE and IoT together.)

As devices become smarter and more connected, it is only natural that location enters into the equation for better contextual experiences. With new wireless networks that support BLE and IoT, this is finally possible today.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

April 28, 2017  3:20 PM

Avoiding industrial IoT digital exhaust with machine learning

Dean Hamilton Profile: Dean Hamilton
Artificial intelligence, Data Management, Data storage, fog computing, IIoT, Industrial IoT, IoT analytics, IoT data, Machine learning, Predictive Analytics

The internet of things is speeding from concept to reality, with sensors and smart connected devices feeding us a deluge of data 24/7. A study conducted by Cisco estimates that IoT devices will generate 600 zettabytes of data per year by 2020. Most of that data is likely to be generated by automotive, manufacturing, heavy industrial and energy sectors. Such massive growth in industrial IoT data suggests we’re about to enter a new industrial revolution where industries undergo as radical a transformation as that of the first industrial revolution. With the Industry 4.0 factory automation trend catching on, data-driven artificial intelligence promises to create cyber-physical systems that learn as they grow, predict failures before they impact performance, and connect factories and supply chains more efficiently than we could ever have imagined. In this brave new world, precise and timely data provided by low-cost, connected IoT sensors, is the coin of the realm, potentially reshaping entire industries and upsetting the balance of power between large incumbents and ambitious nimble startups.

But as valuable as this sensor data is, the challenges of achieving this utopian vision are often underestimated. A torrent of even the most precise and timely data will likely create more problems than it solves if a company is unprepared to handle it. The result is what I refer to as “digital exhaust.” The term digital exhaust can either refer to undesirable leakage of valuable (often personal) data that can be abused by bad actors using the internet or simply to data that goes to waste. This article discusses the latter use, data that never generates any value.

A 2015 report by the McKinsey Global Institute estimated that on an oil rig with 30,000 sensors “only 1% of the data are examined.” Another industry study suggests that only one-third of companies collecting IoT data were actually using it. So where does this unused data go? A large volume of IIoT data simply disappears milliseconds after it is created. It is created by sensors, examined locally (either by an IoT device or a gateway) and discarded because it is not considered valuable enough for retention. The majority of the rest goes into what I often think of as digital landfills — vast storage repositories where data is buried and quickly forgotten. Often the decisions about which data to discard, store and/or examine closely are driven by a short-term perspective of the value propositions for which the IIoT application was created. But that short-term perspective can place a company at a competitive disadvantage over a longer period. Large, archival data sets can make enormous contributions to developing effective analytic models that can be used for anomaly detection and predictive analytics.

To avoid IIoT digital exhaust and preserve the potential latent value of IIoT data, enterprises need to develop long-term IIoT data retention and governance policies that will ensure they can evolve and enrich their IoT value proposition over time and harness IIoT data as a strategic asset. While it is helpful for a business to have a clear strategic roadmap for the evolution of its IIoT applications, most organizations simply do not have the foresight to properly assess the full range of potential business opportunities for their IIoT data. These opportunities will eventually emerge over time. But by taking the time to carefully evaluate strategies for IIoT data retention, an enterprise can lay a good foundation upon which future value can be built.

So how can I avoid discarding data that might provide valuable insight or be monetizable, while still not storing everything? If storage cost and network bandwidth were unlimited, the answer would be easy: simply sample sensor data at the highest resolution possible and send every sample over the network for archival storage. However, for many IIoT applications with large numbers of sensors and high-frequency sampling, this approach is impractical. A balance must be struck between sampling data at a high enough rate to enable responsive real-time automated logic and preserving data at a low enough rate to be economically sustainable.
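One common compromise, offered here purely as an illustration, is to sample fast for local logic but retain only per-window summaries. The window length and statistics chosen are assumptions:

```python
def summarize_windows(samples, window):
    """Retain (min, mean, max) per window instead of every raw sample:
    real-time logic still sees the full stream, while storage keeps a
    fraction of it. A 60-sample window turns 60 readings into 3 numbers."""
    out = []
    for i in range(0, len(samples), window):
        chunk = samples[i:i + window]
        out.append((min(chunk), sum(chunk) / len(chunk), max(chunk)))
    return out

# One hour of synthetic 1 Hz temperature readings.
raw = [20.0 + (i % 60) / 10 for i in range(3600)]
stored = summarize_windows(raw, window=60)
print(len(raw), "->", len(stored) * 3, "values retained")  # 3600 -> 180
```

The risk the article warns about is exactly this step: a badly chosen summary (too coarse a window, the wrong statistics) silently discards the anomalies you later wish you had kept.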

Is artificial intelligence the answer?

AI software algorithms that can emulate certain aspects of human cognition are becoming increasingly commonplace and accessible to open source communities. In recent years, the capabilities of these algorithms have improved to the point where they can approximate human performance in certain narrow tasks, such as image and speech recognition and language translation. They often exceed human performance in their ability to identify anomalies, patterns and correlation in certain data sets that are too large to be meaningfully evaluated using traditional analytic dashboards. And their ability to learn continually, while using that knowledge to make accurate and valuable predictions as data sets grow, increasingly impacts our daily lives. Whether it’s Amazon recommending books or movies, your bank’s fraud detection department giving you a call, or even self-driving cars now being tested — machine learning AI algorithms are transforming our world.

Many consider artificial intelligence critical to quickly obtaining valuable insight from IIoT data that might otherwise go to waste by surfacing critical operational anomalies, patterns and correlations. AI can also play a valuable role in identifying important data for retention. But employing AI in IIoT settings is not as simple as it sounds. Sure, AI cloud services (such as IBM’s Watson IoT and Microsoft’s Cortana) can be fed data and generate insight in a growing number of areas. However, IIoT poses some special challenges that make using AI to decide which (and how much) data to retain a non-trivial exercise.

The role of AI with fog computing

Companies facing the challenge of choosing between storing all raw IIoT sensor data generated or first blindly summarizing data in-flight (perhaps on an edge gateway) before transmission for long-term storage are often forced to choose the summarization approach. However, choosing the wrong summarization methodology can result in a loss of fidelity and missing meaningful events that could help improve your business. While consolidating data from many machines can allow the development of sophisticated analytic models, the ability to analyze and process time-critical data closest to where the data is generated can enhance the responsiveness and scalability of IIoT applications.

A major improvement over blind summarization would be to have time-critical complex analytic processing (such as algorithms for predictive failure analysis) operationalized on far-edge devices or gateways where they can process entire IoT data streams and respond as needed in real time. Less time-sensitive analytic intelligence and business logic can be centralized in the cloud and can use a summarized subset of the data. The AI algorithm can help to determine how much summarization is appropriate, based on a real-time view of the raw sensor data. This is where fog computing can play a role. Fog computing is a distributed computing architecture that emphasizes the use of far-edge intelligence for complex event processing. When AI is leveraged in a fog computing model, smarter real-time decisions (including decisions about when to preserve data) based on predictive intelligence can be made closest to where the data originates. However, this is not as easy to accomplish as it sounds. Edge devices often do not have sufficient computational and memory resources to accommodate high-performance execution of predictive models. While new fog computing devices capable of efficiently employing pre-trained AI models are now emerging, they may not have visibility into sufficiently large or diverse data sets to train sophisticated AI models. Obtaining large and diverse data sets still requires consolidation of data generated across many edge devices.

A practical compromise IoT architecture must first employ some centralized (cloud) aggregation and processing of raw IoT sensor data for training useful machine learning models, followed by far-edge execution and refinement of those models. In many industrial environments, this centralization will have to be on-premises (for both cost and security reasons), making private IoT data ingestion, processing and storage an important part of any IIoT architecture worth considering. But public IoT clouds can also play a role to enable sharing of insight across geographic and organizational boundaries (for example with distribution and supply chain partners). A multi-tiered architecture (involving far-edge, private cloud and public cloud) can provide an excellent balance between local responsiveness and consolidated machine learning, while maintaining privacy for proprietary data sets. Key to realizing such a multi-tiered architecture is the ability to employ ML at each tier and to dynamically adapt data retention and summarization policies in real time.

Adaptive federated ML

Federated ML (FML) is a technique where machine learning models can be operationalized and refined at the far edge of the network, while still contributing to the development of richer centralized machine learning models. Local refinements to far-edge models can be summarized and sent up one level for contribution to refinement of a consolidated model at the next tier. Far-edge devices across production lines within a factory can contribute to the development of a sophisticated factory-level model that consolidates learning from all related devices and production lines within that factory. Refinements to the factory-level model can be pushed up to an enterprise-wide model that incorporates learning across all factories.
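A minimal sketch of that consolidation step, loosely following the federated averaging (FedAvg) idea; the sample-count weighting scheme is an assumption, since the article doesn't specify how contributions are merged:

```python
def federated_average(local_weights, sample_counts):
    """Consolidate far-edge model refinements into a parent-tier model by
    sample-weighted averaging. Each element of local_weights is one
    device's (or production line's) parameter vector; devices that saw
    more data pull the consolidated model further toward their update."""
    total = sum(sample_counts)
    dims = len(local_weights[0])
    merged = [0.0] * dims
    for weights, n in zip(local_weights, sample_counts):
        for i, w in enumerate(weights):
            merged[i] += w * (n / total)
    return merged

# Two production lines contribute refinements; the busier line counts more.
factory_model = federated_average(
    [[0.2, 0.4], [0.6, 0.0]], sample_counts=[100, 300])
print([round(v, 6) for v in factory_model])  # [0.5, 0.1]
```

The same operation repeats one tier up: factory-level models become the inputs, and the output is the enterprise-wide model.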

Adaptive federated ML (AFML) takes FML a few steps further. First, as the centralized models evolve, they are pushed out to replace the models at lower tiers, allowing the entire system to learn. Second, when an uncharacterized anomaly is detected by the AI at the far-edge, the system adapts by uploading a cache of high-resolution raw data around the anomaly for archival and to allow for detailed analysis and characterization of the anomaly. Finally, these systems may also temporarily increase the sampling rate and/or reduce the summarization interval to provide a higher resolution view of the next occurrence of the anomaly.

Here’s an example of how an AFML approach works:

Adaptive federated machine learning training mode


  1. All raw sensor data is sent from the IoT devices to centralized, on-premises private IoT storage for the time required to aggregate a sufficiently large data set for training of effective AI models.
  2. An on-premises big-data, cluster-computing environment is then used to train AI models for anomaly and predictive analysis.
  3. Once the models have been trained, they are pushed down to fog computing devices and up to the enterprise level for consolidation. Centralized aggregation of raw data ceases and the system switches to production mode.

Adaptive federated production mode


  1. Complex event processors on the fog computing devices use the AI model to analyze all data in real-time and provide their insight to local supervisory logic and the private cloud.
  2. When everything is nominal, only summarized data are forwarded to the centralized IoT cloud for archival.
  3. Whenever deviations from the model are flagged by the fog device AI, the supervisory logic does three things:
    1. Executes any local rules in place for that predicted failure
    2. Sends a summary of the model deviations to the on-premises IoT cloud (for updating of the consolidated model)
    3. If the exception is an uncharacterized anomaly, sends to the IoT cloud a cache of raw data that surrounds the anomaly
  4. A batch process at each IoT cloud tier routinely retrains the machine learning models (using model deviation data and raw data) and periodically pushes down the upgraded model to the lower tier.
  5. Finally, selected data subsets that can be used by partners are sent to the public cloud for further exposure.
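The production-mode steps above can be sketched as a small supervisory loop. Every name, the threshold and the toy classifier are illustrative stand-ins, not anything the article specifies:

```python
from statistics import mean

THRESHOLD = 3.0                 # assumed deviation threshold for "nominal"
KNOWN_ANOMALIES = {"overheat"}  # anomaly classes the model has characterized

def supervise(window, classify, send, run_local_rules=lambda dev: None):
    """Sketch of the production-mode loop. `classify` stands in for the
    fog-device AI model, `send` for the uplink to the on-premises cloud."""
    deviation, kind = classify(window)
    if deviation < THRESHOLD:
        send("summary", mean(window))     # step 2: nominal, summary only
        return
    run_local_rules(deviation)            # step 3.1: execute local rules
    send("deviation", deviation)          # step 3.2: refine consolidated model
    if kind not in KNOWN_ANOMALIES:
        send("raw_cache", list(window))   # step 3.3: ship raw context upstream

sent = []
toy_classifier = lambda w: (max(w) - mean(w), "unknown")
record = lambda tag, payload: sent.append(tag)
supervise([1.0, 1.2, 0.9], toy_classifier, record)  # nominal window
supervise([1.0, 9.0, 1.1], toy_classifier, record)  # uncharacterized spike
print(sent)  # ['summary', 'deviation', 'raw_cache']
```

Steps 4 and 5 (periodic retraining and partner-facing exports) happen in the cloud tiers, outside this device-side loop.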

So instead of simply choosing to blindly summarize IIoT data at the edge (generating massive data exhaust), complex predictive analytic models can be employed directly on the far-edge devices where all sensor data can be examined. These analytic models can not only inform timely local supervisory decisions, they can learn from data generated beyond their reach and can also dictate how much of the raw data is worth preserving centrally at any moment in time. With this approach, the entire system will react quickly and get smarter over time.

Choosing the right technologies for AFML

The key to making an AFML architecture work is selecting IoT and analytics tools that are designed with this model in mind. You will need an on-premises IoT and analytics cloud infrastructure that is efficient, flexible, scalable and modular. You will need to be able to easily orchestrate the interactions of your key architectural building blocks. The tools you select should allow you to easily adapt to the shifting needs of your business, allowing you to experiment and innovate.

Although we are starting to see a proliferation of IoT tools that can be used for developing various IIoT solutions, these tools are seldom well-integrated. Industrial enterprises typically look to systems integrators to bridge the gaps with custom software development. The result is often a confusing, inflexible, costly and unsupportable mishmash of technologies that have been loosely cobbled together.

Thankfully, a few IoT vendors are now beginning to build more fully-integrated IoT service creation and enrichment platforms (SCEPs), designed to support an AFML IIoT architecture. SCEPs allow complex IoT architectures, applications and orchestrations to be efficiently created and evolved with minimal programming and administrative effort. These next-generation IoT platforms will help companies eliminate IoT data exhaust and harness IIoT data for use as a strategic business asset.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

April 28, 2017  3:18 PM

Match your real-time operating system in a cutting-edge platform

Kim Rowe Profile: Kim Rowe
Internet of Things, iot, IoT platform, iot security, operating system, OS

Today’s competitive world of embedded systems and the internet of things places ever greater demands on developers. They need to produce products that optimize size, weight and power, with focused feature sets that are nonetheless flexible enough to be quickly adapted to changing customer requirements. In processor terms, this means a family built around a common, high-performance, low-power core, offered as a series with a range of on-chip functions and peripherals that present a compatible platform for software. The newer generation of such families shares a common instruction set, scales from small to large applications, and offers I/O, on-chip flash and RAM in market-leading sizes.

Matching a real-time operating system (RTOS) to such a processor family requires that it scale smoothly along the power and performance range of the line. Ideally it should offer a familiar, standard API such as POSIX, the IEEE operating system interface standard that Linux also largely follows, so that code ports easily between desktop prototypes and embedded targets. It must combine a compact core with a wide selection of tested and proven functional software modules that can be quickly integrated to match the application, the selected processor and the mix of on-chip and off-chip peripherals needed.
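The portability benefit of a POSIX API can be sketched concretely. The fragment below uses only standard POSIX threads, so the same source could, in principle, be built on Linux for prototyping and then rebuilt unchanged for an RTOS that exposes a POSIX interface; the `shared_sample` structure and `sampler_task` are hypothetical names for illustration.

```c
#include <pthread.h>
#include <stddef.h>

/* Portable POSIX code: the same source builds on Linux for prototyping
 * and on any RTOS exposing a POSIX API, without modification. */

typedef struct {
    pthread_mutex_t lock;
    int reading;               /* latest sensor value */
} shared_sample;

static void *sampler_task(void *arg)
{
    shared_sample *s = arg;
    pthread_mutex_lock(&s->lock);
    s->reading = 42;           /* stand-in for a real acquisition */
    pthread_mutex_unlock(&s->lock);
    return NULL;
}

/* Spawn the sampler once and return what it stored. */
static int run_once(shared_sample *s)
{
    pthread_t tid;
    pthread_mutex_init(&s->lock, NULL);
    s->reading = 0;
    if (pthread_create(&tid, NULL, sampler_task, s) != 0)
        return -1;
    pthread_join(tid, NULL);
    pthread_mutex_destroy(&s->lock);
    return s->reading;
}
```

Nothing here is Linux-specific: `pthread_create`, mutexes and `pthread_join` are exactly the calls a POSIX-conformant RTOS provides.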

Many vendors now offer integrated development kits that include a single-board CPU with an integrated RTOS along with peripherals and tools to help the developer get started immediately adding innovative value. Such an integrated platform supports eight principles required for fast and efficient IoT development:

  1. Lean
  2. Adaptable
  3. Secure
  4. Safe
  5. Connected
  6. Complete
  7. Cloud-compatible
  8. Cutting edge

These features not only guide development toward focused design and time-to-market goals, they also cover the functional requirements of working effectively in the internet of things.

The platform’s rich modularity supports the lean development model through standardization, interchangeable drivers, protocols and service modules, and portable applications. This lets developers quickly adapt to changing customer requirements in the midst of a project, optimizing both hardware and software design as they go.
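Driver interchangeability of the kind described above usually comes down to a standard interface that application code depends on, with concrete drivers plugged in behind it. A minimal sketch, with hypothetical names (`comm_driver`, `send_report`, the UART stub), might look like this:

```c
#include <stddef.h>

/* Hypothetical driver interface: application code talks to this table,
 * so a UART driver can be swapped for, say, a USB one without touching it. */

typedef struct {
    int (*init)(void);
    int (*write)(const char *buf, size_t len);
} comm_driver;

/* One concrete driver (a trivial UART stub for illustration). */
static int uart_init(void) { return 0; }
static int uart_write(const char *buf, size_t len)
{
    (void)buf;                 /* a real driver would push bytes to hardware */
    return (int)len;           /* report how many bytes were accepted */
}

static const comm_driver uart_driver = { uart_init, uart_write };

/* Application code depends only on the interface, not the hardware. */
static int send_report(const comm_driver *drv, const char *msg, size_t len)
{
    if (drv->init() != 0)
        return -1;
    return drv->write(msg, len);
}
```

Swapping transports then means supplying a different `comm_driver` table; `send_report` and everything above it stay untouched, which is what makes mid-project changes cheap.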

Those same features make it adaptable — able to meet new market demands for features and functionality. Once a product is in place with a customer, the OEM must be able to quickly react to calls for additional features and expanded functionality — or even a smaller, lower-cost version of a product. Existing code can be moved to a higher performance processor and new features quickly added without serious revision of existing code.

Security and safety go hand in hand and must be designed in from the ground up: if it can be hacked, it isn’t safe. Security begins with a secure initial design and extends through communication protocols, authentication strategies such as passwords, electronic keys and physical recognition, secure booting, encryption and more. Judicious selection of the basic system architecture, hardware and software is equally important.

Two main features help ensure safety in systems. Determinism guarantees quick response to threatening conditions and makes the operation of the system predictable, so it can be reliably tested against strict timing requirements. Emergency stop with zero boot time means a device can be halted instantly and, if required, restarted without a boot sequence. An unsafe condition can thus be stopped immediately and the device returned to a safe state, or diverted to an action that deals with the emergency.
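The emergency-stop behavior can be pictured as a small, deterministic state machine: a fault causes a single bounded-time transition to a halted state, and because the device's working state is retained rather than lost in a reboot, restart is immediate. The types and function names below are hypothetical, for illustration only.

```c
/* Hypothetical safety state machine: a fault halts the device in one
 * deterministic step; restart resumes from retained state, no boot. */

typedef enum { RUNNING, EMERGENCY_STOP } safety_state;

typedef struct {
    safety_state state;
    int retained_setpoint;   /* survives the stop, enabling instant restart */
} device;

static void on_fault(device *d)
{
    d->state = EMERGENCY_STOP;   /* single bounded-time transition */
}

static void restart(device *d)
{
    if (d->state == EMERGENCY_STOP)
        d->state = RUNNING;      /* zero boot time: state was never lost */
}
```

The testability point follows directly: because both transitions take a fixed, known number of steps, the worst-case response to a fault can be measured and verified against a timing budget.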

The internet of things is connected and must accommodate a very broad range of sensors, both wired and wireless, which means supporting the full range of wired and wireless connectivity. An RTOS that can deliver virtually any connectivity option off the shelf, ready to be selected and integrated into the design, is complete: it contains everything a project is likely to need as it evolves.

Among the supplied protocols should be those used on the cloud side to connect easily and securely, transfer data and process commands, along with the ability to work with leading cloud services such as Microsoft Azure. It also means the best tools: a wide choice of IDEs, plus advanced support tools for tracking memory and objects (dials, gauges and charts for variable displays) and timing tools and displays for understanding scheduling, interrupt processing and more.

The incorporation of these latest tools, protocols and technologies, and their availability across a matched RTOS and processor family, makes this a truly cutting-edge platform.


