IoT Agenda


October 25, 2019  3:54 PM

The unique requirements for the edge data layer

Profile: Laz Vekiarides
Data storage, Internet of Things, iot, IoT analytics, IoT data, IoT data management, IoT data storage

The data layer is an often invisible but essential foundation for any new infrastructure technology, yet it's frequently an afterthought early in a technology's development. For example, when containers were first rolled out, the lack of persistent storage severely restricted their use until vendors provided workarounds to solve the problem. We can't afford to make that mistake again.

There’s an urgent need to store, manage and protect the enormous amount of IoT data being generated at the edge and the exponentially higher volumes expected soon.

A look at the explosive growth in data underscores the urgency. In 2013, IoT produced 0.1 zettabyte (ZB), or 100,000 petabytes (PB), of data, and that's projected to hit 4.4 ZB by 2020. By 2025, it will skyrocket to an estimated 79 ZB, according to IDC — and that's just for IoT. Enterprises will also be using the edge for infrastructure services, and service providers will be using it to deliver over-the-top video, among other things. There is no limit to the types of data that our world will generate that will contribute to this dramatic IoT growth.

Using the edge to overcome latency

Certainly, much of that data can reside permanently in the limitless, robust repository of the cloud, but only if performance isn't a priority. Cloud data centers are built in low-cost areas far from urban centers. That's economical, but the distance introduces significant latency, a no-go for any app demanding instant response. With the number of endpoints expanding at an almost limitless pace, the latency challenges are exacerbated. Consider life-or-death healthcare apps where IoT devices communicate in real time, self-driving cars that make critical decisions second by second and financial deals transacted in milliseconds.

The answer is to ensure compute and data are kept close to the end user; in other words, the answer lies at the edge. But when it comes to edge data, there are some serious challenges. Edge data centers must be built near large metro areas, where real estate prices are sky-high, and because of the cost, facilities are going to be small — too small to fit enough traditional storage boxes to accommodate all that data.

Distributed storage is another issue. Applications at the edge won't interact with only one facility. Take autonomous vehicles, for example: as a self-driving car travels in and out of the range of different facilities, it will need to communicate seamlessly with all of them, and its data must follow along. That's a tall order with just one vehicle. What happens when millions of autonomous vehicles hit the roads?

The solution is to pair the cloud with the edge and to provide a way to intelligently move data between them, minimizing latency and maximizing performance. In this way, the edge data layer acts as an on-demand service, and not as a massive rack of big storage arrays.

One way to limit the need for space and to avoid wasting resources is to cache the most active data at an edge data center. Better known as “hot” data, this tier typically accounts for only 10% of the data set. If data is “warm,” it can be stored at a point of presence far enough away so that real estate isn’t cost prohibitive, but close enough to provide no more than a couple of milliseconds of latency. Generally, anything under 120 miles away is close enough. The rest of the data set can be stored in the cloud if it’s cold and accessed infrequently.
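To make the tiering decision concrete, here's a minimal Python sketch of how an edge data layer might place an object based on access recency. The window values and function names are illustrative assumptions, not any vendor's implementation:

```python
import time

# Illustrative thresholds; a real edge data layer would tune these
# from observed access patterns.
HOT_WINDOW = 60 * 60        # seconds: touched within the last hour
WARM_WINDOW = 24 * 60 * 60  # seconds: touched within the last day

def place_tier(last_access_ts):
    """Return where an object should live based on how recently it was used."""
    age = time.time() - last_access_ts
    if age <= HOT_WINDOW:
        return "edge"   # hot data: cached in the edge data center
    if age <= WARM_WINDOW:
        return "pop"    # warm data: nearby point of presence, a few ms away
    return "cloud"      # cold data: lives in the backing cloud

print(place_tier(time.time() - 300))  # accessed 5 minutes ago -> 'edge'
```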

A service model for the edge data layer

With this approach, 100 TB of local storage in an edge facility can represent 1 PB of usable storage. The full data set, including cold data, is ultimately stored in a backing cloud. Storing both hot and warm data in a nearby point of presence can ensure sub-millisecond latency. Keeping data close to the decision point enhances processing power, so analytics can be performed with minimal latency.

This model can also help address the power constraints that many edge facilities face. Because just a fraction of the full data set is stored onsite, there is far less gear consuming electricity, and it becomes financially feasible to use the latest solid-state storage, which offers better performance and energy efficiency than spinning disks.

Having data stored where it needs to be enhances both distribution and connectivity. The use of dedicated private lines, which have become more affordable thanks to declining bandwidth prices, can improve both performance and security. On the edge, uploads happen at the same speed as downloads, a major consideration for apps that produce massive amounts of data to be uploaded and processed.

It’s still early in the edge buildout, but the explosion of IoT requires the industry to implement a data layer now in order to ensure the edge can handle the rapidly growing mountains of data. And it will require a new approach that differs from both traditional on-premises and cloud storage. In the end, only a service model can successfully address the space and energy constraints of the edge, while also providing sufficient capacity and data distribution capabilities to power the future of IoT.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

October 24, 2019  2:54 PM

IoT security policy requires comprehensive, expansive visibility

Profile: Reggie Best
cybersecurity, healthcare IoT, Internet of Things, iot, IoT cybersecurity, iot security, security awareness, security in IOT

IoT is a black hole; every day it’s sucking in devices that must be secured.

But IoT is also a nebulous term. Although emerging devices are easily identified at inception, decades-old technology has become part of the IoT universe, sometimes stealthily, and must be dealt with. These devices must be properly segmented and managed from a policy perspective because they're a gateway to an organization's broader, connected infrastructure.

Some IoT devices represent the cutting edge of innovation. Others are part of systems that have been around for years, supporting the infrastructure of a building, campus or an entire city. Emerging IoT devices are likely to be built and deployed with security in mind, while more familiar hardware that now has intelligence and connectivity might get overlooked.

IoT devices are both mundane…

Multi-function printers with scanning and copying functions are an excellent example of one of the earliest iterations of an IoT endpoint, yet they're often left out of security strategies. To improve collaboration and productivity, these devices are connected to the network without a second thought. Every employee can easily print from their workstation or scan a document to be routed anywhere in the organization.

That convenient connectivity, however, makes multi-function printers an IoT endpoint, and a popular doorway for threat actors to gain access to a company’s broader infrastructure. The good news is an IoT security policy can be a powerful tool to secure entire fleets of multi-function printers.

…and medical marvels

Even as hospitals move to electronic records with the goal of having a single view of a patient, they’re still full of connected printers, and increasingly, smart medical devices. And because they are connected to the network, they’re potentially a cybersecurity headache.

Just like every workstation at every nurses' station and every tablet in a specialist's hands, medical devices ranging from portable ultrasounds to heart monitors are all connected. They're even riskier in that some devices are used by patients outside the facility. Today's modern medical devices are ideal targets for threat actors who want to gain access to a hospital's information systems.

Much like printers, the prescription is good security policy, including network segmentation. There’s no reason a portable glucose monitor needs to connect the same way patient information records and workstations do.

The mundane gets marvelous

The proliferation of smarter buildings means whole cities and their infrastructure — from traffic systems to power grids — are increasingly composed of millions of IoT endpoints. With all these devices connected to the cloud, organizations can study usage patterns to create even more efficient environments. An IoT endpoint is a path to a treasure trove of valuable information and mission-critical systems.

Mundane systems, such as HVAC, now have sensors to control temperature. For example, when a crowded room heats up because there are so many bodies, the HVAC system knows to crank up the air conditioning. Conversely, the system is smart enough not to waste energy cooling or heating an empty room. Lighting systems are also automated thanks to wirelessly attached sensors, which — you guessed it — are on a network.

Similarly, security systems are made up of networked devices that monitor a building or an entire campus. Wired or wireless, cameras, biometric access keypads and facial recognition sensors make it easier for doors and turnstiles to open automatically for the right people. They're all IoT endpoints, too, and low-hanging fruit for someone who needs just a small crack to enter a larger system.

Like printers, HVAC and security systems have been around for decades, and in their early days they were always segmented on their own proprietary infrastructures. Advances in networking and even AI mean connectivity is a must, but how they connect and what they're allowed to do on any network needs to be carefully policed.

Build it and the threats will come

Even on a small scale, these IoT endpoints proliferate quickly. Now imagine them as part of a bigger system, such as a smart city.

From traffic systems to parking meters, cities have become a mesh of networked IoT endpoints. Municipal power and water utilities monitor their own infrastructure and even those of residential and commercial buildings through wired and wireless sensors. This proliferation will continue exponentially as autonomous vehicles hit the road.

From a technical perspective, these devices have a lot in common. Even when the functions and purposes of devices are unique, they may use the same communications protocols or the same storage media. What makes them different is how connected they must be. Do portable medical devices, such as insulin pumps, need to be on the same network as patient records? Does the single multi-function printer in a satellite office need to be on the same WAN as the head office? Should real-time traffic systems worry about being compromised because of a water usage meter on a high-rise building?

Visibility is essential to robust security and policy management. You need to know you've got a smart fleet of printers on your network or dozens of connected security cameras so you can decide what they can do and what they can access. And you can't do it manually. Not only must you establish a policy for every conceivable device and scenario, it needs to be applied automatically; you can't expect your IT security team to keep up.
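As a hedged sketch of what automated policy assignment could look like, the Python below maps discovered device types to network segments and allowed destinations. The device categories, VLAN names and rules are invented for illustration; a real deployment would drive this from an asset inventory or network access control product:

```python
# Hypothetical device-to-policy mapping; all names are illustrative.
POLICIES = {
    "printer":         {"vlan": "print-net",  "allow": ["print-server"]},
    "camera":          {"vlan": "cctv-net",   "allow": ["video-recorder"]},
    "hvac-sensor":     {"vlan": "bms-net",    "allow": ["bms-controller"]},
    "glucose-monitor": {"vlan": "meddev-net", "allow": ["telemetry-gateway"]},
}
DEFAULT = {"vlan": "quarantine", "allow": []}  # unknown devices get isolated

def assign_policy(device_type):
    """Return the segment and allowed destinations for a discovered device."""
    return POLICIES.get(device_type, DEFAULT)

for dev in ("printer", "smart-kettle"):
    print(dev, "->", assign_policy(dev))
# printer lands on print-net; the unrecognized smart-kettle is quarantined
```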

None of these systems needs to share a network to deliver value as an IoT device. But if you're going to properly segment these various IoT endpoints, whether it's a mundane meter or a cutting-edge facial recognition sensor, you need to know they're there. That starts with having a broad definition of IoT. It's a big galaxy.



October 24, 2019  1:15 PM

Epoxy’s role in IoT device PCB assembly and manufacturing

Profile: Zulki Khan
Internet of Things, iot, IoT device makers, IoT devices, IoT hardware, manufacturing IoT, smart manufacturing

Tech companies are talking about designing IoT devices with AI and machine learning. Before getting there, IoT device OEMs must take a couple of steps back and ensure effective printed circuit board (PCB) microelectronics assembly and manufacturing as a first step toward product integrity and reliability.

An item such as using the right epoxy during PCB microelectronics assembly might sound mundane to less savvy PCB assembly and manufacturing engineers. An inexperienced process engineer on the assembly floor might shrug off epoxy selection and inadvertently choose the wrong one, which creates a high probability of immediate product failure or delayed latent field failures.

Therefore, it’s a good idea for IoT device OEMs to get a good handle on the what, how and why roles associated with epoxy.

An epoxy is used on the bottom of a bare die and its wire bonding on an IoT device substrate or PCB to protect it. However, a main point to be made here is that not all epoxies are created equal. There are multiple manufacturers producing different epoxies, and even within each manufacturer, there are multiple types of epoxies being produced. Also, it's important to know that electronics manufacturing services (EMS) providers and contract manufacturers are generally the ones deciding on the kind of epoxy to use.

Selecting the right epoxy for your IoT PCB product involves epoxy characteristics, curing conditions, viscosity and glass-transition temperature.

Epoxy characteristics. Epoxies are formulated from different elements. For example, some might have adhesive traits, while others might have conductive thermal characteristics; others are electrically conductive or non-conductive, depending on the application. When it comes to temperature, an epoxy should be effective at different ranges, allowing it to be used with different substrates and surface finishes.

Curing conditions. Curing is dependent on an epoxy’s application. Curing means the epoxy goes through a cycle of a specific temperature range during a specific time interval. Some epoxies are cured at lower temperatures, and others at higher temperatures. Some are cured in minutes while others take an hour and a half to two hours.

Viscosity. This is important because different IoT PCB applications require different viscosities. If the viscosity is not right — meaning it is too thin or too thick — it can cause issues during application. If it’s too thin, the epoxy will spill out to the outside periphery of the die, thereby creating unstable die attach joints before wire bonding can be performed.

Dispensing the epoxy itself plays a critical role in an IoT device’s reliability. Here, the accuracy of the epoxy lines dispensed under the die for die attachment is highly important. Also, the dispensing methods are extremely important. Each one, whether it’s a pump, needle or syringe, must be carefully evaluated for a particular die attach application.

Tack time is closely associated here. There’s only a certain amount of time allowed to attach a die. Otherwise, the epoxy can dry too quickly, which leads to a die attach that is not optimal.

Glass-transition temperature (Tg). High-Tg epoxy compounds are critical to adhesive selection for higher-temperature applications. These products have better mechanical, thermal and electrical properties at high temperatures than lower-Tg materials. So, take Tg into consideration as a major factor when choosing the right epoxy for your given application.

All these key epoxy differences, as well as the different types and brands, are important to know. This know-how helps prevent an IoT device from being set up for failure at the outset of PCB microelectronics assembly.



October 21, 2019  2:45 PM

How machine learning and IoT increase customer lifetime value

Profile: George Corugedo
AI and IoT, customer experience, Internet of Things, iot, IoT analytics, IoT and AI, Machine learning

Any business looking to implement emerging technologies has one primary goal: to generate revenue. Deploying advanced digital technologies into business functions is no small undertaking; it requires a large financial investment, a reskilling of the workforce and a cleaning of vast amounts of data to ensure it’s prepared to be analyzed. Simply put, if you take this on, you want to see the return.

However, there’s a fundamental problem with the approach many companies are taking with machine learning. They are using it to superficially enhance the customer experience, but stop short of transforming it into a true revenue-generating engine.

Take fast-food companies, for example. Many are racing to introduce AI-powered menu boards that recommend add-on items based on the current selection, restaurant traffic or conditions such as weather or time of day. While this is a fantastic upsell opportunity, it could turn out to be more of a novelty than a mission-critical system that boosts the bottom line.

Personalization for a purpose

For AI and machine learning to ascend as top revenue-generating engines for the business, advanced analytic models must be embedded across the complete customer lifecycle. Without analytics that illustrate the why and the how, the technology does little more than scratch the surface of possibilities.

Personalization for personalization’s sake — such as seeing your name on a menu board — accomplishes little more than making the customer feel special in the moment. It is vastly different than using advanced analytics and IoT to personalize the experience for a segment of one in real-time based on their behaviors, interests and future intent.

Consider AI-powered chatbots that help customers resolve simple issues. The customer might be pleasantly surprised by the user-friendly experience, but a single interaction will not translate into direct and measurable revenue gains. Why? Because it’s restricted to one channel.

Data silos hold chatbots back and only let them go so far. Perhaps the bot can recommend a product that the individual later purchases, but if the data is siloed by channel, the brand will have no visibility into the entire customer journey. This significantly clouds any direct revenue impact and depresses the value of the advanced technology; a one-off sale is not the same as a direct revenue lift.

According to Accenture, 91% of consumers are more likely to shop with brands that provide relevant offers and recommendations. Personalization also drives retention, which astute brands know is more profitable than acquisition.

Unlock channel constraints to realize new revenue streams

To produce a revenue lift that moves the needle, advanced technologies must automate intelligence to dynamically engage with customers across every touchpoint available.

Additionally, self-training models should be built on unified customer profiles that capture data from every source in the moment. Embedded intelligence — supported by a complete view of the customer — has the power to recommend a next-best-action to a segment of one that is far more likely to end in conversion.

This is transformative. Traditional engagement strategies are bound by channels, making it very difficult to know how a customer moves throughout a dynamic journey. Say an email is sent to a segment of customers, with success measured by click rates and conversions. If a customer who doesn’t respond then shows up anonymously on the website, without any link between the email and the cookie, the customer will likely receive an inconsistent message and end their journey prematurely.

Or, take the example of a company that sells coffee pods. With a traditional model, customers buy a subscription to deliver coffee on a regular basis. However, this model ignores actual consumption and results in either a backlog of coffee or the customer running out before the next subscription arrives.

With a connected brewer that understands consumer behavior, the brand can deliver the right pods when they're needed. With an added layer of machine learning, the brewer can begin to understand more sophisticated factors as well, such as day of the week, time of year or even the weather, and deliver taste-appropriate product accordingly.
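As a rough illustration, the sketch below estimates when a connected brewer's pod supply will run out from its usage telemetry so a shipment can be timed to actual consumption. The numbers and field names are invented:

```python
# Minimal sketch: time replenishment to actual consumption.
# All values are hypothetical brewer telemetry.
daily_usage = [3, 2, 4, 3, 5, 6, 4]   # pods consumed per day
pods_remaining = 18

avg_per_day = sum(daily_usage) / len(daily_usage)
days_left = pods_remaining / avg_per_day
print(f"Ship new pods within {days_left:.1f} days")

# A production model would also weigh day-of-week, seasonal and weather
# features, e.g. with a regression over the unified customer profile.
```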

Using embedded advanced analytic capabilities unbound by channels and data silos, businesses will know in real time everything there is to know about the customer. The intelligence is in knowing the right question to ask and confirming it easily with the customer then and there. Frictionless relationships win revenue and loyalty in interactive IoT.

By consistently feeding data collected by IoT channels into machine learning models, businesses can essentially predict what the customer will do next.

Optimize models to deliver real business value

Advanced digital optimization ensures a consistent, personalized message and journey for a customer regardless of channel or any other variable.

Embedded AI and IoT allow businesses to use models built on their customers' data to automatically recommend the next-best-action at each stage of the customer journey based on business goals. Further, simulation engines constantly watch models and will move new models into production when they predict better outcomes based on predetermined metrics. This results in reduced operational expenses, increased productivity, improved personalization at scale and increased customer lifetime value.

Automated, embedded intelligence enables hundreds or thousands of models to run concurrently, all with a single-minded purpose of exploiting revenue opportunities according to any metric the business proposes. The resulting personalized and differentiated customer-facing experience empowers businesses to monetize customer data and truly impact the bottom line.



October 21, 2019  2:13 PM

Use network tools to improve IoT analytics with data flow

Profile: Harnil Oza
Internet of Things, iot, IoT analytics, IoT data management, IOT Network

IoT is now part of modern applications across industries. The technology is developing quickly, making analytics crucial, including for security. IoT analytics operates using the cloud and electronic instrumentation, which requires programmers to control and access IoT data.

IT pros approaching IoT analytics should capture data as packets in automated workloads, also known as flow. Flow is the movement of packets between a source and a destination. For example, if you stream a video on the internet, packets are sent from the server to your device; this is flow in action. NetFlow and sFlow are both tools that monitor network traffic.

Methodologies for IoT analytics

IT pros are still creating methods to capture the flow of data for IoT analysis. The number of cloud companies has increased, and as networks continue to grow, a large visibility gap opens up when trying to capture data.

Because of growing data traffic, many cloud companies have started to report information about their networks via IPFIX, sFlow and NetFlow. Capturing IoT-specific data this way has several advantages. The data gets standardized into industry-accepted formats, and once the data is observed from the gateway, it can be correlated with traffic data coming out of the data center or the cloud services in use. And every cloud environment can generate and export flow data.

Below are the top cloud providers' methodologies for IoT analytics; a parsing sketch follows the list:

Amazon: Amazon Web Services is a well-known cloud platform, competitive not only on cost but also on network responsiveness. It has many features to enhance an IoT platform and can support many devices. AWS uses flow logs as its monitoring mechanism: the service is handled by the virtual private cloud (VPC) and covers details such as ports, network interfaces and traffic levels. Data gets stored via CloudWatch Logs in JavaScript Object Notation (JSON).

Microsoft Azure: Flow logging in Azure operates through network security groups. The flow logs are stored in Azure Storage in JSON format, and device data can be handled in near real time.

Google: Google is a well-known platform across technologies. Google Cloud IoT Core is a fully managed service that lets you easily and securely connect, manage and ingest data from millions of globally dispersed devices. Flow data is captured by logging through Stackdriver, the network operates with good latency, and it handles large data volumes well.
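To show what consuming exported flow data can look like, here's a minimal Python sketch that aggregates bytes per source address from JSON flow records. The record layout is a simplified assumption, not any provider's exact schema:

```python
import json
from collections import defaultdict

# Simplified, hypothetical flow records; real flow-log schemas differ
# by provider (VPC Flow Logs, NSG flow logs and so on).
raw = """[
 {"src": "10.0.0.5", "dst": "10.0.1.9", "bytes": 4096, "action": "ACCEPT"},
 {"src": "10.0.0.5", "dst": "10.0.1.9", "bytes": 1024, "action": "ACCEPT"},
 {"src": "10.0.0.7", "dst": "10.0.1.9", "bytes": 512,  "action": "REJECT"}
]"""

bytes_by_src = defaultdict(int)
for rec in json.loads(raw):
    if rec["action"] == "ACCEPT":      # count only delivered traffic
        bytes_by_src[rec["src"]] += rec["bytes"]

for src, total in bytes_by_src.items():
    print(f"{src}: {total} bytes")     # e.g. 10.0.0.5: 5120 bytes
```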

Tools for network flow export

There are many tools you can use for network flow export. All of them work toward the same goal: describing the data accurately for different kinds of devices. The network for every cloud-based IoT device is built on infrastructure that consumes resources to deliver data securely. Tool choice depends on the size of your devices; if you're using small devices to collect data at the gateway, some tools can describe the data directly from network traffic. Linux is often considered more secure than other operating systems, but you can run cloud-based IoT on both Windows and Linux systems; most deployments prefer Linux-based devices.

softflowd: This highly efficient open-source tool converts packet data into flow data. It isn't used on many devices, though, because it offers fewer features than other tools, and it isn't updated frequently.

NDSAD: This tool runs entirely on the host, collecting data from the network interface and exporting it as NetFlow. It observes data from the network card with low latency and can be enhanced with more advanced capture methods. Because of its limited feature set, it sees less use.

Select a tool based on data flow

Analyzing data from flow records is easier than tracking it with custom software and ad hoc procedures. Flow analysis works within standard network protocols to maintain security, and there are many flow techniques for analyzing data and standardizing the output. Here is an example of a flow tool for obtaining standardized data.

sFlow, short for sampled flow, is used for network operations and is a great source of data. It captures or observes data from different sources and outputs it in a well-formed structure that can feed into another analysis tool. sFlow output can also be converted into NetFlow for further processing.

Final thoughts

Lots of IoT projects are running across various applications, and many companies, including mobile app developers, are working with the technology to deliver good, secure services. The large volumes of data these systems generate can be handled by the cloud storage services noted above, and each cloud service is equipped to take care of your devices. Many tools can convert packet flows into usable output, and each tool has its own specialty drawn from the networks and devices it targets; the right choice also depends on the size of the devices you use. I hope this gives you some useful knowledge about IoT analytics flow.



October 21, 2019  1:50 PM

3 more emerging attack surfaces need greater security

Profile: Carolyn Crandall
cybersecurity, Internet of Things, iot, IoT applications, IoT cybersecurity, IoT network monitoring, iot security, Security threats, Shadow IT

This is the second part in a two-part series. Find the first part here.

IoT is far from the only emerging attack surface being targeted and exploited by cybercriminals. As new networks and services that are designed to make life easier for organizations and their employees become more widespread, cunning attackers will find new ways to use them as a foothold to gain access to the broader network.

In this second part of a two-part series, three additional emerging attack surfaces will be explored with recommendations to secure each.

Remote networks

Large organizations often have remote offices or branch locations as part of their network. Whether it is a regional office, a bank branch, a retail store, a clinic, a subsidiary network or another type of site, the remote network location is another factor for security teams to consider. Because most remote workplaces have access to the corporate headquarters network, there are risks associated with remote office security for the organization to consider.

Remote sites are often tenants in a building, reliant on existing physical security controls which may not be as stringent as corporate policy requires. They usually do not have local technical support, let alone network security staff. The network security infrastructure at the remote site may not be as sophisticated or capable as that of headquarters, and security may lack visibility to suspicious remote network activity. These limitations make them attractive for attackers to leverage for access back to the corporate network. To compensate for these security gaps, organizations are implementing emerging network monitoring solutions for better detection at remote sites. Others are deploying deception technology to gain remote visibility and detection capabilities without additional infrastructure or security personnel at each location.

Applications and services

According to a recent McAfee survey, over 80% of employees admit to shadow IT usage, installing apps on their work devices without the consent of IT. The rise of the cloud has made the proliferation of both innocuous and malicious apps extremely easy, and many organizations don’t realize the extent of the problem: a recent Cisco survey indicated that CIOs estimated that their organizations used 51 cloud service apps, while the reality was over 700.

Although many of these apps and services are harmless, others are not. By installing unapproved apps, employees are installing software that has not been vetted or approved by the security team, and many carry compliance or security risks. Some groups have even gone as far as setting up cloud environments using unapproved apps, which can expose data to attacks. Educating employees about the dangers of shadow IT usage can go a long way, and security teams can benefit from in-network visibility tools to help them identify when shadow IT apps are in use and who is using or installing them.

Active Directory deception objects

By design, Active Directory (AD) will readily exchange information with any member system it manages, but attackers can leverage this to quickly extract information on the entire domain. Security teams may not even realize that such activity is occurring, since AD provides information to a member system as part of normal operations. Attackers can extract user accounts, system accounts and trusted domain information from any compromised member system on the AD domain as part of their data gathering. They can use this information to find privileged accounts, overlapping permissions that grant elevated rights, or critical systems to target in their attacks, such as trusted domain controllers or essential database servers. They can use tools such as Mimikatz and BloodHound to compromise accounts on AD or identify user or service accounts with inherited administrative rights to obtain highly privileged access to the entire network.

Typically, organizations will manually defend against such activities, but emerging solutions can automate this process. To conduct counter-reconnaissance, organizations can create AD containers to seed fake user and system accounts, create deceptive AD trusted or member domains, or set up entirely artificial AD infrastructures that are part of the production AD infrastructure. By feeding false results on reconnaissance queries, the organization can proactively mislead and misinform attackers.
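One simple, hedged illustration of the tripwire idea: seed decoy account names, then flag any authentication event that references one, since no legitimate user should ever touch a decoy. The event layout and account names below are invented; a real deployment would parse Windows Security events such as 4768 or 4624:

```python
# Minimal decoy-account tripwire; all names and events are hypothetical.
DECOY_ACCOUNTS = {"svc_backup_adm", "sql_dba_old", "helpdesk_priv"}

events = [
    {"user": "alice",          "event": "logon",           "src": "10.1.4.22"},
    {"user": "svc_backup_adm", "event": "kerberos_ticket", "src": "10.1.9.87"},
]

for ev in events:
    if ev["user"] in DECOY_ACCOUNTS:
        # Any hit is high-confidence reconnaissance or credential abuse.
        print(f"ALERT: decoy account '{ev['user']}' touched from {ev['src']}")
```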

Keeping security front of mind

The emergence of new attack surfaces is inevitable. They will continue to arise as a result of innovation, as developers discover novel, better and more efficient ways of operating. As long as humans seek to improve their lives through high-tech devices, cutting edge conveniences, and new ways to stay connected, there will always be new opportunities for cybercriminals to exploit.

Securing every device across every surface has become increasingly difficult, perhaps impossible. By assessing one's security controls and their efficacy in each environment, and by taking an assumed-breach posture, organizations will put themselves in the best possible position to understand their vulnerabilities and risk. Ultimately, prevent what one can, detect early what one can't stop, and be prepared to respond quickly regardless of the attack surface or methods used.



October 18, 2019  5:07 PM

Why connected devices aren’t always as smart as you think

Profile: Michael Greene
connected devices, Cyberattacks, Data privacy, Data protection, Internet of Things, iot, IoT cybersecurity, IoT devices, iot security, Security threats

As the holiday shopping season looms on the horizon, sales of connected devices are expected to flourish. From a plethora of intelligent assistants to smart fitness mirrors and connected doorbells, there is a connected device for everyone. However, these devices are often a hacker's prime target due to lax security.

While a lot of time and money is invested in the features and functionality of these devices, security is often woefully neglected in the rush to get products to the market. This has been a key driver in manufacturers deploying default passwords as standard and failing to ensure that software is frequently updated.

The looming regulation in California — coming into effect in 2020 — should help to reduce the use of default passwords, but it will not eradicate them. It is the first regulation in the U.S. that will help ensure manufacturers of IoT devices equip their products with security features out of the box.

However, many manufacturers appear to be ignoring the pending regulation, as evidenced by the 600,000 GPS trackers recently manufactured in China and shipped across the globe. These devices have a range of vulnerabilities, including a default password of 123456. Making the situation worse, these devices were meant to help parents track their children. This is just the tip of the iceberg in terms of the magnitude of the default password problem.

It’s clear that the rapid growth of IoT is resulting in many vulnerable devices entering our homes and businesses, expanding the potential attack vector for hackers. By 2020, a staggering 25% of cyberattacks within enterprises will involve IoT devices, according to Gartner.

If manufacturers’ recent track record is any indication, we can expect many organizations to continue to circumvent IoT security regulations. At the same time, the U.S. government shows no urgency in punishing these organizations or enforcing broader policies. As such, the responsibility falls to consumers and employers to take action and mitigate the security risks associated with smart devices.

To do this, consumers and employers must understand the steps people need to take to protect their personal information when using connected devices. For homes and enterprises alike, it starts with replacing default passwords before devices connect to the network. It's also important that the new password is strong, unique and uncompromised before using or connecting the device.
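One way to check the "uncompromised" part is to query a breach corpus such as the Have I Been Pwned Pwned Passwords API, which uses k-anonymity so the full password never leaves your machine. A minimal sketch, assuming the requests package is installed:

```python
import hashlib
import requests

def is_compromised(password: str) -> bool:
    """Check a candidate password against the Pwned Passwords range API."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    # Only the 5-character hash prefix is sent (k-anonymity).
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}")
    resp.raise_for_status()
    return any(line.split(":")[0] == suffix for line in resp.text.splitlines())

print(is_compromised("123456"))  # True -- the GPS tracker default above
```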

You wouldn’t drive your car without a seatbelt on, and you shouldn’t use a smart device with a default password. It is also recommended that IoT devices are not connected to networks with personal or corporate data. Many security experts recommend connecting them to a hidden guest network with separate security settings.

As the physical and digital worlds continue to blend, security must play an increasingly prominent role, and everyone must learn how to protect their valuable data. This holiday season, make sure that in the rush to embrace all things digital, you choose safe passwords and keep your IoT devices off sensitive networks.



October 17, 2019  4:58 PM

How do you know your AI is making the right choices?

Profile: Sudhi Sinha
AI and IoT, ethical AI, ethics, Internet of Things, iot, IoT analytics, IoT and AI, IoT data, iot security

There is a lot of conversation around data right now. Its value continues to increase, whether as something that can be monetized for profit or savings, or in terms of improving understanding and operations for a business that uses it effectively. That’s why the advent of artificial intelligence (AI) and machine learning — which allow us to quickly gain insights from vast quantities of data that was previously siloed — has been such an incredible revolution. Most are gaining actionable insights from smart systems, but how do we stay accountable in the age of the machine? How do we know we can trust the data and resulting insights when human beings are a lesser part of the equation?

There are three things to consider when determining how to create a structure of transparency, ethics and accountability with data. The first is access to the raw data, whether it's from a sensor or a system. It is critical to maintain a transparent pathway back to that raw data that can be accessed easily. In a building context, for example, analytics can help quickly search through video footage during a time-sensitive security event to identify and pursue a perpetrator based on clothing color, gender or other details, rather than having to manually search through hours of footage. It remains important to have access to the original footage, however, so that there can be no claims that the footage was edited to challenge the findings.

The second consideration is context. Raw data by itself may not make any sense. It needs to have some level of context around it. Without that, there is no picture of what’s going on. This is true for humans and it is true for machines. Decision making is a result of information, but also of relevant context that informs what action is taken. For another building systems example, take an instance of an uncomfortably warm inside temperature. This could lead to the belief that the HVAC system is not functioning. However, if there is also a meeting taking place that resulted in higher than average usage of the space, that can be an important factor. Without the ability to make determinations based on both data AND its context, systems wouldn’t be considered “smart.”
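A tiny, hypothetical sketch of why context changes the decision: the same temperature reading yields different conclusions depending on occupancy. The thresholds and names are invented:

```python
# Same raw datum, different decision once context is added.
def diagnose(temp_c, occupancy, hvac_on):
    """Toy building-analytics rule; thresholds are illustrative."""
    if temp_c <= 24:
        return "comfortable"
    if occupancy > 20 and hvac_on:
        # A crowded meeting explains the heat; not an equipment fault.
        return "warm due to occupancy; increase cooling"
    return "possible HVAC fault; raise maintenance ticket"

print(diagnose(27.0, occupancy=35, hvac_on=True))  # occupancy explains it
print(diagnose(27.0, occupancy=0,  hvac_on=True))  # suspect the HVAC system
```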

The third item to prioritize is data security. People need to feel their information is safe and secure from bad actors and misuse. Strategies to increase security have included two-factor authentication of logins or financial transactions in some settings, but they must continue to be a priority moving forward.

Machines as decision-makers: An ethical question

For nearly all of history, decision-making has been a human prerogative, where judgements about right or wrong can be applied (the subjective nature of right and wrong notwithstanding). When the machine becomes the decision-maker, it becomes an ethical question. How do we maintain data trustworthiness as the need for human involvement decreases? In other words, how do we hold machines accountable?

It comes down to transparency: access to raw data as mentioned above, but also referential integrity for all data and the context used to analyze it. The employment of knowledge graphs, a visual way to represent the “thought process” of intelligent technologies, becomes very useful. They provide a way to visualize how AI, in whatever form it may take, got to its decision. Just as there are traditional practices to keep humans accountable for their decisions and resulting actions, machines must have standards put in place now.

In addition to knowledge graphs, having insight into the learning process of your technology through a “digital twin” is a crucial part of machine accountability. By being able to see a representation of physical systems and play out scenarios to answer “what if” questions, it can give insight into the decision-making process. This provides peace of mind and confidence in the machine’s ability to properly analyze the information it is collecting, whether it be from an HVAC, security or lighting system, or something else entirely.

In the future, when machines are driving more decisions, we want technologies to maintain the ethical, authentic practices and guarded access to private information that we see now. It is important to establish best practices today and to understand the decisions being made and the learning processes machines are using. With any decision, whether made by human or machine, it is important to maintain transparency and a logic chain for accountability and peace of mind. By establishing a logic chain to truly understand the decisions machines are making, it becomes possible to understand why they make the recommendations they do, which provides accountability and the opportunity to adjust accordingly for a smarter building.



October 16, 2019  2:29 PM

How IoT is transforming energy efficiency tactics

Profile: Mike Jeffs
Automation, commercial IoT, Internet of Things, iot, IoT analytics, IoT benefits, IoT power, IoT sensors, Machine learning, retail IoT

The level of electricity generated in the UK last year was at its lowest level since 1994, with only 335TWh of electricity produced, according to Carbon Brief.

Although the figure was only slightly lower than in 2017, it was substantially below 2007, the peak of electricity production in the UK.

As the country becomes more conscious of its energy consumption, there is a growing demand to reduce our carbon footprint. Businesses are aiming to become more efficient with their energy usage and using renewable sources where possible. Output from renewable sources in 2018 rose to a record high, contributing to 33% of the UK’s total energy consumption. There has been a 95TWh increase in renewable output since 2005.

With major shifts towards energy efficiency, IoT will become integral in helping many businesses gain critical information through energy monitoring.


Driving energy efficiency

The retail sector is just one of the many industries that’s seen a rise in overhead costs because of energy bills. IoT applications will enable retailers to improve energy efficiency with real-time tracking and monitoring insight.

A combination of sensors from existing systems and additional IoT sensors can be used to create a unified feed of data. The additional IoT sensors can be integrated into key assets in the stores, such as HVAC systems and refrigerators. Data is collected from each of these sensors to create an interconnected network of devices that feed data into the cloud. This unified feed is used to influence decision making and add intelligence in real-time.

Due to the real-time nature of IoT technology and the data it provides, businesses are able to optimize their operations and prevent asset failure and the subsequent loss of energy.
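As a minimal sketch of a sensor feeding such a unified cloud stream, the Python below publishes a reading over MQTT, a protocol commonly used for this. The broker address and topic scheme are assumptions, and it requires the paho-mqtt package:

```python
import json
import time

import paho.mqtt.client as mqtt  # pip install paho-mqtt

# Hypothetical broker and topic scheme for a store's unified feed.
client = mqtt.Client()
client.connect("broker.example-store.com", 1883)

reading = {
    "asset": "fridge-07",
    "ts": time.time(),
    "power_kw": 1.9,   # instantaneous power draw
    "temp_c": 3.4,     # cabinet temperature
}
client.publish("store/42/refrigeration", json.dumps(reading))
client.disconnect()
```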


Predictive maintenance with machine learning

Machine learning algorithms use the collected information to highlight potential failures and inefficiencies within a company's operations. Slight changes in energy patterns at a micro and macroscopic level can indicate possible control problems or failures in components such as compressors and heating elements. The algorithms can automatically analyse patterns and monitor assets, creating real-time alerts for potential areas of concern to help prioritize callouts.

This can prevent downtime, reduce callout charges and mitigate loss of product. For example, sensors integrated with refrigeration systems can provide insight into how they're operating through power draw analysis, ensuring that any issues are fixed before failure occurs and preventing produce spoilage.
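A hedged sketch of the kind of pattern check this implies: flag a compressor whose power draw drifts outside its recent norm. The readings and thresholds are invented:

```python
import statistics

# Hypothetical hourly power-draw readings (kW) for one compressor.
readings = [1.8, 1.9, 1.7, 1.8, 2.0, 1.9, 1.8, 2.6, 2.8, 3.1]

WINDOW = 6       # rolling baseline size
THRESHOLD = 3.0  # z-score above which we alert

for i in range(WINDOW, len(readings)):
    base = readings[i - WINDOW:i]
    mu, sigma = statistics.mean(base), statistics.stdev(base)
    z = (readings[i] - mu) / sigma if sigma else 0.0
    if z > THRESHOLD:
        print(f"Alert: {readings[i]} kW at t={i} "
              f"(z={z:.1f}) -- possible compressor fault")
```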

Automation helps save on costs

IoT is allowing for automation within retail and other industries through the speed and accuracy of real-time information. A great example of this is lighting. Lighting is a high cost for all retailers, but if lighting grids could automatically react to external ambient light levels, there's huge potential for conserving power.

This concept is especially useful during triad periods when energy costs are at their highest. All stores should aim to reduce energy usage during peak times and if they are equipped to do so, move to backup generators to avoid the high charges altogether.

Power factor for energy efficiency

A key performance indicator for energy efficiency is power factor. In technical terms, the power factor of an AC electrical power system is the ratio of real (actual) power to apparent power. A lower power factor means more current must be drawn to supply the same amount of real power.

Retailers should aim to improve their energy efficiency by increasing their power factor and reaping the benefits of reduced energy costs. Power factor ranges between negative one and one, with one being totally efficient with no energy wastage. However, it is practically impossible to reach one, as there will always be some form of loss, so a power factor between 0.95 and 0.98 is an acceptable range. It is important for a retailer to monitor this number and aim to be as close to one as physically possible.
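As a worked sketch, power factor can be computed from real power (kW) and apparent power (kVA) measured at the meter; the figures below are invented:

```python
# Power factor = real power / apparent power.
def power_factor(real_kw, apparent_kva):
    return real_kw / apparent_kva

# Hypothetical readings from a store's main electrical feed.
pf = power_factor(real_kw=380.0, apparent_kva=422.0)
print(f"Power factor: {pf:.3f}")   # ~0.900

if pf < 0.95:
    # Below the acceptable band noted above; find the offending load.
    print("Investigate HVAC and lighting loads; consider correction equipment")
```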

HVAC and lighting systems are the main contributors to a poor power factor. A smart solution ensures that sensor data is analysed in real time to evaluate performance, allowing businesses to identify the exact piece of equipment that is lowering the power factor.

Key metrics analysed include power factor fluctuations, kilowatt draw, individual phase frequency, amperage and voltage.

By having control over their energy systems and monitoring their power quality indicators, a retailer can benefit from a range of advantages, such as significant and immediate savings on energy costs, longer device lifetimes, and reductions in low-power-factor penalties and carbon footprint.

With such significant benefits that can be gained from implementing IoT solutions, it’s no wonder there has been huge growth in the sector.



October 16, 2019  2:05 PM

The secret to powering IoT devices

Profile: EJ Shin
Batteries, connected devices, Internet of Things, iot, IoT data, IoT data management, IoT devices, IoT power

Nowadays, it’s hard to not find connected devices everywhere you look. Every second, another 127 “things” are connected to the Internet according to Stringify CTO Dave Evans, and Gartner predicts that there will be 25 billion IoT devices by 2021.

Connected devices are only as valuable as the data they gather, the knowledge they impart and the actions they inform from data analysis. This is true not only for bigger, high-profile applications such as smart homes and cities, but also for the ever-increasing number of smaller-scale IoT applications. These smaller applications — like smart labels, smart packaging, smart pills, smart tags, smart cards, smart medical devices and diverse wearables — impact lives and business activities every single day.

As more things are transformed into connected devices, the type of power source they use plays a surprisingly large role in how efficiently they sense and transmit data, and how usable — and therefore, how frequently used — they are. Device makers who rely on conventional, off-the-shelf batteries that are thick and rigid often have limited success due to design restrictions that affect usability.

Energy storage advancements can make devices truly useable

Energy storage solutions have advanced more than many manufacturers realize. New battery innovations free manufacturers to create truly user-friendly IoT devices that enable efficient data sensing and transmission. Next-generation, high-performance battery solutions that are lightweight, thin, bendable and flexible can be seamlessly integrated into connected devices. This enables device hardware to be designed much more aesthetically, providing a better user experience with greater comfortability and ultimately leading to stronger market adoption.

For instance, if a patch for monitoring biometrics or for therapeutic purposes were to have a thick and rigid battery cell embedded, it would be uncomfortable for users to wear. This discomfort would limit their usage time, resulting in low data collection and thereby unhelpful analysis. But if the patch had a thin and flexible battery seamlessly integrated instead, it wouldn’t impede their daily movements. In fact, users wouldn’t be so conscious about wearing it at all. This would naturally increase usage. With each consumer using the patch more often, more data is gathered and more valuable, informative feedback can be provided.

Key battery advancements

So just how flexible is a flexible battery? Very. We did bending tests on a battery that has a 20mm radius. After being bent 10,000 times, the battery still had about the same charge and discharge performance as a non-bent battery. This degree of flexibility is a must for devices that need to be curved or bendable and ultimately helps consumers feel comfortable using them.

Other significant flexible battery advancements have to do with weight, safety, customization and thinness. Batteries can be configured in thicknesses as little as 0.5mm. This thinness is very useful for sensors, smart cards, wristbands and other applications where weight and thickness are crucial success factors. These features are also key to enabling batteries to fit into small spaces in device hardware.

Next-generation batteries must also be safer. Even though manufacturers put tremendous effort into making sure batteries are durable and international safety tests are required, this doesn’t guarantee that batteries won’t overheat, explode or leak. Flexible rechargeable batteries made with gel polymer electrolyte technology deliver greater safety than batteries with liquid electrolyte; the gel electrolyte has higher resistance to heat and won’t leak if punctured.

Instead of off-the-shelf, rigid batteries, manufacturers now have the option of customizing flexible batteries to better utilize the space and hardware design of their devices. Rather than having to revisit their design at the end of the creation process because those off-the-shelf batteries do not fit the optimized product design, engineers and designers can now take advantage of battery manufacturers’ customization services to create a flexible battery solution that best meets their size, capacity, thinness and shape requirements, and delivers better user experience.

With so many things going “smart,” competition among connected device manufacturers is heating up as never before. The kind of battery a device maker uses to power smart devices or their components, or to transmit data, plays a huge role in how innovative and useful their devices can be, and how compelling users find them.

Next-generation flexible and thin batteries are key to delivering the kind of highly usable, aesthetically pleasing and reliably connected devices that give forward-thinking IoT device manufacturers a competitive edge.


