IoT Agenda


December 12, 2019  3:22 PM

Drastically simplify your IoT data pipeline

Tomer Shiran Profile: Tomer Shiran
Data Lakes, Data storage, Internet of Things, iot, IoT data, IoT data management, IoT data storage, IoT strategy

The IoT data pipeline is a simple concept. First, data is gathered from multiple sensors, then transformed into a common format and finally stored in data lake storage. Once the data sits in an accessible place, analysts can use it to answer business problems. It seems simple, but it is not always easy to execute. On average, completing a pipeline request takes weeks to months, resulting in lost business and increased IT costs.

The most common pattern in IoT data pipelines is for enterprises to store all of their data in data lakes. However, poor data accessibility and the growing complexity of these pipeline architectures rapidly turn data lakes into data swamps.

Organizations often react to data swamps by copying their data from data lakes into data warehouses using fragile and slow ETL jobs, doubling storage costs to maintain these copies of data. Then, data engineers must create cubes or BI extracts so analysts can work with it at interactive speeds. In this scenario, enterprises don’t have full control of their data. Analysts don’t know where their data is located, and there are dozens of data engineers trying to keep things running. What can we do to make things better?

Use a data lake engine to leverage the value of your IoT data

Data lake engines are software solutions or cloud services that give analytical workloads and users direct SQL access across a wide range of data sources — especially data lake storage — through a unified set of APIs and data models. Data lake engines eliminate the need for data warehouses on top of your data lake storage because they let BI and data science tools talk directly to the data stored in the lake. Data engineers and architects can use data lake engines to present BI analysts and data scientists with a self-service semantic layer that they can use to find datasets, write their own queries against them and use them in their end-user applications. This process is much simpler and faster since enterprises can eliminate complex ETL code as well as slow and costly data replication.
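
To make this concrete, here is a minimal sketch of what direct access can look like from an analyst's Python session, assuming a data lake engine that exposes an ODBC endpoint. The DSN, schema and SQL dialect below are illustrative assumptions, not the API of any particular product:

  # Hypothetical sketch: query IoT data where it sits in data lake storage
  # through a data lake engine's SQL endpoint -- no warehouse copy involved.
  import pyodbc
  import pandas as pd

  conn = pyodbc.connect("DSN=datalake_engine", autocommit=True)  # assumed DSN

  query = """
      SELECT device_id, AVG(temperature_c) AS avg_temp_c
      FROM iot.curated_sensor_readings   -- dataset exposed by the semantic layer
      WHERE reading_date >= DATE '2019-12-01'
      GROUP BY device_id
  """
  df = pd.read_sql(query, conn)  # lands straight in the analyst's notebook or BI tool
  print(df.head())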

Key elements for simplifying your IoT pipeline

The basic pipeline scenario works well in small IoT environments where only hundreds of rows of measurements are captured daily, but in large industrial environments, where hundreds of gigabytes of data are generated every hour, the story is different. In such large-scale scenarios, we need to worry about how many copies of data there are, who has access to them, what managed services are required to maintain them and so on. Additionally, the traditional methods of accessing, curating, securing and accelerating data break down at scale.

Fortunately, there is an alternative: a governed, self-service environment where users can find and interact directly with their data, thus speeding time to insights and raising overall analytical productivity. Data lake engines let users run standard SQL queries to find the data they want to work with, without having to wait for IT to point them in the right direction. Additionally, data lake engines provide security measures that give enterprises full control over what is happening to the data and who can access it, thus increasing trust in their data.

IoT data pipelines have to be fast. Otherwise, by the time the data is delivered to the analyst, it runs the risk of being considered obsolete. Data obsolescence is not the only issue; increased computing costs and the prospect of data scientists leaving because they don’t have data to analyze are also potential issues. Data lake engines provide efficient ways to accelerate queries, therefore reducing time to insights.

IoT data comes in many different shapes and formats. Because of this, pipeline efficiency suffers from the number of transformations the data must go through before it is put into a uniform, analyzable format. Take data curation, blending or enrichment as an example. This is a complicated process: ETL code extracts fields from one source, other database connections pull the fields to blend from another source, and the resulting dataset is then copied into a third repository. Data lake engines alleviate this complexity by giving users a self-service environment where they can integrate and curate multiple data sources without moving data from its original data lake storage source.
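
As a rough illustration, the blend itself can be expressed as one federated query through the engine, reusing the hypothetical ODBC endpoint from the earlier sketch; the source and column names are made up:

  # Hypothetical sketch: blend sensor readings with device metadata from a
  # second source in a single query, instead of ETL jobs and a third repository.
  import pyodbc
  import pandas as pd

  conn = pyodbc.connect("DSN=datalake_engine", autocommit=True)
  blend_sql = """
      SELECT m.plant_name, r.device_id, AVG(r.vibration_mm_s) AS avg_vibration
      FROM lake.sensor_readings AS r
      JOIN registry_db.devices  AS m ON m.device_id = r.device_id
      GROUP BY m.plant_name, r.device_id
  """
  curated = pd.read_sql(blend_sql, conn)
  # The result can be published as a virtual dataset in the semantic layer
  # rather than copied into yet another repository.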

IoT data pipelines have been around for a while; as they age, they tend to be rigid, slow, and hard to maintain. Making use of them normally entails significant amounts of money and time. Simplifying these pipelines can drastically improve the productivity of data engineers and data analysts, making it easier for them to focus on gaining value from IoT data. By leveraging data lake engines, enterprises can embark on a more productive path where IoT data pipelines are reliable and fast, and IoT data is always accessible to analysts and data scientists. Data lake engines make it possible for data-driven decisions to be made on the spot, increasing business value, enhancing operations, and improving the quality of the products and services delivered by the enterprise.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

December 12, 2019  2:19 PM

Persistence of IoT botnets requires a security-driven network

Anthony Giandomenico Profile: Anthony Giandomenico
cybersecurity, Internet of Things, iot, IoT analytics, IoT cybersecurity, iot security, IoT strategy

Botnets continue to plague IoT devices, resulting in a range of criminal activity from denial of service attacks and dropping malicious payloads such as ransomware to hijacking unused IoT device CPU cycles for things like crypto mining. One of the most interesting aspects of botnets is their longevity and persistence.

Botnets persist quarter after quarter

According to Fortinet's most recent threat landscape report, today's top botnets tend to carry over with little change from quarter to quarter or from region to region, more so than for any other type of threat. For example, Mirai, active since 2016, still sits in the top five of the most prevalent botnets identified in Q3 of 2019. That provides an interesting window into modern cybercrime, especially given the damage caused when Mirai was first released.

First, it suggests that the underlying control infrastructure is more permanent than any particular tools or capabilities. This is due, in part, to the fact that the traffic to and from IoT devices in many organizations is not being identified or tracked. As a result, communications back and forth between compromised IoT devices and their criminal control systems tend to continue uninterrupted. As the saying goes — at least as far as these cybercriminals are concerned — “if it ain’t broke, don’t fix it.”

One of the reasons botnets remain a common issue is that the OSes of many IoT devices cannot be patched or updated. This means that if a connected IoT device is vulnerable, it is at risk of being exploited. Because IoT communications traffic is not being tracked, too many organizations have little to no idea that the IoT devices attached to their networks pose a risk.

Perhaps most importantly, the prevalence of botnets indicates that far too many organizations either do not understand the risk that compromised IoT devices represent or simply feel that there is little they can do to protect themselves. Of course, even if deployed IoT devices can’t be patched or upgraded, there are plenty of things organizations can do to reduce the risk that such devices introduce. This begins by adopting a strategy that some cybersecurity professionals refer to as zero trust network access.

Steps to secure connected IoT resources

The basic idea is to assume two things. The first is that every device on your network, including your IoT devices, may have already been compromised. The second is to assume that users cannot necessarily be trusted and can be spoofed. As a result, the ability to see and communicate with connected devices needs to be explicitly authorized and strictly controlled. Achieving this zero trust network access includes the following elements:

Multi-factor authentication (MFA): Users need to validate themselves to the network using MFA before they can access, deploy, manage or configure any device anywhere on the network.

Network access control: Any device seeking access to networked resources — whether inside or outside the network — needs to go through a network access control system. This ensures that devices are identified and authenticated based on several criteria and then dynamically assigned to predetermined segments of the network.

Intent-based segmentation: Dividing the network into functional segments is essential to manage today’s expanding networked environments and to limit the damage caused by a compromised device or rogue user. By interfacing with a next-generation firewall, segments can be dynamically created based on the business objectives of devices seeking access to networked resources.

Inventory management: One of the Achilles’ heels of an IoT-based infrastructure is that many organizations have lost visibility into what devices are connected to their network, where they are located, or what other devices they can communicate with. Inventory management is essential in keeping track of your IoT devices and can be connected to your network access control system and segmentation solutions to know what devices are actively connected to your network and where in your network they have been deployed.

Threat intelligence: IT teams need to be able to map ongoing threat information about active compromises and vulnerable systems to their existing IoT inventory. This mapping process enables network administrators to prioritize patching for devices that support it and to strengthen proximity controls and segmentation rules for devices that can’t be updated.

Behavioral analytics: Finally, a system needs to be put in place that can baseline normal IoT device behavior and then alert on anything out of the ordinary. For example, digital cameras broadcast specific types of data to specific destinations. But they should rarely if ever request data, and they should never transmit any data to other devices or destinations. And if they do, your network should immediately recognize such unauthorized behavior, quarantine the device and provide an alert to a systems administrator.
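
As a rough illustration of that last element, here is a minimal Python sketch that baselines a device's normal traffic volume and flags outliers. The 3-sigma threshold and byte counts are illustrative assumptions; a real system would also baseline destinations, protocols and timing:

  # Minimal sketch of baselining IoT device behavior and flagging deviations.
  from statistics import mean, stdev

  def build_baseline(samples):
      """Learn a per-device baseline from historical egress byte counts."""
      return {"mean": mean(samples), "stdev": stdev(samples)}

  def is_anomalous(baseline, observed_bytes, sigma=3.0):
      """Flag traffic that deviates more than `sigma` standard deviations."""
      return abs(observed_bytes - baseline["mean"]) > sigma * baseline["stdev"]

  # A camera that normally streams ~2 MB/min suddenly uploads 50 MB/min.
  camera_history = [2_050_000, 1_980_000, 2_120_000, 2_010_000, 1_995_000]
  baseline = build_baseline(camera_history)
  if is_anomalous(baseline, observed_bytes=50_000_000):
      print("ALERT: quarantine device and notify the systems administrator")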

A security-first networking strategy is the best place to start

IoT devices have become an essential component of any organization looking to succeed in today’s digital marketplace. However, malicious actors continue to aggressively target these devices because they tend to be easy to exploit, and once they have been compromised they tend to remain compromised. Organizations that increasingly deploy and rely on IoT devices — especially as they begin to develop complex, integrated systems such as smart buildings — need an effective strategy in place to see, monitor, control and alert on every connected device in their digital infrastructure. That begins with an integrated, systemic approach that ties critical security and networking systems together into a single, security-driven networking strategy that can enable and ensure zero trust network access.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


December 9, 2019  3:14 PM

Who will be first to have an Uber-like innovation for communications?

Morné Erasmus Profile: Morné Erasmus
AI and IoT, Internet of Things, iot, IoT analytics, IoT and AI, IoT benefits, IoT device, IoT infrastructure, IOT Network, IoT network management, Smart cities

We live in an increasingly digital society with an always-on mentality. Cities and governments have the social responsibility to bridge the digital divide that's occurring and ensure internet access for all their citizens. The private sector focuses on delivering a return for investors, while citizens expect improvements in their quality of life.

To achieve these goals, we have to come up with a new way of doing business. We need 'Uber-like' innovation to completely rethink and disrupt the status quo and the business-as-usual methodology. Let's look at four examples of how IoT can drive innovation in our society when put to the task:

Government. Many cities are rolling out digital governmental services. For example, Dubai enables citizens to attend traffic court using a video call on a smartphone. This makes the process more efficient because people don’t have to wait for their turn in the courtroom and it also eliminates the journey to the court building, which can reduce traffic congestion.

Healthcare. The convenience of online doctor consultations, sometimes referred to as telemedicine, not only extends medical care to more people around the globe in a faster, more efficient way, but also makes better use of healthcare specialists' time. Personal IoT devices, such as smart watches, can now track health statistics 24/7, alert users to abnormalities and send early warning signs to users and healthcare providers. There are several documented cases of smart watches notifying users of a medical condition and saving the person's life.

Education. The digital education experience allows users to tailor their classroom setting and content delivery to best suit their needs and provides them with instant access to the best resources on a global scale. Without having to travel, students can learn from the best minds around the world, starting in grade school and continuing through higher education. Of all students in the U.S. taking at least one online class, 48% were taking only online classes, according to a study conducted by Allen and Seaman.

Hospitality. Hotels are rolling out an IoT-based, automated digital experience that lets customers get room access, control lighting and HVAC, and access entertainment through their smartphones. This improves not only the user experience, but also retention and customer loyalty.

The “prevailing drivers of smart hospitality building deployments appear to revolve around making the experience within hospitality buildings more convenient for guests and improving the operational efficiency of the hospitality building with respect to those using the building either as a guest or third-party business,” according to a recent iGR report.

How do we get there?

IoT depends on reliable, high-speed communications infrastructure. If we have reached the point where customers expect ubiquitous broadband services, we need to leverage all technologies to deliver a seamless experience. This means converging wireless and wireline networks and ensuring that these networks are available wherever needed.


For example, the next-generation 5G millimeter wave network will require a tremendous amount of backhaul fiber, and we cannot build this network the same way we deployed the 3G, 4G or LTE networks of the past. Due to the densification requirements — predicted at roughly 100 times that of 3G networks — we need to rethink the way we will deploy these networks. Shared infrastructure, such as smart poles, digital street furniture and existing utility and cable company access, all needs to be part of the discussion.

As we start preparing for the electric vehicle evolution, we need to use this as an opportunity to be smart about preparing our streets for the next generation of IoT applications. Most of the world's leading car manufacturers have already started rolling out plans to convert their automobile offerings from internal combustion to battery-powered vehicles.

The EV charging infrastructure required to put these vehicles on our roads will be of epic proportions, and a lot of new construction will offer us the opportunity to leverage this work and prepare for a connected future. Think of the city street where you park your car every day: What upgrades need to take place to build charging stations or even future inductive charging infrastructure there?

When digging up the streets, we can add extra capacity by adding conduit and fiber to the shared infrastructure and incorporating these charging stations into smart poles or digital kiosks.

Leverage AI

Another requirement will be incorporating AI technology into new applications. AI can use data from existing or multiple new sensors and combine these inputs to derive more insightful outcomes.

For example, some companies use the public Wi-Fi network in metro stations to gather crowd heatmaps to better understand commuters' habits and improve train schedules, without the privacy concerns associated with video analytics.
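
A minimal sketch of the idea, assuming anonymized probe-request logs keyed by access-point zone; the log format and zone names are made up for the example:

  # Turn anonymous Wi-Fi probe counts into a station heatmap by zone and minute.
  from collections import Counter

  # (minute, access_point_zone) pairs from probe-request logs, with device
  # identifiers already hashed or discarded to preserve privacy.
  observations = [
      ("08:00", "platform_A"), ("08:00", "platform_A"), ("08:00", "concourse"),
      ("08:05", "platform_A"), ("08:05", "platform_B"), ("08:05", "platform_B"),
  ]

  heatmap = Counter(observations)
  for (minute, zone), count in sorted(heatmap.items()):
      print(f"{minute}  {zone:<10} {'#' * count}  ({count} devices)")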


Another example of shared infrastructure for IoT is vapor detection sensors inside Wi-Fi access points in school bathrooms to address the vaping epidemic without infringing on privacy rights.

From the above examples, it's clear that our lives are becoming more digitally connected, and this trend will increase exponentially as we ramp up hyperconnectivity technologies, such as 5G, IoT and private networks. Connectivity is the key foundation that will enable all these efforts, and society needs to start viewing connectivity through the same lens as our other utilities, such as water, electricity and gas.

Little by little, we are building an always-on future and layering on applications that improve people's lives, but we need to remember that it starts with putting infrastructure in the right places.

Although we haven’t yet seen an ‘Uber-like’ innovation in infrastructure, don’t be surprised if the new demands from IoT applications bring disruption and innovation.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


December 9, 2019  2:19 PM

IoT lays the pathway to smart cities of the future

Joe Muratore Profile: Joe Muratore
Internet of Things, iot, IoT benefits, IoT data, IoT device management, IoT devices, IoT management, IoT strategy, IoT use cases, Smart cities

Today, 55% of the world's population lives in urban areas, a share that is expected to increase to nearly 70% by 2050, according to the UN Department of Economic and Social Affairs. If smart cities are required both now and in the future, then IoT technology is the pathway to a connected world of computers and devices that share data with each other to increase efficiency.

IoT technology comes with a set of challenges and obstacles to overcome, but it can create connectivity across disparate assets to maximize efficiencies in ways never previously possible, ultimately changing the way local governments conduct business, handle everyday life and crises, and budget their time and money.

With the right planning and IoT implementation, the varying levels of smart cities can prove successful for municipalities across the globe.

Opportunities of smart cities

What makes a true smart city or community? Having a network of connected devices in and around a city to keep tabs on what's happening might sound invasive, but the benefits are substantial. From regulating the flow of traffic to knowing exactly where to send a repair crew to fix a pothole, smart cities are just that: smart.

For example, let’s say a fire breaks out in an office building. Sensors connected to the network will send immediate alerts to a central command center. Those same sensors could even determine the speed at which the fire is growing, how many people are trapped and whether the building is safe for firefighters to enter. This information, coupled with building information modeling, can provide the first responders with the critical structural details, such as the location of water valves, gas lines and air ducts, to effectively manage the situation and minimize damage as much as possible.

Furthermore, the smart grid would help allocate the proper resources and reduce the risk to everyone involved. For example, geo-location tools can change traffic lights along the route to control the flow of traffic, enabling first responders to reach an incident location as quickly as possible.

IoT could also create a data history of the number of cars on the road at any given time and how many passengers are taking specific trains at various times throughout the day. Knowing this, the system could make predictions with amazing accuracy that could then trigger other smart systems, such as traffic lights and train switches, to work in sync to keep everyone moving and the city running.

IoT “provides the senses and nervous system, along with some of the muscles needed to really deliver on the smart city promise,” said David Mudd, global digital and connected product certification director at BSI.

“The ability to know exactly what is going on in real time across all aspects of the city’s infrastructure and how it is being used, and act on this information with minimal human interaction, has amazing potential to improve the quality and efficiency of services,” said Mudd.

Challenges to overcome

Like any new technology, the use of IoT requires overcoming some challenges. However, there are tools and resources, such as ISO's international standards for sustainable cities and communities, that can help guide the development and implementation of smart cities.

Not surprisingly, at or near the top of the list of smart city-related concerns is data security. How safe is the data that's being collected? Where is it being stored? Is it encrypted? One worry is that criminals could worm their way into an otherwise protected network through any IoT device, such as smart TVs, thermostats or even light bulbs. IoT devices and their data must be protected, and information security must be managed.

Another challenge is that a city could spend millions of dollars on a system that either doesn’t work as designed or doesn’t work at all. Imagine the catastrophic frustration that could occur in an urban environment if there was a denial of service on the traffic light system during rush hour. The system must work for protection to occur or for efficiencies to be realized.

“If you can’t trust the product to work as it should or trust the data it produces, then connectivity is an expensive, and possibly dangerous, waste,” said Mudd.

It is also important to consider that getting all the connected devices to work in harmony with each other will likely take some time. The sheer volume of data sets and devices, all of which require varying times to update and process information, will lead to hiccups. There could also be additional costs to remedying any problems related to this, so city governments need to be prepared and budget properly.

Finally, a detailed contingency plan must be in place for dealing with various incidents and scenarios, along with clear directions around who owns and manages the data being collected.

Solutions to common smart city challenges

A city connected through a network of devices, both wired and wireless, has endless benefits. And despite the challenges that will arise, ISO has a framework to overcome them. Recognizing that no two smart cities are the same, ISO 37106 offers a citizen-centric approach that prioritizes the needs of the community first, and offers all stakeholders a guide to help operationalize the vision, strategy and policies needed to become a smart city.

In addition, municipalities should do their homework and research the companies that offer smart solutions. During the implementation process, it is critical to meet with stakeholders in the project, including citizens all the way up to local, state and perhaps even federal leaders.

An important part of the process is listening to what everyone’s concerns might be, creating a list of action items and coming up with solutions. How transparent will you be with the data? How will law enforcement use it? Will the data be stored indefinitely, and how safe will it be? These questions will need answers.

Municipalities must “independently verify that a product or system will work as it should, safely and securely throughout its intended life,” said Mudd. Municipalities must also provide advice and training to stakeholders regarding best practices.

The cities of tomorrow have the potential to efficiently maintain themselves with minimal human input and take a lot of risk out of the equation thanks to IoT. There’s no better time than now to start the planning process.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


December 5, 2019  5:00 PM

Get the best from connected RPA in 7 steps

Pat Geary Profile: Pat Geary
Automation, Business strategy, connected RPA, Internet of Things, iot, IoT application, IoT benefits, IoT business model, IoT collaboration, IoT platform, IoT strategy

Connected RPA provides a collaborative platform for humans and automated Digital Workers to deliver business-led, enterprise-wide transformation. Although connected RPA is a business-led technology, it can end up out of control if it's treated solely as a pure business project. However, it's not a pure technology project either. If it's treated as an IT project, it will end up with too much governance and too much control, which prevents it from getting done well, or at all. The key is to find a middle ground between delivering business-led and IT-endorsed projects.

To ensure that successful outcomes are achieved with any connected RPA program, we created an industry standard delivery methodology that’s broken down into seven pillars: vision, organization, governance and pipeline, delivery methodology, service model, people and technology. The general principles involve putting a structure behind how to identify, build, develop and automate processes. In this article, I’ll discuss what each pillar encompasses and the various considerations required when embarking on a connected RPA journey.

Vision

It’s important to first establish the reasons why a connected RPA program is being undertaken and align these reasons to corporate objectives. For example, Teleco’s drive for connected RPA included improving customer satisfaction, operational efficiency, process quality and employee empowerment. Naturally, key stakeholders need to be engaged to gain their backing. If they see connected RPA as a strategic business project, they’ll champion it and help provide the requisite financial and human resources.

Although connected RPA is managed by a business team, it’s still governed by the IT department using existing practices. Therefore, IT must be involved from the start because they can support connected RPA on many critical fronts, such as compliance with IT security, auditability, the required infrastructure and its configuration, scalability and prevention of shadow IT.

Organization

This next stage involves planning where connected RPA sits within the business so it can effectively scale with demand. A centralized approach encompasses the entire organization, so it may be beneficial to embed this into an existing connected RPA center of excellence (CoE).

Another approach is a federated setup, where the connected RPA capability sits within a particular function but is scaled across the business, with the central automation team controlling standards and best practice. This is achieved by creating delivery pods throughout the organization, each responsible for identifying and delivering its own automated processes, governed by the CoE.

However, a divisional approach is one route that must be avoided. This is where multiple RPA functions run separately across the organization with differing infrastructure, governance and delivery teams. This siloed setup is not cost-effective and will prevent true scale from ever being achieved.

Governance and pipeline

The next stage is identifying process automation opportunities that will generate the fastest benefits. It’s important to be clear about what makes a truly good process. For example, selection criteria could include targeting those standard processes with a high workload or several manual and repetitive tasks, processes that have quality issues related to human errors or those that require customer experience improvements.

Even if the ideal process for automation has been found, businesses should collaborate with IT to ensure that there isn’t any maintenance planned for the target application. To guarantee the traceability of the automation program, a set of indicators should also be defined, such as financial, process, quality and performance related KPIs, prior to any activities taking place.

Other considerations include how to generate demand for connected RPA within the business. This could involve providing employee incentives for identifying suitable processes, internal communications and running workshops for engagement.

Delivery methodology

Once these processes have been selected, they need to be delivered as automated solutions. This means capturing correct information in the define phase to avoid problems, which requires knowledgeable subject matter experts to be involved.

It’s also worth holding a process walkthrough for the right audience. Each chosen automated process must be documented, and an understanding gained of how it will differ from the same process performed by humans. Once this has all been agreed with the business, and the process design authority has approved the proposed blueprint and conducted the necessary peer reviews, development can begin. Once the business is satisfied, sign-off testing can start.

Service model

Once processes are in production, they need the right support around them. Businesses must ensure that Digital Workers are handing business referrals or exceptions back to the operational team for manual intervention, and that a technical capability is readily available in case the Digital Workers don't act as expected. Ultimately, to ensure smooth continuity and availability of automation resources, there must be a robust IT infrastructure.

People

Appointing a high-quality lead developer is essential, but people with these skills can be difficult to find in the current market. Developers need to be trained to the highest standard in both connected RPA development and process analysis. This will enable them to perform a hybrid role while receiving on-site support from an experienced consultant. As the development team continues to grow, businesses should appoint a design authority to ensure automation standards are maintained, along with a control room monitor to manage Digital Workers in production.

Technology

There are several technical approaches to examine when deploying a connected RPA platform. For example, considerations may include whether it's a virtual machine setup, how to manage license provisioning for future scaling, and choosing between a cloud-based solution or on-premises hosting of the platform.

Final thoughts

It’s clear that to gain the best results with connected RPA, the complete journey must be defined upfront rather than waiting for mistakes and then correcting them. Once company-wide support is gained and a vision of desired results created, it’s best to start small. Getting this right enables the program to grow and scale organically rather than stagnating.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


December 3, 2019  4:54 PM

Battery-free smart home adoption? It’s possible

Srinivas Pattamatta Profile: Srinivas Pattamatta
battery life, Bluetooth, Internet of Things, iot, IoT connectivity, IoT device, IoT wireless, smart home

The time of the smart home is now. A staggering 1.15 billion annual shipments of Bluetooth smart home devices are expected by 2023, with connected home devices exceeding home automation by a ratio of three to one, according to a 2019 Bluetooth market update. It's no longer science fiction to control lighting or regulate a home's temperature with a voice command or the press of a button on a smartphone. Smart home technology has become widely accepted and accessible.

While voice assistants are one of the most common smart home automation and entertainment devices for homeowners and property managers, there are many other emerging technologies that make up the average smart home. Specifically, there are three core categories of smart home technology, each of which features examples of connected devices that require an increasing number of batteries:

  • Home entertainment: Remotes, voice assistants and audio systems
  • Home utilities: Connected refrigerators and washing machines
  • Home automation: Security systems, sprinklers and thermostats

Why is this important? As the number of wireless devices grows, so does the number of batteries needed to power them. And as consumers use more batteries, the financial and environmental costs of replacing them rise, too.

The majority of IoT devices connect wirelessly to the Internet via technologies like Wi-Fi, Bluetooth or ZigBee. Wi-Fi is most commonly used for high-bandwidth applications and streaming data on connected devices, while ZigBee is a two-step connection requiring a hub that bridges devices to Bluetooth or Wi-Fi. Bluetooth, with advanced BLE or Bluetooth 5, enables long, Wi-Fi-like range in the home and compatibility with smartphones, laptops, earphones and other devices.

The chosen source is vital as we look to examples within the emerging smart home categories. For example, home security devices have portable sensors that run with batteries and connect to security systems via Bluetooth.

The battery dilemma

Because most sensors are constantly online and communicating information regularly, their batteries are always powered on, which shortens battery life. This, in turn, means more frequent battery changes or larger batteries that still need to be replaced too often. Homeowners or property managers then must consistently monitor for low batteries or risk losing the protection and peace of mind a security device provides.
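
A back-of-envelope calculation shows how strongly duty cycle dominates battery life; every figure below is an illustrative assumption rather than a measurement from any particular device:

  # Rough estimate of why always-on sensors drain batteries so quickly.
  BATTERY_MAH = 1000          # assumed battery capacity
  SLEEP_CURRENT_MA = 0.005    # assumed deep-sleep draw
  ACTIVE_CURRENT_MA = 15.0    # assumed draw while the radio is transmitting
  HOURS_PER_MONTH = 24 * 30

  def months_of_life(active_duty_cycle):
      """Estimate battery life for a given fraction of time spent transmitting."""
      avg_ma = (active_duty_cycle * ACTIVE_CURRENT_MA
                + (1 - active_duty_cycle) * SLEEP_CURRENT_MA)
      return BATTERY_MAH / (avg_ma * HOURS_PER_MONTH)

  print(f"Chatty sensor (10% duty cycle):  {months_of_life(0.10):.1f} months")
  print(f"Event-driven sensor (0.1% duty): {months_of_life(0.001):.1f} months")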

This is a prevalent issue across several connected in-home devices, such as automated door locks, automated sprinkler systems, temperature sensors and more. These battery-powered devices are not able to use rechargeable batteries due to current U.S. regulations. With the option of rechargeable out of the picture, homeowners and property managers are again faced with the dilemma of battery replacement.

Imagine a world in which extended battery life is the norm. A world in which batteries last for years, or even the entire life of the device, rather than needing replacement every few months. Technologies like Atmosic's M2 and M3 Series solutions can help remedy these challenges.

Harvesting energy from multiple power sources, such as radio frequency, thermal, light and mechanical, can effectively power the increasing number of smart home devices. With the prospect of extending battery life or eliminating batteries altogether, we will be able to greatly reduce the financial burden and environmental impact that comes with battery replacements.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


December 2, 2019  12:38 PM

The ten things that should happen in 2020, continued

Mark Troester Profile: Mark Troester
AI and IoT, app development, Artificial intelligence, Internet of Things, iot, IoT analytics, IoT and AI, IoT trends, Machine learning

This article is the second in a two-part series. Read the first piece here.

In part one, we looked at the first five trends I see for 2020: DesignOps collaboration, digital innovation, AppDev and WebDev alignment, and machine learning and AI making it into production. Here are the remaining five:

Modernization will be considered alongside cloud native

Modernization efforts are often thought of — or managed — separately from new application development efforts, except where new development efforts are a complete replacement. This can be due to how organizations structure their maintenance efforts vs. new efforts, or how they separately budget for development.

As technologists, we all know how hard it is to get funding to address technical debt because that effort typically doesn’t result in new application features. But it’s important that we come up with an approach that balances modernization efforts with net new development efforts that are typically cloud native. To accomplish this, organizations might do the following:

  • Organizations that want to continue deriving value from legacy systems can implement an abstraction layer to make legacy capabilities accessible through a common API
  • They can shift application workloads to the cloud while keeping data in place using new connectivity constructs that aren’t difficult to implement and don’t require risky firewall and network reconfiguration
  • Once the abstraction is in place, organizations can refactor the most critical parts of applications without changing the frontend apps by using a backend redirection to update capabilities

One simple way to ensure that modernization is factored into new AppDev is to think about modernization in the context of integration. Since most new AppDev efforts aren’t standalone, think about what modernization stages need to happen to integrate the new efforts with existing systems.
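
To illustrate the abstraction-layer idea from the list above, here is a minimal sketch of a REST facade placed in front of a legacy capability; the route, field names and legacy call are hypothetical:

  # A thin REST facade exposes a legacy capability through a common API, so
  # frontend apps never call the legacy system directly.
  from flask import Flask, jsonify

  app = Flask(__name__)

  def fetch_order_from_legacy(order_id: str) -> dict:
      # Stand-in for the legacy call (stored procedure, message queue, etc.).
      # Swapping this body out later is the "backend redirection" step;
      # API consumers are unaffected.
      return {"order_id": order_id, "status": "SHIPPED", "source": "legacy-erp"}

  @app.route("/api/v1/orders/<order_id>")
  def get_order(order_id):
      return jsonify(fetch_order_from_legacy(order_id))

  if __name__ == "__main__":
      app.run(port=8080)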

Organizations will start treating all users like customers

We all know that customer expectations for improvements to user experience are rising, but how does that relate to apps targeted toward employees and partners?

Employees and partners have the same expectations as customers. We're also seeing new professionals who are not only digital natives, but also digital experts. We're past the era where only IT experts dictate what apps and technology can be used. With the introduction of cloud and SaaS, people can — and will — bypass IT if they aren't getting what they want, which can lead to another set of issues.

Leading organizations are taking note and are employing different approaches to ensure that employees and partners get consumer-grade experiences, such as:

  • Applying DesignOps to internal initiatives as well as usability and constant optimization to the user experience
  • Using a marketing mentality similar to managing customer conversions to ensure that employees can successfully complete actions that are supported by digital technology
  • Applying personalization to employee and partner experiences based on needs, which helps to better manage a multi-generational workforce as well as cater towards individual preferences that will drive efficiency and productivity
  • Designing customer experiences that are horizontally integrated across different digital touchpoints — omnichannel experiences — that are vertically supported or integrated with employee and partner experiences. For example, customer interactions that take place over an integrated web, mobile and chat experience are integrated appropriately into employee and partner systems to analyze and report on the customer activity and help ensure the right level of human support

Data access and APIs will be used to break down tech silos

Applications still exist that don't support modern API constructs, leaving useful capabilities inaccessible to new application workloads. It used to be time-consuming and expensive to code around these limitations. But data integration patterns that establish standard SQL and REST-style interfaces without writing code greatly increase the ability to access application capabilities as well as important data sources.

This not only makes legacy data more accessible, but it also makes new data types like JSON and XML accessible to business intelligence (BI) and reporting tools that are based on SQL access. Not only can organizations break down silos between operational and transactional applications, they can also break them down between those applications and their BI and reporting efforts.
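
As a small illustration of the pattern, the sketch below loads JSON documents into an in-memory SQL table so that SQL-based reporting can reach them; the records and field names are invented for the example:

  # Make JSON data queryable with standard SQL for BI and reporting tools.
  import sqlite3
  import pandas as pd

  records = [
      {"order_id": 1, "region": "EMEA", "amount": 120.0},
      {"order_id": 2, "region": "AMER", "amount": 75.5},
      {"order_id": 3, "region": "EMEA", "amount": 42.0},
  ]

  df = pd.DataFrame(records)              # JSON documents become a table
  conn = sqlite3.connect(":memory:")
  df.to_sql("orders", conn, index=False)

  # Any SQL-speaking report can now aggregate the formerly JSON-only data.
  print(pd.read_sql_query(
      "SELECT region, SUM(amount) AS revenue FROM orders GROUP BY region", conn))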

This becomes interesting when we get to the point where we don't think of these different application patterns separately. For example, we stop thinking of reporting and BI as something that happens after the fact, through operations such as periodic reporting. Rather, we think about how reporting and BI are integrated, using advanced analytics in real time to guide the transactional application experience.

Whether we are using predictive capabilities and providing guidance within the application or using predictive capabilities to act on behalf of the user so that the user interacts in a fluid and exception-based pattern, the elimination of these silos can have a tremendous business impact.

IoT and mobility will force organizations to rethink the edge

We typically think about edge computing from an IoT perspective, where we have use cases that require processing either on the actual device or on the network edge that’s closer to the device.

I’d argue that we’ve long since thought about architecture with regards to proximity, and that edge computing predates technologies such as IoT and smart sensors. Given the limitations of network speed, we are careful to place application functionality close to the data.

We use intelligent caching on the device or on the network edge to eliminate roundtrips and network chatter. We design and manage data to address the vagaries of data residency required for international business. We also design levels of abstraction, virtualize data and use data pipelines to shield frontend developers and applications from data complexity.
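
For example, a simple edge-side cache, sketched below with an assumed 30-second time-to-live and a placeholder origin fetch, shows how round trips and network chatter get eliminated close to the device:

  # Cache responses near the device for a short time-to-live (TTL).
  import time

  class EdgeCache:
      def __init__(self, ttl_seconds=30):
          self.ttl = ttl_seconds
          self._store = {}

      def get(self, key, fetch_from_origin):
          entry = self._store.get(key)
          if entry and time.time() - entry[0] < self.ttl:
              return entry[1]                    # served from the edge
          value = fetch_from_origin(key)         # single round trip to origin
          self._store[key] = (time.time(), value)
          return value

  cache = EdgeCache(ttl_seconds=30)
  config = cache.get("device-42/config", lambda key: {"sample_rate_hz": 10})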

While everyone is racing to the cloud, it’s important to continue to think about the edge; not only in the context of supporting IoT, but in the broader context of supporting distributed computing.

Another interesting factor is 5G. You would think that with network speeds 10x faster than 4G, 5G would negate the need for processing at the edge. But when it comes to the potential life and death consequences relating to data processing delays in the medical field, or the ability to automate automobiles or smart factories, there will be a need for processing at the edge. As a result, it is expected that mini or micro-data centers will be added to nearby cell towers.

While there is no single architecture pattern that makes sense for all use cases, it’s clear that organizations that are focused on faster data processing and reducing latency with 5G will have an advantage when it comes to improving customer and user experiences.

Organizations will align business and technology efforts more effectively

Whether it is a reaction to the philosophy that "every company is or should be a software or technology company," or pressure from the board and executive team to keep up with the pace of technology, there is a vital need to align business and technology efforts in a substantial way.

Businesses and IT can learn from each other in terms of the planning process. Just as many businesses have implemented some form of periodic planning mechanism, such as OKRs and Kanban, IT has become more agile in developing short-term plans, and is becoming more adept at working in small increments, adopting minimum viable products and moving toward continuous delivery of capabilities and features.

Leading organizations will determine how to marry these different approaches and drive corporate objectives by creating a structured set of goals for each discipline. They will take a holistic view of how technology can help drive those goals by analyzing how digital technology can be used in every business discipline.

This analysis will bring together key business and technical leaders to determine the potential digital projects that are then prioritized by business impact and success feasibility. This periodic planning will be supported by more agile updates and modifications to the overall corporate goals and high-level activities, based on changes to market and business conditions.

Some organizations will identify KPIs that flow up from actual customer and user experience by extending traditional infrastructure and application management to provide a customer- or user-centric viewpoint. These KPIs are then measured and used as part of the overall approach to executing company goals.

So, hang on to your hats, folks. 2020 is going to be an interesting year.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


November 27, 2019  12:04 PM

SMBs and the great cloud video opportunity

Andreas Pettersson Profile: Andreas Pettersson
AI and IoT, cloud benefits, Data strategy, Internet of Things, iot, IoT cloud, IoT data, IoT strategy, Machine learning

When most people discuss the benefits of cloud for video surveillance, they tend to do so through the lens of enterprise end users. Enterprise investments in camera technology can run into tens of millions of dollars depending on the size of the organization, and converting this massive upfront cost into an ongoing operational expense is one of the biggest opportunities for surveillance vendors.

However, SMBs account for the vast majority of organizations globally as well as in the U.S. As of 2018, there were more than 30 million small businesses — defined as companies that have fewer than 500 employees — spread across the nation, accounting for 99.9% of all U.S. businesses and employing 47.5% of all workers, according to the U.S. Small Business Administration (SBA).

Considering the obvious market opportunity, it is somewhat surprising that more consideration isn’t given to the SMB space by the surveillance industry and cloud providers. However, the proliferation of IoT devices, including connected surveillance cameras, means that there is a wealth of data now available to end users of all sizes.

More than 130 million surveillance cameras were expected to be shipped globally in 2018, a 10-fold increase from the less than 10 million that shipped in 2006, according to IHS Markit. Approximately 70% of the cameras shipped last year were IP-enabled, further adding to the prevalence of video in IoT.

Taking advantage of untapped potential and cost savings

You may be thinking: “Why is it important for me, as a small business owner who has already invested in on-premise video hardware and software, to consider migrating my surveillance system to the cloud? What good is all this additional video data going to do for my business?”

These are valid questions and should be asked by anyone prior to making a significant change in the architecture of their security system. But just as enterprises can realize tremendous efficiencies from shifting management of their video network to the cloud, the same can also be said for SMBs. Aside from the shift from a capital expenditure to an operational expenditure, a cloud-based video system would enable SMBs to benefit from the automatic update capabilities of cloud-based services.

Unlike large organizations, which enter into maintenance agreements with their systems integrators to take care of things like camera and server failures that are commonplace throughout the lifecycle of a video system, SMBs are often left to handle these issues as they arise, either with the help of in-house IT staff or by paying an integrator for a service call. These headaches are eliminated with a cloud solution because software patches and firmware updates are pushed automatically as they are needed, often without any interruption to the end user.

Another area where the cloud can greatly benefit an SMB is video retention and system scalability. It wasn't long ago that organizations had to purchase racks of DVRs to achieve the right levels of retention, often limited by physical storage space. Today, video can be kept for as long as needed because the cloud removes that physical constraint.

Additionally, the cloud provides a high level of flexibility, giving SMBs the ability to scale as their needs change without additional physical storage space. And rather than being forced to purge video due to lack of space — video that could contain valuable insights for business managers or be used as evidence in criminal prosecutions or insurance liability claims — it can now be retained indefinitely.

Leveraging next-gen video analytics

SMBs could also see immediate benefits from a cloud video solution when using video analytics to improve business operations. Driven by advances in AI and machine learning technology, video analytics now goes well beyond tripwires and can detect and classify people and objects with accuracy that is continuously improving.

Some SMBs are already exploring how they can use video data to garner additional insights into their organizations and improve efficiencies. For example, restaurants are now leveraging systems that use cameras in lobbies and other areas of a restaurant to track staff and guests, which is generating information that can subsequently provide feedback on things such as host availability, customer wait times and customer bounce rates.
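
A toy version of the people-counting piece can be sketched with OpenCV's built-in HOG person detector; the clip name is hypothetical, and production analytics would typically use a trained deep-learning model running in the cloud:

  # Count people in a single camera frame with OpenCV's HOG person detector.
  import cv2

  hog = cv2.HOGDescriptor()
  hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

  capture = cv2.VideoCapture("lobby_camera.mp4")   # hypothetical clip
  ok, frame = capture.read()
  if ok:
      boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
      print(f"People detected in lobby frame: {len(boxes)}")
  capture.release()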

However, leveraging these types of advanced analytics simply isn't possible without the aid of the cloud for most businesses, as deploying the hardware necessary to run them is cost-prohibitive. As video data continues to grow in importance in the years to come, SMBs will need to look to cloud solutions if they want to keep pace with the ever-evolving surveillance landscape and maintain a competitive advantage.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


November 27, 2019  11:29 AM

Cybersecurity risks affect IIoT fog computing

Julian Weinberger Profile: Julian Weinberger
Cyberattacks, cybersecurity, encrypted communications, Encryption, fog computing, IIoT, IIoT data, IIoT security, remote access vpn, VPN

Cloud computing within IIoT is creating new opportunities for manufacturers and industrial systems. From connected cars and smart cities to real-time analytics and 5G mobile, IIoT sensors are generating data in unprecedented volumes.

Since most essential smart factory services would be inefficient without lightning-fast responses from IIoT systems, many factory systems rely on sensors and actuators with built-in time constraints. Any latency or break in signal to operational sensors or actuators could have catastrophic consequences.

To overcome this challenge, leading technology providers have developed fog computing: a virtualized platform that runs essential cloud processes locally across a distributed network of IIoT devices. Fog computing enables consistent, two-way cloud communications between local operational components and remote management points via the Internet in milliseconds.

Closer to the edge

Though still in its infancy, fog computing is already being rolled out for a range of IIoT-based applications. For example, smart cities rely on access to data in real time to run public services more efficiently. In the case of connected cars, sensor data pertaining to road conditions, geo-positioning and physical surroundings is analyzed in real time at a local level. Other types of data, such as engine performance, can also be communicated to the manufacturer so they know when to offer maintenance services or repairs.

Sometimes, IIoT devices are located in remote areas where processing data close to edge devices becomes essential. An oil rig is a good example. A typical oil platform may have about 100,000 sensors generating data at the rate of several TBs every day. To relay all this data over the Internet and back for analysis and response is neither practical nor economical. Instead, cloud services must be brought closer to the edge.
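
A minimal sketch of that edge-side pattern: a fog node aggregates local sensor readings and publishes only a compact summary to a cloud broker over MQTT. The broker address, topic and readings are assumptions, and the client call follows the paho-mqtt 1.x API:

  # Aggregate raw readings locally; send only a small summary to the cloud.
  import json
  import statistics
  import paho.mqtt.client as mqtt

  raw_readings = [72.1, 72.4, 71.9, 95.3, 72.0]   # local pressure samples (psi)

  summary = {
      "rig_id": "platform-07",
      "mean_psi": statistics.mean(raw_readings),
      "max_psi": max(raw_readings),
      "out_of_range": sum(1 for r in raw_readings if r > 90),
  }

  client = mqtt.Client()                            # paho-mqtt 1.x style client
  client.connect("cloud-broker.example.com", 1883)  # assumed broker address
  client.publish("rigs/platform-07/pressure-summary", json.dumps(summary))
  client.disconnect()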

Other applications in the cloud, such as mobile 5G, analyze the aggregated data from many thousands of sensors to identify opportunities for productivity improvements or trends over time. For example, in dense antenna deployment areas, a fog computing architecture with a centralized controller may be used to manage local applications and connectivity with remote data centers in the cloud.

Data encryption

It’s widely acknowledged that most IIoT devices do not have security built in, and energy providers and manufacturers still deploy IIoT systems in remote, exposed locations. As a result, thousands of smart yet vulnerable mechanisms operating in physical isolation are a cause for concern, as data shared across factory ecosystems and the cloud may be readily visible to unauthorized third parties.

The best way to compensate for the lack of built-in security is to implement enterprise-grade privacy and protection measures to fog computing systems. Encryption can prevent confidential industrial data, such as intellectual property or operational information, from being observed by cybercriminals, hackers or spies.
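
As a minimal illustration, the sketch below encrypts an operational payload with the Python cryptography package's symmetric Fernet scheme before it leaves the factory network. Key handling is deliberately simplified; in practice keys would live in an HSM or a key management service:

  # Encrypt an IIoT payload so only the intended receiver can read it.
  from cryptography.fernet import Fernet

  key = Fernet.generate_key()       # provisioned once, shared with the receiver
  cipher = Fernet(key)

  payload = b'{"line": 3, "spindle_rpm": 11874, "recipe": "proprietary-A17"}'
  token = cipher.encrypt(payload)   # ciphertext is safe to transmit or store

  # Only a holder of the key can recover the operational data.
  assert cipher.decrypt(token) == payload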

Surprisingly, many industrial and manufacturing organizations have yet to introduce encryption into their IIoT environments. More than half — 51% — of organizations still do not use encryption to protect sensitive data in the cloud, according to a Thales study involving more than 3,000 IT and IT security practitioners worldwide.

VPN

The most effective way to ensure that communications are encrypted and that connectivity throughout IIoT networks is secure is to implement professional, enterprise-grade VPN software. A VPN can encrypt all digital communications flowing between local systems and the cloud with advanced algorithms such as Suite B cryptography. Even if a third party were to penetrate a device or application, the information itself would be indecipherable.

A growing number of manufacturers and industrial organizations are pivoting to cloud-based VPN services for secure management of remote IIoT equipment because cloud VPN services offer airtight security as well as additional flexibility, scalability and reduced technical complexity. Cloud-based VPN services create end-to-end encryption between an on-premises central management point and remote IIoT devices. The cloud server conducts authentication checks automatically and establishes appropriate tunnels. Best of all, it does not decrypt or store any data that passes through.

Remote access to IIoT devices may also be on-demand and restricted to times and other parameters specified by the owner. For example, access may be limited to service engineers according to the principle of least privilege, which ensures security remains as airtight as possible.

Final thoughts

Although fog computing can improve productivity, efficiency and revenue, it also can put data at risk. Securing all data processed by these critical ecosystems with VPN software is paramount.

A VPN provides secure and reliable connectivity for remote IIoT machines and cloud-based control hubs by encrypting all digital communications passing over the Internet between innumerable devices and the remote administration center. These encrypted connections allow smart systems to send confidential data over the Internet while being shielded from unauthorized third parties.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


November 26, 2019  5:16 PM

Secure IoT devices and networks and bears, oh my!

Cheryl Ajluni Profile: Cheryl Ajluni
Cyberattacks, cybersecurity, Internet of Things, iot, IoT attacks, IoT cybersecurity, iot security, IoT strategy

There is a popular saying that goes something like this: You don’t have to run faster than the bear to get away. You just have to run faster than the slowest guy running from the bear. As it turns out, that saying is as good a metaphor for life as it is for security in IoT.

It all comes down to the issue of perfection. Success in life doesn’t require someone to be perfect in everything they do. What it does require is for a person to never give up, to keep moving forward, to work to improve even when it seems too hard and to outlast the competition.

Security in IoT devices is no different.

Just like life, appropriate IoT security does not demand perfection. It is not a matter of having to ‘outrun the bear,’ but instead needing to ensure IoT device security is ‘running faster’ or better than that of the competition. Unfortunately, that is often easier said than done.


Preparing for a cyberattack

It is common knowledge that IoT devices are prime targets for hackers. They often lack rudimentary security measures and operate with out-of-date firmware. These security vulnerabilities create a backdoor into the network which, when exploited, can be used to launch automated IoT botnet distributed denial-of-service (DDoS) attacks. That backdoor also gives hackers the ability to take control of an IoT device and force it to operate in an unintended way.

Imagine a hacker intentionally draining the batteries of IoT devices in a smart factory, causing an untold loss of revenue to a company. Or worse, what if the hacker takes control of a patient’s medical infusion pump, changing the amount of medication it dispenses? Without appropriate security measures in place, these scenarios could be all too real.

How do IoT device makers and network operators ensure their IoT devices and networks aren’t the lowest hanging fruit when it comes to security? It all starts with having the right IoT testing, security, and visibility infrastructure in place to protect both the IoT devices and the networks that support them. Having a solid security strategy and plan for mitigating attacks is also critical.


On the network side, some of the best practices to consider when building that plan include:

Know your attacker. Hackers are creatures of habit. If an attack tactic works well, they will likely employ it repeatedly. Understanding the attacker, their patterns of attack and what to expect can prove critical to helping operators identify an attack in progress before it has a chance to get out of control.

Choose your weapons carefully. It’s not if a network attack will happen, but when. Being prepared with a DDoS mitigation tool or service is always a smart choice. But make that choice wisely by first checking the scale of attack the tool or service can stop, the level of service it can provide to critical infrastructure users and how many users are being affected while the attack is ongoing. Also check how often the tool or service falsely flags someone as an attacker.

Test your environment. Knowing how an attack will impact a network is essential to its prevention. This can be done by running simulations of attacks and defending against them by trying different solutions. The information that results can prove especially useful in building a database of defense mechanisms to be used for various scenarios. Plus, the more the network is tested in the lab, the fewer surprises the operator will encounter during a real attack.

Never stop working to improve security. Hackers adapt quickly and that means operators need to as well. The only way to be prepared is by proactively and continually seeking out new weapons and security techniques to plug the gaps in an existing security strategy.

IoT devices and networks alike are increasingly prone to cyberattack. Hardening devices and networks to withstand the onslaught is a tedious and ongoing process, one that can be addressed with many different strategies and a range of different tools and services. The key to making the right choice is to first thoroughly test your environment and uncover any device or network vulnerabilities. Only then can IoT device makers and network operators begin to make the best choices for ensuring IoT devices and networks are resilient to attack.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

