Cutting energy use isn’t just good for the environment; it’s a source of huge cost savings and a sign of corporate social responsibility. With the recent Green New Deal making headlines, there’s renewed awareness of reducing emissions and the potential for tougher environmental regulations in the future. It’s a good time to think about greening your buildings.
The U.S. Energy Information Administration has estimated that commercial buildings consume 20% of all the energy used in the country. For large businesses with multiple locations, the cost of running lights, air conditioners and other equipment reaches millions of dollars a year. Any business looking to cut costs should be concerned about energy use.
But it’s about more than dollars and cents. Energy efficiency is a major part of corporate social responsibility, and consumers increasingly expect their preferred brands to behave ethically. Almost 20 billion square feet of commercial real estate have achieved some level of LEED certification, with a further 2.2 million square feet certified each day. This is a great way for businesses to show they take corporate responsibility seriously.
IoT and energy efficiency
IoT technologies provide a powerful way for businesses to balance the comfort of employees and customers with the highest levels of energy efficiency. These technologies can automate otherwise manually intensive tasks, like monitoring the temperature and humidity throughout buildings, tracking energy usage patterns over time and ensuring equipment operates at peak efficiency. IoT systems are even more effective when they’re integrated with service automation platforms that can schedule maintenance visits when failures or abnormal results are detected.
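As a rough illustration of how detection can feed service automation, here is a minimal sketch in Python; the device names, metric ranges and work-order fields are all hypothetical:

```python
# Hypothetical sketch: flag abnormal sensor readings and raise a maintenance order.
from dataclasses import dataclass

@dataclass
class Reading:
    device_id: str
    metric: str      # e.g., "temperature_c" or "power_kw"
    value: float

# Illustrative per-metric operating ranges; real limits come from equipment specs.
NORMAL_RANGES = {
    "temperature_c": (18.0, 24.0),
    "power_kw": (0.0, 5.0),
}

def check_reading(reading, work_orders):
    """Append a maintenance work order when a reading falls outside its range."""
    low, high = NORMAL_RANGES[reading.metric]
    if not (low <= reading.value <= high):
        work_orders.append({
            "device": reading.device_id,
            "reason": f"{reading.metric}={reading.value} outside [{low}, {high}]",
        })

orders = []
check_reading(Reading("hvac-07", "power_kw", 7.2), orders)       # abnormal draw
check_reading(Reading("hvac-07", "temperature_c", 21.0), orders) # normal
```

In a real deployment, the appended record would become a ticket in a service automation platform rather than a list entry, but the trigger logic is the same.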
The use cases for IoT in controlling energy usage are numerous, but here are four smart ways businesses can get started with controlling costs and reducing their environmental footprint:
- Monitor physical spaces and adapt to usage patterns. Tracking the ebb and flow of foot traffic allows control systems to intelligently turn lights on or off and adjust thermostats according to which areas are in heavy use. According to the U.S. Department of Energy, consumers can save up to 10% a year by paring back the thermostat a few degrees each day. Considering the square footage that commercial buildings occupy, that 10% translates into significant savings for larger environments.
- Monitor facilities equipment to prevent inefficiencies and surprises. Keeping large equipment well maintained and running at peak efficiency is critical for minimizing costs for systems such as HVAC and generators. IoT sensors allow you to centrally track equipment to ensure it’s functioning properly and not drawing more power than it needs. Analyzing the data collected allows you to identify potential failures before they occur, and automation software can trigger repair or maintenance orders to ensure there are no costly surprises.
- Weather sensing and “pre-conditioning.” A combination of weather forecasts and IoT sensors that monitor conditions in a building’s immediate vicinity allows you to “pre-condition” facilities and make adjustments in advance to avoid costly spikes in energy use. Analyzing historical weather data and cross-referencing it with energy use for environmental systems also allows you to identify patterns and replace equipment that proves uneconomical.
- Crowdsourcing feedback. Human feedback is another powerful tool for minimizing energy use. Mobile apps or strategically placed consoles allow consumers and employees to report on comfort levels in a building, allowing you to adjust heating and humidity systems to the lowest levels while still ensuring a pleasant environment. Mobile workstations in areas like kitchens or mechanical rooms also allow employees to report equipment failures on the spot so they can be addressed quickly, avoiding potentially costly interruptions.
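The equipment-monitoring idea in the second bullet can be sketched in a few lines: compare each reading to a rolling baseline and flag large deviations. The window size and tolerance below are illustrative assumptions, not recommendations:

```python
# Hypothetical sketch: compare each device's power draw to a rolling baseline
# and flag readings that deviate by more than a fixed fractional tolerance.
from collections import deque
from statistics import mean

class PowerMonitor:
    def __init__(self, window=24, tolerance=0.25):
        self.history = deque(maxlen=window)  # recent readings in kW
        self.tolerance = tolerance           # allowed fractional deviation

    def observe(self, kw):
        """Return True if the reading deviates from the baseline by more
        than the tolerance; otherwise record it and return False."""
        anomalous = False
        if len(self.history) == self.history.maxlen:
            baseline = mean(self.history)
            anomalous = abs(kw - baseline) > self.tolerance * baseline
        self.history.append(kw)
        return anomalous

monitor = PowerMonitor(window=4)
flags = [monitor.observe(kw) for kw in [4.0, 4.1, 3.9, 4.0, 4.05, 6.0]]
# Only the final 6.0 kW reading deviates enough from the ~4 kW baseline
```

A production system would use far longer windows, seasonal baselines and per-device thresholds, but the shape of the check is the same.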
Discussions about the environment are often contentious, but any building manager who’s seen a utility bill knows energy efficiency is as good for the business as it is for the planet. Technologies such as IoT and service automation empower building managers to achieve the optimal balance between comfort and efficiency to keep costs down. Some may also market this as good corporate citizenship, but all businesses should view these measures as simple common sense.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
On the surface, robotic desktop automation (RDA) promises a quick path to automation and ROI. Pat Geary, chief evangelist at Blue Prism, highlights an alternative reality in which RDA is the root cause of organizations failing to achieve any automation benefits at scale, while also experiencing the problems associated with “shadow IT.”
Robotic desktop automation is misunderstood, overhyped software that RDA vendors have mispositioned as robotic process automation (RPA). I should know what real RPA is; I coined the acronym back in 2012. The IEEE Intelligent Automation standard is also very clear on what RDA is and what RPA is — one is not a subset of the other.
RDA has been designed to deliver multiple, short, record-and-replay tactical automations for navigating systems on desktops. RDA’s big promise is that business users working in front and back offices, across different departments, can record a process and have software robots deployed within hours. Where processes are complex and require more technical skills, users can just automate some parts of the process that can be recorded and leave the rest.
RDA, also termed attended RPA, is being positioned as the quick-win automation tool du jour. Organizations are assured that their business users don’t need to involve the IT department — by bypassing the IT work queue, they can realize business benefits and ROI faster than with other RPA approaches.
But let’s be very clear, you can’t start on an RDA train and “switch” to get to an RPA destination — the tracks are totally different. RDA trains are small and slow — and while they’re quick to build, they actually have very little capacity, do very little real work and require continual maintenance and ongoing management.
Of course, some organizations may want to experiment with RDA technologies as a tactical tool and some initial benefits can be gained relatively quickly — but it’s a short-term benefit that over time becomes less valuable and more costly and challenging to manage and to scale. Our recommendation from many hundreds of successful connected RPA deployments is to take a strategic approach and plan for scale and success from day one.
Limitations by design
RDA sounds great in theory, and naturally vendors of this product will only highlight the benefits without the downsides. This is a major problem, because when organizations attempt to scale these tools to achieve bigger business goals, RDA’s design limitations become increasingly apparent. This assertion is backed by Gartner’s recent generalized prediction that “through 2021, 40% of enterprises will have RPA buyer’s remorse due to misaligned, siloed usage and inability to scale.” Leading automation academics, who have researched hundreds of implementations, are now being more specific and attributing this scaling issue to RDA-type approaches.
The problem with desktop recording and the notion of a personal software robot is that a single human user is given autonomy over a part of the technology estate (the desktop), which introduces a lack of control and, by extension, creates multiple security and compliance issues. Desktop recording spells trouble for the enterprise because it captures choices based on an individual’s interpretation of a process rather than a central consensus on the best path. This obscures a robot’s transparency and hides process steps which, when duplicated over time, become a potential security threat and a limit to scale.
A good analogy to illustrate these limitations is the autonomous car, where navigating a physical environment is just as demanding as a software robot navigating a virtual one. Like an autonomous car, a software robot has to safely and successfully navigate its journey. To automate a process, a software robot must read different screens, layouts or fonts, application versions, system settings, permissions and language. A software robot may even adjust the order of tasks based on local congestion, such as latency in applications and networks, or a systems outage.
Imagine recording your journey to work on a Monday in an autonomous car and relying on the recording to navigate and ensure the same, smooth journey the next day. It would end in disaster as the environmental conditions would be completely different and an accident would quickly occur. Similarly, assuming that a recorded journey of a software robot in the virtual world will be consistently the same path for each fresh journey will result in a negative outcome, too.
Recorded processes are also very inefficient when they run, because they sit and wait for target systems when they could be working. To illustrate, back to the car analogy: when I recorded my trip to work on Monday, the light at the end of my street was red for two minutes, so I stopped and waited. The next day, the light is green, but my recorded journey says I must wait for two minutes — and I’ll probably get rear-ended in the process.
There are two more major drawbacks of the desktop approach to automation. First, if a robot and a human share a login, no one knows who’s responsible for the process — and this creates a massive security and audit hole. Second, if a robot and a human share a PC, there’s zero productivity gain as humans can use corporate systems as fast as robots. So this approach doesn’t save any time or make the process any slicker for a user.
Introducing shadow IT
Restricting automation to a multi-desktop environment, outside the IT department or any central control, means that RDA vendors are effectively sanctioning shadow IT as part of their deployment methodology. This is potentially very damaging for an organization because shadow IT, in the context of RDA, means unstructured, undocumented and uncontrolled systems becoming part of a business’s process flows.
Consider, for example, what happens when the creator of a desktop automation leaves the company or the organization changes. The result can be audit failure due to unknown fulfillment activity taking place, or security holes such as passwords embedded in these lost processes, fraud and denial of service. If your business allows departments to build these recorded desktop RDA scripts, then over time it not only creates a shadow IT nightmare, but also, without realizing it, a massive technical debt that your business will have to resolve.
Making RPA sustainable
For RPA to deliver value, longevity and resilience at scale, automations should be carefully planned, modelled and designed. Desktop recorders should not be used, as they are a shortcut to building a process that can introduce risk. Instead, to ensure all processes achieve the highest design standards, they must be completely transparent and centrally pooled to offer the potential for reuse.
A more sustainable approach is connected RPA, which can operate and scale successfully in large, demanding enterprise-wide deployments, where security, resilience and governance are as important as, if not more important than, automation speed.
Connected RPA provides an easy-to-control digital workforce of advanced software robots that informs, augments, supports and assists people in the automated fulfillment of service-based tasks. Designed to be scalable, robust, secure, controllable and intelligent, these digital workers are run by business users through a collaborative platform with no recording and no coding required, while operating within the full governance and security of the IT department.
As an unregulated individual effort, where individual contributors work to their own standards, RDA also misses out on the continual-improvement philosophy of the connected and collaborative facets of RPA. With connected RPA, a business process is broken into “work packets” made up of process objects that are built to business- and IT-imposed company-wide standards. These packets are shared, so they improve over time based on the collective wisdom of the organization. They can then form a work queue that is executed against the priorities of the collective digital estate. For example, the packets take into account workloads, loading, deadlines and priorities — marshalling automated resources to the most pressing business objectives based on finite technical resources. When you unify your organization using connected RPA, from IT to operations, you get remarkable productivity gains. An organization achieving a 5% improvement each week yields roughly 12 times the productivity by week 52.
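The compounding arithmetic behind that final claim is easy to check:

```python
# Quick check of the compounding claim: a 5% improvement every week for a year.
weekly_gain = 1.05
productivity = weekly_gain ** 52  # productivity at week 52, relative to week 0
# 1.05 ** 52 is roughly 12.6, so "12 times the productivity by week 52" holds up
```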
Connected RPA can provide attended-style automation, too, but in a more intelligent, enterprise-friendly way, with human-in-the-loop or human-assisted processes. In this mode of operation, the process doesn’t run on the same desktop as the human starting and/or interacting with the robot. Instead, the digital worker runs in a secure data center or cloud platform and interacts with the human user through work queues. This gives the benefits of human-in-the-loop processing without the massive technical and operational issues associated with RDA.
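A minimal sketch of that work-queue pattern, with hypothetical user and robot identifiers standing in for real credentials:

```python
# Hypothetical sketch: a human submits an item to a work queue; a digital
# worker running in a data center picks it up. Human and robot never share
# a desktop or a login, so every action is attributable.
from queue import Queue

work_queue = Queue()

def human_submit(item):
    """A human user enqueues work under their own identity."""
    work_queue.put({"item": item, "submitted_by": "analyst-42"})

def digital_worker_step(results):
    """One processing step by a robot running under its own credentials."""
    task = work_queue.get()
    results.append({"item": task["item"], "processed_by": "robot-07"})

results = []
human_submit("verify-invoice-1001")
digital_worker_step(results)
```

The point of the design is the separation of identities: the audit trail records who submitted the item and which robot processed it, avoiding the shared-login hole described above.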
Ultimately, RDA tools limit the scale and potential of RPA to the confines of the desktop and introduce a variety of risks, too. Connected RPA provides a platform for collaboration at scale, where across many large organizations, human workers, systems and applications are already creating a powerful, intelligent ecosystem of partners enabling real digital transformation. Don’t play with RDA; it won’t get you anywhere. Build your connected RPA platform from day one and leap to achieving value at scale in the most secure, audited and reliable environment available.
The convergence of cyber and physical systems is expanding the threat landscape and providing cybercriminals with additional entry points into the corporate network — and cybercriminals have noticed. The Fortinet Threat Landscape Report for Q4 2018 showed not only that IoT held half of the top 12 exploit spots, but that IP cameras held four of those spots. As it turns out, bad actors are now focused on exploiting the inherent IoT weaknesses in these cameras to monitor or control the very devices we use to protect our physical safety and security.
Access to IoT IP cameras not only enables cybercriminals to snoop on private interactions and enact malicious onsite activities (such as shutting off cameras to better physically access restricted areas), but also enables them to be used as a gateway into the cyber systems they are connected to in order to launch distributed denial-of-service attacks, steal proprietary information, initiate ransomware attacks and more. The adage “who’s monitoring the monitors” is quite apropos here.
To prevent this sort of compromise, organizations need to establish security protocols designed to protect connected physical systems from attack, including segmentation, baselining behaviors, and alerts and quarantines that are triggered when behaviors change. This is also a reminder that every IP-enabled device, especially those that are part of your physical security system, needs to be part of your vulnerability and patch management process. This is especially essential as more and more physical security devices traditionally assigned to operational technology networks are now being converged into the IT environment. In this new interconnected world, security cameras are merely the canary in the coalmine. Criminals that gain access to things like fire suppression systems and alarms could potentially cause catastrophic harm.
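One way to picture the baseline-and-quarantine idea: track the destinations each device normally talks to and isolate it when it strays. The device names, addresses and policy below are purely illustrative:

```python
# Hypothetical sketch: baseline each camera's normal traffic destinations,
# then quarantine a device that starts talking to an unexpected host.
baselines = {
    "cam-lobby-01": {"10.0.0.5", "10.0.0.6"},  # illustrative NVR and update server
}
quarantined = set()

def inspect(device, destination):
    """Quarantine a device that contacts a host outside its baseline."""
    if destination not in baselines.get(device, set()):
        quarantined.add(device)  # a real system would move it to an isolated VLAN
        return "quarantine"
    return "allow"

inspect("cam-lobby-01", "10.0.0.5")                # normal video upload
verdict = inspect("cam-lobby-01", "203.0.113.9")   # unknown external host
```

Real network access control builds these baselines from observed traffic and enforces them at the switch or firewall, but the decision logic follows this shape.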
These security protocols will also need ongoing updates as IoT threats continue to evolve. Fortinet’s Q3 report for 2018 contained an entire section detailing the evolution of IoT botnets over the last few years. One important 2018 adaptation was the ability to implant cryptojacking malware into infected IoT devices in the home. There is no reason why business devices wouldn’t be next. While mining cryptocurrencies requires high CPU resources, and individual IoT devices may not offer much in the way of processing power, hordes of easily compromised and largely idle IoT devices may offer such power through scale.
However, that Q3 report also revealed the merging of destructive tendencies with IoT botnets. Malware like BrickerBot has rendered over 10 million devices completely useless since its launch in 2016. This might only be an inconvenience when the internet-connected coffee maker in the office break room bricks up, but what about a medical device in a hospital, an HVAC system in your building or a connected thermostat regulating the temperature of an industrial-sized boiler filled with caustic chemicals?
We don’t have to wonder, because we now have malware like VPNFilter designed to target IoT devices and even industrial control systems. Once installed, it not only steals website credentials and monitors SCADA protocols, but also includes a kill switch that can physically destroy an IoT device. It can even inject malicious code back into the network session it is monitoring, allowing crossover infection of endpoint devices. The bigger issue is that traditional security systems do very little to secure vulnerable IoT systems.
Shifting security strategies
This weakness in many security strategies is about to get worse. The attack surface is growing exponentially due to the rapid expansion of IoT devices and edge-based computing, especially when deployed in emerging 5G networks. Literally billions of IoT devices will be interconnected across massive meshed edge environments, where any device can become the weakest link in the security chain and expose the entire enterprise to risk.
To address this challenge, organizations will need to make some fundamental shifts in how they think about networking and security:
- IT security teams will need to develop new segmentation strategies to isolate devices and limit exposure. Segmentation will also need to be extended across networked environments for which organizations may or may not have full control, such as 5G networks and public cloud services, in order to protect wide-ranging workflows, transactions and applications.
- Security must become an edge-to-edge entity, expanding from the IoT edge across the core enterprise network and out to branch offices and multiple public clouds. To do this, everything connected to the enterprise ecosystem needs to be identified and rated, and their state continuously confirmed. Once effective visibility and control are in place, all requests for access to network resources must then be verified, validated, authenticated and tracked.
- Organizations must devise security that supports and adapts to elastic, edge-to-edge hybrid systems, combining proven traditional strategies with new approaches and technologies that operate seamlessly across and between multiple ecosystems.
- Disparate security tools will need interoperability to share information and stop threats. This will require vendors to establish new open 5G security standards, integrate APIs into their systems and develop agnostic management tools that can be centrally managed to see security events and orchestrate widely distributed security policies.
In the meantime, organizations need to adopt open standards and common operating systems to ensure as much consistent interoperability as possible across their evolving network. Correlating event data, sharing real-time threat intelligence and supporting automated incident response will require security technologies to be deeply integrated. This will mean the development and adoption of a holistic security architecture that uses machine learning, artificial intelligence and automation to accelerate decision-making and close the gap between detection and mitigation.
Situational awareness is also key. Organizations need to understand their critical processes and data, identify cyber assets and know what OS and applications are installed. They will also need to map their network architecture to understand data flows and possible blind spots, and identify threat actors to get an idea of how they will try to break in and what resources they are most interested in obtaining.
Knowing is half the battle. It will help you engineer as much risk and vulnerability out of your network as possible, and it will also help you select those security systems that are most appropriate to protecting your unique environment. Just remember, to be the most effective, any security technologies you choose need to be able to interact with your other enforcement points by sharing events, correlating intelligence and coordinating a holistic response to threats.
Because exploits targeting IoT devices topped Fortinet’s charts not only for Q4 but for the whole past year, organizations cannot afford to take a wait-and-see approach to network security. Real-world exploits are already causing business disruption, including the destruction of IoT devices, and are poised to inflict further damage as techniques evolve and IoT-enabled networks continue to expand. Being aware of these changes is the first step toward creating stronger defenses across the expanding network — a necessity as IoT increases in size and momentum, and becomes ever more deeply embedded in our business and networking strategies.
Although the holiday season is in the rear-view mirror, retailers are still feeling the impacts of yet another busy shopping rush. Santa Claus likely did not get every wish list item correct, and the result is a spike in returns over the first 60 days of the new year. It is not easy to predict which inventory will re-enter the supply chain, and retailers are constantly trying to better anticipate and handle these returns.
In a consumer-focused, commerce-dependent retail world, brands not only have to constantly address the needs of the consumer, but also must prepare for the increasing possibility of inventory returns. Couple this with the fact that many retailers, such as Costco, Target, REI and IKEA, among others, have incredibly customer-friendly return policies, and it is a recipe for a lot of returns. So, can greater connectivity assist in this new world of returns? In the near term maybe, but in the long term — absolutely. Here is how.
We are only just scratching the surface of connected products. For the most part, retailers and brands are looking to attach RFID tags or IoT sensors to their products to prevent shrinkage or to gain greater visibility throughout the supply chain. Connectivity is important for basic visibility, but retailers and brands are not yet prepared to push it beyond these boundaries. Most importantly, consumers need to become comfortable with this connectivity becoming part of the product lifecycle.
Fortunately, consumers are becoming increasingly comfortable with connectivity throughout all aspects of their lives. From connected vehicles to connected personal devices, they are growing more accepting of their information being consumed by a growing number of devices. In fact, this consumption of data is accelerating in the home, as people use connected devices in their houses such as smart speakers and connected appliances. These trends signal a growing acceptance of sharing more data across an increasing number of touch points. But how does this affect the world of returns?
Those working in the supply chain need to start thinking about using this greater number of touch points as an opportunity to better understand how their products are being used. For example, consumer product companies spend vast sums of money to acquire point-of-sale data about which of their products are being purchased. What if they could access data from connected appliances in homes to tell them how their products are being used? This information could shed light not only on how the products are truly being used, but also on the causes behind returns. For example, are the products not being used as designed? Or are consumers’ expectations not in line with the intent of the product? Just as the automotive industry has used greater connectivity of its products to enable predictive maintenance, consumer supply chains can use their own usage data to better anticipate when items might come back for return or exchange.
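As a toy illustration of scoring return risk from usage signals, in the spirit of predictive maintenance (the features, thresholds and weights here are invented for the example, not drawn from any real model):

```python
# Hypothetical sketch: score return risk from simple usage signals.
def return_risk(days_since_purchase, sessions, error_events):
    """Heuristic: little use and repeated errors early on suggest a likely return."""
    usage_rate = sessions / max(days_since_purchase, 1)
    score = 0.0
    if usage_rate < 0.2:   # product barely touched since purchase
        score += 0.5
    if error_events >= 3:  # repeated failures or signs of misuse
        score += 0.4
    return min(score, 1.0)

low = return_risk(days_since_purchase=30, sessions=25, error_events=0)  # heavy, clean use
high = return_risk(days_since_purchase=14, sessions=1, error_events=4)  # idle and failing
```

A real model would be trained on historical return outcomes rather than hand-set thresholds, but the inputs, i.e., usage frequency and error telemetry from connected products, are exactly the touch points discussed above.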
Greater connectivity is coming to a host of areas in the consumer ecosystem. This connectivity holds the potential to allow companies in supply chains to better manage the lifecycle of their products from purchase to usage and finally to return or end of cycle. The notion of predictive maintenance is no longer monopolized by large industrial machines, such as airplane engines or wind turbines. As a greater number of consumer-focused goods are becoming smarter, retailers need to take advantage of the data that is being made available to them — and ultimately better adapt their supply chain to an increasing amount of returns.
What will 2019 bring to the internet of things? The 2018 forecast predicted new functionality and new markets, including convergence with AI enhancements to make devices smarter and broader adoption by manufacturing. Now that the technology is proven, business models are verified and appetites from both consumers and industrial users are whetted, is it smooth sailing from here on out?
If only it were that easy. 2019 is likely to become the year that IoT gets really complicated since the stakes are high and the challenges multi-faceted.
Here are five IoT trends to look out for in 2019:
1. More hacks and increased spending on cybersecurity. Did you hear about the family in Orinda, Calif., whose Nest surveillance camera warned them about an impending ballistic missile attack? The risk of physical harm from North Korea may have been fake, but the hack was certainly worrying. With billions of connected devices proliferating on the market, the rewards from stealing data continue to grow, as will investment in hack prevention and damage limitation. An estimated $124 billion will be spent on data security globally in 2019. As cybersecurity costs rise, companies are looking for ways to protect their user data and decrease their risk.
2. Greater interest in selling data. As data collection increases, so does the temptation to monetize. TV manufacturer Vizio thought it had a great business model: sell its flat-screen TVs at break-even prices, then generate income by selling customer data. Oops. Settling a data-tracking lawsuit is costing Vizio an estimated $17 million — after already paying a $2 million fine to the FTC. Vizio doesn’t have to stop selling data, but it does need to be more transparent and offer clear options for customers. A better choice for companies like Vizio is to use a secure data exchange that preserves the functionality of the data, without revealing compromising or sensitive information.
3. New data harvested from IoT and workforce management tools. Gartner predicts 70% of organizations will integrate AI to assist employees’ productivity by 2021, and that a quarter of digital workers will use virtual assistants daily. Now that IoT systems can track productivity and workers, new privacy concerns arise. Companies will need a way to ensure that individual privacy — including Social Security numbers and HIPAA data — isn’t compromised in the collection, sharing and analysis of data.
4. Regulations: Study the fine print. 2018 marked the introduction of several new data protection regulations, from the California Consumer Privacy Act to the European Union’s General Data Protection Regulation (GDPR). Although California’s law, the most stringent, doesn’t go into effect until 2020, it’s expected to have broad implications that will require companies across the country to read the fine print. The EU rules, which went into effect in 2018, lay out how personal data can be collected and stored, but they don’t forbid the sale of data per se. For example, the EU states: “When the data used for AI are anonymized, then the requirements of the GDPR do not apply.” “Mythbusting,” a fact sheet released in January 2019 by the European Commission, supports this statement, dispelling rumors that GDPR will stifle innovation in artificial intelligence. Companies will need to study these laws closely, and some may be forced to adjust their business models.
5. Increased popularity of privacy-enhancing technologies (PETs). Will the expense of cybersecurity and privacy throttle IoT innovations? Rather than push for weak or general rules that treat all organizations equally, in 2019 there will be a greater focus on using technological solutions to solve privacy and security problems. “One possible solution is to encourage the use of privacy-enhancing technologies,” wrote the Harvard Business Review. “PETs, long championed by privacy advocates, help balance the tradeoff between the utility of data while also maintaining privacy and security.” PETs include differential privacy, such as that in place at Apple, and homomorphic encryption used by Google. Another option is to decouple sensitive information and store data away from a company’s system, protecting privacy and reducing risk while still allowing the use of the data.
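For a concrete sense of one PET, here is a sketch of the Laplace mechanism, a standard differential privacy building block: calibrated noise is added to an aggregate before release. The sensor readings, bounds and epsilon below are illustrative:

```python
# A sketch of the Laplace mechanism, one common differential privacy
# technique. All values below (readings, bounds, epsilon) are illustrative.
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise via inverse-CDF of a uniform draw."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_mean(values, epsilon, value_range):
    """Release the mean with noise calibrated to sensitivity / epsilon."""
    lo, hi = value_range
    sensitivity = (hi - lo) / len(values)  # max change one record can cause
    true_mean = sum(values) / len(values)
    return true_mean + laplace_noise(sensitivity / epsilon)

random.seed(0)  # fixed seed so the sketch is reproducible
readings = [21.5, 22.0, 21.8, 22.3] * 250  # 1,000 bounded sensor readings
released = private_mean(readings, epsilon=0.5, value_range=(15.0, 30.0))
```

With 1,000 records, one individual’s reading can shift the mean by at most 0.015, so only a small amount of noise is needed; the released value stays useful while any single contributor remains protected.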
These are exciting times for IoT device manufacturers, consumers and the many global companies that support them.
In my first IoT Agenda post, I discussed how the internet of things and the industrial internet of things are dramatically expanding organizations’ attack surfaces and introducing new security and compliance risks. In this article, I want to focus on how IoT and IIoT have escalated the importance of gaining visibility into and control over cloud computing and edge computing environments.
Before we can truly appreciate the role of cloud and edge computing in IoT/IIoT, we need to first have a basic understanding of how IoT works. At a very high level, distributed IoT/IIoT infrastructures consist of IP-enabled sensors, processors and other devices that collect data and then use some form of connectivity (e.g., Wi-Fi, Bluetooth) to push that data to the cloud for processing, analysis and action.
From a security standpoint, the IoT and IIoT devices themselves must be protected because they are part of the network ecosystem and, if compromised, can serve as a gateway to IT and operations technology (OT) networks, as well as the treasure trove of information they contain. Implementing proper cloud security measures is equally important, since the cloud is home to data aggregation and analytics processes — and end users rely on this information for decision-making and to take appropriate action.
Using the cloud for IoT/IIoT data analysis works just fine in many instances, but not all — and this is where edge computing comes in. Public cloud data centers are usually located in remote places, far away from the end users they serve. Because of these distances, sending IoT/IIoT data to the cloud for analysis and then delivering it back locally takes time — time that some users just don’t have, for example, a doctor relying on an IoT medical device, or a facilities manager in charge of a nuclear power plant.
Edge computing processes data in close proximity to the site of data generation, which minimizes latency and performance issues, enabling real-time control decisions. Given these benefits, edge computing is being increasingly adopted within organizations that rely on IoT to provide instantaneous machine-to-machine interactivity and responsiveness. This has made edge infrastructure security more important than ever — but many IT security teams struggle with edge security, especially within IIoT environments.
Securing the edge
The issues associated with securing edge computing in an IIoT environment are, at a conceptual level, the same as for any other form of networked connectivity, namely:
- Privacy: An assurance of privacy of the communication between parties (i.e., data encryption);
- Authentication: Enforcing assurance that parties are who they say they are before allowing access to edge networks and devices; and
- Authorization: A mechanism for granting parties access to only the services to which they are entitled.
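The second and third requirements can be sketched with standard library primitives. This is a minimal illustration only, assuming hypothetical per-device shared secrets and an entitlement table; real deployments would typically use certificates, and the privacy requirement would be met by encrypting the channel itself (e.g., TLS):

```python
import hashlib
import hmac

# Hypothetical shared secrets provisioned per device at manufacture time.
DEVICE_KEYS = {"sensor-01": b"k1-secret", "valve-07": b"k2-secret"}

# Hypothetical authorization table: the services each device may use.
DEVICE_SERVICES = {"sensor-01": {"telemetry"}, "valve-07": {"telemetry", "actuate"}}


def sign(device_id: str, payload: bytes) -> str:
    """Authentication: HMAC-SHA256 tag over the payload with the device key."""
    return hmac.new(DEVICE_KEYS[device_id], payload, hashlib.sha256).hexdigest()


def authorize(device_id: str, payload: bytes, tag: str, service: str) -> bool:
    """Verify the device is who it claims to be (authentication) and is
    entitled to the requested service (authorization)."""
    key = DEVICE_KEYS.get(device_id)
    if key is None:
        return False
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag):
        return False
    return service in DEVICE_SERVICES.get(device_id, set())
```

Even a scheme this simple illustrates the overhead problem discussed below: every message now carries a cryptographic tag that a constrained OT endpoint must compute and verify.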
However, while there are well-known solutions to these challenges in traditional, legacy computing environments, IIoT environments remain challenging. The reason is that OT networks were, until recently, isolated from IT networks and the internet, so industrial control systems, sensors, controllers and other IIoT endpoints weren’t exposed to common IT threats and, therefore, weren’t designed to run security software — there simply wasn’t a need for them to have this ability. Today, however, this is all changing as OT networks merge with IT networks, and once isolated IIoT systems and devices are now IP-enabled — but still lack the power, compute cycles and storage to run security software.
This has presented IT security teams with several security challenges. First, in many cases, security teams won’t be able to “bolt on” security at all; they’ll need to replace the OT endpoints altogether, which takes time, money and resources. Second, for those endpoints that do have the capacity to run security software, the overhead of adding encryption, authentication and authorization systems and processes may actually increase latency, which would negatively impact real-time embedded OT endpoints responsible for sub-second or even millisecond reaction times. This would be a major step back, since reduced latency is the reason edge computing emerged in the first place. And last, but certainly not least, edge and endpoint OT devices are often located in inaccessible, less hospitable environments, making it very expensive for organizations to implement and maintain security.
OT networks will eventually adopt IT security processes and protocols, but revamping products and infrastructures in this way will take decades. What can be done today?
Security starts with visibility
When it comes to IoT/IIoT, it’s important for organizations to have an accurate understanding of not only their IT/OT networks, but their cloud and edge computing infrastructures as well. In other words, they must be able to answer questions such as: Who has access to what endpoints? Are IoT, edge and cloud systems being properly managed? Are there leak paths to and from the internet that could be compromised by cybercriminals? Is network traffic normal? And the list goes on.
The only way to answer these questions is by gaining visibility into three equally important areas:
- Visibility into all endpoints and assets across all computing environments;
- Visibility into how those endpoints are connected to the enterprise, the internet and each other; and
- Visibility into whether the endpoints and subsequent traffic are expected, or if they indicate suspicious behavior, anomalous activity or rogue devices.
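Those three layers of visibility boil down to comparing what is observed on the network against what is expected. A minimal sketch, with a hypothetical asset inventory and expected-flow table standing in for the output of a real discovery tool:

```python
# Hypothetical inventory: known endpoints and the traffic flows each is
# expected to generate. A real deployment would populate these from
# discovery scans and baseline traffic analysis.
KNOWN_ENDPOINTS = {"plc-01", "sensor-02", "gateway-01"}
EXPECTED_FLOWS = {
    ("sensor-02", "gateway-01"),
    ("gateway-01", "plc-01"),
}


def audit(observed_flows):
    """Flag rogue devices and unexpected flows in observed traffic.

    observed_flows: iterable of (src, dst) pairs seen on the network.
    Returns (rogue_devices, unexpected_flows).
    """
    rogue_devices = set()
    unexpected_flows = set()
    for src, dst in observed_flows:
        for host in (src, dst):
            if host not in KNOWN_ENDPOINTS:
                rogue_devices.add(host)
        if (src, dst) not in EXPECTED_FLOWS:
            unexpected_flows.add((src, dst))
    return rogue_devices, unexpected_flows
```

Anything returned by `audit` is a candidate leak path or rogue device for the security team to investigate.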
You can’t protect what you can’t see. The first step to winning the IoT/IIoT security battle — whether in an IT or OT environment — is visibility. Once visibility is achieved, organizations have access to the information they need to fully understand their risk posture, prioritize security strategies based on this understanding, and use IoT/IIoT data and other next-gen technologies to advance business processes without introducing unnecessary risks.
The change is seismic. The impact, drastic. The potential is limitless. The fourth Industrial Revolution doesn’t have to be your enemy. We’re witnessing a shift where unforeseen technological innovation and advances are driving customer expectations. These demands often challenge industries, companies and leaders not only to deliver on their customers’ needs, but to understand how these technological advancements can benefit their businesses. Moreover, they must begin to anticipate market changes driven by technological innovation. Four considerations stand out:
- Strategy: Blending disparate emerging technologies to advance strategic goals is one of the key challenges in preparation for the 4IR. Quite simply, there is no off-the-shelf “4IR kit.”
- Growth: When a 4IR strategy is designed and carried out correctly, it is agile and resilient, capable of incorporating tomorrow’s new technology and business models with as little disruption or redesign as possible.
- Trust: Data powers the revolution. But customers’ willingness to entrust companies with sensitive information about their lives and businesses hinges on the quality of experience offered to them and knowing their data is protected.
- Workforce: Demand for digital talent is at an all-time high, and companies cannot wait for there to be enough graduates or trainees to meet it.
So, how do we address these considerations? Turn the revolution into your business’ evolution.
When faced with the challenges of digitizing our own business and understanding the technological disruption taking place across the market, we created several solutions. From reshaping our business model to bringing the creative industry inside our four walls, there’s a series of efforts we’ve taken to adjust to the market and inspire significant change within our PwC culture. We also focused on expanding our talent pool with people from different backgrounds and recently embarked on a digital upskilling journey with 55,000+ individuals across the U.S. The key here is that it is a journey.
As companies are looking to reshape their businesses to respond to technological innovation, they must keep the following principles in mind:
- Understand both business model innovation and product innovation. Simply put, typical product innovation will no longer be enough. Leaders will need to balance product innovation with the more radical approach: business model innovation. That shift moves attention away from the organizations of today and toward the organizations of the future.
- Expect an increase in cross-sector activity. Deep knowledge expertise has proven critical in traditional business models, but as we look to evolve, having the flexibility to work and collaborate across sectors is imperative.
- Hiring isn’t the sole answer on talent. We can’t expect to solve digital disruption by only hiring digital talent. The digital fitness of all staff, new and existing, is part of the culture shift necessary to motivate and retain your workforce, while preparing them for the needs of tomorrow.
It’s a journey. One that requires a highly adaptive mindset with a fluid approach, a deep knowledge of new technologies and an even broader understanding of how all parts of a business are impacted by this revolution. Yes, it’s difficult to lay out a strategy for an uncertain future, but the key is to start now.
Companies are constantly striving to bring new and unique connected devices to market, and there has been a great deal of innovation aimed at the home and business that transforms the way we live. Home security, for example, has been an emphasis for IoT ecosystem development and has already seen steady adoption in homes and businesses, from smart entry systems and security cameras to smoke alarms.
Similarly, Nielsen reported that nearly a quarter of U.S. households own a smart speaker. The IoT revolution is upon us, and there is no doubt that it’s only scratching the surface of its potential. With that said, there is an anchor that I firmly believe is preventing these devices from saturating the market — and that is power.
Batteries are convenient, but constraining
The batteries that power IoT devices are both a solution and a problem. While they serve as portable power options that are quick to install anywhere, they also impose strict power budgets and limit device functionality.
For example, while wireless security cameras may be easy for homeowners to self-install due to the lack of wiring involved, their batteries deplete quickly. Constantly changing batteries is a hassle, and homeowners are often left wishing they had invested in wired systems that offer features such as streaming video. Similarly, smart locks require frequent battery replacements, and even more frequent ones if the user wants video recording, biometrics or cloud storage.
An executive from a smart home products company we recently spoke with gave some insight into challenges they face regarding powering IoT devices. “At first,” he said, “we wanted to make sure customers don’t have to replace batteries more than once a year, but we realized we could not fit the features we wanted under those power constraints.” He added, “Ultimately, we decided that the need to replace batteries every six months would be okay, so we could include the features most desired in the product.” Whether you agree or disagree with this executive, this exchange illustrates how battery power handcuffs solution designers.
Let’s assume the device is using four high-end AA alkaline batteries — likely too large for some IoT devices, but it’s a good reference point. If the batteries need to last one year, the average power consumption is only 0.002 watts. Compare that with wired devices that can take 1, 5 or even 50 watts. You can begin to see the problem. This huge gap explains the dramatic functionality difference between battery-operated and wired devices.
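That 0.002-watt figure follows from simple arithmetic, which can be checked directly (the cell capacity here is an assumed typical value for a high-end AA alkaline):

```python
# Back-of-the-envelope power budget for a battery-operated IoT device.
cells = 4
capacity_mah = 2500          # assumed capacity of a high-end AA alkaline cell
voltage_v = 1.5              # nominal AA cell voltage
lifetime_hours = 365 * 24    # batteries must last one year

energy_wh = cells * (capacity_mah / 1000) * voltage_v   # total stored energy
avg_power_w = energy_wh / lifetime_hours                # sustainable average draw

print(f"{energy_wh:.1f} Wh stored -> {avg_power_w * 1000:.1f} mW average budget")
```

Roughly 15 Wh spread over 8,760 hours works out to about 2 milliwatts of continuous draw — three to four orders of magnitude below a wired device's budget.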
Would wired devices serve as a better alternative?
While some may believe that wired devices are the solution, they carry their own set of problems. Consumers can’t easily install wired IoT devices themselves without a background in electrical wiring and experience fishing wire through walls, and, at an average cost of $100 per hour for labor, professional installation is quite expensive. Also, if a problem arises with a wired IoT device, installation teams are required to make a service trip to the home or facility to troubleshoot and resolve the issue.
The concept behind long-range wireless power rests on three simple steps:
- An energy transmitter converts electrical energy into a physical phenomenon.
- That physical phenomenon travels through air to reach an energy receiver.
- That phenomenon is converted back into energy on the receiver side.
There are different approaches to make wireless charging a reality, including radio waves, ultrasound and infrared light. When looking into options, product designers need to investigate a few key parameters:
- How much energy can be delivered to the receiving device, and at what distance?
- What is the transfer efficiency?
- How large are the energy transmitter and receivers?
- Is it safe? Are the quoted values within regulators’ (FDA, FCC, etc.) tolerable limits?
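The first two parameters combine into a quick feasibility check. A toy sketch, in which the transmitter power, end-to-end efficiency and device requirement are all hypothetical illustration values rather than figures for any real product:

```python
def delivered_power_w(tx_power_w: float, efficiency: float) -> float:
    """Power available at the receiver for a given end-to-end link efficiency."""
    return tx_power_w * efficiency


# Hypothetical numbers for illustration only.
tx_power_w = 1.0        # transmitter output at the design distance
efficiency = 0.15       # assumed end-to-end transfer efficiency
device_need_w = 0.05    # e.g., a camera that wakes periodically

rx_power_w = delivered_power_w(tx_power_w, efficiency)
verdict = "sufficient" if rx_power_w >= device_need_w else "insufficient"
print(f"received {rx_power_w * 1000:.0f} mW: {verdict} for this device")
```

Even at modest assumed efficiencies, the received power in this sketch is roughly two orders of magnitude above the battery budget computed earlier, which is the gap wireless power aims to exploit.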
Using the light spectrum for energy delivery
One promising alternative is long-range wireless power that uses infrared (IR) light. It is promising for several reasons, the first being that IR light is natural light. About 50% of the sun’s energy is IR, so humans have already been living with IR for millions of years.
Light can travel in a thin, straight line. For instance, with a laser pointer you can shine a small dot on a wall from far away. In the same manner, an IR transmitter can focus a tight energy beam on a small receiver. This means high efficiency: all or most of the transmitted energy reaches the receiver, and little or none leaks into the environment.
Energy transmission using IR light can safely deliver hundreds or even thousands of times more energy than batteries or other wireless charging methods, whose transmitted energy dissipates significantly the further it travels, limiting the usable energy available at the device. This combination of power, distance and safety holds significant promise for product designers and end users alike.
What’s the future of wireless power and IoT?
Regardless of approach, wireless power is not always the right answer. Sometimes a battery provides plenty of energy for a particular sensor. If a device needs a new battery every five years, wireless power might not be a high priority. By the same token, if an IoT device needs tens of watts, wireless power might not be able to deliver the amount of power required. Sometimes, it is not practical to install an energy transmitter within reasonable distance of the client device.
Long-range wireless power can absolutely be useful as a third option for energy delivery. If power cords are cumbersome and batteries are insufficient, wireless power can be an alternative. Designers equipped with much more power than batteries can supply will be able to create breakthrough IoT innovations, simplify installations and put the power back into consumers’ hands. Wireless power is not a perfect fit for all applications, but forward-thinking companies will soon consider this technology and the benefits it provides.
More than 55% of the world’s population lives in cities, and by 2050, some 70% of the global population is expected to live in a city. In North America, city-dwellers already exceed 80% of the population. The urbanization of the world’s population will create huge challenges related to public health, pollution, energy, housing, safety and traffic. We need smarter ways to build and operate the cities of the future, and that’s why smart cities are a hot trend today.
In a smart city, there will be sensors everywhere, and IoT sensors, coupled with feedback loops through data centers or edge processing gateways, will fuel new applications and services. Think of streetlights that dim when no people are detected on a street, or sprinklers that turn on water only when needed, or parking sensors and cameras that direct traffic to available parking spots through a dynamic application.
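The streetlight example amounts to a simple feedback rule over two sensor inputs. A minimal sketch, with illustrative thresholds rather than values from any deployed system:

```python
def streetlight_level(occupancy_detected: bool, ambient_lux: float,
                      dim_level: float = 0.2) -> float:
    """Feedback rule for a smart streetlight (illustrative thresholds).

    Returns a brightness setting in [0, 1]: off in daylight, full
    brightness when people are detected, dimmed otherwise to save energy.
    """
    DAYLIGHT_LUX = 100.0   # hypothetical ambient-light threshold
    if ambient_lux >= DAYLIGHT_LUX:
        return 0.0
    return 1.0 if occupancy_detected else dim_level
```

In practice such rules would run on an edge gateway close to the pole, with the sensor readings arriving over the kind of wireless links and fiber backhaul described below.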
Connectivity is the first key challenge in building out these sensor networks. There will ultimately be billions of devices, and they will need connectivity back to a data center or edge processing unit so they can relay their data to a smart city application. Of course, many smart city sensors will connect via wireless networks using protocols ranging from Zigbee and LoRa to Wi-Fi, 4G and 5G, but there is no such thing as a wireless network without wireline connectivity. Fiber remains one of the most robust ways to deliver large fronthaul and backhaul traffic.
As a result, cities that have been familiar with providing rights of way for water, gas and electricity services will have to add a fourth utility — connectivity. Cities must plan and implement ways to enable broadband connectivity to all citizens and edge devices for their future connected cities.
Until now, it has largely been service providers’ responsibility to build out fiber networks, but this is changing in the smart city era. Cities need ubiquitous fiber coverage, but they can’t rely on service providers to build out these networks everywhere because carriers normally only invest in areas where they can receive a good ROI. Cities need to provide infrastructure for poorer neighborhoods to counter the “digital divide” by ensuring that connectivity reaches everyone.
There are many hurdles to be overcome. Regulatory issues, lack of technical expertise and the need for new financial models still burden broadband deployments, but we are starting to see innovative approaches from cities with the political willpower to advance their communities. For example, cities are crossing departmental silos — the department of transportation typically has fiber at intersections, and cities are looking at sharing these fiber assets. To ease fiber deployments and reduce costs, cities should coordinate among departments so that when one department — water and power, for example — is planning to dig up a street, the city can lay conduit for future fiber deployments to reduce costs and simplify deployment. Streetlights are another example — cities are deploying smart light poles that incorporate concealment systems for sensors, cameras, wireless nodes, power systems and other components, and running fiber to them. These cities will have desirable assets, and a leg up on the competition, when 5G small cells roll out.
It will be each city’s responsibility to address these challenges and make technology easier to deploy, or else risk being left behind in the technology evolution and widening the digital divide. Cities already enable gas, water and electricity, and now they need to enable connectivity — not necessarily pulling the actual cables or lighting up the fiber, but granting rights of way and providing the real estate for smart devices. In this way, cities can lay the groundwork for the rapid evolution and deployment of smart city applications and broadband connectivity throughout their communities.
The internet of things is made up of many different moving parts. Most fundamental is the network of sensors embedded in the things themselves. And, of course, the provision of connectivity is imperative since there would be no network without it. Whether via Wi-Fi, Bluetooth or 5G, a range of protocols are employed to connect the myriad disparate components and allow them to communicate — sharing data with each other, with businesses or with service providers. This data also is essential to the functioning of IoT. Not only is its analysis critical in delivering the insight and intelligence required to inform business operations and processes, but it can represent a valuable revenue stream — if correctly monetized. Then, there are the security measures that must be put in place to protect the sensors and the data they generate.
With so many parts in play, it can be easy for businesses and operators to take a divide-and-conquer strategy, attempting to integrate different products from different vendors for each element of their own IoT network. Others might even adopt a one-size-fits-all approach to piecing together the various architecture and network elements in the hope that they will work seamlessly together on any given use case.
Neither of these approaches is recommended, however. For one thing, IoT is still relatively immature and we don’t know exactly what the future will hold in terms of its development, especially with the imminent arrival of 5G and all that entails.
On a more pragmatic note, service-level agreements can differ from deployment to deployment, and the quality of experience within a particular use case can vary depending on usage and environmental conditions. Variations such as these can have a negative impact on billing, IoT monetization and, ultimately, the bottom line. In order to avoid this and to make their IoT networks more effective and lucrative, businesses and operators should therefore rethink their approach to IoT, and start looking at it instead as a converged service.
Benefits of a converged network
Sometimes referred to as a next-generation network, a converged network merges multiple diverse networks over one common, standardized architecture. This enables a more effective, efficient transport of various kinds of traffic, such as data, video and voice.
Traditionally, converged networks were designed to allow voice and data telecommunications to exist in harmony; the advent of IoT, however, means that converged networks are now required to accommodate machine-to-machine (M2M) communication as well. Indeed, the convergence of network, data and services can allow for the seamless, automatic connection of devices integral to IoT, regardless of the devices’ location, vendor or operating system. Most converged networks carry a variety of services and protocols over a common, standardized infrastructure. When done right, these networks offer efficiency benefits including interoperability and the ability to carry out system upgrades without the need for downtime across the entire network. Convergence can also reduce the risk of having to rip and replace. As IoT technology evolves, a single converged network may significantly reduce an organization’s ongoing maintenance costs and can drive efficiencies in other areas of the business.
Preparing for an uncertain future
For example, a converged services approach can be taken to billing. Whereas operators once billed solely for connectivity, the sophisticated requirements of IoT now mean they are able to bill their customers for devices, applications and bundled services too. By taking a converged services approach, and cooperating with different players in the IoT ecosystem, including CSPs, application providers and device vendors, operators are able to construct and implement bundled services and billing mechanisms that meet the needs of any use case, whether B2B, B2B2C or B2B2B. What’s more, taking such an approach allows operators to revise and update these offerings as conditions change over time, thus ensuring that they are always providing the most appropriate — and profitable — offering at any given moment.
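Rated per service line, a bundled bill of this kind reduces to a tariff table applied to usage. A toy sketch with entirely hypothetical rates and service names:

```python
# Hypothetical converged tariff: one invoice spanning connectivity,
# devices and bundled applications, each rated by its own rule.
RATES = {
    "connectivity_mb": 0.01,   # per megabyte transported
    "device_rental": 2.50,     # per device per month
    "app_subscription": 5.00,  # per application per month
}


def converged_invoice(usage: dict) -> float:
    """Rate each service line against its tariff and return the total charge."""
    return round(sum(RATES[item] * qty for item, qty in usage.items()), 2)
```

The point of the converged model is that adding a new billable service is a tariff-table change, not a new billing system per partner in the ecosystem.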
Networking has evolved over the years as a result of the latest technological developments in the space, and is likely to continue to do so as IoT grows in size, maturity and complexity. And it’s this complexity that can hinder a business or an operator in its attempts at a successful IoT deployment. Rather than taking a patchwork approach to building an IoT network — or a pre-emptive strategy based on relatively little knowledge of how IoT will play out — a converged services approach is the most sensible means of implementing the ever-changing and evolving elements that make up IoT. Flexible and cost-effective, while delivering a high quality of experience across any use case, convergence is future-proof. And when the future is uncertain, that must be a consideration for any operator looking to monetize IoT and maximize return on investment.