The center of gravity for computing has expanded and contracted over the years.
When individual computers were expensive, users congregated around those scarce resources. Minicomputer departmental servers and especially PCs pushed computing out toward the periphery. Public cloud computing then started to pull compute inward again.
However, a variety of trends — especially IoT — are drawing many functions out to the periphery again.
The edge isn’t just the very edge
The concept of edge computing as we understand it today dates back to the nineties when Akamai started to provide content delivery networks at the network edge. We also see echoes of edge computing in client-server computing and other architectures that distribute computing power between a user’s desk and a server room somewhere. Later, IoT sensors, IoT gateways and pervasive computing more broadly highlighted how not everything could be centralized.
One thing these historical antecedents have in common is that they're specific approaches intended to address rather specific problems. This pattern continues today to some extent. Centralize where you can and distribute where you must remains a good rule of thumb when thinking about distributed architectures. But today there are far more patterns, and far more complexity.
Why must you sometimes distribute?
Centralized computing has a lot of advantages. Computers are in a controlled environment, benefit from economies of scale and can be more easily managed. There’s a reason the industry has generally moved away from server closets to datacenters, but you can’t always centralize everything. Consider some of the things you need to think about when you design computer architecture, such as bandwidth, latency and resiliency.
For bandwidth, moving bits around costs money, both in networking gear and in carriage charges. You might not want to stream movies from a central server to each user individually. This is the type of fan-out problem that Akamai originally solved. Alternatively, you may be collecting a lot of data at the edge that doesn't need to be stored permanently or that can be aggregated in some manner before sending it home. This is the fan-in problem.
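The fan-in case can be sketched as a simple edge aggregator: raw readings are summarized locally and only the compact summary goes upstream. This is a minimal illustration, not any particular product's API; the function and field names are invented.

```python
from statistics import mean

def aggregate_readings(readings, window_s=60):
    """Summarize a window of raw sensor readings before uploading.

    Rather than shipping every sample upstream (the fan-in problem),
    the edge node sends one compact summary per window.
    """
    if not readings:
        return None
    return {
        "count": len(readings),
        "mean": mean(readings),
        "min": min(readings),
        "max": max(readings),
        "window_s": window_s,
    }

# 600 raw temperature samples collapse into a single summary record.
samples = [20.0 + 0.01 * i for i in range(600)]
summary = aggregate_readings(samples)
```

A minute of per-second samples becomes one small record, trading fidelity you don't need for bandwidth you'd otherwise pay for.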
Moving bits also takes time, creating latency. Process control loops or augmented reality applications may not be able to afford the delays associated with communication back to a central server. Even under ideal conditions, such communications are constrained by the speed of light and, in practice, take much longer.
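To make the speed-of-light point concrete, here is a back-of-the-envelope calculation. It is only a lower bound; real round trips add routing, queuing and processing delays on top.

```python
# Light in optical fiber travels at roughly two-thirds of c,
# or about 200,000 km/s.
C_FIBER_KM_S = 200_000

def min_rtt_ms(distance_km):
    """Physical lower bound on round-trip time over fiber,
    ignoring every other source of delay."""
    return 2 * distance_km / C_FIBER_KM_S * 1000

# A control loop talking to a data center 1,500 km away can never see
# less than ~15 ms round trip, no matter how fast the servers are.
rtt = min_rtt_ms(1500)
```

For an application that needs single-digit-millisecond reactions, the arithmetic alone rules out a distant data center and forces compute to the edge.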
Furthermore, you can’t depend on communication links always being available. Perhaps cell reception is bad. It may be possible to add resiliency to limit how many people or devices a failure affects. Or it may be possible to continue providing service, even if degraded, if there’s a network failure.
Edge computing can also involve issues like data sovereignty when you want to control the proliferation of information outside of a defined geographical area for security or regulatory reasons.
Why are we talking so much about edge computing today?
None of this is really new. Certainly there have been echoes of edge computing for as long as we’ve had a mix of large computers living in an air-conditioned room somewhere and smaller ones that weren’t.
Today we’re seeing a particularly stark contrast. The overall trend for the last decade has been to centralize cloud services in relatively concentrated scale-up data centers driven by economies of scale, efficiency gains through resource sharing and the availability of widespread high-bandwidth connectivity to those sites.
Edge computing has emerged as a countertrend that decentralizes cloud services and distributes them to many, small scale-out sites close to end users or distributed devices. This countertrend is fueled by emerging use cases like IoT, augmented reality, virtual reality, robotics, machine learning and telco network functions. These are best optimized by placing service provisioning closer to users for both technical and business reasons. Traditional enterprises are also starting to expand their use of distributed computing to support richer functions in their remote and branch offices, retail locations and manufacturing plants.
There are many edges
There is no single edge, but a continuum of edge tiers with different properties in terms of distance to users, number of sites, size of sites and ownership. The terminology used for these different edge locations varies both across and within industries. For example, the edge for an enterprise might be a retail store, a factory or a train. For an end user, it’s probably something they own or control like a house or a car.
Service providers have several edges. There’s the edge at the device. This is some sort of standalone device, perhaps a sensor in an IoT context. There’s the edge where they terminate the access link. This can often be viewed as a gateway. There can also be multiple aggregation tiers, which are all edges in their own way and may have significant computing power.
This is all to say that the edge has gotten quite complicated. It’s not just the small devices that the user or the physical world interacts with any longer. It’s not even those plus some gateways. It’s really a broader evolution of distributed computing.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
This is the first of a four-part blog series.
Let's dive a little deeper into edge computing, since I get a lot of questions about it. My 5-part blog series on Realizing the Holy Grail of Digital from last year is a great primer, and in this blog I'll go into more depth while repeating some content for continuity.
What is edge computing and why does it matter?
First, let's recap the basics. There is no single edge. I define edge computing as moving compute to locations as close as both feasible and necessary to the subscribers that need it. For example, the closest a telephone company can feasibly locate significant compute to its subscribers is its towers and network base stations. Organizations do this to minimize the latency experienced by their customers, as well as to alleviate bandwidth congestion on their upstream networks. In another example, for operational technology professionals it is often necessary to locate compute directly on the factory floor — close to the process — for reasons of bandwidth, latency, uptime and security. If you want a funny stare and an earful, ask the manager of a nuclear power plant to connect their operations up to the cloud. Bring popcorn.
Don't get me wrong, the cloud isn't going away. But commonly cited reasons for the growth of edge computing include reduced bandwidth consumption (and therefore cost savings), reduced latency for faster reaction to events in the physical world, maximum uptime by not depending on WAN links like cellular that can be less reliable than wired connections, and improved security. On the security front, this means protecting assets close to the source of data, including legacy assets that were never intended to connect to broader networks, much less the internet.
5G… another Windex of technology
Read “Getting to Advanced Class IoT” of my 5-part blog series for what I mean with the Windex reference. Often I get asked whether edge computing is a short-lived fad since 5G will negate the latency and bandwidth arguments. True in some cases, however uptime still matters. I wouldn’t trust my car airbag deployment to any WAN, regardless of how fast and reliable it is during normal operation. Further, bandwidth always comes with a cost.
Universal trust is paramount; we need an open edge to get there
All data originates at the edge, whether it’s generated by people or things. With people-generated data, you need to trust the individual who created it. After all, if it’s on the Internet, it has to be true, right? However, a benefit to deploying edge computing technology with IoT devices is the ability to apply intrinsic trust to data the moment it’s created in the physical world. This happens with the help of open, transparent technology including root of trust down to the silicon level.
The application of this trust to ensure data provenance and confidence everywhere not only prepares us for what I call the holy grail of digital, but also helps meet compliance needs, such as GDPR, and lets you trust your workloads running on the same infrastructure as others'. I'll continue to dig into these important topics in upcoming blogs.
What are the drawbacks of edge computing?
The biggest drawback to deploying edge technology is having to secure and manage all those distributed nodes, outside of the confines of secure data centers. For this to scale, you need consistent infrastructure regardless of devices, sensors or applications used. If you have a sweet tooth, you’ll appreciate it when I say that we need one cake with lots of flavors of icing and sprinkles.
For this reason, many great tech providers in the ecosystem have been investing in EdgeX Foundry as an open, vendor-neutral software framework to bring together heterogeneous value-add — both commercial and open-source — around a common center of gravity.
Think of EdgeX doing for IoT what Android did for mobile. Read the next posts in this series for considerations when developing edge computing solutions and my related three rules for IoT and edge scale, which are centered on us all being better off focusing on creating value and less reinvention in the middle.
Autonomous drones represent a category of drones where the pilot is no longer actively involved in flying the drone and is instead monitoring it remotely and only taking control when absolutely necessary.
Autonomous drones can fly under GPS-aided navigation: autonomously programmed flights along GPS waypoints. Other categories of autonomous drones can fly in GPS-denied environments or GPS-intermittent conditions. They can operate within the pilot's line of sight or, with additional regulatory approval, beyond visual line of sight.
Autonomous drones are used across various industries and verticals: delivery of goods, industrial asset inspection, surveillance, construction, indoor warehouse applications and inspection of dangerous areas, such as mines, where humans face safety risks.
The challenges of involved technology
Autonomous operations are complex from a technical standpoint. While AI-controlled drones with vision carry all the hype, underneath there are various sensors, controllers and sensor fusion algorithms working in tandem to achieve autonomous flight and also provide redundancy.
Sensors are far from perfect. GPS-aided drones can suffer from loss of GPS signals. Localization and mapping algorithms in GPS-denied or GPS-intermittent conditions have limitations based on the environment's texture and reflectivity, as well as the types and quality of sensors used.
Battery limitations restrict total flight duration and can become a hurdle for automated paths, especially for prosumer-category drones with limited payload capacity and flight time. Loss of battery power mid-flight can force a landing in rough areas where recovery of the drone is a challenge.
Autonomous drones operating in GPS-denied areas require complex algorithms involving a fusion of sensors, such as lasers, cameras and inertial measurement units, for localization and mapping, obstacle avoidance and path planning. Algorithms can fail due to sensor limitations, and this can lead to drones getting lost or colliding. Drones can drift in strong winds, crash into static obstacles, such as trees, or into dynamic obstacles, such as birds. Sensors can fail under different environmental conditions. Handling all of these factors is an enormous challenge.
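The sensor fusion mentioned above can be illustrated with a complementary filter, one of the simplest ways to combine a gyroscope with an accelerometer for attitude estimation. This is a toy sketch; production drones typically use more elaborate estimators such as extended Kalman filters.

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse a gyroscope rate with an accelerometer angle estimate.

    The gyro integrates smoothly but drifts over time; the accelerometer
    is noisy but drift-free. Blending the two keeps the gyro's short-term
    smoothness and the accelerometer's long-term stability.
    """
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# Simulate one second at 100 Hz while the drone tilts at 10 deg/s and
# both sensors agree; the fused estimate should end up near 10 degrees.
angle, dt = 0.0, 0.01
for step in range(100):
    true_angle = 10.0 * (step + 1) * dt
    angle = complementary_filter(angle, gyro_rate=10.0,
                                 accel_angle=true_angle, dt=dt)
```

The blend factor `alpha` is the tuning knob: closer to 1 trusts the gyro more, closer to 0 trusts the accelerometer more.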
Some form of wireless radio beacon signal still remains necessary to periodically convey where the drone is to the remote monitoring operator. Recovery of a drone that is lost or has crashed in especially difficult terrain or a dangerous area is a complicated issue. Adding redundancy in sensors and services also has an impact on cost and operations. Expensive sensors and sensor-based hardware can provide good operation in GPS-denied areas, but drones are always prone to crashes and failures, and losing more expensive hardware in difficult terrain or conditions has a cost impact.
GPS accuracy can become a problem
Deep learning with cameras is used extensively in the newer generation of autonomous drones relying on camera vision for autonomous navigation, but AI remains an empirically tuned black box. The amount of training data necessary to tune the model defeats a number of use cases where it's difficult to fly the drone under manual pilot control to collect that data. There have been mind-boggling advances in this field, and they will continue to solve more problems.
For remote monitoring of autonomous drones, camera-based video transmission is preferred for a first-person view, but it requires high-bandwidth wireless links and can incur larger latencies depending on the connectivity chosen. 4G and 5G connectivity is applicable where cellular signals remain in range.
Overall regulations on drones
The Federal Aviation Administration in the U.S., and equivalent bodies in other regions, impose a 400-foot altitude restriction unless specific permissions are acquired, such as for industrial asset inspection. Drones flying beyond the pilot's line of sight must have a spotter who maintains direct line of sight. Flight is restricted within 5 miles of any controlled or uncontrolled airstrip or helipad. There are also restrictions on flying near stadiums, during emergency situations and over people not involved in flight operations.
Autonomous drones with secure connectivity to the cloud do enable various advantages, such as remote fleet management of multiple drones, flight planning for the entire fleet, pushing software upgrades, deploying new AI models to an already deployed fleet, real-time telemetry and logging. There is no doubt that connected drones, especially for autonomous operations, are going to be extremely useful. Connectivity can vary across cellular, satellite and other types of wireless.
Industrial use of drones
The market for industrial asset and infrastructure inspections is ripe for the use of autonomous drones. Whether it's inspection of cell phone towers, wind turbines, electrical transmission towers, bridges, or oil and gas pipelines, autonomous drones have huge potential. The agriculture industry is already using GPS-guided drones extensively. Site monitoring, including construction, is another fast-growing area of drone use.
The road ahead
Autonomous drones have come a long way. Autonomous navigation algorithms have matured, available on-board compute power for real time operations has shot up, and sensor prices have come down significantly. The problems are not solved entirely, but autonomous operations are becoming more and more feasible at manageable costs for chosen use cases. The industry must focus on picking specific use cases or problems and developing autonomous solutions for them.
The construction industry employs about 7% of the world’s working-age population. Although many people think that robots will take away their work, masons should not be worried at this time.
In 2017, McKinsey Global Institute published a surprising report: labor productivity in construction has decreased by 50% since 1970. In addition, McKinsey believes that productivity in construction has registered zero increases in recent years. While other industries have been transformed, construction has stalled. The effect is that, adjusting for inflation, a building today costs twice as much as 40 years ago.
And even though the construction industry is a growing market, there are still problems to address. The main issues facing the construction industry in 2019 are:
- Shortage of skilled labor
- Rising prices for steel, aluminum, wood and other materials
- Growth is slowing
- Productivity and profitability
- Low performance projects
- Security problems
- Inconsistent use of technology
The construction sector has been positive for four years, mainly due to the recovery of residential construction. After the precedent of the past decade, growth is forecast to remain positive in 2020 at 3.5%, but to turn negative by 2021, falling to -3%.
The adoption of IoT in construction
The construction industry is notoriously slow in adopting technologies, such as IoT which could boost productivity and ultimately profitability. Even though it is believed that construction companies that adopt this technology would attract new labor groups and have a significant advantage over competitors, the reality is this is not happening.
The construction sector is at the tail end of the digital transformation, as evidenced in the report "Global Survey of Construction and Engineering," prepared by EY. Construction and engineering companies see the need for change, but in one way or another they resist it.
One of the great characteristics of the construction sector is its enormous capacity to reinvent itself. The integration of technological trends such as IoT will facilitate many of the tasks of the sector, optimize resources, and improve compliance with deadlines and quality in tasks. To obtain a competitive advantage over other companies in the sector, it is necessary to take advantage of the opportunity offered by IoT and recognize the changes that are taking place in the sector.
The opportunity of IoT in construction
Investors’ appetite for startups in the construction sector is growing, although not many are in the IoT industry. There are still few examples of companies in the sector that are adopting IoT. Although IoT will positively affect this industry, none of the productivity, maintenance, security and safety drivers seems to convince organizations at the moment.
There are several areas of IoT innovation in construction, including:
- Machine control
- Construction site monitoring
- Fleet management
Construction site monitoring includes anchor load monitoring from installation onward, control of ground deformation during tunnel construction, monitoring changes in pore water pressure during soil consolidation, and monitoring of settlement during soil recovery works.
Some practical examples of IoT in construction include remote operation, replenishment of supplies, monitoring of construction equipment and tools, maintenance and repair of equipment, remote usage monitoring, and energy and fuel savings.
IoT and other emerging technology can improve productivity, reduce costs and boost security in the construction industry. Construction companies, real estate and engineering firms should continue their investments in IoT. These companies should not fall back into the same mistakes and should not fear the loss of jobs due to the new technologies like IoT or AI. The adoption of IoT is unlikely to replace the human element in construction. Instead, it will modify business models in the industry, reduce costly mistakes, reduce injuries in the workplace and make construction operations more efficient.
Smart construction is key to build smart cities with smart buildings, smart transportation and smart healthcare. Our imagination is the only limit in using IoT for construction.
In August of 2011, Marc Andreessen wrote what has now become one of the most famous quotes in our industry, “Software is eating the world.”
When I first heard this quote, it sounded scary, dangerous and even threatening. But what he really meant at the time was that software was transforming the world. He described how more and more businesses were run on software and delivered online as services, including everything from bookstores — remember when that was Amazon’s core vision? — to agriculture, entertainment and even national defense. As a result, he made a very compelling prediction, “over the next 10 years, I expect many more industries to be disrupted by software.”
Ten years later, all companies are software companies
On the surface, Andreessen's prediction has come true. We do now live in a digital world, and every company with at least one customer must be a software company. Every company that wants to survive has a digital presence and connects to customers, their devices and their locations in a myriad of ways. Chances are "there's an app for that." No matter the industry, companies are interfacing with customers, suppliers, partners and investors with some form of software that brings it all together.
But while digital engagement using software applications is what we see and experience, the data that is generated from every touch of a phone screen, click of a mouse, and swipe of a digital credit card is the protein that powers the industry athletes and separates the Olympians from the amateurs.
Software without embedded analytics will starve
Most companies today use that IoT data protein to feed their salesforce, marketing department, security office and finance organization. But the true industry athletes know that consuming the data is a path toward improving software and systems performance and other operations, which is the real key to winning the gold medal. Feeding real-time data to product management, software development and customer support is a key element of the training that makes improvements and enhancements possible at a speed that post-experience customer surveys cannot deliver. As more and more enterprise software solutions become self-managed services or hosted offerings, predictive maintenance and proactive action is the differentiator between the high school athletes and the Olympians.
This leads me to my own prediction: software without embedded analytics will starve.
Optimal Plus takes home the data Olympics gold
One of the data Olympians that I admire most is Optimal Plus. Software on the manufacturing shop floor is not new, but traditional manufacturing software does not represent the next generation of embedded analytics that will separate the industry athletes from the rest. The key here is sensor data, data that tracks every component of every machine across a distributed and often complex supply chain, which translates into a lot of data. There is a difference between collecting that data in homegrown repositories or newly established Hadoop clusters versus truly acting on business insights generated from billions of sensor data points in near real time. When you consider this volume of data and pair it with the costs and quality demands in the electronics and semiconductor manufacturing industry segment, you get Optimal Plus.
Optimal Plus is a software company, but it is a software company that lives and breathes on embedded analytics in distributed devices. Responsible for delivering the manufacturing intelligence necessary to produce 50 billion semiconductors and printed circuit boards each year, Optimal Plus compares historical data with almost-instant test information to predict faults and prevent downtime. And in production environments where errors can — and usually do — negatively impact profit, Optimal Plus delivers instant analytics integrating previously siloed data islands – the result of software that generates data but then doesn’t know what to do with it.
Analyze or be eaten
Software has transformed the world, but without the core protein of data and the user behavior analytics it can provide, the software itself is not enough. Companies that provide software applications and digital experiences without embedded analytics will starve, while the companies who leverage the protein of data to understand each and every customer at the individual click or touch level will consume their competitors. Analyze or be eaten. What do you think of that, Marc Andreessen?
Creating smart buildings requires integrated systems and digital technologies. As I discussed in my last article, the true value of digital transformation lies in the data and applied insights these systems gather and provide. Proactive insights can drive building and occupant productivity, efficiency and safety as AI and machine learning collect usable information from facility equipment, spaces and systems. Digital solutions act as a natural extension of the capabilities provided by physical equipment like chillers, lighting systems and access controls.
Most businesses — from a single facility to large enterprises — are already collecting some form of data, whether it be video footage, energy usage or access control logs. The important points are how this data is gathered, what is done with it and how the business is benefitting. Through advanced building management systems, gathering AI-powered insights leads to benefits such as lower energy and operational costs as well as improved occupant satisfaction and productivity.
The value of smart, connected technologies in understanding your building
Elevating facilities into smart buildings is a result of the actions taken based on relevant data gathered. Machine learning and AI applications can synthesize massive amounts of information to enable managers to more easily identify solutions that create more efficient buildings and assets. In fact, Gartner found 40% of top-performing companies identified AI and machine learning as the number one game-changing technology, with data analytics taking second place at 23%. Although many building owners and managers implement complex technologies, the data collected can be overwhelming and difficult to interpret, especially if these technologies are integrated with other building systems. AI and machine learning provide access to not only the data collected from working equipment, systems and building spaces, but also proactive, actionable insights that address functionality, streamlined operations and lowered costs. This all-inclusive, specific view of enterprises can also associate costs with specific building areas.
For example, platforms with AI capabilities can look at a single floor of a building or take an enterprise-wide view across security, lighting, HVAC and other systems from a single pane of glass. They can then identify potential problem areas, such as an underutilized conference room with the lights on all day or a specific floor that receives too many temperature complaints. Once these opportunities for improvement are identified, the system can make recommendations to reduce the amount of lighting or increase the temperature, lowering energy usage and improving efficiency within the building while also meeting occupant satisfaction goals.
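A rule like the underutilized-conference-room example can be as simple as joining occupancy and lighting data per space. This is a minimal sketch with made-up field names, not any vendor's actual API:

```python
def find_wasted_lighting(rooms, occupancy_threshold=0.1):
    """Flag rooms whose lights are on despite near-zero occupancy.

    `rooms` maps a room name to a dict holding the room's average
    occupancy fraction and a lights-on flag for the analyzed period.
    """
    return [name for name, r in rooms.items()
            if r["lights_on"] and r["occupancy"] < occupancy_threshold]

# Hypothetical readings aggregated from occupancy sensors and
# the lighting system over one workday.
readings = {
    "conf_room_3a": {"occupancy": 0.02, "lights_on": True},
    "open_floor_2": {"occupancy": 0.75, "lights_on": True},
    "storage_b":    {"occupancy": 0.00, "lights_on": False},
}
flagged = find_wasted_lighting(readings)  # → ["conf_room_3a"]
```

In a real platform the same join would run continuously across thousands of spaces, with the flagged rooms feeding the recommendations the paragraph describes.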
Managers looking to meet these goals need to identify adaptable technology that is vendor-agnostic. Platforms must be able to be layered into existing technologies and systems to provide new insights immediately while allowing managers to avoid the costs of an entire building retrofit or new construction project. Engaging with an expert early on can help establish clear goals for transformation and streamline the process of implementing appropriate smart building technologies from start to finish to achieve those outcomes. A partner can identify the most effective applications for AI, machine learning and additional technologies to best optimize your facility and provide the most useful insights for your building on an individual basis.
To illustrate, the benefits of intelligent enterprise management range across building systems. Thermal, water, electrical and carbon energy usage and storage are just some of the areas that can be viewed from a centralized location and analyzed to forecast potential peaks and problems. Building and occupant behavior can then be adjusted accordingly. For example, an incoming winter storm or drought generally means an increase in energy use. With intelligent management insights, such as occupancy and space usage, the facility manager or even the building itself can identify low-use areas to keep at appropriate temperatures while focusing resources on high-occupancy floors and rooms during severe weather events and allow for appropriate water use in the event of a drought.
In addition to building use, some systems can provide insights into financial health and bills associated with energy and equipment management. They can identify costs and returns with utility consumption and identify equipment issues proactively to allow for better maintenance management and infrastructure services. These insights can not only impact operational costs, but also improve facility efficiency and productivity. Having a complete view into any area of your building at your fingertips allows for safer, more sustainable and more efficient operations.
Into the hands of the occupant
Efficiency and sustainability goals are of course important for any enterprise, but occupant satisfaction goals can and should also be addressed with digital transformation and building management technologies. While occupants don’t need access to the level of data an enterprise facility manager does, creating opportunities for convenient and personalized space management based on specified insights from applied AI and machine learning can greatly enhance employee experience.
When employees are comfortable, they’re more productive. Having control over workspace lighting, temperature, building navigation and space reservation from personal devices allows for more autonomous and productive use of space that can even learn employee preferences. When connected to an access control system, lighting and temperature can automatically be set when an employee badges in, no matter if it is their usual office space or if they are working from another location. This extends to after-hours use of the building, ensuring maximized building efficiency and minimized maintenance calls, allowing for building managers to focus on other matters.
Providing this level of access may seem intimidating as some managers envision temperature wars between coworkers or similar circumstances. However, as occupants have access to their energy usage data in conjunction with those controls, they and the facility can use the insights from their specific workspaces to identify optimal lighting and cooling timing and levels, along with crowded or underutilized work areas or overbooked rooms. This in turn lowers facility energy costs while simultaneously creating a comfortable, productive environment.
Providing data access to occupants, employees and others who use facilities on a day-to-day basis is a key opportunity for potentially lowering operational costs. Additionally, a Gartner survey in 2017 found nearly half of CEOs have no digital transformation success metric. Both satisfaction and productivity can act as key statistics to showcase the impact of digital transformation initiatives in creating smarter buildings and businesses. Investing in employees and technology and providing access to that data enhances visibility, trust, productivity and reliability for both occupants and facilities.
Determining the success of your digital transformation efforts involves looking at the success of technologies in the context of buildings, equipment and employees.
Insights from AI, machine learning and other data-enabled technologies empower building managers to understand current building operations while identifying concrete actions that will allow more efficient and productive use of energy, equipment and occupant spaces. This ability, combined with placing technologies into occupants' hands, provides additional insights for building managers to improve satisfaction, which can in turn impact the overall productivity of both the facility and its occupants. By implementing smart building capabilities at both an enterprise-wide and a personal level, maximized efficiency, savings and satisfaction prevail across the business.
Remember when you relied on your keyboard or mouse to pull up files on your laptop? Or needed a conference phone to connect to a meeting? The way we interact with devices has come a long way. As consumers, we don’t search for things on the Internet anymore. We ask Alexa or Siri to find the things we want and have them delivered to our doorsteps. The same technologies that power these devices are beginning to make their way into the workplace. Just as they have transformed our personal lives, they promise to change the way we work.
Not your father’s interface
Not since 1984 have we seen the kind of change in the way we interact with devices that we are experiencing today. Thanks to things like AI, machine learning, speech recognition, biometrics and augmented reality, we no longer have to click or type. We can swipe, talk, touch or even just look.
Alexa goes to work
The possibilities this can open are limitless. When brought into intelligent digital workspaces, connected devices can eliminate the mundane tasks that frustrate and keep us from doing meaningful work that makes us happy and more productive.
Consider a smart conference room. When you walk into the room, the IoT-enabled workspace will already know who you are, what meeting you are there to attend and what presentation you need to get started. It will launch the meeting automatically and adjust the room conditions to suit your personal preferences, lowering the shades, turning the TV monitors on, initiating the video camera and session recording. When the meeting is over, it will send links to the session recording to all attendees, giving you back at least 20 minutes in your day.
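To make the scenario concrete, here’s a minimal sketch of how a badge-in event might drive room setup. The Room class, the preference store and the setting names are hypothetical stand-ins for whatever building-automation APIs a real deployment would expose.

```python
# Illustrative sketch only: Room and PREFERENCES are hypothetical
# stand-ins for real building-automation and identity APIs.
from dataclasses import dataclass, field

PREFERENCES = {  # hypothetical per-employee preference store
    "alice": {"shades": "down", "monitor": "on", "record": True},
}

@dataclass
class Room:
    actions: list = field(default_factory=list)  # log of device commands issued

    def apply(self, setting, value):
        self.actions.append((setting, value))

def on_badge_in(room, employee, meeting):
    """Configure the room when an attendee badges in."""
    prefs = PREFERENCES.get(employee, {})
    room.apply("shades", prefs.get("shades", "up"))
    room.apply("monitor", prefs.get("monitor", "off"))
    if prefs.get("record"):
        room.apply("recording", meeting)  # start session capture
    return room.actions

room = Room()
actions = on_badge_in(room, "alice", "q3-review")
```

The real work in such a system is in the integrations behind each `apply` call; the event-driven shape, though, is the common pattern.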
Or think about the open floor plans that companies are beginning to embrace. Employees can sit anywhere and log into their computer applications via shared devices at any desk. So, what do you do when you need to find coworkers or free space for yourself? In an intelligent workspace, you can ask an IoT-connected kiosk or smart screen using voice commands. Using streamed sensor data, you will be automatically guided to coworkers or a free space on an interactive map.
It’s a brave new world filled with opportunities, but it carries a unique set of challenges that IT needs to understand and plan for in order to capitalize on those opportunities.
Rethink your infrastructure
To truly succeed with the coming sophistication of IoT, your enterprise may need a new type of computing infrastructure. Think about all of the data and telemetry feeds that continuously stream between different systems and sensors. There’s no efficient, cost-effective way to send it all from an ever-increasing number of devices to the cloud for deep processing.
Public cloud infrastructures are great at providing flexible consumption models and faster rollouts, but they struggle to keep pace with the mix of high data rates and sub-second response times that IoT demands. When it comes to IoT, edge computing and hybrid cloud models are a better bet: they let you pick the hybrid architecture that suits your needs and build an infrastructure that can intelligently and automatically decide where data will be processed most efficiently.
Then there’s privacy. IoT devices collect massive amounts of data. In the face of strict regulations related to financial data, the private nature of personal or health related data, and the persistence of advanced cyberattacks, enterprises need to decide when to locally process data or export it to the cloud and need a flexible system to support their efforts. Here again, edge computing and a hybrid environment win the day.
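That “process locally or export to the cloud” decision can be captured as a simple policy function. The thresholds and the sensitivity flag below are illustrative assumptions, not recommendations; a real placement engine would weigh many more signals.

```python
def choose_location(payload_mb, latency_budget_ms, sensitive):
    """Toy placement policy: keep sensitive or latency-critical data at
    the edge, ship bulky non-urgent data to the cloud for deep processing.
    Thresholds are illustrative only."""
    if sensitive:
        return "edge"                # regulated or private data stays local
    if latency_budget_ms < 100:
        return "edge"                # sub-second response paths stay local
    if payload_mb > 50:
        return "cloud"               # bulk analytics go to central compute
    return "cloud"                   # default: centralize where you can

placement = choose_location(payload_mb=1, latency_budget_ms=20, sensitive=False)
```

The point is not the specific cutoffs but that the decision is automatable per workload, which is exactly what a hybrid architecture makes possible.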
Understand your risk
With IoT devices, the stakes of a single breach are incredibly high and there isn’t room for undue risk. The new vulnerabilities and expanded attack surface these devices introduce require an intelligent, contextual management model. Typical client management tools were conceived at a time when desktops, laptops and devices were stationary, corporately distributed and mostly connected to the enterprise. Those days are over.
Work today happens anywhere, anytime and on any number of devices. Managing IoT devices requires a different approach that is flexible and supports roaming, wirelessly connected, mobile users on their chosen devices. IT admins also need a new set of tools that integrate management, security, application and desktop virtualization, and mobility into a centralized infrastructure to simplify, secure, manage and monitor all types of endpoints, applications and software from a single pane of glass.
Prepare for the future
When it comes to the workplace, IoT promises to drive new levels of freedom, productivity and innovation. It’s already happening. As the technology continues to mature and integrate with new systems, vendors and applications, it will only happen faster. Enterprises that develop plans to operationalize IoT now can speed their way to a sustainable competitive advantage. Those that don’t will be left in the dust.
Self-driving cars and smart medical equipment, once the stuff of science fiction, are a fact of life in today’s digital, connected world. Network infrastructure connects devices in virtually every industry, making life more convenient and more productive for us all. But how safe is a future where bad actors can exploit machines crucial to the nerve centers of modern society?
Studies show that the industrial internet of things (IIoT) is set to fuel an increase in applications for smart cities, farming, factories, health services, logistics, transportation and utilities. Practical, everyday examples include better optimization of energy consumption with smart metering and smart grids, remote health monitoring and equipment maintenance, as well as improved logistics for better transportation monitoring. Yet, given the scale and scope of IIoT deployments, a lot is at stake if something goes awry. System failures and downtime in IIoT can result in high-risk, or even life-threatening, situations.
The power to cripple a nation
SCADA systems tie together power, oil, gas pipelines, water distribution and other decentralized facilities. They control and monitor physical processes, like transmission of electricity, transportation of gas and oil in pipelines, traffic lights, and the list could go on. The security of SCADA systems is crucial — compromising them affects key areas of society. A blackout, for example, can cause enormous financial losses to everyone tied to that grid.
Designed to be open and easy to operate and repair, SCADA systems don’t provide secure environments and can be difficult to harden. Additionally, the recent move from proprietary technologies to open solutions, coupled with increased network connectivity, has made them extremely vulnerable to network-type attacks.
In his book, Lights Out: A Cyberattack, A Nation Unprepared, Surviving the Aftermath, author Ted Koppel reveals that a major cyberattack on America’s power grid is not only possible, but likely. Worse still, the U.S. is shockingly unprepared for such a hack, despite ongoing cyber-tensions between the world’s leading powers. Koppel argues that a successful attack would plunge America into the dark ages. If the 2015 attack on Ukraine’s power grid is any indication, he’s right.
In 2015, alleged Russian operatives brought leaders of a Ukrainian power plant to their knees. Known as the December 2015 Ukraine power grid cyberattack, the incident was the first known successful cyberattack on a power grid. Attackers used several methods to compromise the grid, taking SCADA under control and remotely switching substations off. They then disabled IT infrastructure components, destroyed files stored on servers and workstations with the KillDisk malware and deployed a denial-of-service attack on the grid’s call center to deny consumers information on the blackout.
Around the world, cybercriminals are actively targeting critical infrastructures, including healthcare facilities where human lives are at risk every time an attack is deployed. The profits in compromising IoT devices provide criminals with significant motivation. For example, smart electricity meters can be hacked to siphon money. Truly, the possibilities are endless, and the security of the smart world is at grave risk.
Real-time threat detection for networked devices
In recent years, bad actors have honed their skills to penetrate networked infrastructures through vulnerabilities embedded in the very IoT applications we herald as the key to a better future. Because industrial systems are not typical computers and don’t support on-board defenses against cyberattacks, IT administrators need a way to detect potential attacks in real time, before hackers take control. This means industrial IoT applications need a completely different kind of protection.
Network traffic analytics has emerged as a key technology to help protect large infrastructures with disparate systems, as it provides complete threat-related network activity for any device on the network. By focusing on the network behavior of endpoints, network traffic analytics can help security operations centers defend fleets of devices that have limited or no built-in security and no endpoint security agent running on them. Network traffic analytics uses a technique called device fingerprinting to keep tabs on every networked device. In this case, a fingerprint comprises various data points, such as the device’s unique ID, its media access control address or information from network data packet headers.
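A fingerprint of this kind can be sketched as a stable hash over a handful of identifying fields. The exact fields a commercial analytics product combines are vendor-specific, so treat the ones below as placeholders for the general idea.

```python
import hashlib

def device_fingerprint(device_id, mac, header_fields):
    """Combine identifying data points into a stable fingerprint.
    The chosen fields are illustrative; real products use many more.
    Normalizing (lowercase MAC, sorted headers) keeps the hash stable."""
    material = "|".join(
        [device_id, mac.lower()]
        + [f"{k}={v}" for k, v in sorted(header_fields.items())]
    )
    return hashlib.sha256(material.encode()).hexdigest()[:16]

# Same device observed twice, with fields in a different order/case:
fp1 = device_fingerprint("cam-07", "AA:BB:CC:00:11:22", {"ttl": "64", "mss": "1460"})
fp2 = device_fingerprint("cam-07", "aa:bb:cc:00:11:22", {"mss": "1460", "ttl": "64"})
```

Because the inputs are normalized before hashing, the same device yields the same fingerprint across observations, which is what lets the analytics layer track it over time.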
This lets network traffic analytics solutions take note when a device behaves abnormally, based on what it should and shouldn’t do. When an anomaly is detected, IT admins can choose one of several courses of action to prevent the attack from unfolding, including cutting off all ties to the Internet or secluding the suspiciously behaving endpoint from the rest of the network until a complete analysis is performed.
At the heart of network traffic analytics lies machine learning. Next-generation network traffic analytics solutions use predictive machine learning models that accurately reveal threat activity and suspicious traffic patterns. An ideal deployment leverages semi-supervised, or tunable, machine learning. Unlike strictly supervised approaches, a tunable model does not depend on fully labeled training data, so it can identify key patterns and trends in live data flows without human input. When a threat is detected, IT admins receive a detailed security incident explanation and a suggested course of action, facilitating incident investigation and response.
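To see the shape of the idea, here’s a deliberately tiny unsupervised baseline: learn a device’s normal traffic level, then flag live measurements that deviate sharply from it. This is a toy stand-in for the far richer semi-supervised models real products ship, not a description of any vendor’s algorithm.

```python
from statistics import mean, stdev

def flag_anomalies(baseline, live, k=3.0):
    """Flag live measurements more than k standard deviations away from
    the device's learned baseline. A toy unsupervised stand-in for the
    richer tunable models used in production."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [x for x in live if abs(x - mu) > k * sigma]

# e.g. bytes/minute a sensor normally sends vs. a sudden exfiltration burst
normal_traffic = [100, 102, 98, 101, 99, 103, 97, 100]
alerts = flag_anomalies(normal_traffic, [101, 99, 5000])
```

Even this crude baseline needs no labeled attack data, which is the property that makes unsupervised and semi-supervised approaches attractive for fleets of heterogeneous devices.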
Legacy or traditional security systems fail in the face of silent threats that creep their way onto networks. Studies show that the cybersecurity skill gap is widening every year. Plus, attackers today are smarter than ever, using malware that changes form to evade pattern-matching algorithms. As the threat landscape evolves in unpredictable ways, a new approach to cyberdefense is urgently needed. Network traffic analytics technology distills the patterns to a set of readily-identifiable scenarios to alert security teams, which guides response efforts faster and more efficiently than ever before.
With the number of connected smart devices on the network multiplying faster and more haphazardly than the predictable growth of traditional fixed and mobile endpoints, there is only one truly effective approach to cover all devices that generate cyber-risk: analyze network data.
Everyone in the industry knows IoT security is a mess. The industry has proven incapable of coping with cybersecurity problems as the commoditization of technology doesn’t allow for any unnecessary costs, including security.
I have previously argued for a world cybersecurity organization. The safety of connected devices that rapidly become an ingrained part of everyone’s life cannot be left to the consumers or the producers. Nor should it be left up to individual politically-driven governments to regulate because technologies don’t adhere to the laws of physical borders.
Luckily the U.S. is stepping up. In addition to several pending initiatives and bills aimed at regulating IoT and making it less of a mess, a settlement between the Federal Trade Commission (FTC) and the well-known home router vendor D-Link is the first real case in which the U.S. government compelled a technology company to adhere to stricter security standards. Per the settlement, D-Link must:
- Implement security planning, threat modeling and testing for vulnerabilities before releasing products.
- Continuously monitor and address security flaws.
- Provide automatic firmware updates.
- Accept vulnerability reports from security researchers.
- For 10 years obtain biennial, independent and third-party assessments of its software security program. The assessor must keep all documents it relies on for its assessment for five years and provide them to the commission upon request.
- Give the FTC authority to approve the third-party assessor D-Link chooses.
At face value, the settlement seems like it will lead to improved end-user security. It lacks certain basic measures, such as prohibiting hard-coded passwords and mandating encryption, but it is a significant step in the right direction.
Home router criticality and the industry’s ignorance
The U.S. government probably filed the lawsuit against D-Link as it realized the pivotal role a router plays in everybody’s home. By compromising the router, adversaries get control of the aorta of the connected household.
However, D-Link is not the only vendor that needs to step up its security measures. Linksys, Belkin, Netgear and other well-known providers of home routers suffer equally from what seems to be an industry-wide syndrome. Search these names alongside “vulnerabilities” to learn about the dire state of the industry.
The IoT Cybersecurity Improvement Act of 2019 seeks to force private industry to adhere to basic security by excluding vendors who don’t from selling to the government. This act should push the home router industry to step up.
What is next?
Given the importance of home routers, it makes sense for the government to start here. Hopefully this settlement sets a precedent for other industries to follow.
Citizens should not have to experience collateral damage before governments impose basic security hygiene. The effect of seeing similar settlements in various other industries could push IoT organizations to act while we wait for the IoT Cybersecurity Improvement Act, the National Institute of Standards and Technology and similar corrective measures.
With D-Link production centers in Taiwan and China, the regulations could seem like an act of trade war, but the charges were put forth in January 2017. Hopefully it serves the U.S. well that one of the worst-in-class players happens to be an international actor.
Software is bringing complex business logic and artificial intelligence to the edge, enabling enhanced applications and raising user expectations. But what changes should IoT executives make to their hardware-oriented business to ensure they’re not left behind?
In my previous article, I discussed how the reduced latency of 5G networks will cause a lot of software to move from end-user devices to the backend world of data centers and the cloud. But while the bulk of data processing for connected edge devices currently happens on the backend, the cloud won’t cut it for cutting-edge apps, because this class of applications requires instantaneous execution of action and analysis. This includes emerging IoT uses such as robotics, artificial intelligence, autonomous vehicles and manufacturing, where sending data back and forth to a central server is seconds too slow.
We’re still in the early days of this adoption, but with data volume and velocity soaring, you can expect to see a rush of real-time and business-critical software on edge devices. By 2020 the average end user will generate 1.5 gigabytes of data per day, according to CB Insights. The deluge of data puts increasing demands on bandwidth and slows down response times. This strain on bandwidth will only cause edge computing to rise in popularity.
Living on the edge
Edge computing is a prudent alternative to the centralized focus of the cloud. If the weather is bad, the power is out, the data center is on fire or someone hacks into the cloud, will your entire business shut down? Will your customers’ lives be in danger? The decentralized nature of the edge minimizes these risks.
For instance, a self-driving vehicle must make instantaneous decisions that ensure the safety of its occupants and that of nearby drivers, passengers, cyclists and pedestrians. Similarly, an autonomous train capable of making real-time decisions on routing, braking, track selection and energy usage can greatly reduce or even eliminate accidents caused by human error.
From the cloud to the edge
By bringing compute and storage systems as close as possible to the application, component, or device producing the data, edge computing significantly reduces processing latency. In 2014, Cisco coined the term fog computing, which essentially provides a standard for extending cloud networking capabilities to the edge. As David Linthicum, chief cloud strategist at Deloitte Consulting, puts it, “fog is the standard, and edge is the concept.” By facilitating the interaction of compute, storage and networking between end devices and backend data center and cloud systems, fog computing enables enterprises to push compute to the edge for better and more scalable performance.
Fog computing is often used in retail stores with point-of-sale registers. One good reason why is that when a customer uses a credit or debit card or smartphone to make a purchase, connectivity to the backend isn’t always available to complete the transaction. So the retailer will have a local server tucked away in the store, with a fog network allowing all the devices, software and components to communicate with one another. Similarly, factory floors use fog networks that enable automated machines to share data. When one machine in an assembly line completes its step in the manufacturing process, for example, it notifies the next machine down the line to start up its job and so on.
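That store-and-forward behavior can be sketched as a local queue on the fog node that accepts transactions while the backend is unreachable and drains once connectivity returns. The class and field names below are illustrative, not any particular point-of-sale product’s API.

```python
from collections import deque

class PosQueue:
    """Local fog-node queue: accept transactions even when the backend
    is unreachable, then drain them once connectivity returns.
    Illustrative sketch, not a real POS vendor's API."""
    def __init__(self):
        self.pending = deque()   # held locally while the link is down
        self.settled = []        # confirmed with the backend

    def record(self, txn, backend_up):
        if backend_up:
            self.settled.append(txn)   # settle immediately
        else:
            self.pending.append(txn)   # store locally for later

    def sync(self):
        """Forward everything held locally, in order."""
        while self.pending:
            self.settled.append(self.pending.popleft())

q = PosQueue()
q.record("sale-1", backend_up=True)
q.record("sale-2", backend_up=False)   # backend link is down
q.sync()                               # link restored; drain the queue
```

A production system would also handle durability and duplicate suppression, but the queue-and-drain pattern is the essence of why the store keeps taking payments through an outage.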
Edge technologies to watch
Some people may argue that edge computing is nothing new and that consumer edge devices have been around since the dawn of the pocket calculator. Perhaps, but edge computing has come a long way since then, and today it uses sophisticated protocols and technologies tailored to its unique characteristics.
Many edge companies use Message Queuing Telemetry Transport (MQTT), a popular, lightweight communications protocol for remote locations requiring a small code footprint or when network bandwidth is in high demand. The new standard, MQTT v5.0, has significant upgrades, including better error reporting and shared subscriptions.
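Part of what keeps MQTT lightweight is its hierarchical topic scheme, where subscriptions can use `+` (single-level) and `#` (multi-level) wildcards. A minimal matcher for those rules, written from the published specification, looks like this; in practice a client library such as paho-mqtt handles it for you.

```python
def topic_matches(filter_, topic):
    """Match an MQTT topic against a subscription filter.
    '+' matches exactly one level; '#' matches any remaining levels.
    Minimal sketch of the spec's matching rules."""
    f_parts, t_parts = filter_.split("/"), topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":                      # multi-level wildcard: match the rest
            return True
        if i >= len(t_parts):
            return False                  # filter is longer than the topic
        if f != "+" and f != t_parts[i]:  # '+' matches any single level
            return False
    return len(f_parts) == len(t_parts)

ok = topic_matches("factory/+/temperature", "factory/line3/temperature")
```

The single-character wildcards are one reason brokers can route huge numbers of topics with very little state per subscription.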
Advanced Message Queuing Protocol (AMQP) is also popular on the edge. AMQP is an open-standard messaging protocol that allows you to create message-based applications with components built with different languages, frameworks and operating systems.
Edge-based businesses are using HTTP far less than MQTT, AMQP and other protocols. By contrast, in the cloud world it’s almost a given that you’ll be using HTTP more than 90% of the time.
Containers are big on the edge, too. When people talk about software containers, they’re usually referring to cloud-based and other backend systems. But containers also make it easier to securely distribute software to edge environments and to run containerized apps on a lightweight framework designed for easy patching and upgrading.
Bringing AI to the edge
A lot of business logic has already migrated to the edge, but AI on the edge is still in its infancy. Complex algorithms needed to run AI applications often require the powerful processing of data centers and cloud systems, making these apps less useful on edge devices. This is changing fast, however. By 2022, 80% of smartphones shipped will have built-in AI capabilities, up from 10% in 2017, Gartner forecasts.
However, other factors may limit AI’s effectiveness in the edge world, at least in the near future. Current AI-driven applications often lack sufficient data — the full context — to make life-impacting or business-critical decisions. And when an AI app makes decisions based on partial data, the results aren’t good.
Here’s a simple example. Say you’re using an AI-powered assistant on your smartphone. This app can read your work calendar, and whenever a colleague sends an email requesting a meeting, the AI assistant looks at your calendar and schedules a good time to meet.
For this process to work flawlessly, however, the AI app needs full context — in this case, access to your entire daily schedule — to make intelligent decisions. So what if you’ve made plans to meet a friend at 4 p.m. on Friday, but you never put personal appointments on your work calendar? Your AI helper, working with insufficient data, may schedule a work meeting for that time because it lacks the full context of your professional and personal schedule.
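The failure mode is easy to reproduce: a scheduler that only sees the work calendar will happily book over an unseen personal commitment. The calendars and slot labels below are made-up data for illustration.

```python
def schedule_meeting(visible_calendar, candidate_slots):
    """Pick the first candidate slot that is free on the calendars the
    assistant can actually see. Made-up data; the point is what happens
    when context is missing."""
    busy = set(visible_calendar)
    for slot in candidate_slots:
        if slot not in busy:
            return slot
    return None

work_calendar = ["fri-10:00", "fri-14:00"]
personal_calendar = ["fri-16:00"]          # invisible to the assistant
slot = schedule_meeting(work_calendar, ["fri-14:00", "fri-16:00"])
conflict = slot in personal_calendar       # booked over the 4 p.m. plan
```

The algorithm is perfectly correct given its inputs; the bad outcome comes entirely from the partial view, which is the point about edge AI lacking full context.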
Monitoring the edge
Performance monitoring in an edge environment presents a new set of challenges. To gain full context of your operation, you must monitor the entire ecosystem — network, cloud, applications, security, data center, IoT devices and the customer experience. An effective monitoring solution must examine everything, not just a subset of the overall network, to produce real-time insights, trigger actioning systems and automate operations.