This is the second piece in a three-part series. Read the first piece here.
I have a belief that’s unorthodox in the data science world: explainability first, predictive power second, a notion that is more important than ever for companies implementing AI.
Why? Because AI is the hottest technology on the planet, and nearly every organization has a mandate to explore its benefits by applying AI models developed internally or acquired as part of a package solution from a third-party provider. Yet a recent study by London venture capital firm MMC found that 40% of European startups classified as AI companies don’t actually use AI in a way that is material to their businesses. How can these startups and the customers that buy from them rely on the predictive power of their AI technology when it’s not clear whether the models delivering it are truly AI?
Explainability is everything
AI that is explainable should make it easy for humans to find the answers to important questions including:
- Was the model built properly?
- What are the risks of using the model?
- When does the model degrade?
The latter question illustrates the related concept of humble AI, in which data scientists determine the suitability of a model’s performance in different situations, or the situations in which it won’t work because of low data density and a lack of historical data.
We need to understand AI models better because when we use the scores a model produces, we assume that the score is equally valid for all customers and all scoring scenarios. Often this may not be the case, which can easily lead to all manner of important decisions being made based on very imperfect information.
Balancing speed with explainability
In the race to build predictive models as quickly as possible with open source tools that many users don’t fully understand, companies rush to operationalize AI models that are neither understood nor auditable. In my data science organization, we use two techniques — blockchain and explainable latent features — that dramatically improve the explainability of the AI models we build.
In 2018 I filed a patent application (U.S. 16/128,359) on using blockchain to ensure that all of the decisions made about a machine learning model, a fundamental component of many AI solutions, are recorded and auditable. The application describes how to codify analytic and machine learning model development using blockchain technology to associate a chain of entities, work tasks and requirements with a model, including testing and validation checks.
The blockchain substantiates a trail of decision-making. It shows if a variable is acceptable, if it introduces bias into the model and if the variable is used properly. We can see at a very granular level the pieces of the model, the way the model functions and the way it responds to expected data, rejects bad data or responds to a simulated changing environment.
This use of blockchain to orchestrate the agile model development process can be used by parties outside the development organization, such as a governance team or regulatory units. In this way, analytic model development becomes highly explainable and decisions auditable, a critical factor in delivering AI technology that is both explainable and ethical.
An explainable multi-layered neural network should be easily understood by an analyst, a business manager and a regulator. Yet neural network models have complex structures: even the simplest neural net with a single hidden layer produces a latent feature that makes the model hard to understand, as shown in Figure 1.
I have developed a methodology that exposes the key driving features of the specification of each hidden node. This leads to an explainable neural network. Forcing hidden nodes to only have sparse connections makes the behavior of the neural network easily understandable.
Generating this model leads to the learning of a set of interpretable latent features. These are non-linear transformations of a single input variable or interactions of two or more of them together. The interpretability threshold of the nodes is the upper threshold on the number of inputs allowed in a single hidden node, as illustrated in Figure 2.
As a consequence, the hidden nodes get simplified. In this example, hidden node LF1 is a non-linear transformation of input variable v2, and LF2 is an interaction of two input variables, v1 and v5. These nodes are considered resolved because the number of inputs is below or equal to the interpretability threshold of two in this example. On the other hand, the node LF3 is considered unresolved.
To resolve an unresolved node, thus making it explainable, we tap into its activation. A new neural network model is then trained. The input variables of that hidden node become the predictors for the new neural network, and the hidden node activation is the target. This process expresses the unresolved node in terms of another layer of latent features, some of which are resolved. Applying this approach iteratively to all the unresolved nodes leads to a sparsely connected deep neural network, with an unusual architecture, in which each node is resolved and therefore is interpretable, as shown in Figure 3.
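The resolution loop described above can be sketched in a few lines of code. This is an illustrative toy, not the patented method itself: `train_subnetwork` is a hypothetical stand-in for fitting a new sparse neural net whose predictors are a node’s inputs and whose target is that node’s activation, and the node and variable names mirror the article’s example.

```python
INTERPRETABILITY_THRESHOLD = 2  # max inputs allowed in a single hidden node

def is_resolved(node_inputs, threshold=INTERPRETABILITY_THRESHOLD):
    """A latent feature is interpretable when it uses few enough inputs."""
    return len(node_inputs) <= threshold

def resolve(node_name, node_inputs, depth=0, max_depth=5):
    """Iteratively expand unresolved nodes into further latent features."""
    if is_resolved(node_inputs) or depth >= max_depth:
        return {node_name: sorted(node_inputs)}
    resolved = {}
    for i, sub_inputs in enumerate(train_subnetwork(node_inputs)):
        resolved.update(resolve(f"{node_name}.{i}", sub_inputs, depth + 1))
    return resolved

def train_subnetwork(inputs):
    """Placeholder: split an unresolved node's inputs into sparser latent
    features. A real system would train a neural network here, with the
    node's activation as the target."""
    inputs = sorted(inputs)
    mid = len(inputs) // 2
    return [inputs[:mid], inputs[mid:]]

# Mirroring the article's example: LF1 and LF2 are already resolved,
# while LF3 draws on too many inputs and must be expanded.
hidden_nodes = {"LF1": ["v2"], "LF2": ["v1", "v5"], "LF3": ["v1", "v3", "v4"]}
for name, inputs in hidden_nodes.items():
    print(name, resolve(name, inputs))
```

Applied repeatedly, this is what yields the sparsely connected deep network of Figure 3: every node in the final structure respects the interpretability threshold.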
The bottom line
Together, explainable latent features and blockchain make complex AI models understandable to human analysts at companies and regulatory agencies, a crucial step in speeding ethical, highly predictive AI technology into production.
Keep an eye out for the third and final blog in my AI explainer series on the three Es of AI on the topic of efficient AI.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
This is the first in a two-part series.
The meteoric rise of IoT has created a popular new attack surface for cybercriminals. The “2019 SonicWall Cyber Threat Report” indicated that there were 32.7 million cyberattacks targeting IoT devices in 2018, a 217.5% increase over 2017. What is interesting, however, is that in the Attivo Networks 2018 Threat Detection Survey findings, securing IoT ranked just sixth on respondents’ list of attack surface concerns. This is possibly because IoT, medical IoT and other interconnected devices commonly fall outside of a security team’s responsibilities.
This circumstance is cause for concern. Adding to the gravity of the situation, these devices often have minimal security requirements, are governed by laws preventing third-party security adjustments and are subject to regulation that has been slow in setting standards. With Gartner predicting that 25 billion connected devices will be in use by 2021, security teams will need to proactively search for new ways to both secure the ever-growing number of potential entry points and quickly identify attacks that use these devices for easier access to the network.
Emerging attack surfaces present new and often different challenges for defenders to overcome. Organizations must understand how security teams can better secure each attack surface.
Flexera’s “RightScale 2019 State of the Cloud Survey” indicated that 94% of enterprises utilize the cloud, with 84% of enterprises employing a multi-cloud strategy. The rapid growth of cloud and multi-cloud environments has presented organizations with shared security models to go along with both familiar and unique security challenges, so it is not surprising that 62% of Attivo Threat Detection Survey respondents listed securing the cloud as the top attack surface of concern. What compounds this particular situation is the fact that many of today’s security tools rely on virtual network interfaces or traditional connections to servers, databases and other infrastructure elements, which are no longer available in serverless computing environments.
Unsecured APIs, which allow cloud-based applications to connect, represent a significant new threat vector for cloud users, and shadow APIs compound the danger. Similarly, the rise of DevOps comes with a new set of vulnerabilities, including the proliferation of privileged accounts. Access to applications, embedded code and credential management all require a different assessment of risk and of how to secure them. Given the fluidity of the environment, quickly detecting credential theft and lateral movement gains importance. Deception technology represents an increasingly popular approach here, helping detect policy violations, unauthorized access and identity access management issues involving credentials, exposed attack paths and attempted access to functions or databases.
Increased focus on edge computing has driven increased traffic to data centers and presents new security challenges as data processing and functionality move closer to end users. With more and more information stored in these data centers and the growing popularity of smaller, distributed data centers, there is an increased need to reassess security frameworks and their fit for these new architectures. The arrival of 5G is also likely to fuel the growth of edge computing. In this environment, security, privacy and storage management will need additional attention. Scalability of security systems will be critical as data centers increase in size and given the distributed nature of these networks.
Whether on-premises or in a remote data center, threats may come in both internal and external forms and may be intentional or unintentional. Incomplete or inadequate screening of new employees, lack of consistent internal protocols and limited access control can create an unsafe environment for data and operations. Lack of backups and disaster recovery services can also render information vulnerable to disasters — natural or otherwise. Securing the complete digital ecosystem is about more than just fitting today’s needs. The way those needs are rapidly evolving will require assessing security frameworks and their fit in the various environments, as well as an overlay view of how an attacker would attack them. Once organizations have completed identifying risk, they can move forward with establishing a full security fabric comprised of prevention and early detection tools, as well as the programs for faster and automated incident response.
Location has become a critical component across a wide variety of organizations as part of their ever-expanding IoT implementations. As companies get more sophisticated in their knowledge of what IoT can do for their business, they’re moving beyond basic applications to use IoT to better manage key business processes.
Knowing the precise location of a person — or object — by itself, and in relation to other people and objects helps organizations better understand business processes such as inventory control in a warehouse environment as well as generating advanced player statistics in a sports environment.
While accurate location capabilities are on every company’s wish list, each has a different idea of what location accuracy means. Is location within a meter accurate enough? For some use cases and applications, it is perfectly sufficient. However, other use cases may demand much more precise levels of accuracy, perhaps even as accurate as 10 centimeters.
There are a few distinct levels of accuracy, and several technologies and methodologies by which to achieve each level. The gating factor is the use case for which location capabilities are required. Understanding what the short-term and longer-term needs are for location — and taking into consideration total cost of ownership (TCO) of the solution — will be critical to making the correct choice.
While location capabilities are important for many types of organizations, looking at the need for location services in a single environment, such as a warehouse, makes it easy to understand both the benefits and the use cases the different levels of location can enable. There are three general levels.
Detecting the presence of a person or object is the simplest location-based solution, designed to answer the question: Is it present or not? Unfortunately, it’s also the least accurate location level. Presence detection is commonly used in a warehouse to determine whether an item or a pallet has arrived. With an advanced location system based on Bluetooth technology and the angle of arrival methodology, a locator is placed at the entrance of the warehouse to act as a gate through which each tagged item passes. The locator identifies the tag by its unique ID, measures the angle of the tag’s signal and calculates the direction of motion for the tag, thereby determining whether it’s entering or exiting the warehouse. Some real-time location systems (RTLS) can determine presence and provide real-time information within the warehouse because of their long communications range. However, many are limited by unsophisticated solutions that work well outside or inside, but not both.
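The gate logic described above can be sketched simply. This is an illustrative example only: the tag ID, the angle readings and the convention that angles sweep negative-to-positive on entry are all assumptions, not part of any vendor’s actual API.

```python
def direction_of_motion(angles_deg):
    """Infer direction from a time-ordered series of angle-of-arrival
    (AoA) readings at the gate locator. Assumed convention: the angle
    sweeps from negative to positive as a tag enters the warehouse."""
    if len(angles_deg) < 2:
        return "unknown"
    delta = angles_deg[-1] - angles_deg[0]
    if delta > 0:
        return "entering"
    if delta < 0:
        return "exiting"
    return "stationary"

# A hypothetical tagged pallet passing the gate locator
readings = {"tag-0427": [-38.0, -12.5, 4.0, 31.5]}
for tag_id, angles in readings.items():
    print(tag_id, direction_of_motion(angles))  # tag-0427 entering
```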
RFID technology can be used for presence detection by tracking objects transiting through the gates. However, scaling beyond this use case may be limited by the cost of architecting a solution using multiple gateway readers. Its inability to track items that are already inside the warehouse once the system is installed can also negatively impact both its cost-effectiveness and its usefulness as a location solution.
Proximity-based solutions are designed to identify both the presence and location of items. Proximity detection typically uses a combination of high-accuracy positioning in areas where the use case demands it, and low-accuracy presence detection in areas where precision is less of a requirement. These solutions are ideal when uniformly accurate coverage is not needed.
This is often the case in the warehouse example, where approximate location information may be adequate for many use cases. With an RTLS that utilizes Bluetooth technology and the angle of arrival methodology, locators can be placed strategically within the warehouse, creating zones that track items in real time as they enter or leave each zone. In some deployments, locators can be positioned at strategic choke points, providing basic movement tracking. Optimizing the deployment and density of locators to enable higher location accuracy only where it’s needed provides a strong TCO without limiting potential use cases.
Other technologies that are commonly used for proximity-level location accuracy include Wi-Fi and Bluetooth Received Signal Strength Indication-based beaconing. However, they are each too inflexible to allow for deployments with non-uniform locator density. This makes it difficult to manage costs and deliver the right level of accuracy for each use case that is supported.
Solutions based on positioning, the highest level of location accuracy, are designed to reliably locate the exact position of a tracked item in real time, both inside a warehouse and nearby, such as in a storage yard. Positioning unlocks the full potential of location-based solutions because of its flexibility and very precise levels of accuracy.
In warehouses, companies often need to know the precise and real-time location of items, both stationary and in motion. Common warehousing applications such as worker safety, collision avoidance, inventory management and advanced workflow optimization all require knowing the precise location of people and objects. With an RTLS that utilizes Bluetooth technology and the angle of arrival methodology, businesses can uniformly cover the area of interest with locators, so that the system can reliably calculate the accurate location of tags in real time.
Having this level of accuracy, and the flexibility to also support presence and proximity, satisfies nearly all of today’s use cases as well as many potential new ones, some of which may not have been thought of yet because of the limitations of location technology and cost considerations.
An additional technology used for use cases that demand high levels of location accuracy is ultra-wideband (UWB). However, UWB is only useful for high-accuracy use cases, because it cannot be scaled down, technologically or cost-wise, to cover presence and proximity requirements, limiting its effectiveness as a solution that covers all use cases. The high cost of tags and the limits to radio certifications across different geographical areas also limit UWB’s usability as a positioning solution.
The warehouse example showcases the reach, flexibility and accuracy of the different types of solutions available for determining location. Organizations of all types and sizes, from manufacturing and supply chain and logistics to healthcare and retail, have already established a wide variety of use cases where knowing the location of an object — or person — provides tremendous business value. The specific level of accuracy they need depends on the use case — or use cases — they need to support across the business; today and in the future. An RTLS that covers all levels of accuracy, multiple use cases, and is scalable for future growth provides the most attractive TCO and best return on investment.
North American manufacturers that want to pursue multiple industrial internet of things (IIoT) initiatives and scale the results throughout their organizations often face significant challenges, according to a new independent survey of more than 125 manufacturers primarily in heavy industry and automotive sectors.
The survey from Software AG also found that manufacturers have objectives that span both products and production, but are unable to reach their predicted value because of known and unknown complexities.
IIoT offers new revenue if organizations can overcome obstacles
Organizations clearly prioritize new revenue generation for IIoT projects, with 84% of automotive and heavy industry manufacturers in the survey selecting monetization of products as a service as the most important area of IIoT. Optimizing production is also viewed as a top priority for 58% of heavy industry and 50% of automotive manufacturers. Historically, the primary use for IIoT has been predictive maintenance, but in this survey, respondents did not view it as important as monetization or operational optimization.
This is because the vast majority of manufacturers report that their IIoT investments are not adding data or value to other parts of the organization. Despite the fact that 80% of all respondents identified that processes around IIoT platforms need to be optimized to stay competitive, very few are doing this. These organizations face obstacles in obtaining and sharing IIoT data that make optimization difficult. Fifty-six percent of automotive manufacturers consider IT/OT integration as the most challenging task that has prevented them from fully realizing the ROI from IIoT investments.
Analytics are equally difficult to use
More than 60% of the manufacturers surveyed had as much difficulty defining threshold-based rules as using predictive analytics. This means that simple if-then statements, which any associate can create, are giving organizations nearly as much trouble as predictive analytics, which rely on complex algorithms that require the expertise of a data scientist. While neither task is considered simple, leveraging predictive analytics was rated as only slightly more difficult than condition-based rules.
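For context, a threshold-based rule of the kind the survey asked about is just an if-then check against a configured limit. The sensor names and limits below are illustrative assumptions, not values from the survey.

```python
# Hypothetical per-metric limits for a piece of plant equipment
THRESHOLDS = {"vibration_mm_s": 7.1, "temperature_c": 85.0}

def check_thresholds(reading, thresholds=THRESHOLDS):
    """Return the names of any metrics that exceed their limits."""
    return [metric for metric, limit in thresholds.items()
            if reading.get(metric, 0.0) > limit]

reading = {"vibration_mm_s": 9.3, "temperature_c": 72.0}
print(check_thresholds(reading))  # ['vibration_mm_s']
```

That such a simple construct still troubles the majority of respondents underlines how much of the difficulty lies in obtaining and integrating the data, not in the analytics themselves.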
Manufacturers place a high value on IIoT, but cannot spread their existing IIoT innovations and investments across their organizations. They can solve this dilemma by investing in the right IT/OT integration strategy and IoT technology. Manufacturers can use four best practices to scale their IIoT investments across their enterprises.
- Ensure clear collaboration between IT and the business by following a transparent, step-by-step approach that starts focused and has clear near-term and long-term objectives to scale.
- Give IT the ability to connect at speed with a digital production platform that is proven to be successful.
- Leverage a GUI-driven, consistent platform that supports all potential use cases and an ecosystem of IT associates, business users and partners.
- Enable the plant or field service workers to work autonomously without continual support from IT through GUI-driven analytics, centralized management and easy, batch device connectivity and management.
IoT presents opportunities for growth and new possibilities for any organization across all industries. With IoT technology, workers can monitor a patient’s health from another city or capture information on soil temperature from a field three hectares away.
Substantial changes will be possible in sectors including finance, energy and transportation. IoT technology could radically transform the way modern businesses function by increasing efficiency and reducing costs.
While the possibilities seem endless, in order for organizations to succeed, they need to look at the current state of IoT and understand the steps it will take to make the future a reality.
The current state of IoT
IoT is rapidly becoming business as usual as it transforms industries and businesses all over the world and more organizations adopt the technology. According to Vodafone’s IoT Barometer 2019, 34% of companies are using IoT in some capacity, and confidence in the technology is only growing. Companies believe IoT technology provides a competitive advantage, and 74% of adopters in Vodafone’s report believe that within five years, companies without IoT deployments will fall behind. Organizations that adopt IoT projects in their core systems see positive results.
The integration and deployment of IoT brings an array of benefits, from cost reductions to improved safety, increased responsiveness and even the creation of entirely new revenue streams, ultimately making businesses smarter and more efficient.
As data becomes more valuable to corporations, 80% of advanced adopters use analytics to improve business decision-making. The data collected from IoT devices is essential to steady, continued digital transformation.
Increased IoT integration correlates with greater business advantages at every stage of an organization’s IoT journey. It’s no longer a case of whether to implement IoT, but when and how.
Barriers to IoT adoption
Since last year, adoption of IoT has increased by 5%, according to Vodafone. A majority of companies still have yet to implement an IoT strategy due to looming doubts around the cost and complexity of deployment. What these companies might not realize is that starting out in IoT is easier than ever.
There are current IoT solutions and platforms that make it fast and easy to deploy and manage these types of solutions effectively. Organizations can even look to off-the-shelf IoT products that offer low-risk and quick deployment. The widespread uses of IoT applications make it much easier to deploy complex IoT projects with existing infrastructure.
Adopters no longer see security as a major barrier. Sixty-five percent of adopters reported that their security concerns with IoT are no greater than with other new technologies. Even so, security and privacy issues should remain a top priority when considering IoT solutions. Taking a proactive approach to ensure that devices are tested and the integrity of the network is maintained is essential.
There is a strong correlation between sophistication and the benefits adopters have seen from using IoT. This means that adopters that developed their IoT strategy well and are progressing in its implementation reap the greatest rewards.
Businesses can assess themselves to see how they measure up in comparison to others in their sector with the IoT Sophistication Index tool. This feature provides personalized recommendations to help businesses improve their IoT sophistication and ultimately their ROI — wherever they are in their journey of adopting IoT.
IoT innovations and the future
In the future, intelligent assets will communicate with the world around them. Even though sometimes that’s hard to imagine, businesses need to act now to ensure they switch to a digital-first mindset to keep thriving.
IoT is that change agent that allows a business to compete in a digital world. Treating IoT as a critical part of a business’ digital strategy, integrating it with business systems, and selecting appropriate connectivity are all essential components of a successful IoT implementation that will deliver tangible results.
Cybercriminals are constantly probing consumer IoT devices such as home routers, IP cameras and printers to find access points into the network. Once they have access, they can disrupt network functions, gather critical information and deliver malicious payloads. At the other end of the spectrum, cybercriminals also probe critical infrastructures to target high-end industrial control systems (ICS) and SCADA technologies for the same purposes.
There is also a middle ground that criminals are focusing on: A growing line of control systems for residential and small business use. These smart systems have garnered comparably less attention than their industrial counterparts, but that seems to be changing.
Targeting control devices
Data revealed a small but significant shift in attention toward this middle ground of control systems, according to Fortinet’s Q2 2019 Threat Landscape Report.
A signature related to building management solutions was found to be triggered in 1% of organizations, which may not seem like much, but it is higher than typically seen for ICS or SCADA products.
Securing control systems
Imagine the harm a resourceful criminal could do with access to any number of these types of devices, including environmental controls, security cameras and safety systems. This is why the security of smart residential and small business systems deserves elevated attention.
Cybercriminals are watching closely for opportunities to commandeer control devices in homes and businesses. Unfortunately, cybersecurity in these venues, especially for devices traditionally considered to be isolated from traditional attacks, is not always straightforward and sometimes falls outside the scope of traditional IT systems.
However, securing these control systems is clearly necessary. The nature of IoT, including its endless number of endpoints and ever-growing volume of data and application traffic, make the task daunting. Fortunately, segmentation and network access control (NAC) solutions are a reliable foundational strategy to build on to protect company resources. When these solutions are in place, visitors and unauthorized devices seeking network access are connected to a guest network by default, critical resources — such as financial data — are isolated from the rest of the network, and all sensitive communications are automatically encrypted.
What’s so beneficial about segmentation is that when countermeasures fail in one part of the network, segmentation protects other areas from being compromised. Network and device segmentation should address:
- Access management: NAC combined with intent-based segmentation enables businesses to identify devices and impose strict access controls based on user, role, device type or even applications; a critical risk management solution for today. As devices either initiate a new network connection or as traffic or applications attempt to cross network segments, access control combined with inspection helps establish secure perimeters around critical resources by identifying and preventing the spread of malware.
- Risk assessment: Businesses can use data, devices, users, locations and threat intelligence feeds — along with a host of other criteria — to identify threat categories and assess risk in real time.
- Policy and device management: Seeing all devices and their related activity, including IoT devices, allows IT teams to appropriately set policies to manage risk across the network.
- Control: By treating those parts of the network that interact with IoT devices differently, companies can better control risks from a policy standpoint.
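The segmentation and access-control principles above can be sketched in code. This is a hedged illustration of the general pattern, not any vendor’s NAC API: the segment names, device types and allowed pairs are all hypothetical.

```python
# Hypothetical policy: map known device types to network segments.
SEGMENT_POLICY = {
    "hvac_controller": "building-systems",
    "ip_camera": "surveillance",
    "finance_server": "restricted-finance",
}

def assign_segment(device_type):
    """Unknown or unauthorized devices land on the guest network by default."""
    return SEGMENT_POLICY.get(device_type, "guest")

def allow_cross_segment(src_segment, dst_segment, allowed_pairs):
    """Traffic crossing segments is permitted only for explicitly allowed pairs,
    so a compromise in one segment cannot spread to critical resources."""
    return (src_segment, dst_segment) in allowed_pairs

allowed = {("building-systems", "surveillance")}
print(assign_segment("ip_camera"))                                  # surveillance
print(assign_segment("smart_toaster"))                              # guest
print(allow_cross_segment("guest", "restricted-finance", allowed))  # False
```

The default-deny posture is the point: isolation of critical resources such as financial data falls out of the policy rather than depending on every device being well-behaved.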
Forewarned is forearmed
A veritable Pandora’s box of threat vectors has been opened in most networks and cannot be closed. Instead, these vectors must be constantly monitored to help businesses understand the cyber risks they face.
One of the latest vectors is attacks on IoT-powered control systems within businesses. Access to one device in the system may grant access to any other device in that system, with the potential for significant business disruption. By segmenting their networks, organizations will limit exposure should an intruder get in and keep their critical assets secure.
There is a lot of buzz around the upcoming 5G standard for mobile device communication. At first glance, this will be great for individual smartphones, tablets, mobile hotspots and more, but the reality is that 5G will prove to be a bigger, fundamental enabler for IoT solutions.
I also anticipate that new challenges will be introduced. Before we get to the challenges, let’s wrap our heads around the good news that comes with 5G.
How 5G improves IoT projects
The main benefits of 5G will be enhanced bandwidth and reduced latency for devices defined by the 3rd Generation Partnership Project (3GPP). For IoT uses, this will introduce options for many critical applications.
One emerging area for IoT will be effectively gathering real-time information. 3GPP has two types of technologies that will help IoT scale: enhanced Machine-Type Communication (eMTC) and Narrowband IoT (NB-IoT). Both focus on reducing complexity, supporting better device density and improving power efficiency. eMTC is associated with real-time items, such as wearables and personal IoT use cases. NB-IoT focuses on solutions where latency is more tolerable and a lower amount of data needs to be transferred. Additionally, some of these solutions can have more than 10 years of battery life. Couple mobility with a battery like this and organizations can overcome some of the barriers to implementing an IoT solution.
To put these changes in perspective, both eMTC and NB-IoT implementations will benefit. Qualcomm is very interested in the milestone that 5G represents, and has produced a visual roadmap of what 5G will enable with 3GPP capabilities. All the benefits — better positioning, better battery life, better real-time uses, serious scale and more, coupled with better performance — go beyond what IT pros expected just a few years ago.
IoT faces potential challenges with 5G
I mentioned there are some considerations to pay attention to. When we make these types of changes and introduce this type of scale, the backend solution better be ready to support it. The amount of data, data transfer and the number of devices connected will introduce some particular concerns that need to be taken into account.
I would go so far as to say that, in many situations, the only practical way to build IoT solutions at serious scale on 5G is through the hyperscale public clouds. Of course, a strong recommendation like that requires answering detailed questions around the specific use case. The reality is that a solution that deploys thousands of IoT devices providing real-time data or rich media could saturate the communication lines and storage resources of a traditional data center quite rapidly. The hyperscale public clouds will provide the scalability to match this significant change coming for IoT solutions.
5G is a big deal, and it’s more than just what we will keep in our pocket. Because of that, we need to consider the business impact of a 5G solution. Will the IoT solution powered by 5G provide an overall benefit? Will the IoT solution powered by 5G uncover bottlenecks or introduce capacity constraints to existing services, platforms and resources?
My practical advice is to ensure that the cloud is part of any 5G-powered IoT solution organizations choose from the start. This will help organizations avoid complicated problems later that can put the value of the whole IoT solution at risk.
By 2025, 41.6 billion IoT devices are expected to generate 79.4 zettabytes of data, and the collective sum of the world's data will be 175 ZB, according to IDC. Essentially, roughly 45% of the world's data will be generated by IoT. By 2025, 70% of data generated by IoT applications will be processed outside the conventional data center, according to Gartner.
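As a quick sanity check, the IoT share implied by those IDC figures is simple arithmetic on the numbers above:

```python
# IDC forecast figures for 2025, taken from the text above
iot_data_zb = 79.4      # data generated by IoT devices, in zettabytes
world_data_zb = 175.0   # collective sum of the world's data, in zettabytes

iot_share = iot_data_zb / world_data_zb
print(f"IoT share of world data: {iot_share:.1%}")  # 45.4%
```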
Edge computing in IoT
Considering the amount of data generated by IoT, it is a no-brainer that the data needs to be processed closer to where it is generated. This model of computing is known as edge computing, and it provides significant advantages over the conventional cloud computing model.
Edge computing is well-positioned to take the challenges of IoT head-on. The latency issues found in cloud computing are mitigated by edge computing's local data processing. The case for edge computing becomes even more pronounced when the communication channel to the cloud is unreliable. Edge computing brings long-term efficiency to data processing in IoT applications, which makes its adoption all but inevitable.
Elements of edge computing
A few elements of edge computing include:
- Computing devices. Machine learning algorithms running on computing devices process the data generated by IoT devices. A computing device can be a small-form-factor server or an embedded system-on-chip board.
- Data storage devices. Data can be stored locally for later analysis or to understand real-time data behavior. Data can also be forwarded to a central data center.
- Communication infrastructure. IoT devices exchange data with computing and storage devices over a reliable communication infrastructure.
Edge computing also requires other technology, such as regulated power supplies, optional battery backup and optional cooling systems.
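To make the division of labor among these elements concrete, here is a minimal sketch, with hypothetical sensor readings and an arbitrary alert threshold, of the common pattern: process raw readings locally on the edge device, keep them in local storage, and forward only a compact summary to the central data center:

```python
from statistics import mean

def process_at_edge(readings, alert_threshold=80.0):
    """Process raw sensor readings locally; return only a compact
    summary suitable for forwarding to a central data center."""
    # Local storage: in a real deployment this would be written to
    # disk or an embedded database on the edge device.
    local_store = list(readings)

    summary = {
        "count": len(local_store),
        "mean": mean(local_store),
        "max": max(local_store),
        # Flag readings that need attention in near real time.
        "alerts": [r for r in local_store if r > alert_threshold],
    }
    return summary

# Thousands of raw readings stay at the edge; only the summary travels.
print(process_at_edge([71.2, 75.0, 82.5, 79.9]))
```

The point of the sketch is the shape of the data flow, not the statistics: the raw stream never leaves the site, which is what relieves the communication and storage pressure described above.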
Some edge computing sites are remotely located, and it's possible that each site might not have qualified IT staff. If this is the case, it becomes essential to be able to connect to devices at these sites easily and reliably. This connectivity gives IT staff the ability to manage and control devices remotely.
Devices might malfunction, and as a result edge computing applications running on those devices will likely malfunction as well. IT staff might want to look at logs, statistics, alerts and resource consumption patterns on the devices. On many occasions, IT staff might want to upgrade the device system software and applications, apply security patches to the devices or update the learning model of machine learning applications. IT staff might also need to change the configurations, restart devices, restart applications, or delete and modify logs and statistics to bring failed systems back to a normal operating state.
There are four important considerations IT pros should take into account when creating a reliable remote connectivity solution for an edge computing deployment.
Security is one of the most critical aspects of any design. Security must never be an afterthought; it must be part of the solution. The remote connection channel must be secured using strong encryption and authentication algorithms. Public key infrastructure technology is one of the most widely adopted solutions, and SSH- or SSL-based tunnels are also popular for remote connectivity.
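As a rough illustration of the encryption-plus-authentication point, Python's standard library can establish a mutually authenticated TLS channel to a device. The hostname, port and certificate paths below are placeholders, not a real deployment:

```python
import socket
import ssl

def open_secure_channel(host, port, ca_cert, client_cert, client_key):
    """Open a TLS connection to a remote edge device, verifying the
    device against a private CA and presenting a client certificate
    (PKI-style mutual authentication)."""
    context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH,
                                         cafile=ca_cert)
    # Present our own certificate so the device can authenticate us too.
    context.load_cert_chain(certfile=client_cert, keyfile=client_key)
    raw = socket.create_connection((host, port))
    return context.wrap_socket(raw, server_hostname=host)

# Hypothetical usage; the endpoint and file paths are placeholders.
# channel = open_secure_channel("edge-device.example.com", 8883,
#                               "ca.pem", "operator.pem", "operator.key")
```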
Edge computing devices themselves must not open up additional endpoints, or ports, that are exposed to the public internet. Doing so can cause serious security vulnerabilities and significantly increase the attack surface.
Organizations must follow strong security policies for usernames and passwords on each edge computing device, especially when the devices are remotely located outside the enterprise IT network or data centers. While the edge computing devices can be physically secured, the interfaces on them, whether wired or wireless, may be exposed to attackers. A system is only as secure as the weakest device in the edge network.
Identifying the devices in edge computing sites can be tricky, especially if the devices are connected over a Global System for Mobile Communications network or sit behind a network address translator or firewall. In these cases the edge computing devices will not have a globally addressable IP address, and the remote connectivity solution must account for this. The solution must also provide an easy way of mapping a device's ID to its connectivity endpoint.
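One common way to handle the ID-to-endpoint mapping, sketched below with hypothetical names, is a rendezvous-style registry: each device periodically announces itself outbound (which traverses NAT), and the registry records whatever endpoint the device last appeared from under its stable ID:

```python
import time

class DeviceRegistry:
    """Map stable device IDs to their last-seen network endpoints.
    Devices behind NAT check in outbound, so the registry always
    holds a usable return path for each device."""

    def __init__(self):
        self._endpoints = {}

    def announce(self, device_id, host, port):
        # Called when a device checks in; records its current endpoint.
        self._endpoints[device_id] = {"host": host, "port": port,
                                      "last_seen": time.time()}

    def endpoint_for(self, device_id):
        # Look up how to reach a device by its stable ID.
        entry = self._endpoints.get(device_id)
        return (entry["host"], entry["port"]) if entry else None

registry = DeviceRegistry()
registry.announce("edge-0042", "203.0.113.7", 45001)
print(registry.endpoint_for("edge-0042"))  # ('203.0.113.7', 45001)
```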
Connect at scale
Most edge computing solutions involve large-scale deployments of devices across multiple edge computing sites. These sites might be geographically dispersed, and the remote connectivity solution must account for this. IT staff must be able to connect to a large set of devices and perform operations on them. Remote connectivity solutions that maintain a persistent connection to each device may not scale, because a persistent connection requires connection state to be maintained and refreshed at regular intervals. A solution based on on-demand connectivity has a better prospect of scaling.
Managing remote edge computing sites with no local IT staff can create significant overhead, so automation is the key enabler for efficient operation. It is desirable to create rules that clear logs and bring systems back to a normal operating state automatically. Remote connectivity solutions must support programmable interfaces, such as APIs, which IT staff can use to create If This Then That rules. Programmable APIs can also be used to pull statuses and statistics at regular intervals and feed the data to operational management systems.
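A minimal sketch of the If This Then That idea, using hypothetical conditions and actions rather than any particular vendor's API:

```python
class RuleEngine:
    """Evaluate simple if-this-then-that rules against device status."""

    def __init__(self):
        self._rules = []  # list of (condition, action) pairs

    def add_rule(self, condition, action):
        self._rules.append((condition, action))

    def evaluate(self, status):
        # Run every action whose condition matches; return what fired.
        return [action(status)
                for condition, action in self._rules
                if condition(status)]

engine = RuleEngine()
# If disk usage is high, then clear logs (hypothetical remediation).
engine.add_rule(lambda s: s["disk_pct"] > 90,
                lambda s: f"clear logs on {s['device_id']}")
# If the application is down, then restart it.
engine.add_rule(lambda s: not s["app_running"],
                lambda s: f"restart app on {s['device_id']}")

print(engine.evaluate({"device_id": "edge-0042",
                       "disk_pct": 95, "app_running": False}))
```

In practice the status dictionary would be populated by polling the management API at regular intervals, and the actions would call back into that API instead of returning strings.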
With the recent publicized ransomware and cyberattacks, medical device security has become a hot topic in the boardroom. Senior management is not only concerned about sensitive patient data being leaked. Patient safety is now also at risk.
The organizational challenges of securing medical devices
Common cyberattacks that aren't designed to harm patients are still a major threat to patient safety because, in many cases, connected medical devices are unprotected. Even during an everyday cyberattack, such as ransomware, where medical devices aren't targeted directly, patient treatments can be interrupted and devices might crash, causing service disruption.
New vulnerabilities are discovered all the time, including Urgent/11 in devices running VxWorks, Wi-Fi vulnerabilities in Medtronic's smart insulin pumps, NotPetya, which is based on the same EternalBlue exploit as WannaCry, Sodinokibi malware targeting Microsoft Windows 7 through 10, and the Selective TCP Acknowledgment vulnerability known as SACK Panic that resides in the TCP stack of the Linux kernel.
This is in addition to the infamous WannaCry ransomware, which is still active and has been blamed for shutting down more than 60 hospitals in the UK and causing more than $100 million in damages. But even though the danger is clear, and there are directives from the FDA and the Office for Civil Rights to take action, not enough is being done to protect patient safety.
Who is responsible for medical device security?
Typically, IT is primarily responsible for information security in larger hospitals, but it needs to rely on the specialized expertise of biomedical engineers to secure medical devices effectively. Sharing information and collaborating can be difficult when the relevant experts work in different departments, and communications are even more complicated when biomedical engineering is outsourced. Recently, we are seeing a new trend where biomedical engineering reports to IT, which makes collaboration easier. A new position is also emerging, the medical device security engineer, which makes one individual ultimately responsible for the security of medical devices.
However, even if one person is charged with security, hospitals typically have specialized departments such as radiology, oncology, cardiology and pediatrics that each have their own medical devices with unique connectivity requirements, behaviors and workflows. This makes it difficult for one individual to define and enforce a unified security policy throughout the hospital.
When security interferes with patient care
Doctors and nurses are already at their limit caring for patients. When devices do have authentication, punching in passwords to protect patient data and safety can seem counterproductive because it slows down patient treatment. Since remembering passwords is tedious, many caregivers share logins, which can make devices even less secure.
In addition, if a medical device is malfunctioning, caregivers are likely to yank the device and replace it with another without being aware that the product failure is due to a security incident. After a manufacturer announces a security vulnerability and a patch is available, the installation needs to be coordinated with the manufacturer and all the departments to help minimize the impact on patient treatments.
If a patch isn't available, all the relevant departments need to collaborate to apply a mitigation, such as limiting device communications with access lists or implementing network segmentation. All of these measures can impact business processes related to patient care.
Collaboration with verification
Because of all the complexity and the high level of collaboration required, voluntary compliance with medical device security procedures isn't enough. To protect patient safety, medical device security should be fully regulated with specific, measurable requirements, and then enforced. Doctors and other caregivers should also be educated, as part of their formal training, about the risk that unsecured medical devices pose to patient health.
However, there are steps that hospitals can take today without waiting for regulations and cybersecurity training to take effect. Hospitals should make sure that all the responsible people in the relevant departments share all information related to medical device operations and clinical workflows. IT security needs to be part of the procurement process so that security requirements are taken into consideration.
Hospitals need to have full visibility when it comes to medical devices, including those that were added by vendors on a trial basis. Hospitals must also have the ability to assess all vulnerabilities and prioritize them based on their impact on patient safety, service availability and data confidentiality. Following the prioritization, hospitals should implement the proper compensating controls, such as network segmentation and access control lists, to limit the attack surface. Devices should also be continually monitored for anomalous behavior to detect and prevent potential threats.
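One way to make that prioritization concrete is a simple weighted score across the three impact axes. The weights and example findings below are hypothetical illustrations, not a clinical standard:

```python
# Hypothetical weights: patient safety outweighs service availability,
# which outweighs data confidentiality.
WEIGHTS = {"patient_safety": 5, "availability": 3, "confidentiality": 2}

def priority_score(finding):
    """Score a vulnerability finding by its impact (0-10 per axis)."""
    return sum(WEIGHTS[axis] * finding["impact"][axis]
               for axis in WEIGHTS)

findings = [
    {"device": "infusion pump", "impact":
        {"patient_safety": 9, "availability": 6, "confidentiality": 2}},
    {"device": "imaging archive", "impact":
        {"patient_safety": 2, "availability": 4, "confidentiality": 9}},
]

# Highest-priority finding first: the safety-critical device wins
# even though the archive scores higher on confidentiality.
for f in sorted(findings, key=priority_score, reverse=True):
    print(f["device"], priority_score(f))
```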
Medical device cybersecurity is a must, but it requires cooperation from everyone. A combination of training, sensible policies, enforcement and automation can help keep patients safe. Because in the end, patient data and patient safety are equally important.
There is a huge buzz these days around the impending rollout of 5G technology into the broad consumer and commercial marketplace. This heavily hyped new technology will surely bring value to some applications. However, there are places where the advent of a ubiquitous 5G infrastructure simply does not matter. In the realm of IoT, the potential benefits are much less clear and far from universal.
As one considers the implications of 5G in developing and executing an IoT strategy, here are some of the key considerations:
What is 5G?
Broadly speaking, 5G is the next-generation cellular system for enhanced communications. The cellular technology in widespread use today is often referred to as 4G or 4G LTE. 5G represents major improvements over 4G infrastructure, focused particularly on two key drivers:
- Increased communication speed with lower latency
- Increased bandwidth
Certain applications will take advantage of this enhanced capability. However, 5G is not a panacea, and it comes with a few challenges largely related to the higher frequency of the signal:
- The range of a tower or cell site will be significantly shorter for 5G than for lower-frequency 4G
- Because the range is shorter, there will be a need for a vastly more elaborate and extensive network of cell sites in order to provide coverage
- 5G transmission has more difficulty penetrating walls and foliage than lower-frequency networks
- For battery-powered end devices, useful battery life will be shorter than with existing infrastructure because the chipsets draw more power
Because of this, the cost and logistical challenges of deploying a broadly accessible 5G infrastructure will be enormous.
When will 5G be ubiquitous?
Despite the hype, it will be many years before a ubiquitous 5G network is deployed and fully operational. Yes, there are 5G-enabled phones coming out, and yes, the cell carriers are all hyping the start of the 5G rollout. Hype aside, the fact is that even where 5G infrastructure is deployed, coverage is often concentrated in limited areas. We are still a long way from having widespread 5G infrastructure available in most regions.
With the challenges and potential benefits of 5G infrastructure in mind, consider its effect on IoT.
Where 5G matters
5G helps in situations, beyond indoor applications, that need high-speed communications and increased bandwidth, including applications that require extremely low latency, real-time communications or large data transfers. For example, a deployment of autonomous vehicles would need low latency. Real-time communications with access to shared processing infrastructure can help with highly complex analytics. An IoT application with large data transfers could involve augmented reality, where high bandwidth and speed are necessary for moving real-time video data.
IoT applications are doing quite well today without 5G, but there are situations where it could be an advantage. It is also important to understand who is driving 5G: the large cellular carriers, such as Verizon, AT&T and Sprint, view it as a means to compete with the large cable carriers rolling out wireless infrastructure in the Wi-Fi family. As the saying goes, follow the money.
Where 5G doesn’t matter
There are many situations today that simply do not require 5G infrastructure, for example, applications that involve very small datasets where increased speed or bandwidth are irrelevant, edge computing applications where sensor data is processed locally, or applications that do not require real-time updates. 5G is not necessary where sensor data needs to be communicated only infrequently rather than continuously.
Many applications today simply do not require the benefits that 5G can bring. This is obvious in the range of products in the consumer, commercial, medical and manufacturing industries that work without a 5G infrastructure.
In summary, 5G will bring benefits over the next decade to a range of IoT applications where its fundamental capabilities are useful. But the majority of current IoT solutions have little or no need for the unique capabilities of 5G, especially considering the disadvantages of implementing 5G hardware. While 5G is coming, it will be a long time before it is widely available, and it will not benefit all IoT applications.