Millennials might be one of the most overanalyzed generations of all time. So, naturally, let’s talk about them some more. But this time with our eyes toward how the millennial generation — and their proclivity for screen time, participation trophies and avocado toast — will force change across industries and the supply chains that serve them.
Tell me if this sounds familiar: The millennial generation is so connected via mobile and social, yet at the same time, totally disconnected from the reality of the world. There’s even a well-worn joke about “inviting my friends over so we can all sit in the living room and stare at our screens.” I know these are mass generalizations of an entire generation — and not very flattering ones at that. But these jokes and generalizations come from the fact that businesses and employers are all trying to get a better understanding of a generation that will comprise one-third of the workforce by 2020, and 75% of the workforce by 2025. And as they grow into their careers and take on more senior roles, millennials will gain greater buying power along the way. The fact that we still haven’t figured out why they choose the products, entertainment or lifestyle they do should strike fear into the hearts of even the toughest executives and their teams.
Does the next generation care about today’s business models? Do they value brand names over price, convenience or choice? Is the supply chain set up to continue delivering relevant products and services to a hyper-connected group of shoppers that has access to more information than any generation that came before them? And, what does the internet of things have to do with any of this?
Brands, retailers, food and beverage companies, and their supply chains must all be hyper-sensitive because consumers — an increasing number of whom are millennials — expect and demand an unprecedented level of service, experience and digital interaction. A recent YouGov survey of millennials and how they view brands revealed that 61% of shoppers between 18 and 34 years old had switched brands over the last 12 months. And many of them cited reasons that had to do with supply chain. Whether it was product quality, in-store availability, corporate social responsibility issues or even sustainability, these young shoppers said that the things taking place behind the scenes mattered. It isn’t simply about name recognition, ad placement or even being an “it” brand; it is about showing your work and proving your products are made in a way that reflects their own distinct values. These consumers value transparency. Millennials are no longer willing to accept what they are told as the truth — they are willing to question what they’re told, and they have all the tools necessary to find the answers.
Which brings us back to IoT. With the ability to connect a greater number of objects and light up parts of the supply chain that were otherwise dark, there is no excuse for not knowing exactly what’s going on in your supply chain. Brands can no longer play dumb when the world finds out one of their factories is exploiting child labor or dumping chemicals into the local water supply. Ignorance is not an option. Customers will increasingly expect brands to know what’s happening deep within their supply chains. Brands will be under the microscope, and they’ll need a means to collect and sort the massive amounts of product data that live beyond the four walls of the enterprise. IoT could be the vehicle to help provide that data. And greater connectivity across the supply network, along with machine learning, will help validate that your company is upholding the high standards of an increasingly valuable customer segment.
So maybe we shouldn’t be so hard on millennials and all those strange habits that we Gen Xers and Baby Boomers struggle to understand. Perhaps I’m being generous by giving them credit for placing greater pressure on supply chain transparency, but there’s little doubt their shopping patterns and comfort with digital technology are a major catalyst in the drive for more openness and information.
The push for greater visibility and awareness of our supply chains will benefit all consumers; it will give us all the ability to make better decisions and greater influence in driving quality and sustainability among the brands we choose. It truly shifts the power to the consumer.
It is up to our brands and their supply chains to ensure they leverage the appropriate technologies and business processes to uncover the appropriate information from their networks.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
By 2050, 66% of the global population will live in cities, according to data from the United Nations. Connectivity is also growing. By 2020, there will be a quarter billion connected vehicles on the road, according to Gartner. By 2019, 2.6 billion people will use smartphones worldwide, according to Statista. And between 20 and 30 billion things will be connected by 2020, according to McKinsey.
With people around the world moving into cities and living increasingly connected lives, cities themselves will quickly need to get smart. But how, exactly, does one go about making a city smart?
One might argue that the only way is to construct the city from the ground up, allowing individual sectors and entities within those sectors to become smart one by one — from transit improvements to commercial building upgrades to connectivity in public recreational spaces. This is a more organic approach that arguably demands less planning, time and money upfront.
However, the aforementioned method ignores the very core of what it means to be a smart city. A smart city should be an ecosystem in which components of all sectors seamlessly integrate. If this doesn’t sound simple, that’s because it’s not. In order to achieve this feat — to achieve a truly connected, integrated city — an approach prioritizing holistic design and planning is critical.
To build a house, you need a solid foundation, and a city is no different. Planners must establish a cohesive design to serve as the core that will stabilize the other features of the city. Some key guidelines should inform this approach:
The user experience needs to take priority. The city itself must be structured to complement the priorities of its citizens. This means city governments and planners (designers) must engage with local residents. It’s critical to think about their user journeys and stay close to what would improve their lives and make them happy. The core of a strong and truly smart city is to crowdsource ideas from locals. Their feedback should be melded into an infrastructure that will boost not just city revenues, but the long-term health and happiness of the community.
Not every component of a city can or needs to transform at once, but each step forward should be taken mindfully, with the bigger picture in view. You can build a smart city in fragments so long as you don’t design it in fragments — that way each piece fits into the same puzzle and unfolds into one overall vision.
Integration is everything. There are downsides to building a city piecemeal; it can prevent integration and compatibility of various components and sectors within that city down the road, potentially hindering rather than improving efficiency. We’re already seeing the importance of integration on a personal scale — for example, your refrigerator can tell your car that you’re out of milk, so you know to head to the grocery store after work. Or you can listen to the same song uninterrupted from your car speakers and your home speakers. This same degree of seamless integration needs to apply to everything from public transit to roads shared by cars and cyclists to office buildings and public spaces to achieve the full advantages of a smart city. And thoughtful, intentional design is the only way to orchestrate that web of various components.
Daily inefficiencies need not persist. As people migrate to urban areas, they increasingly face the burdens of long commutes, full metro cars, and delayed trains and flights. These challenges can lead to larger, more dangerous inconsistencies, like long hospital waits and system glitches leaving people without power or phone signals. If cities choose to integrate their methods across systems, the result can be a harmonious network of homes, cars and offices working together to streamline the daily experience for everyone. Things like real-time notifications for traffic congestion, public transit delays or safety alerts, as well as smart streetlights for open parking spaces, could all provide alleviation of the unnecessary stresses of urban living. The same goes for analytics-based insights to help city residents make smarter decisions based on data reflecting conditions specific to a city.
Ultimately, planners must think long term when instating any new frameworks within a city. Their mindful and careful deliberation may seem tedious at the start of the planning process, but will deliver the ROI of a truly smart city. By thinking things through, keeping an eye toward the future and investing time and resources into planning now, we will see the lasting potential of a truly smart city.
While managers in many industries have been leveraging automation and the industrial internet of things to monitor their assets for several years, the question must be asked: Are they getting real and sustainable value from the data they’re collecting?
Perhaps not. While managers in many industries have been leveraging automation and IIoT to monitor their assets for several years, it’s becoming increasingly clear that IIoT does not always deliver on its promise of raising uptime and productivity. In fact, Cisco reported this year that 75% of IoT initiatives fail.
That has not stopped management teams from becoming dependent on connected sensors and IIoT deployments to enhance and improve their maintenance processes in many fields, from manufacturing to transportation, big agriculture to mining. Connected sensors can track everything from temperature readings and pressure gauges to asset utilization and more. They can provide valuable information to be used to predict failures before they occur. All of these benefits can be critical to keeping operations running without a hitch. But eventually, something will go wrong.
Just knowing that something is wrong, however, does not ensure effective service management. An alert is just an alert; it may notify a manager that something’s amiss, but ensuring this information is actionable and integrated directly into the actual repair process is often quite a challenge.
When an alarm sounds or a light turns on, it’s not unusual for teams to respond with a flurry of phone calls and emails, all the while searching for warranty, service history, build details, maintenance status and other asset information. This is true whether it’s a combine sitting in a field, a tractor-trailer parked by the side of the road or an assembly line that’s shut down.
Once a technician, mechanic or engineer is dispatched, work orders and inspections are written on paper forms, requiring (error-prone) data entry. The teams can spend more time trying to track down, decipher and share information than they actually spend fixing the problem — a huge drag on productivity.
Transforming the service and repair process
Effectively maximizing utilization and performance of equipment, reducing unwelcome downtime and accelerating repairs requires a better approach. Adding a layer of communication and collaboration on top of — not replacing — existing applications, including IIoT and other diagnostic equipment, can transform service and repair processes. Instituting a closed-loop service event management process, known as service relationship management (SRM), can reduce downtime and repair costs while improving productivity and efficiency.
An SRM platform enables managers and maintenance teams to quickly access connected sensor data and turn it into sharable, usable, actionable information. For instance, maintenance managers can quickly engage contractors and equipment manufacturers around technical questions by sharing pictures, service history and diagnostic information. Repair teams have mobile access to inspection information, wiring diagrams, build details, recommended repair plans, predefined labor operations and required parts.
SRM unifies the management of service events. It enables rich, role-based user experiences that combine in-context access to detailed equipment information, real-time communication and collaboration, business intelligence tools and integrated diagnostics from IIoT applications.
Intelligent information sharing
A few years ago, an article by Josh Bersin in Forbes highlighted the importance of connecting systems of engagement and systems of record. Adding a new intelligent layer of connectivity with back-office applications has only recently become truly feasible.
Information can be used simultaneously in both the SRM and legacy environments in support of service management. For instance, if an asset management system is already managing maintenance schedules, preventive maintenance-due information can be shared with the SRM platform. Then the information for the service can be captured via mobile devices to ensure successful completion of maintenance operations. The updated data on the asset, event and completed work order can then be shared with systems of record when the service activity is completed for further processing and analysis as required.
In the case of IIoT sensors, information gathered by these sensors is fully integrated into the service and repair process with an SRM platform. All service-related communications, as well as asset information, is made actionable through the capture of information from multiple IIoT sources and components; designated users are automatically notified, creating a collaborative electronic workspace. Information can include everyday operational performance data, failure alerts, fault codes and other sensor alarms, which can be categorized by severity levels based on user-defined parameters. This information can be accompanied by suggested repair and triage plans and prescribed labor operations and parts based on IIoT source, type and severity.
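As a rough illustration of how an SRM platform might categorize incoming sensor events by severity using user-defined parameters, here is a minimal Python sketch. All sensor names, thresholds and record fields are invented for the example; a real platform’s rules engine would be far richer:

```python
# Illustrative sketch: mapping IIoT sensor readings to user-defined
# severity levels before notifying designated users. All values are
# hypothetical examples, not vendor defaults.

SEVERITY_RULES = [
    # (sensor_type, threshold, severity) -- ordered most-severe first
    ("temperature", 90.0, "critical"),
    ("temperature", 75.0, "warning"),
    ("pressure", 8.0, "critical"),
    ("pressure", 6.5, "warning"),
]

def categorize(sensor_type: str, reading: float) -> str:
    """Return the severity for a reading, or 'normal' if below all thresholds."""
    for rule_type, threshold, severity in SEVERITY_RULES:
        if rule_type == sensor_type and reading >= threshold:
            return severity  # first match wins; rules are ordered by severity
    return "normal"

def notify(sensor_type: str, reading: float) -> dict:
    """Build a simple notification record for the collaborative workspace."""
    severity = categorize(sensor_type, reading)
    return {
        "sensor": sensor_type,
        "reading": reading,
        "severity": severity,
        "notify": severity != "normal",
    }
```

In practice the severity levels would also key into suggested repair plans, labor operations and parts lists, but the shape of the rule table is the same idea.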
By connecting the people, processes, technology and information across the service value chain, participants can access the information they need, when and where they need it.
Delivering sustainable value and real ROI
According to Grand View Research, IIoT spend will reach more than $933 billion by 2025, growing at a compound annual growth rate of 27.8% between now and then. To ensure a strong ROI on that investment, industries need to leverage SRM.
The ever-increasing number of connections between assets, data, people, technology and processes has inspired the creation of new terminology — connected assets and smart factories, for instance. No matter what the terms or the industry, managers should not get caught up in the hype and just add IIoT sensors to their equipment without a clear path to success.
The key is to focus on the business value and outcomes that can be achieved when the words are actually turned into action. That means improving communication and decision-making, both internally and externally, creating new levels of visibility and transparency, and ensuring process consistency and receiving actionable feedback for continuous improvement.
In the past, information technology (IT) and operational technology (OT) were seen as two distinct domains of a business. The former focused on all the technologies that were necessary to manage the processing of information, whereas the latter supported the devices, sensors and software that were necessary for physical value creation and manufacturing processes.
One of the factors that is reshaping the industrial internet of things market is the convergence of information technology and operational technology.
But, like so many other aspects of IoT, the convergence of IT and OT is not new. Many companies, including Atos, Cisco, GE, PTC, Bosch, Siemens, Schneider, Rockwell Automation and Advantech B&B SmartWorx, have been evangelizing the merging of OT and IT in IoT.
We have seen how the convergence of networks — both industrial (OT) and enterprise (IT) — is enabling applications such as video surveillance, smart meters, asset/package tracking, fleet management, digital health monitors and a host of other next-generation connected services.
With billions of devices already connected through IoT and billions more to still bring online, that convergence is unavoidable. As we look at what it means when the world of operational technology and information technology converge, we need to assess it from two angles.
The IT angle of the industrial internet
IT systems and professionals have the responsibility of setting up the information technology infrastructures that support industrial businesses. The IT side of the industrial internet enables interactions with the physical world of products, be it on the shop floor or in the field, and brings the challenges associated with managing this new world.
The OT angle of the industrial internet
The OT angle, or the business angle, is where business heads imagine new products for the industrial internet that are IoT-enabled at birth.
The business side is now designing a new era of products that assume interactions with corporate IT systems whether they are within managed corporate IT networks or not.
IT and OT convergence will drive IIoT adoption
As interest in IIoT gains greater traction across industries, engineers are realizing that it’s very possible — and not all that complicated — to monitor production parameters and detect deviation from required quality standards, predict future events and trends, continuously optimize product quality and reduce overall production time. This ability to control every step in the product lifecycle will enable new business opportunities and significantly change the concept of manufacturing through instant access to real-time process information and feedback.
To achieve this, however, companies’ IT and OT departments will need to change and work together. To adapt, OT will need to improve skills in the areas of security, teamwork and communication.
Business drivers for IT/OT convergence
Main business drivers for the IT/OT convergence are:
- Increasing operational efficiency utilizing machine data is the typical starting point for most industrial internet initiatives
- Advancing consumer insight with predictive algorithms for better customer service offerings and forecasting demand
Hurdles of IT/OT convergence
There are certain challenges to IT/OT convergence. First and foremost, these technologies span a very diverse set of equipment with a myriad of protocols. Over time, that equipment generates enormous amounts of data that need to be stored, in some cases, forever. Building products that cater to these diverse interfacing requirements and unstructured data storage and processing needs is non-trivial. All of this unstructured data needs to be made available for real-time as well as scheduled processing.
Typical corporate IT teams need support with skills and infrastructure for building these data infrastructures. Particular attention needs to be paid to the security and compliance risks of this new infrastructure. Unlike in the past, you now need to know whether the industrial equipment out in the field has been authenticated, is securely transmitting data and whether the asset itself presents a threat vulnerability. Again, all of this needs to be assessed in real time and on a continual basis. As you look at the physical world, there are situations where your products need to cater to the diversity of networks that may be involved.
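To make the interfacing challenge concrete, here is a minimal sketch of how readings from devices speaking different protocols might be normalized into one common record before storage and processing. The payload shapes and field names are invented purely for illustration; real Modbus and MQTT integrations would use proper protocol libraries:

```python
# Illustrative sketch: normalizing readings from devices that speak
# different protocols into one common record for downstream processing.
# Payload shapes and field names are hypothetical.

def from_modbus(payload: dict) -> dict:
    # e.g. {"reg": 40001, "value": 223, "scale": 0.1} -> scaled reading
    return {"source": "modbus", "value": payload["value"] * payload["scale"]}

def from_mqtt(payload: dict) -> dict:
    # e.g. {"topic": "plant/line1/temp", "reading": 22.3}
    return {"source": "mqtt", "value": payload["reading"]}

PARSERS = {"modbus": from_modbus, "mqtt": from_mqtt}

def normalize(protocol: str, payload: dict) -> dict:
    """Dispatch to the right parser; unknown protocols are flagged, not dropped."""
    parser = PARSERS.get(protocol)
    if parser is None:
        return {"source": protocol, "value": None, "error": "unknown protocol"}
    return parser(payload)
```

The point of the dispatch-table design is that each new protocol only adds one parser function, rather than another branch through the entire ingestion path.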
The need of IoT platforms in IT/OT convergence
When I wrote “It is an IoT platform, stupid,” I emphasized the importance of selecting the right IoT platform because it will affect your IoT strategy. In the industrial internet, the selection process is, if anything, even more important.
Sensors, machines, devices, gateways, programmable logic controllers and IT systems will need to provide and process the information your business needs, when and where it needs it. Industrial networks and IT networks need to interoperate; your technologies also need to function in an environment where some of this data is only partially available.
With the increasing number of connected devices and data, IT/OT convergence is of paramount importance to developing an industrial IoT platform that will help drive insights and efficiency into your hyper-connected enterprise.
Should IT and OT merge to accelerate the pace of change in IIoT?
There’s no doubt that IoT technology will accelerate the integration of OT and IT systems, but the fact is that as of today, and according to several global cellular providers, the growth curve in data plans for IoT M2M devices hasn’t turned out to be as explosive as their companies had expected it to be.
Some engineers have expressed the fear that all of their existing infrastructure will soon become hopelessly obsolete, while others have insisted that it would be a long time before even wired Ethernet was well-established enough to be trusted in the industrial internet, and that the advent of industrial IoT must surely be a long way down the road.
There appears to be a rather wide expectation gap between the engineers who think that IoT technology is ready to apply in the industrial internet and those who believe that IoT tech isn’t ready for primetime yet.
Until now, industrial networking professionals have been quite conservative. They argue that industrial networks can’t be allowed to fail, and that many industrial companies are committed to single-vendor, proprietary technologies that make change prohibitively expensive.
But while the OT world was still busy getting different kinds of machines to communicate at all, the IT world was making incredible progress. Ethernet networking standards were developed and adopted. Cost points fell to amazingly low levels. New Ethernet technologies continued to appear, and we’ve now reached the point where the so-called internet of things is becoming a reality.
OT can’t ignore IT anymore.
In the industrial internet, we will see more OT and IT converge. This convergence will drive new industry technologies. We’ll see exciting new industry applications emerge, but we’ll also watch plenty of existing tech keep chugging along before it is finally retired. The industry has been evolving for decades, and it will continue to do so. IIoT technology isn’t a break with the past; it’s merely a pathway to the future.
If healthcare security was an emergency before IoT, thanks to privacy risks and other threats, the emergence of connected devices pushes this to a potential crisis.
But before we panic and start throwing money at the problem quickly and mindlessly — a normal reaction to existential risks — let’s first understand some of the frameworks at work.
First, we need to keep in mind that advanced attack campaign patterns essentially behave the same across industries: There are similar intents behind attacks, and the same tactics and techniques to execute these intents.
A hacker can be a murderer
Goals and methods are typically agnostic to industry, from healthcare to banking to intelligence. Attackers always want the same things: to steal information, manipulate data, disrupt service with DoS attacks and extort ransom to achieve their ends. But while in other industries the risks are financial and reputational, when it comes to health services and connected IoT devices, human lives are at stake. A hacker can be a murderer.
And as healthcare gets more sophisticated, the risks intensify. Take AI, for example. AI and machine learning have already entered the practice of medicine, as they automate some basic medical decision processes. Hackers are well aware of this, which frames attack intent as “medical decision manipulation.” In other words, imagine an attack that manipulates the training data of AI modules, providing fake inputs to the system that will eventually result in a wrong medical decision. A hacker can change the inputs into the algorithm — falsifying blood tests, for example — and the “computerized medical advisor” could prescribe medication that can harm or even kill the patient. This is a terrifying notion.
These AI systems need to be supervised by heuristics, expert systems and rules (medical expert rules), which provide some level of control and flag suspicious activities which may represent malicious AI behavior.
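One way to picture that supervising rule layer is a simple plausibility check that flags suspect inputs before they ever reach the AI module. The sketch below is a toy example; the measurement names and reference ranges are illustrative only and are not clinical guidance:

```python
# Illustrative sketch: an expert-rule guard that flags implausible
# inputs before an AI decision module consumes them. All ranges are
# made up for the example -- NOT clinical reference values.

PLAUSIBLE_RANGES = {
    "glucose_mg_dl": (20.0, 600.0),
    "potassium_mmol_l": (1.5, 9.0),
    "hemoglobin_g_dl": (2.0, 25.0),
}

def flag_suspicious(sample: dict) -> list:
    """Return the names of measurements that fall outside plausible bounds
    (or that the rule layer has never seen), so they can be reviewed by a
    human before the AI system acts on them."""
    flags = []
    for name, value in sample.items():
        bounds = PLAUSIBLE_RANGES.get(name)
        if bounds is None:
            flags.append(name)              # unknown measurement: review it
        elif not (bounds[0] <= value <= bounds[1]):
            flags.append(name)              # implausible value: possible tampering
    return flags
```

A falsified blood test that pushed a value far outside physiological bounds would be caught here; a subtler manipulation inside the plausible range would need the richer heuristics and anomaly detection described above.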
Again, remember that the actors behind attacks in the healthcare industry are no different than in other industries. Insiders, nation-state actors, criminal organizations … they are all possible murderers, each with their own motivation and intent.
Remember also that the industry is fragmented, increasing the risk. Healthcare IoT devices, like pacemakers, implantable glucose monitoring devices and blood pressure monitoring tools are part of a complex IT ecosystem which presents a wide threat surface for attackers to exploit. These devices rely on communication between various databases, storage and control stations across different networks, and are operated by different people with different levels of trust and cybersecurity knowledge, including in the patient’s home, hospitals, clouds and so on.
These complex inter-relationships make each component, and the entire system, more vulnerable. This was evident in the spring 2017 attack on the healthcare system in the UK, which essentially forced a full system to shut down. Operations were cancelled, ambulances were diverted and documents, such as patient records, were made unavailable in England and Scotland.
A frozen system creates havoc; attackers cherish that pain, and they will always use this complexity to their own advantage!
Healthcare ecosystem: Heal thyself
Because these attack patterns recur, we know that advanced IoT cyberattacks are multistage and multivector, reaching the various related components in different network locations. As a result, they require a security ecosystem that:
- Effectively detects suspicious moves
- Conducts investigative actions to validate intent
- Automates prevention or mitigation actions
These three steps are massively complex. This security architecture must learn how to receive signals from various devices, servers and services in the patient’s home, hospitals and clouds — and then centrally analyze them in near real time, while also “connecting the dots” to reveal possible malicious intent. Only after that can accurate prevention measures be activated.
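The “connecting the dots” step can be pictured as a simple correlation rule: raise an incident when signals from multiple independent sources cluster around the same asset within a short time window. The sketch below is a toy illustration; every field name and threshold is assumed, and real security analytics are far more sophisticated:

```python
# Illustrative sketch: correlating low-level signals from different
# sources into one incident when they cluster in time on the same asset.
# All event fields and the window size are hypothetical.

from collections import defaultdict

WINDOW_SECONDS = 300  # assumed five-minute correlation window

def correlate(events: list) -> list:
    """Group events by asset; flag an asset when signals from two or more
    distinct sources land inside the window (possible coordinated activity)."""
    by_asset = defaultdict(list)
    for e in events:  # e = {"asset": str, "source": str, "ts": float}
        by_asset[e["asset"]].append(e)
    incidents = []
    for asset, evts in by_asset.items():
        evts.sort(key=lambda e: e["ts"])
        for i, first in enumerate(evts):
            window = [e for e in evts[i:] if e["ts"] - first["ts"] <= WINDOW_SECONDS]
            sources = {e["source"] for e in window}
            if len(sources) >= 2:
                incidents.append({"asset": asset, "sources": sorted(sources)})
                break  # one incident per asset is enough for this sketch
    return incidents
```

Only after an incident like this is raised and validated would accurate prevention measures be activated, as described above.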
In order to be proactive, this ecosystem needs to be collaborative at the core. Once a new advanced attack campaign is recognized and analyzed, the appropriate defense strategy should be shared and implemented in various environments (including other hospitals). These don’t necessarily have the same security tools, and most likely are not operated by the same security vendors. Therefore, sharing defense models requires a new level of “defense strategies translators” so they can be quickly and well adapted into any relevant environment.
Solving these issues requires a global effort that defines healthcare as “critical infrastructure” and implements security technologies that include advanced security analytics, and orchestration and automation capabilities. These new systems are being called the next-generation security information and event management, or SIEM 3.
Decades ago, hacking emerged as a nuisance — call it Hacking 1.0. Then it became more sophisticated and advanced targeted attacks turned into Hacking 2.0. The world of IoT, especially in healthcare, creates Hacking 3.0. In a short time we’ve gone from nuisance to possible murder. We are vulnerable, but we don’t have to be victims.
Leading industry analysts estimate that there will be 30 billion connected devices by 2020. Manufacturers are creating devices to streamline everything we do — from our personal lives, with workouts and grocery shopping, to our professional environment, with conference calls and automated office security systems. Although these devices were created to drive innovation and improve productivity, security is unfortunately an afterthought or, in some cases, not a thought at all.
One of the bigger problems created by the ubiquity of IoT in the enterprise is the concept of scale. Consider individual devices. On their own, the risk, although still inherent, is somewhat minimal. However, consider the potential damage if every employee within a large enterprise connects three to four of their own personal devices to the network. The risk is tremendous. The problem with scale is that aggregated, infected IoT devices have the potential to take down an entire enterprise.
In October 2016, internet performance management company Dyn was hit by a distributed denial-of-service (DDoS) attack by the Mirai botnet. DDoS attacks work by creating a network of infected computers to bombard a server with traffic until it collapses under the strain. The Mirai attack on Dyn was the first major wake-up call to the risk associated with IoT, because instead of computers, the botnet infected IoT devices, such as digital cameras and DVR players, to bombard the servers of multiple organizations with an overwhelming amount of traffic. Here is where the problem of scale comes in. It is estimated that Mirai used 100,000 devices for the attack, making it twice as powerful as any DDoS attack on record.
Cause for concern
We’ve already seen new IoT botnets with Mirai-like qualities such as Persirai, which took advantage of open Universal Plug and Play ports on connected cameras to infect them. Or even so-called “vigilante” botnets used to protect against the bad ones, including Hajime and Bricker Bot. But regardless of the botnet’s design and capabilities, whether destructive or “protective,” these instances point to a larger issue that highlights that most organizations aren’t any closer to solving their enterprise IoT risk problem.
With that in mind, it is time for enterprises to ask: Just how many IoT devices are currently connected to our corporate network? According to recent survey data, 85% of IT and networking professionals are not confident they know how many devices are connected to their network. The issue here is visibility; without proper visibility, organizations are unknowingly increasing their threat surface and enabling potential entry points for cyberattacks. Imagine if your enterprise fell victim to a Mirai-scale attack — would your network combat the issue, or would your enterprise fall flat in a matter of minutes? The good news is there are ways to combat this type of threat.
Where do we go from here?
Now that you know how harmful an IoT hack could be for your business, prevention is key. Visibility into your entire network is critical — after all, you can’t secure what you cannot see. Once you’ve achieved visibility into your network, the next step is control. Take action by setting and controlling security policies to protect these devices. You’ll also want a technology that allows for orchestration of information-sharing and policy-based security enforcement operations to automate security workflows and accelerate threat response. Overall, the solution to combatting the growing threat of IoT is agentless visibility and control of devices. Only then can you mitigate security risk across the growing threat surface.
In IoT environments where devices, applications and people are interconnected across vast and disparate ecosystems, it’s imperative that security is an integral part of IoT deployments.
Threats are everywhere, and the attack vector is potentially limitless. IoT ecosystems encompass the network edge/perimeter, the data center, applications, data transmission and networking mechanisms, as well as every piece of company-owned equipment and end user-owned device. Even the most proactive IT departments find it challenging to keep pace with career hackers and ever more efficient, targeted attacks.
There is no such thing as a 100% fully secure environment.
Security is not static; it is a work in progress. Organizations must be vigilant and assume responsibility for their system and network security. ITIC’s latest survey data found that an overwhelming 80% of respondents indicated the “carelessness of end users” poses the biggest threat to organizational security. This far outpaces the 57% who cited malware infections as the largest potential security problem.
Security is a 50/50 proposition between technology and the humans who must implement and manage it. In part one of this two-part article, we outline the top 10 business and procedural steps organizations should take to safeguard the IoT ecosystem and mitigate risk. Part two will detail the top 10 technology safeguards organizations should implement to protect corporate data assets.
Top 10 business steps to defend against cybersecurity threats
- Take inventory. Know what people, devices and applications are on your network, including the various versions of software your users have installed on their myriad desktops, notebooks, tablets and smartphones. In the early 1990s, before the commercial internet, businesses used to brag about the longevity and reliability of their servers and network operating systems; it was considered a badge of honor if an IT administrator discovered a forgotten Novell NetWare 3.x or 4.x server running in a closet that hadn't been rebooted in nine or 10 years. Today, ignorance is not bliss. Complacency, forgetfulness and ignorance of what devices are on your network, along with a host of overlooked configuration errors, unwittingly give opportunistic hackers carte blanche to exploit your network and can leave a corporation's data assets unprotected. Compile a list of all devices, applications, transmission mechanisms and access levels for every network user, from the CEO down to office temps. Retire old and outmoded equipment, or retrofit it with the latest security mechanisms. Take inventory at least every six months, and preferably quarterly. Additionally, corporations that have acquired another firm via merger or acquisition should do a complete and thorough inventory of the acquired entity's infrastructure before connecting it to their own. This requires cooperation and collaboration with the acquired company's IT department, engineers and software developers.
- Regularly review and update computer security policies. As the saying goes, "The best defense is a good offense." The business case should always precede and drive the technological aspects of computer security. Organizations should construct new security policies and procedures, or update existing ones, covering all aspects of the business. Security policies should reflect the current business climate and provide clear guidelines on how to respond to the latest cyberthreats, including a clear list of dos and don'ts. The policy should be disseminated by human resources to all employees via hard copy and email, and incorporated into the onboarding training for new employees. Businesses should treat cybersecurity with the same seriousness as they do issues of discrimination and sexual harassment.
- Enforce computer/cybersecurity policies and procedures. No exceptions. Make it clear that the corporate cybersecurity rules are not made to be broken. The organization should construct a clear, concise list of the penalties associated with various infractions. These should include a sliding scale of actions the corporation may take for first, second and third infractions. Failure to comply with the corporation’s cybersecurity policies may involve myriad actions ranging from a warning to termination and even criminal prosecution.
- Educate all users. Everyone in the organization from the chief executive to the IT department, application developers, knowledge workers, contract workers and office temps must be educated and adhere to the company’s computer and cybersecurity policies and procedures. Additionally, the IT department should regularly inform users about the latest threats via email and hard copy.
- Construct a cybersecurity-specific operational level agreement (OLA)/response plan. Every organization, irrespective of size or vertical market, should have a detailed OLA in place to quickly and efficiently respond to cyberattacks and cyberheists. An OLA is a set of detailed policies and procedures governing how the company's internal stakeholders — chief security officer, chief technology officer, director or VP of IT, administrators and security professionals — will work together to respond to issues. The OLA will detail the policies and procedures for dealing with hacks to minimize downtime, data loss and theft; quick response to a security issue can be the difference between thwarting a hack and suffering downtime and data losses. The cybersecurity OLA should establish the organizational chain of command, assign specific duties and responsibilities in the event of an attack, outline daily security operations and provide detailed instructions on how the various internal stakeholders will work together to respond to security issues. It should also include a list of all outside third-party vendors and service providers, with contacts at each organization.
- Security should be built in. Security cannot be practiced with 20/20 hindsight. It is the company’s responsibility to perform due diligence and work in concert with its vendors, resellers, third-party independent software vendors and professional service providers to ensure that all new devices and applications incorporate the latest security mechanisms. Before provisioning or deploying any device or application, the company should take great pains to ensure they are secure by design, secure by default, secure in usage, secure in transmission and secure at rest (storage).
- Budget appropriately. There's intense competition among corporate upgrade projects and individual departments for a slice of the organization's capital expenditure (Capex) and ongoing operational expenditure (Opex) budgets, and security often gets short shrift, losing out to other projects and stakeholders. The adage "If it ain't broke, don't fix it" definitely does not apply here. Any firm that delays and defers security does so at its peril. Perform due diligence with all pertinent parties to determine a timetable and construct a budget for hiring skilled security IT staff or engaging outside third parties to perform vulnerability testing and risk mitigation, purchasing new security software and equipment, and upgrading existing devices.
- Deploy security awareness training from the C-suite down to the IT department. ITIC's most recent security survey found that over 40% of the 600 responding organizations could not identify the type, length or severity of the cyberattacks their firms had experienced, and 11% of respondents were "unsure" whether their companies had suffered a hack over the last 12 months. Hardly a day goes by without another major cyberattack or security-related issue making the news, and overworked, often understaffed IT and security administrators are hard-pressed to keep pace with the increase in threats. If you can't identify a threat, you won't recognize it when it happens, and if your organization fails to implement appropriate safeguards, such as auditing, authentication and tracking mechanisms, it will be difficult if not impossible to track the culprits.
- Stay current on the latest security patches and fixes. This may seem obvious, but its importance cannot be overstated.
- Calculate the cost of downtime related to cyberattacks and hacks. There is no more sobering wake-up call for corporations than calculating the monetary costs and business consequences of a cyberattack. These include, but are not limited to, downtime; lost, stolen, damaged, destroyed or altered data; and the cost and amount of time it takes for IT to perform remediation. Also consider the monetary cost of potential litigation, civil and criminal penalties, damage to the company's reputation and brand, and potential lost business.
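The last step above can be roughed out with a toy model. Every figure below is a made-up placeholder a team would replace with its own numbers; the point is that the arithmetic is simple and sobering once real inputs go in:

```python
# Back-of-the-envelope downtime cost model for a cyberattack.
# All inputs are assumptions the business must supply.

def downtime_cost(hours_down, revenue_per_hour, remediation_hours,
                  it_hourly_rate, other_costs=0.0):
    """Direct cost of an incident: lost revenue + IT remediation + extras
    (litigation, penalties, reputational spend, etc.)."""
    lost_revenue = hours_down * revenue_per_hour
    remediation = remediation_hours * it_hourly_rate
    return lost_revenue + remediation + other_costs

# Example: a 4-hour outage at $10,000/hour of revenue, 12 hours of IT
# remediation at $150/hour, plus $5,000 in legal review.
print(downtime_cost(4, 10_000, 12, 150, other_costs=5_000))  # 46800
```

Intangibles such as brand damage resist this kind of arithmetic, which is why they belong in `other_costs` as an explicit estimate rather than being silently omitted.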
Ultimately, everyone in the corporate enterprise — from the C-level executives to the IT department and all the end users — must communicate, collaborate and cooperate to defend the data assets. Ask yourself: What have you got to lose?
Tracking and monitoring cargo is costly, but with new technology it's far cheaper than the alternative: losing an estimated $15 billion to $30 billion annually to cargo crime and stolen goods. Now IoT is stepping up to provide connected coverage, letting insurance companies put risk management into the hands of the claimant. The introduction of wireless and location-based technologies has a transformative impact on the way assets are monitored.
Insurance firms are at the heart of this challenge: the more theft we see, the faster premiums will grow. In my view, the lack of suitable and affordable monitoring is a core issue. When goods go missing, the claimant is required to explain to the insurer precisely what has happened to them, and difficulty often arises when evidence is unavailable. If goods vanish without a trace, there are a number of potential scenarios: they could have been lost or left on the tarmac somewhere, delivered to the wrong location or, at worst, stolen. Information on where goods are, their status and their state of repair can therefore be of particular benefit to owners, logistics providers and insurers alike. And if the worst has happened and goods have been stolen, good tracking technology makes retrieval much more achievable.
There has been appetite within the insurance sector to use technology to improve access to information such as location and status. Better knowledge of both helps in retrieving stolen goods and, in turn, protects cargo while bringing down premiums.
According to ABI Research, the goods tracking market is set to be worth $5.6 billion by 2021, up from $3.6 billion in 2016. These figures point to a growing demand, and an increasing need, for investment in better cargo monitoring. At present, the issue lies with retrieving stolen goods; the technologies currently in use have major flaws that often make it expensive and impractical to effectively track goods in the first place, let alone retrieve them.
Tracking devices, which often operate over mobile data networks, are nowhere near as accurate or reliable as insurers would like. The issue lies primarily with connectivity: goods in transit will at times pass through zones with poor or no signal, and if something were to happen there, data could not be recorded and sent back to the supplier.
Current tracking systems also create a problem of space. The typical means of transmission, satellite and mobile networks, require large, powerful devices to process the information and transmit it back to headquarters. The cost of installing monitoring devices and transferring data for all goods across an entire logistics network can sometimes outweigh the savings, so when goods do go missing, far too many are untraceable.
Furthermore, due to the price and size of current tracking devices, suppliers often deploy just one larger, costlier device to track a whole container rather than tracking each individual shipment. Cargo thieves often plan ahead and take only the crates, pallets and goods that are untraceable, rendering the expensive tracking device attached to the container useless. Making tracking devices both smaller and more affordable could therefore drastically improve on existing technologies.
It is often the case that when a business' goods are stolen, it won't report the theft to the police, fearing that customers will conclude it lacks proper security and is unreliable. Companies transporting cargo on a regular basis need technology that is simple, affordable and provides effective connectivity to deter and prevent theft.
The solution to cargo crimes
One technology that defies the norm, and is available across more of the globe than other mobile services, is Unstructured Supplementary Service Data (USSD). USSD is a secure messaging protocol that is globally available as part of GSM networks. That wide availability makes it an ideal technology for insurance and logistics firms looking to track and monitor goods.
USSD requires simple components and less power to operate, meaning devices can stay active considerably longer than mobile data-based alternatives, and SIMs can be installed in devices not much bigger than a USB memory stick, making the cost of space significantly lower. Because no internet connection is used, there is also no need for expensive microprocessors and components to communicate the data, in turn reducing the complexity and cost of manufacturing devices.
Due to the low power requirements and cost of USSD, it would also be feasible to use the network to track multiple shipments within a cargo container, allowing suppliers to more reliably deter thieves and recover stolen goods.
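To make the space and power argument concrete, here is a toy sketch of how small a position report can be. The field layout, separator and device ID are invented for illustration; a real deployment would define its own format within USSD's message limits (roughly 182 characters per message on GSM networks):

```python
# Pack a tracker report into a compact string. Five decimal places of
# latitude/longitude give roughly 1-meter resolution, and the whole
# report stays far below a single USSD message's capacity.

def pack_report(device_id, lat, lon, battery_pct):
    """Encode a position report as a short '*'-delimited string
    (hypothetical format: id*lat*lon*battery)."""
    return f"{device_id}*{int(round(lat * 1e5))}*{int(round(lon * 1e5))}*{battery_pct}"

msg = pack_report("TRK0042", 51.50074, -0.12460, 87)
print(msg, len(msg))  # TRK0042*5150074*-12460*87 25
```

A 25-character report leaves ample headroom, which is why a single low-power device per pallet, rather than one per container, becomes economically plausible.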
Cargo crimes are challenging insurance firms and costing the logistics industry billions of dollars, and despite the transformative impact of wireless and location-based technologies, we've continued to see cargo going missing or being stolen in transit. We're now in a position where one means of connectivity offers a far more affordable and reliable option than existing market alternatives. USSD can provide insurance firms and their clients with assurance that their goods are safe in the most affordable way possible.
As more and more companies become part of the internet of things supply chain, the possibilities for business opportunities grow alongside the digital transformation. But with existing application traffic already straining beleaguered enterprise WANs, how do you avoid the data gridlock that is coming with the onslaught of IoT data?
Adding bandwidth or backhauling all internet traffic over a WAN to corporate data centers are two popular approaches. However, while they may be effective in the short term, they can create several challenges in the long term, including bottlenecks, latency issues, limited business agility and flexibility, and reduced quality of experience (QoE).
Here are three reasons that offloading internet traffic through intelligent traffic segmentation can free your business from IoT gridlock by enabling the shortest path from mobile, broadband and the internet to the edge.
Higher QoE = Lower latency, not just increased bandwidth
The limitations of current networking technology, and certain immutable laws of physics, do not allow latency-free transmission over long distances. So as exponential growth in traffic and data, along with the use of cloud services, places extreme demands on bandwidth, how do you improve performance?
Strategically placed digital edge nodes act as local interconnection hubs for user, cloud and business traffic at the company's digital edge. Businesses can bring users and traffic from field area networks into their edge node at the closest point, and then accelerate internet traffic through that node.
This allows companies to access local cloud and SaaS offerings directly and shift workloads to those services, resolving their last-mile challenges by offsetting MPLS circuits with high-capacity local Metro Ethernet circuits.
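A hypothetical sketch of the per-flow decision an edge node makes under this model: internet- and SaaS-bound traffic breaks out locally, while private business traffic stays on the WAN. The traffic categories and the default rule are assumptions for illustration, not any product's routing logic:

```python
# Toy intelligent traffic segmentation: choose a path per flow based on
# its category. Categories are illustrative placeholders.

LOCAL_BREAKOUT = {"saas", "internet", "cloud-storage"}

def route(flow):
    """Internet-bound categories break out locally at the edge node;
    everything else, including unknown traffic, defaults to the
    private WAN path."""
    if flow.get("category", "") in LOCAL_BREAKOUT:
        return "local-internet-breakout"
    return "mpls-backhaul"

print(route({"category": "saas"}))  # local-internet-breakout
print(route({"category": "erp"}))   # mpls-backhaul
```

Defaulting unknown traffic to the private path trades some cost for safety, mirroring the security argument in the next section.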
Securing the connection
In addition to providing a "fast pass" for users and traffic from field area networks into the edge node, performance hubs reduce cyberthreat risk by establishing direct, secure connections within colocation facilities, extending a private LAN to business partners. This keeps the potential attack surface small, and business traffic remains on private, secure (and compliant) connections.
MPLS links to geographic locations are expensive and not sustainable with increasing demand for performance and data growth.
Cost efficiency in the cloud
As businesses become more digitally dependent, they are also more reliant on the public internet. Shifting traffic to the internet can be up to 10 times less expensive, but increases the potential risks in terms of reduced performance and QoE.
Shifting to high-capacity local Metro Ethernet circuits instead of more costly MPLS (T1) circuits allows businesses to take advantage of proximate, high-speed, low-latency connections for greater network optimization and consistent global performance at lower cost. This reduces the need to move traffic over high-priced WAN links while still retaining the business's desired levels of control and security.
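The cost argument is easy to see with a two-line comparison. The per-Mbps prices below are invented placeholders, not market quotes; only the roughly 10-to-1 ratio cited above matters:

```python
# Compare monthly circuit cost for the same capacity at two
# hypothetical price points.

def monthly_cost(mbps, price_per_mbps):
    """Monthly cost of a circuit: capacity times unit price."""
    return mbps * price_per_mbps

mpls_cost = monthly_cost(100, 100.0)   # assumed $100/Mbps for MPLS
metro_cost = monthly_cost(100, 10.0)   # assumed $10/Mbps for Metro Ethernet
print(mpls_cost / metro_cost)  # 10.0
```

The simple ratio understates the real trade-off, of course: the cheaper path carries the performance and QoE risks the section above describes.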
Localizing and optimizing the transfer of sensitive performance-hungry data traffic at the digital edge reduces costs and keeps that traffic on dedicated, private connectivity, where local security controls can be applied.
Recently, as more applications get serious about implementing IIoT designs, I get an increasing number of questions from insurance company executives. The most common question: What is the risk in the industrial internet of things? Their theme seems to be: Connecting things is just too risky; we don't understand the security or safety risks, so it can't be good.
I do agree that IIoT is a brave new world in general, and for risk management in particular. There are all sorts of new opportunities for attack. The hack that allowed remote control of a Jeep over the internet is a classic example. More concerning industrial cases include the Stuxnet worm that sabotaged Iran's uranium enrichment centrifuges, the backdoor that malware installed in the European power grid and the Excel spreadsheet exploit that caused a blackout in Ukraine.
That said, intelligent machines also have more opportunity to protect themselves. The sad truth today is that most systems are poorly protected (like that Jeep). But security gets orders of magnitude more attention now than only a short time ago. Until recently, most industrial systems didn't consider anything beyond "eggshell" firewalls or "air gap" offline designs; that has changed completely, and everyone is thinking security, security, security. The progress is exhilarating. Put another way, the real question isn't whether there are more attackers and threats out there; that's obvious. The real question is whether installing an IIoT design makes your system safer or riskier. In my opinion, despite the rise in bad actors, the "likely real risk" goes down if you connect to IIoT, simply because the process of implementing an IIoT system motivates a badly needed security audit.
But my optimism stems from a greater opportunity to implement real change. In my experience (and this may shock security wonks), security is not a change driver. Fear is simply not enough. Industrial systems are usually not willing to implement a new architecture (just) to improve security. The power industry is my favorite example. The industry has been screaming for 20 years that security is a problem. And, it will go right on screaming … unless something else drives the change.
The good news? IIoT is that change driver. Security may not drive the change, but it is absolutely a change gate: when implementing a new architecture for any reason, every application insists on security. Since IIoT is motivating many, many industrial applications to redo their architectures, security is getting better. Of course, implementing a new architecture for a major industrial application, or for that matter an entire industry, is daunting. But this is the magic of the sweeping changes offered by IIoT. IIoT is compelling. Change is coming, and it's coming fast.
Because of this new driver and awareness, we are probably decreasing our collective risk profile over time. The greater attention means we are installing cyber “burglar alarms” faster than the rise in “burglars.” So, our collective “likely real risk” of a major industrial infrastructure event is probably decreasing.
The insurance executives consider this an overly optimistic view of the future. I counter that they hold a too-optimistic view of the present. You see, the situation today is unacceptably, intolerably, unbelievably high risk. Entire industries run without a whit of security. It seems scarier in the future only because the risk you don’t know seems worse than the risk you do know. That’s human nature. But anyone who looks will see that the current risks are very high, and the new designs are much better.
How much better? There are sweeping improvements in both understanding and technology that enable better security.
The Industrial Internet Consortium (IIC) recently published the most complete treatment of security in the industry, the “Industrial Internet Security Framework.” This 162-page tome outlines the security challenge from the broadest concept of trustworthiness to the key considerations in implementation. Perhaps its greatest contribution is simply the cataloging of all the deep and wide considerations of the problem. Even a casual perusal will increase a designer’s awareness of the challenges.
Standards and technologies are also making great strides. For instance, many potential IIoT systems primarily face scalability and system integration challenges. With a little thought, the architects figure out that IIoT systems are all about the data, and then that they really have a high-performance data flow and data transparency challenge. The best way to provide transparent flow is a peer-to-peer or "publish-subscribe" design. The IIC's seminal works on architecture and connectivity call this the "layered data bus" pattern. It scales well, performs well and connects low-level control to high-level intelligence.
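The publish-subscribe pattern just described can be boiled down to a few lines. This is a deliberately minimal toy, not DDS: real data buses add discovery, quality-of-service contracts and the security plugins discussed below, but the data-flow shape is the same:

```python
# Toy publish-subscribe "data bus": publishers and subscribers share
# only topic names, never direct references to each other.

from collections import defaultdict

class DataBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a callback to receive every sample on a topic."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, sample):
        """Deliver a sample to every subscriber of the topic."""
        for callback in self._subscribers[topic]:
            callback(sample)

bus = DataBus()
readings = []
bus.subscribe("turbine/temperature", readings.append)
bus.publish("turbine/temperature", {"celsius": 71.3})
print(readings)  # [{'celsius': 71.3}]
```

Because the publisher never knows who is listening, new consumers (analytics, dashboards, archival) attach without touching the control code, which is exactly the data transparency the architecture argument hinges on.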
Unfortunately, this realization leads to a tail-wagging-the-dog dilemma. The architecture is the dog; systems need the simplicity and performance of a communications pattern that directly sends the data where it’s needed, right now. That data transparency makes large-scale future IIoT systems manageable.
The dog side of the dialog goes something like this:
Hey! Let’s just send the data right where we need it. Pervasive data availability makes systems fast, reliable and scalable. And look how much simpler the code is!
Of course, although data transparency is an integration dream, it’s a security nightmare. So, here comes the security tail:
We can’t maintain thousands of independent secure sessions! How do we keep such a system secure?
Only last year, that was a damn good question. It blocked adoption of IIoT technologies where they were really needed. But then the DDS standard developed a security architecture that exactly matches its data-centric data flow design. The result? The data-centric dog wags its perfectly matched data-centric security tail. Security works seamlessly without clouding data transparency. Advances like this, spanning industries, will make future IIoT systems much more secure than today's ad hoc, industry-specific quagmire of afterthought security hacks. Security that matches the architecture is elegant and functional. Even better, because it's a standard, products are coming to market that enable secure interoperability between vendors. The technology is real now.
This argument leaves my insurance contacts searching for Tao in their actuarial tables. So, I can’t resist adding that it’s not really what they should worry about.
Safety engineering will have a much bigger impact on insurance. For instance, I expect the $200 billion auto insurance industry to disappear in the next 10 to 20 years as advanced driver assistance systems and autonomous cars eliminate 90% or more of accidents. Most hospital errors can also be prevented, and hospital error is currently the third leading cause of death in the U.S. In factories, plants, oil rigs, mining systems and many more applications, automated systems (somewhat obviously) won't have humans around, removing a significant current risk. Accidents, in general, are mostly the result of human folly, and machines will soon check or eliminate that opportunity for folly. I see this as an extremely positive increase in the quality and preservation of life. Insurance executives see it as an existential threat.
I tell them not to feel bad; most industries will be greatly disrupted by smart machines, and navigating that transition will make or break companies. Insurers certainly understand that losses are easier to grasp than gains; that principle underwrites their industry. But perception is not reality. IIoT's impact on the economy as a whole will be hugely positive; analysts measure it in multiple trillions of dollars within only a few years. So there will be many, many places to seek and achieve growth, and the challenge of finding those paths is no less or greater for insurance than for any other industry. Fundamentally, IIoT will drive a greener, safer, better future. It is good.