IoT Agenda


April 12, 2018  12:35 PM

Where the rubber meets the road: Blockchain, AI and automobiles

Scott Zoldi
ai, Artificial intelligence, Bitcoin, Blockchain, Cars, Connected car, driving, FICO, Insurance, Internet of Things, iot, Relationships

Between coin heists and ICOs, much of the attention around blockchain technology centers on bitcoin and other cryptocurrencies. (There’s even a bitcoin heist movie, if you’re interested). But as blockchain, the technology that underlies bitcoin and its brethren, grows in usage, there are intriguing opportunities for AI to work with it to transform the way all manner of financial transactions occur, and not just in the financial industry.

Last year at FICO’s Automotive Mastermind conference, we discussed broad automotive applications for blockchain-focused analytics and the AI-driven scoring that is the basis of many FICO technologies. These were in addition to blockchain’s micro-transaction enablement, which I blogged about as part of my 2018 analytics and AI predictions. Here’s a recap:

Blockchain technology will soon record “time chains of events,” as applied to contracts, interactions and occurrences. In these “time chains,” people and the items we interact with will have encrypted identities. The distributed blockchain will be the single source of truth, enabling audit trails of data usage in models, particularly around data permission rights.

Today, to rent a car, you’d go to a traditional car rental agency like Hertz or Avis, or belong to an “alternative” rental organization like Zipcar or car2go. In the future, you will be able to walk up to a car and lease it on the spot with a micro-loan approving you to drive it for, say, an afternoon. This micro-loan will have insurance contracts attached to the chain, along with a codified history of the car’s drivers, events and maintenance. As you drive through the city and interact with toll roads and parking spaces, all of this information will be automatically recorded and monitored on the blockchain. When you leave the car and lock it, the lease is complete and auditable on the chain.
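
To make the idea concrete, here is a minimal sketch in Python of how such a lease could be recorded as a hash-linked chain of events. This is an illustration of the concept only, not FICO’s or any vendor’s implementation; the event names, fields and SHA-256 linking are assumptions made for the example.

```python
import hashlib
import json
import time

def add_event(chain, event_type, payload):
    """Append an event to the vehicle's 'time chain', linking it to the previous entry by hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {
        "timestamp": time.time(),
        "event_type": event_type,   # e.g., "lease_start", "toll", "parking", "lease_end"
        "payload": payload,         # hypothetical fields: encrypted IDs, amounts, location codes
        "prev_hash": prev_hash,
    }
    # The hash covers the whole record, so any later tampering breaks the chain.
    record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)
    return record

def verify_chain(chain):
    """Recompute every hash and check each link; returns True if the chain is intact."""
    for i, record in enumerate(chain):
        body = {k: v for k, v in record.items() if k != "hash"}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != record["hash"]:
            return False
        if i > 0 and record["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_event(chain, "lease_start", {"driver_id": "enc:ab12", "term_hours": 4})
add_event(chain, "toll", {"gantry": "I-95N-12", "amount": 2.50})
add_event(chain, "lease_end", {"odometer_km": 48})
print(verify_chain(chain))  # True: an auditable record of the afternoon's lease
```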

Blockchain comes to the drivetrain

This concept of auditable “time chains” enabled by blockchain can be applied to, for example, tracking faulty components from their source to every affected vehicle. Such an approach could address safety flaws far more effectively than today’s inefficient mass recalls.

Beyond “time chains,” data event chains will create new opportunities for graph analytics and novel AI algorithms that consume relationship data at scale. In 2018, we will see new analytics around relationship epochs, which have fascinating implications for the automotive industry. Here’s a quick definition: Think of your car’s daily interactions with drivers, other cars, fuel stations and service establishments. Most days are relatively routine, but sometimes chains of events occur that carry new meaning, perhaps indicating impaired or poor driving, collisions and repairs, preventive service opportunities and more. Clearly, understanding these webs of relationships among events will add more insight than analyzing individual components alone. These webs, or relationship epochs, will be scored based on shifting chains and graphs, delivering tremendous predictive power.
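
As a rough illustration of what relationship-epoch analytics might look like (my own sketch rather than FICO’s scoring technology, and assuming networkx is installed), the snippet below builds a graph of one day’s interaction events and computes a toy anomaly score for how far that day’s web of relationships strays from the routine; the events and the scoring rule are invented for the example.

```python
import networkx as nx

def epoch_graph(events):
    """Build a graph for one 'epoch' (e.g., a day) from (entity_a, entity_b) interaction events."""
    g = nx.Graph()
    g.add_edges_from(events)
    return g

def epoch_score(graph, baseline_entities):
    """Toy anomaly score: share of interactions involving entities never seen in the baseline."""
    if graph.number_of_edges() == 0:
        return 0.0
    novel = [(a, b) for a, b in graph.edges()
             if a not in baseline_entities or b not in baseline_entities]
    return len(novel) / graph.number_of_edges()

# A routine day versus a day with collision- and repair-related interactions.
routine = [("car", "driver_1"), ("car", "fuel_station_7"), ("car", "toll_A1")]
unusual = [("car", "driver_1"), ("car", "other_car_99"), ("car", "body_shop_3"), ("car", "insurer")]

baseline = {n for a, b in routine for n in (a, b)}
print(epoch_score(epoch_graph(routine), baseline))   # 0.0: nothing new
print(epoch_score(epoch_graph(unusual), baseline))   # 0.75: mostly novel relationships
```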

For example, relationship epochs and scoring techniques could be applied to:

  • Used cars: Purchasing a used car is fraught with uncertainty. Blockchain and AI can be used to create relationship epochs that inform buyers of the car’s “past life,” i.e., whether the car has been in accidents, how roughly it was driven, whether the parts in the vehicle are authentic and correct, and much more.
  • Defensive driving: As roads become increasingly populated with driverless cars, it becomes critical to know how much to trust the car in front of you. What is its history on the blockchain? In the moment, should you back up or change lanes? Cars’ sensors add another dimension of information, helping human drivers to make better decisions.
  • Insurance risk assessment: As the roads we drive on essentially become a network of trust, individual drivers’ identities may not be known, but measures of trustworthiness and normal behavior can still be associated with the drivers who make up a population. In this way, blockchain technology can help insurance companies assess and score drivers’ safety habits and aggregate risk levels.

As for me, I am excited to hear about how my future used Porsche may be enhanced by blockchain technology. Since I’ve just sold my much loved, and much maligned, classic Porsche 996, a new track car may be in my future sooner than I think. Follow me on Twitter @ScottZoldi.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

April 11, 2018  2:38 PM

The need for network assurance in a world of connected healthcare

Eileen Haggerty
Connected Health, Internet of Things, iot, IoT applications, IoT devices, IOT Network, iot security, Network performance, visibility

In an environment of ever tighter budgets and resources, where providers are increasingly expected to do more with less, advances in technology are proving to be a boon in the delivery of healthcare services.

Secure, real-time access to a patient’s electronic medical records (EMR) and test results is now widely taken for granted, with the ability to access and input data anywhere, from virtually any device, delivering efficiency savings for healthcare providers. At the same time, ongoing development of innovative IoT applications continues to open up new ways of monitoring patients’ health and the efficacy of their treatment.

The Memorial Sloan Kettering Cancer Center in New York, for example, is working with cloud research firm Medidata Solutions, Inc. to test the use of activity trackers in monitoring the reaction of patients with multiple myeloma to their treatment, enabling improvements to be made over time.

Elsewhere, pharmaceutical giant Novartis AG is undertaking research with Qualcomm Technologies, Inc. and Propeller Health to develop connected inhalers for the treatment of chronic obstructive pulmonary disease. Propeller’s sensor connects to a dedicated mobile app to passively record and transmit usage data, allowing doctors to keep accurate track of whether patients are sticking to their treatment plan.

The application of technologies such as these will only prove beneficial in allowing the secure delivery of more efficient, patient-centric healthcare. However, the complexity they will introduce to a healthcare provider’s network is likely to pose something of a challenge.

Vital visibility

As a result of the transformative effect of digital technology, healthcare staff generally expect to be able to access information almost instantaneously. They are likely to blame the EMR, therefore, when they experience delays in accessing patient records, or curse the email system when they don’t get the message approving a patient’s insurance. In many cases, however, the source of the fault is likely to be a supporting service rather than the EMR or the email system. But without full visibility into the network, this source would be hard to diagnose and remedy.

The fault may well lie with a configuration issue, a bandwidth problem or a poorly designed application. But consider that hospitals and health systems came under attack from cybercriminals at a rate of almost one a day in 2017: any device connected to a network, from iPads to MRI machines to smart beds, can be a target, potentially putting the lives of patients at risk.

One in five healthcare organizations has more than 5,000 devices connected to its network, each one of which represents an endpoint that could be exploited for nefarious purposes. It is easy to understand how the need for visibility becomes significantly more serious.

The value of service assurance can therefore, quite literally, be a matter of life and death. It is vital that hospitals have visibility into their entire networks in order to mitigate any risk before it becomes a problem.

Reliance on the internet of things

Healthcare providers depend on high availability as they adopt new digital services. The increasing use of IoT technology, for example, means that whenever a delay or an outage in network performance occurs, the delivery of patient care can quickly grind to a halt which, in some instances, might prove harmful to patients.

Our reliance on the hyper-connected IoT depends on more than just the application or service currently being used by the healthcare staff, however. The proper function of patient-facing websites and wearable heart monitors alike is dependent on a range of factors, including physical and virtualized infrastructure, hybrid cloud, wired and wireless connectivity, multiple vendors and supporting networks, each of which requires high availability.

IoT has certainly improved the delivery of high-quality healthcare, but faced with the constant and expanding threat from malicious outsiders and with the ongoing development of ever more innovative technologies, the need for complete visibility and service assurance is only set to grow.

This article was co-written by Michael Segal, area vice president of strategic marketing at NETSCOUT.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


April 11, 2018  12:01 PM

IoT-enabled security cameras: How to avoid the headaches

George Bentinck
Internet of Things, iot, IoT applications, IoT data, IoT sensors, IP, IT, Scalability, Security Cameras, Sensor, Sensors

Security cameras are meant to hide in plain sight. They are built to be small and unobtrusive. But this doesn’t mean that a security camera can’t make a noticeable impact on your business — in both positive and negative ways. Today, an IP-enabled security camera isn’t just for security. Instead, it is a camera that can monitor and relay useful business information while also serving its physical security purpose. It is also another endpoint on the growing list of devices that live on the network and are managed by IT.

The security camera industry has come a long way in the past few years. From analog to digital, security cameras have experienced the same transformation as many other workplace technologies that have become IP-enabled. Just like the migration of telephony from traditional private branch exchanges to today’s VoIP systems, cameras are transitioning from a technology traditionally managed by operations to one managed by IT.

IP-enabled security cameras can spell trouble for the network if not managed properly, and the technology you buy to keep your organization safe could end up doing more harm than good. In order for IT to avoid the pitfalls, businesses need to:

  • Make sure IT has a seat at the table during the buying process. Cameras and their associated infrastructure are now connected devices on the network with a myriad of components and applications. The traditional decision-makers in facilities, loss prevention or even corporate security are not experts in making decisions about IT infrastructure. When IT is left out of the loop, long-term operating expenses and platform capabilities aren’t scoped properly, and camera functionality can suffer. This can also lead to problems during deployment and troubleshooting.
  • Not underestimate the threats introduced by IP camera technologies. Again, a camera becomes an endpoint when added to the network and, because of this, needs to be secured. Most network video recorders and cameras either don’t offer encryption, don’t offer encryption for all components of the system or make it very hard to deploy encryption. Ensure the system you choose offers the right level of encryption for your business. Additionally, consider the ease with which you can update firmware and drivers. Many of today’s camera options use outdated security protocols and are poorly managed because they are cumbersome to keep up to date, making them a target for cyberattacks not unlike those that derailed businesses across Europe when WannaCry and Petya hit in 2017. The blame for network breaches tends to fall on the IT team, regardless of whether the cameras were supposed to be under its control.
  • Weigh costs by looking at the full picture, including long-term operations. As always, price is a major factor when making a buying decision, but take the time to look beyond the sticker price and consider the long-term operational costs. Cost isn’t just about the cameras; there are the servers, software packages (like VMS or additional analytics), recabling costs and more to consider. Inexpensive, poorly built cameras may fail after a short period of time, provide poor user experiences or require a lot of time-consuming manual configuration. And as mentioned, cameras that leave businesses vulnerable to a hack could cost the business exorbitant amounts of money. If a camera lacks proper support, customers could be left high and dry if, and when, the camera fails. Choose a system that offers an adequate level of support.
  • Consider scalability and long-term effectiveness. Camera technology is rapidly evolving. How will today’s cameras meet the needs of tomorrow if camera features are set in stone the day they are purchased? The ability to quickly and easily deploy new features is essential. Scalability becomes a major issue with security systems as well. Using cloud storage to solve scaling issues leads to unrealistic bandwidth requirements, and sky-high costs when used in an enterprise deployment. Choose a system that can grow with your business, both in terms of features and in size.

While the IT team may find itself largely responsible for the security and performance of security cameras, there is also an immense opportunity to bring enormous value to the business with IP-based cameras. Cameras are visual sensors by nature, so it is only natural that, by adding analytics capabilities, a business can extract a great deal of useful trend information once the data is anonymized and consolidated. Understanding how traffic flows through a retail store is one example, but we’ve had customers who have used cameras to do much more — whether a city using motion heat maps to see which equipment in the city gym is used most frequently, allowing for better decisions about when to replace equipment, or a farm in Australia that was able to monitor and deduce useful grazing and behavior patterns of its sheep.
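
As a simple, hypothetical sketch of that kind of anonymized trend analysis (not any particular vendor’s analytics pipeline), the snippet below consolidates per-camera person counts into an hourly foot-traffic profile without ever handling identities or images:

```python
from collections import defaultdict

def hourly_traffic(detections):
    """Aggregate anonymized (camera_id, hour, person_count) records into an hourly site-wide profile."""
    profile = defaultdict(int)
    for camera_id, hour, person_count in detections:
        profile[hour] += person_count   # only counts are kept; no images or identities
    return dict(sorted(profile.items()))

# Hypothetical counts reported by two cameras over a morning.
detections = [
    ("cam_entrance", 9, 14), ("cam_aisle_3", 9, 6),
    ("cam_entrance", 10, 31), ("cam_aisle_3", 10, 12),
    ("cam_entrance", 11, 40), ("cam_aisle_3", 11, 22),
]
print(hourly_traffic(detections))  # {9: 20, 10: 43, 11: 62}: peak hours without any personal data
```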

The security camera market is evolving at a rapid rate. IT needs to be involved in the discussion, and is in a prime position to help bring new levels of visibility and analytics to the business.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


April 10, 2018  3:16 PM

IoT in e-commerce logistics: Where’s the money? Part 3

Subhash Chowdary
Digital transformation, E-commerce, Ecommerce, Enterprise IoT, Industrial IoT, Internet of Things, iot, Logistics, Supply chain, Supply Chain Management

In my previous post, I identified the top three “where’s the money” opportunities for using IoT in logistics: e-commerce, the food chain and third-party information logistics. I addressed opportunities in agri-food supply chains that benefit a large global population. This post addresses the opportunities for IoT in e-commerce.

Global growth markets

Retail e-commerce global sales are estimated to reach $4.5 trillion by 2021. We just witnessed another record-breaking season of e-commerce with Amazon making headlines in the U.S. and Alibaba in China. Alibaba set a new single-day sales record of $25.3 billion, a 40% increase from last year.

If you are an enterprise wondering what falls within the scope of e-commerce, it may be easier to think about what doesn’t. Nothing prevents e-commerce technologies and concepts from being extended to B2B commerce.

What’s IoT got to do with e-commerce?

Ninety percent of the $25 billion in order value from Alibaba’s single-day sales record was placed from mobile devices. Smartphones play an essential role in the success of e-commerce. As IoT devices, they have transformed how people search for and buy products from anywhere in the world at any time. The ease of ordering a product is setting new expectations for product delivery. Delivery time expectations are reaching “read my mind” speed, or, with IoT devices like Alexa, “are you listening?” While the ordering process moves at digital speed, the physical delivery of goods is limited by human-driven logistics processes. IoT devices that don’t require human interaction will soon digitize and transform the supply chain. Digitizing the collection and flow of data without human intervention using IoT will be critical to improving the speed of delivery logistics.

Digital transformation

Supporting $25 billion in sales in a day is an example of digital technology, people and processes working without breaking down. Alibaba’s digital infrastructure at its peak supported 325,000 orders per second from 225 countries/regions. The next challenge is order fulfillment logistics: ensuring deliveries are made so customers come back to order more next time. To build technologies that will sustain your business into the future, you have to think differently from current practices.

Is delivery logistics a supply chain function or customer service? Delivery of the goods ordered is taken for granted. But customer satisfaction is now a self-service digital experience of knowing, and seeing, that order fulfillment is meeting the customer’s expectations. Intervention on exceptions, if any, needs to be proactive, preferably before the customer becomes aware of an exception that could result in cancellations, returns, support costs and other avoidable costs. To be proactive, actionable information (not raw data) and intelligence are critical.

Digital transformation does not mean waiting for robots to replace humans. IoT provides a clean, new source of accurate, real-time data and events that can generate intelligence to expose the inefficiencies in your existing systems, processes and KPIs/metrics used to make decisions. If you are considering blockchain for a more trusted source of data, IoT-based data and events offer a clean feed for recording on the blockchain that is not corrupted by existing systems.

The ‘big squeeze’ is coming

Investors have poured billions into e-commerce. Now, investors want a return on their investments. As e-commerce growth stabilizes, fulfillment cost reduction will be targeted. The squeeze will be felt by the ecosystem of suppliers and supply chain logistics service providers, which will be pushed to become more cost efficient and competitive. The pressure to lower total costs across the entire e-commerce business will force changes. This is the opportunity for innovative technologies to break through legacy thinking and practices. Every aspect of the supply chain management process, including procurement, first mile, last mile and everything in between, is an opportunity for improvement.

Live case study

India is a live case study of e-commerce in evolution, with no dominant market leader. A billion potential customers and a growing economy with an appetite for smartphones make it an obvious target.

E-commerce market grab

Amazon, Alibaba, Walmart and investors such as Softbank have their sights set on Flipkart and India. Billions of dollars are being invested. The Indian government has its own initiatives to support digitization: “Make in India” for manufacturing; regulatory changes such as the Goods and Services Tax Bill; e-payments; e-waybills; modernization of ports, road and rail transportation; and cold chains are just a few of the many initiatives in progress. The government’s recognition of supply chain logistics as key to the success of the Indian economy is a perfect opportunity for IoT to make a big difference. What works in the U.S. or China does not directly transfer to India; ethical digital transformation of supply chains with Indian “jugaad” can make a big difference to millions of people.

Similar thinking is taking place on the African continent. What works in India will be easier to transfer to Africa.

Opportunities for using IoT in e-commerce logistics

IoT bridges information technology with supply chain operations to make supply chains faster, better and cheaper. Digitization of data enables automation to scale operations. A few operational areas where IoT can make a big difference immediately, in addition to the obvious last mile, are:

  • Ocean ports: Inbound/outbound container/reefer logistics and operations
  • Airports: Inbound/outbound cargo logistics and operations
  • Intermodal transportation: Road-rail-road-port/distribution centers
  • Inland container depots: Container, chassis, truck, yard logistics
  • Central customs and excise: Compliance, verification, monitoring
  • Cold chain: End-to-end management
  • Food chain: Agriculture lifecycle logistics
  • Pharma chain: End-to-end management
  • Security: Product integrity, quality, leakage/diversion
  • Procurement: Multi-tiered supply chain network management

Can you see the opportunities that Amazon, Alibaba, Walmart and Softbank see in e-commerce? Do you think they will be using inefficient processes and legacy technologies? Doing nothing or cutting costs are not viable options for survival. Working smarter using IoT-based technology with intelligence is an option worth considering. Smart and ethical use of IoT-based technologies can differentiate, mitigate and overcome competitive barriers. Eliminate your implementation risk with technologies that deliver the measurable value you want on first use.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


April 10, 2018  10:35 AM

Behavioral eManipulation: Attacking the care delivery workflow to deliver harm

Ted Harrington
Authentication, Compliance, Connected Health, Internet of Things, iot, IoT devices, iot security, medical devices, Workflow

Here, we introduce and analyze the novel security concept of Behavioral eManipulation, an attack methodology in which an attacker uses technical exploits to influence human behavior in order to arrive at an adverse outcome.

Technical exploit + human behavior = Adverse outcome

Behavioral eManipulation entails tricking a competent professional into making an error that he otherwise might not make. This concept is relevant to scenarios where all of the following conditions are present:

  • The workflow relies on technology that could be exploited,
  • The workflow relies on humans, whose behavior could be influenced, and
  • There is potential for a tangible or intangible adverse outcome resulting from modified data, stolen assets, denial of service, loss of system availability, etc.

A high-profile example of Behavioral eManipulation was Stuxnet, in which Iranian nuclear centrifuges were attacked while the alarms that would have alerted technicians to mechanical issues were simultaneously silenced, preventing the necessary human intervention. This denial of human interaction resulted in severe damage to the centrifuges, which would take years to repair.

Other adverse outcomes might include scenarios such as revenue prevention, profit erosion, negative customer/user experience, customer attrition/exodus, stock price decline, brand damage or physical harm. For illustrative purposes, we narrow our scope to focus on the last of those outcomes by analyzing the specific workflow of healthcare delivery. We do this for a few reasons: first, it provides compelling examples for illustration; second, it involves the most valuable possible asset (human safety); and third, it is an industry that is not adequately preparing itself to address this concept. In the context of the healthcare delivery workflow, the formula for Behavioral eManipulation can be summarized as follows:

Technical exploit + human behavior = Patient harm

Research context
The foundation for this research is the key findings of “Hacking Hospitals,” a 24-month study of hospitals across the United States that published the exploitable technical vulnerabilities inherent in medical devices and their supporting networks, the business shortcomings within the healthcare industry that lead to the introduction of those vulnerabilities, and a blueprint for how to solve these issues. We focused the following body of research on the concept of Behavioral eManipulation to extrapolate the ways in which exploitable security vulnerabilities manifest themselves in the real-world delivery of care. For this phase of the research, we consulted Konstantinos M. Triantafillou, MD, a practicing orthopaedic trauma surgeon certified by the American Board of Orthopaedic Surgery, who serves patients at University of Tennessee Medical Center, a Level I trauma center. Over the 14 months following the publication of “Hacking Hospitals,” through interviews and exploit mapping exercises, we have used Dr. Triantafillou’s perspective as an actively practicing trauma surgeon to help separate unlikely theoretical outcomes from the high-risk areas on which the security community should focus.

Assets
Core to this paper and the concept of Behavioral eManipulation is the assumption that the most important asset the healthcare ecosystem must protect is patient safety. The key finding of the seminal “Hacking Hospitals” is that the healthcare industry is focused primarily on protecting patient data, with inadequate protection of patient safety; the concept of Behavioral eManipulation reinforces this assumption by demonstrating how this misalignment of the security mission in healthcare could undermine the Hippocratic Oath, which most healthcare professionals have sworn to uphold. The oath holds that, first and foremost, medical professionals shall do no harm. Its integrity can be undermined through Behavioral eManipulation.

Medical devices: Active versus passive
Much of the healthcare-oriented security research to date has focused on active medical devices, which are those that actively do something to the patient, such as delivering medication or modifying the heartbeat. Some groundbreaking findings related to active medical devices have been published in the areas of insulin pumps, drug infusion pumps and pacemakers. It’s fairly straightforward to understand how manipulating active devices might result in harm to a patient. However, prior to “Hacking Hospitals,” minimal attention was paid to passive medical devices, which do not do something to the patient but instead react to the patient. Examples of passive medical devices include patient monitors, imaging equipment and lab equipment. While the direct correlation between device input and patient outcome makes it easy to understand how manipulating active medical devices could harm a patient, what is less well understood is how similar patient safety outcomes could arise in the context of passive medical devices. Behavioral eManipulation shows how this is possible: the attacker uses such devices to attack the workflow, and the workflow, rather than the device directly, delivers the adverse outcome.

Attack scenarios

Methodologies
Behavioral eManipulation is best understood in the context of conventional cyberattack methods:

  • Technical exploits attack systems. This is where an attacker finds and exploits a security vulnerability (or vulnerabilities) in an application or network. A well-known example of this is when an American teenager hacked the International Space Station’s life support systems, an attack that cost $41,000 to repair.
  • Social engineering attacks people. This is where an attacker takes advantage of human susceptibility to exploit the human. Perhaps the most famous example of all time is the original Trojan Horse, where a seemingly defeated Greek army left a trophy for the seemingly triumphant Trojans, who then unwittingly brought the trophy inside the walls only to later find it filled with invading Greeks.
  • Behavioral eManipulation attacks workflows. This is where an attacker exploits a device or system that an unwitting human victim relies upon to make decisions, and then the workflow delivers the adverse outcome as a result of reliance on that bad information. Examples of this include the variety of scenarios described below in this section.

Behavioral eManipulation has a few discrete phases to the attack flow:

  1. Understand the workflow
  2. Manipulate the data
  3. Rely on humans to unwittingly deliver the adverse outcome

In the context of healthcare, phase 1 is actually a mitigating factor: it takes aspiring physicians many years of medical training to learn the many care delivery workflows. While that limits the pool of attackers who can use this method efficiently, it does not prevent a determined attacker from acquiring the knowledge. Phase 2 is where most security research to date has focused. Setting aside accidental exploits, phase 3 is only possible once phases 1 and 2 have been completed.

Diagnostic attack scenarios: Overview
Most medical errors are communication errors. Behavioral eManipulation is essentially an amplification of communication errors, thereby compounding medical errors. In a healthcare context, Behavioral eManipulation manifests itself most dramatically in diagnostic workflows, which are the parts of the care delivery workflow that require a physician, nurse or other medical professional to interpret data in order to diagnose a patient’s health issue, for purposes of prescribing a treatment plan. Such diagnostic scenarios largely fall into one of two categories:

  1. Workflows that have failsafes built in, also known as “system of checks,” or
  2. Workflows that do not have failsafes and rely heavily on the physician to interpret the data.

Each of these diagnostic attack scenarios has different implications to patient safety if the information upon which the physician bases decisions is manipulated.

Diagnostic attack scenarios: Systems of checks
Diagnostic attack scenarios with systems of safety checks are typically those that involve biographical data, systems or other information known to cause harm if the workflow is flawed. An ever-growing body of statistics demonstrates that errors related to blood and allergies can have harmful or even fatal implications; for that reason, healthcare providers have numerous safety checks in place to ensure such errors are minimized. If, in a typical care workflow, the physician notes an anomaly, there are procedures in place that would trigger certain actions1 by the physician to verify the anomaly before prescribing a potentially harmful procedure or dosage. Through Behavioral eManipulation, an attacker would trigger anomalies that require physicians to re-perform diagnostics, order new tests and/or consult additional medical professionals. Time is critical in the settings where such diagnostics take place (e.g., acute care settings, operating rooms), and each of these additional steps consumes it. As a result, in some cases the delay impedes care, while in other cases the physician may lose trust in the diagnostics system – assuming it to be faulty – and revert to other diagnostic methods that circumvent the established system of checks. All of these conditions dramatically increase the likelihood of medical error resulting from such manipulation of the care delivery workflow.

Proper workflow

  1. Patient produces data
  2. Physician reviews data
  3. Physician consults duplicative sources for data (failsafe)
  4. Physician makes diagnosis
  5. Physician administers treatment plan

Manipulated workflow

  1. Patient produces data
  2. Physician reviews data
  3. Attacker exploits security vulnerability*
  4. Attacker manipulates duplicative data sources
  5. Physician consults duplicative data sources (failsafe)
  6. Physician notes anomalies
  7. Physician measures patient again
  8. Physician reviews new data
  9. Physician consults/compares with duplicative sources for data (failsafe)
  10. Physician notes anomalies
  11. Physician measures patient again
  12. Physician reviews new data
  13. Physician proceeds on bad data, or loses trust in system
  14. Physician makes diagnosis
  15. Physician administers invalid treatment plan

*Example technical exploit: As outlined in “Hacking Hospitals,” we discovered a privilege escalation security vulnerability in lobby check-in kiosks that allowed an attacker to take the kiosk out of kiosk mode, gain administrative control, pivot to other systems on the same subnet and manipulate the data accessible by those other systems. One such system was a bloodwork system.

Diagnostic attack scenarios: Data interpretation
By contrast, some attack scenarios have no safety checks and rely solely on the physician’s experience and interpretation of data to arrive at a diagnosis and resulting treatment plan. Examples include monitoring equipment and lab results. The physician is unlikely to question the validity of the information coming off such systems unless the readouts fall unusually far outside expected parameters. Because there are no additional failsafes in place, a physician acting on manipulated information in the care delivery workflow would likely unwittingly deliver harm.

Proper workflow

  1. Patient produces data
  2. Physician reviews data
  3. Physician makes diagnosis
  4. Physician administers treatment plan

Manipulated workflow

  1. Patient produces data
  2. Attacker exploits security vulnerability*
  3. Physician reviews data
  4. Physician makes diagnosis
  5. Physician administers invalid treatment plan

*Example technical exploit: As outlined in “Hacking Hospitals,” we discovered an authentication bypass vulnerability that, when combined with a remote code execution exploit, allowed a remote attacker to take administrative control over a patient bedside monitor. This enabled the attacker to manipulate the data feeding the screen, thereby triggering invalid physician response or preventing valid and necessary physician response.

Remedies

The safety implications introduced by the attack concept of Behavioral eManipulation are significant; they usher in a new era in how the security community and the device manufacturing community should think about security vulnerabilities. However, many of the recommended solutions to the situation are either long-proven approaches or new takes on long-proven approaches.

Return on investment: Training physicians versus securing devices
Many security professionals advocate training physicians as a necessary solution to the many security vulnerabilities that plague healthcare. However, training physicians is unlikely to lead to a considerable improvement in security posture. Physician behavior is a byproduct of decades of medical training that prioritizes delivering care in the most time-efficient manner. Most importantly, physicians are trained to recognize patterns; only when they notice irregularities do they investigate further for possible intervention. Compound this with the fact that medical professionals already invest extreme hours per week in the core medical training required to become a licensed and effective physician, and there is simply insufficient time to teach a new behavior that aligns with neither pattern recognition nor the core medical foundation of their education. Given the confluence of these reasons, it is our position that the most effective model for addressing the workflow-related security implications introduced in this article is to focus not on physician training, but on securing the devices themselves. By eliminating the workflow problems at the source of the data manipulation, the entire concept of Behavioral eManipulation is thwarted. What follows are some effective approaches for securing medical devices.
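
Before turning to those approaches, here is a deliberately simplified sketch, offered as our own illustration rather than a prescribed control, of the underlying principle: if a device signs its readings at the source, downstream systems in the care workflow can detect manipulated data rather than act on it. The shared-key handling and field names are assumptions; a real device would rely on hardware-backed keys and vetted protocols.

```python
import hmac
import hashlib
import json

DEVICE_KEY = b"provisioned-per-device-secret"   # hypothetical; in practice stored in secure hardware

def sign_reading(reading: dict) -> dict:
    """Device side: attach an HMAC tag computed over the canonical reading."""
    body = json.dumps(reading, sort_keys=True).encode()
    signed = dict(reading)
    signed["tag"] = hmac.new(DEVICE_KEY, body, hashlib.sha256).hexdigest()
    return signed

def verify_reading(reading: dict) -> bool:
    """Monitor/EMR side: recompute the tag and reject anything altered in transit."""
    tag = reading.get("tag", "")
    body = json.dumps({k: v for k, v in reading.items() if k != "tag"}, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

signed = sign_reading({"patient_bed": "12B", "heart_rate": 72})
print(verify_reading(signed))        # True: untampered
signed["heart_rate"] = 40            # attacker manipulates the displayed value
print(verify_reading(signed))        # False: the workflow can flag it instead of trusting it
```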

Think like an attacker thinks
First and foremost, it should go without saying that if devices can be manipulated to cause harm, they should first be investigated for how they will be attacked. This is commonly known as a security assessment or, in some vernacular, as penetration testing. However, these approaches are significantly different, do not share the same goal and are not equally effective at remediating security risk. Vendors of connected medical devices – especially of passive medical devices, which until now have not historically been considered relevant to patient safety in a security context – should not be satisfied with cursory approaches to security like automated scanning, commodity penetration testing, baseline compliance and deferring risk to the hospital. Instead, vendors should:

  • Apply the same level of rigor to security assessment as a sophisticated adversary would;
  • Engage in white box security assessments, not just black or gray box;
  • Investigate for not just known and common vulnerabilities, but also custom and zero-day vulnerabilities; and
  • Work with proven, research-focused, services-only consulting firms.

Security models: Threat model + trust model
Over years of executing security assessments and performing security research, we’ve found that many companies aren’t familiar with the concepts of both threat modeling and trust modeling, and of those that are, very few have implemented both. This inherently undermines the security mission, as both are required to deploy a successful security program. A threat model is an exercise through which an organization identifies what it wants to protect (i.e., its assets), whom it is trying to defend against (i.e., its adversaries) and the areas against which attacks will be launched (i.e., its attack surfaces). By contrast, a trust model is an exercise through which an organization defines who and what it trusts, why it trusts those people and systems, and how access based on that trust is provisioned, revoked and validated for authorization and authentication. All device vendors should take the steps below (a minimal sketch of the models follows the list):

  • Define and implement their threat model
  • Define and implement their trust model
  • Update both frequently
  • Work with trusted third-party security experts to investigate how well the actual design and implementation of the systems adheres to both security models
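
As a minimal sketch of what defining these models might look like in practice (an illustration of the exercise, not a template the original research prescribes), a vendor could start by capturing both models as structured data that is reviewed and updated with each release:

```python
from dataclasses import dataclass, field

@dataclass
class ThreatModel:
    assets: list = field(default_factory=list)          # what we protect
    adversaries: list = field(default_factory=list)     # who we defend against
    attack_surfaces: list = field(default_factory=list) # where attacks will land

@dataclass
class TrustModel:
    trusted_parties: dict = field(default_factory=dict) # who/what is trusted, and why
    access_rules: list = field(default_factory=list)    # how trust is provisioned, revoked, validated

# Hypothetical entries for a passive patient monitor.
threat = ThreatModel(
    assets=["patient safety", "telemetry integrity", "PHI"],
    adversaries=["remote attacker on the hospital subnet", "malicious insider"],
    attack_surfaces=["web admin console", "data feed to the EMR", "firmware update channel"],
)
trust = TrustModel(
    trusted_parties={"central monitoring station": "mutually authenticated connection"},
    access_rules=["revoke device credentials on decommission", "re-validate trust on firmware update"],
)
print(len(threat.attack_surfaces), list(trust.trusted_parties))  # 3 ['central monitoring station']
```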

Recognize role of compliance
The Health Insurance Portability and Accountability Act (HIPAA) governs much of what the healthcare industry does related to security. However, many of the issues inherent with Behavioral eManipulation are actually out of HIPAA’s scope. When it comes to medical devices in particular, the Food and Drug Administration (FDA) has published guidelines that outline security approaches relevant to medical device security. However, there is common and widespread misunderstanding about these guidelines and whether they require new rounds of FDA approval – and these approvals typically take many years. In some cases, these guidelines are not enforceable (an issue that this author investigated in a separate analysis).

In the case of either HIPAA or the FDA, it is important to note that compliance typically does an adequate job of establishing the baseline requirements for the foundation of a security program; however, compliance should not be seen as the entire security program unto itself. Effective security organizations recognize the role of compliance as important to satisfying stakeholder needs, but will go beyond the outlined minimum if delivering a robust security posture is important. To be effective in this domain, organizations should:

  • Define what a successful outcome of the security model looks like,
  • Define the delta between compliance and the desired outcome, and
  • Mobilize security investments accordingly to address the delta between regulatory compliance and security mission.

Summary

Behavioral eManipulation is the combination of technical exploit and human behavior to arrive at an adverse outcome. It represents an evolution in long-standing attack models, and should be immediately incorporated into defense schema across all organizations in all industries, with the baseline condition that those organizations have something worth protecting. For the purposes of this article, we narrowed the scope of the analysis to focus on patient harm related to exploited passive medical devices, but the principles of attacking workflows are relevant across virtually every industry, as the Stuxnet example provided earlier illustrates. We advocate for suppliers of applications, devices and systems to work with security experts to identify and remediate any workflow-related security risks that may not have historically been considered in the threat models and trust models for their systems. This research will first be presented at RSA Conference USA 2018.

Konstantinos M. Triantafillou, M.D. is a practicing orthopaedic trauma surgeon, certified by the American Board of Orthopaedic Surgery. He serves patients at University of Tennessee Medical Center, a Level I trauma center. Dr. Triantafillou is active in advancing the field of orthopaedic surgery through research, and pursues his interest in medical security due to concerns, from a practitioner’s perspective, about the vulnerability of medical devices deployed across surgical, acute care and in-patient settings. He is widely published in medical journals and has presented medical research at conferences such as the American Academy of Orthopaedic Surgeons. He pursued his BA, MD and residency at Georgetown University, and his fellowship in orthopaedic traumatology at the renowned Campbell Clinic.

1 For the purposes of this article, we intentionally leave details of the care workflows opaque, as it would be irresponsible to equip malicious entities with an attack blueprint on well-established medical workflows that are unlikely to change quickly. Such details can be disclosed under signed agreements – contact authors for more information.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


April 9, 2018  2:02 PM

Computer vision holds the key to disrupting TV advertising space

Ran Baror
advertising, ai, Artificial intelligence, audience, Cookies, IoT devices, IoT sensors, Metrics, Smart TV

For the last several years, the television industry has experienced a significant decline in advertising spend as marketing budgets have shifted toward online advertising. This decline is fueled by several factors, including slowing subscriber growth as streaming content over over-the-top devices continues to become more popular, and the falling cost-per-thousand rates advertisers are willing to pay for TV ads.

Online advertising currently offers advertisers several main benefits over TV advertising:

  1. User profiles — By using cookies, online advertisers are able to gather significant data on the user and offer the most relevant ads.
  2. Real-time targeting — Based on the profiles created, online ads are delivered immediately to the relevant audience to maximize conversion. For example, if the user is in the market for a new car, relevant car ads will be shown.
  3. Transparency — Online tools offer advertisers clear reporting of each ad’s exposure, which allows for informed ROI decision-making.

As a result of this shift, TV advertising must go through an evolution of its own. The industry needs to adapt so cable providers and networks can offer advertisers a better and clearer return on their investment to win back advertising budgets.

Historically, cable and satellite providers have relied on ratings companies to learn about the consumption habits and preferences of their customers, using a sample of homes that represents the market. This audience measurement data helps determine which ads to place at which times and what they cost based on exposure. While this data has proven informative, it is still subpar compared to online metrics. First and foremost, ratings are a sample: they provide an indication of viewership, not actual numbers. They are also not real time. Moreover, ratings do not capture accurate viewer statistics, such as the viewer’s level of attention to the content, how many people were in the room when the content was displayed or their demographics.

The TV industry should move toward personalized, targeted real-time advertising and abandon the traditional way of gathering viewer analytics.

Computer vision and AI-powered personalized advertising

Embedded computer vision technologies can accurately provide viewer analytics and ratings to cable and satellite providers, TV networks and advertisers.

These technologies have the potential to offer the following insights:

  • Real-time audience measurement — Detecting the presence of a user, or group of users, for reporting real-time viewership and ad exposure.
  • Viewer recognition — Recognizing the face of returning viewers to enable customization based on past behavior and predefined preferences.
  • Viewer demographics — Recognizing viewer age and gender for demographic segmentation and content personalization.
  • Viewer attentiveness — Tracking the user’s head pose direction to monitor the viewer’s attention on the displayed content.

For advertisers, computer vision enables the most accurate reported viewership numbers. With computer vision, the user’s head pose is detected, indicating not only presence but also attentiveness. This is a game changer for advertisers looking to get the most out of their ads, as they can easily target based on age, gender and viewing history, and can pay for actual exposure. This is a transition from passive impressions to actual viewership, including how many people viewed an ad, their age and gender, and their viewing histories. It would also provide more accurate information about an ad’s cost per impression, allowing advertisers to increase the relevancy and cost-efficiency of their targeting.
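
As a rough sketch of the audience-measurement building block (a generic OpenCV example, not the proprietary embedded algorithms described here), the snippet below counts forward-facing faces in a frame from a TV-mounted sensor; estimating attentiveness from head pose would require an additional model and is omitted.

```python
import cv2  # assumes opencv-python is installed

# Stock frontal-face detector shipped with OpenCV; a proxy for "viewers facing the screen".
detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def count_viewers(frame):
    """Return the number of forward-facing faces in one camera frame; no images leave the device."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(40, 40))
    return len(faces)

capture = cv2.VideoCapture(0)   # hypothetical TV-mounted sensor
ok, frame = capture.read()
if ok:
    print("viewers facing the screen:", count_viewers(frame))
capture.release()
```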

Computer vision also offers the consumer an improved real-time viewing experience, for example, showing child-friendly content and ads when a child or family is detected. Face detection can also offer an experience similar to logging into your profile on Netflix: the technology can automatically recognize an individual and allow him to pick up a show exactly where he left off, or offer more relevant content based on his previous viewing preferences.

How the technologies work

In order to display truly personalized content, data is gathered through a sensor in the user’s home, which provides cable operators and advertisers with analytics about which ads and programs viewers are watching in real time. The sensor also benefits the user by enabling personalized content and gesture-based interaction with the TV. Touch-free gesture control allows users to change the channel, control volume and more using simple hand gestures, for a natural and seamless experience that can replace the remote control.

Additionally, because it’s an embedded technology, viewership data remains secure and private: it is handled by custom-designed, cloud-free processing that neither stores images nor sends them to the cloud. All data is processed locally using proprietary embedded computer vision algorithms, allowing for full privacy and minimizing latency.

Bringing computer vision to TV

With these technologies and the right back-end infrastructure, advertisers can work together with cable operators to slice and dice the information and automatically deliver truly personalized experiences with real-time reporting. While the computer vision systems exist and are ready for market, there is still work to be done by the operators and advertisers on the back end for these technologies to be implemented. Cable operators’ systems and the advertising delivery ecosystem need to be redesigned to be able to deliver this type of real-time personalized, targeted content and ads. While it will require major effort to change the traditional systems, the long-term financial benefit will be well worth it in the end, as it brings TV advertising in line with web and in-app advertising.

Transform TV into a world of “cookies”

Websites already gather data from users to allow advertisers to target their chosen profiles using cookies, and this has proven successful. If a user visits a clothing website, she’s likely to see ads for it the next time she logs onto her personal social media accounts or visits another website. This happens because artificial intelligence-based programs pull from data pools to automatically make ad buying decisions for brands based on demographics and cost versus benefit.

Targeted TV advertising would work similarly, and in some ways surpass internet advertising in the depth of insight into the viewer, with specific ads appearing based on the viewer’s viewing preferences, age and gender. This means you and other people may be watching the same show but receive different ads based on your viewing history, demographic data and more. The more personalized the ad delivered, the higher the conversion rate and the return on investment.

While addressable TV advertising is still in its infancy, it could be the technology that fuels the TV industry. To get there, cable and satellite providers need to embed computer vision technologies and deliver the resulting capabilities to their advertising partners. With targeted advertising, the days of advertisers relying solely on program ratings could be history.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


April 9, 2018  11:24 AM

Five ways adaptive diagnostics improves repair of mission-critical equipment

Dave McCarthy
IIoT, Industrial IoT, Internet of Things, iot, IoT analytics, IoT data, Predictive maintenance, troubleshoot, Troubleshooting

Repairing complex, mission-critical equipment quickly and accurately is a top priority when uptime is essential to success. Every hour of downtime increases service costs and reduces revenue opportunities. Fortunately, industrial businesses with IoT-connected devices now have a vastly superior alternative to archaic manual repair processes. Called adaptive diagnostics, this advanced approach harnesses equipment data to enhance troubleshooting processes. As a result, companies are able to perform a real-time, detailed assessment of a problem and its causes, accompanied by structured servicing workflows that quickly lead technicians to the right resolution.

Here’s how it works: IoT software sifts through petabytes of real-time and historical data to determine a problem’s root causes — examining fault codes, interrelated operating conditions and repair history. It then ranks probable causes and produces an optimized repair plan that can be integrated into other enterprise workflow applications, such as parts, maintenance and service warranty systems.
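
As a highly simplified sketch of that ranking step (an illustration of the idea, not the actual adaptive diagnostics algorithms), the snippet below scores probable root causes for a fault code by how often each cause resolved that code in past repairs, with a boost when the recorded operating conditions match the current ones:

```python
from collections import Counter

def rank_root_causes(fault_code, history, current_conditions):
    """Rank probable root causes for a fault code by historical resolution frequency,
    boosted when a repair's recorded conditions match the current operating conditions."""
    scores = Counter()
    for record in history:
        if record["fault_code"] != fault_code:
            continue
        weight = 1.0
        if record.get("conditions") and record["conditions"].issubset(current_conditions):
            weight += 0.5   # same operating context makes this cause more likely
        scores[record["resolved_by"]] += weight
    total = sum(scores.values())
    return [(cause, round(score / total, 2)) for cause, score in scores.most_common()] if total else []

# Hypothetical repair history for an engine overheating fault code.
history = [
    {"fault_code": "P0217", "resolved_by": "coolant pump", "conditions": {"high_ambient_temp"}},
    {"fault_code": "P0217", "resolved_by": "coolant pump", "conditions": {"high_ambient_temp", "long_idle"}},
    {"fault_code": "P0217", "resolved_by": "thermostat", "conditions": {"cold_start"}},
]
print(rank_root_causes("P0217", history, current_conditions={"high_ambient_temp", "heavy_load"}))
# [('coolant pump', 0.71), ('thermostat', 0.29)]: a confidence-style ranking for the technician
```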

With adaptive diagnostics, technicians arrive at an asset armed with detailed and accurate information about the problem, the right parts to address it, and the interactive guidance they need to rapidly fix it right the first time. This is in stark contrast to manual approaches where a technician arrives at an asset to make a visual inspection, thumbs through a manual for repair ideas or may call over a more senior technician for their advice.

Over time, as your asset datasets grow and technicians feed repair input back to the software, businesses are able to build a knowledge base of proven troubleshooting paths, enabling technicians of any skill level to fix the most complex failure scenarios.

Based on experiences with industrial businesses, there are five critical ways adaptive diagnostics is helping improve repair of mission-critical equipment:

  1. Helps technicians troubleshoot issues faster: Evaluates real-time conditions, contextual data and past failure events for probable root causes, with corresponding “confidence levels.”
  2. Cuts to the source of a problem sooner: Suggests the most effective fix for a problem based on historical fault resolutions, probable causes and current conditions.
  3. Speeds time to resolution, continually improves diagnostics: Uses two-way interaction to give technicians relevant repair steps, then receives and incorporates results to guide the next steps.
  4. Triggers enterprise workflows with ERP systems: e.g., automatically orders parts through a parts management system, reports technician time to a workforce management database.
  5. Reduces overhead for technician training: Enables easy creation and management of an IoT-connected electronic service manual; as the ESM grows, probable-cause prediction and resolution recommendations improve.

Adaptive diagnostics in the wild

One company I’ve worked with, a heavy-duty trucking business, wanted to improve the uptime and reliability of its vehicles. This would help eliminate unplanned downtime, which is expensive, disruptive and can potentially tarnish service reputations. Adaptive diagnostics helped the company eliminate these expenses, which cut directly into revenue-mile profits.

For decades, the trucking company’s vehicle engines have displayed warning lights for a variety of conditions, but they failed to provide drivers with enough information to determine whether the issue was an imminent failure that would disable the truck or a minor condition. The trucks had to be taken to service centers, where technicians could connect equipment directly to the truck to help diagnose a single condition, but that equipment didn’t take into consideration related events or conditions that could reduce mean time to repair and get the truck back on the road more quickly.

This manual approach ate into uptime as technicians spent hours troubleshooting and fixing problems. Finally, lengthy and imprecise warranty processing resulted in high warranty costs for the business and lengthy reimbursement periods for its third-party service centers.

With adaptive diagnostics, the trucking company was able to tie multiple relevant data sources together with business logic to create a user-friendly, holistic IoT system that scales to hundreds of thousands of trucks. Fleet operators and drivers can now schedule truck service at times and locations that minimize disruption thanks to the insight provided by predictive failure analytics. As a result of this improved repair process, revenue miles have increased, providing faster ROI on truck investment.

Adaptive diagnostics is ideal for any business that wants to minimize downtime of equipment that is under repair. By improving repair turnaround and accuracy, faulty equipment gets back online faster, at lower cost, for better operational efficiencies.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


April 6, 2018  12:39 PM

Challenges and changes: A look at today’s healthcare IoT

Josh Garrett
Access management, Connected Health, Data, Encryption, health, Healthcare, Internet of Things, Inventory, iot, iot security, IT, MDM, Network access, Network security, nurse, patient, Security

In recent years, the internet of things has drastically improved patient care in the healthcare industry. Whether monitoring temperature, automatically alerting physicians or doing something else entirely, these devices allow doctors and treatment facilities to track real-time data feedback more closely than ever.

However, these innovations can also pose a significant risk to enterprise security efforts. Consider that the average cost of a healthcare organization data breach exceeded $3.6 million last year — that’s $380 per individual data record! No other industry spends more to recover from data losses.

So, why has healthcare struggled to make IoT safe? More than anything else, signs today point to two primary industry challenges: device manufacturer priorities and network access security.

Utility over security

In today’s IoT marketplace, devices are being manufactured at a blistering pace to keep up with this technology’s skyrocketing popularity. As healthcare organizations rush to adopt and implement these next-generation data devices, the priority for global device makers has shifted toward convenience and ease of use to eliminate enterprise deployment downtime. Unfortunately, this means security usually falls far lower on the priority list than it should.

In fact, many healthcare professionals believe IoT technologies are responsible for the industry’s recent uptick in data exposure and breach incidents. Last year alone, medical device vulnerabilities increased 525%, a concerning fact to say the least.

Even in today’s turbulent digital climate, most IoT devices aren’t designed to have passwords or encryption capabilities. That means most healthcare organizations implement these devices before cybersecurity teams have a chance to ensure they won’t harm enterprise network and data security. Compared to 2016, the healthcare industry saw a 211% jump in disclosed cybersecurity incidents last year — many of which were caused by failures to address IoT’s prevalent software vulnerabilities.

Connected imaging systems, for example, have become major security threats to today’s healthcare technology ecosystem. In 2017, 65% of all healthcare-related ransomware infections targeted these less-than-secure systems, and 45% of all IoT device-related security alerts originated from them.

In addition to the absence of manufacturer security standards, IoT technologies are also rarely updated when new risks are identified. Combined with the fact that most of these devices can’t generate alerts or monitor their own integrity, that means fewer and fewer healthcare providers have visibility into how their IoT programs are truly performing. Even among organizations that use security software, such as mobile device management, 64% fail to enroll every enterprise IoT endpoint.

A need for network security

Device-level protections aren’t the only serious challenge in healthcare’s never-ending IoT security search, though. Defending internal networks from infected endpoints trying to breach data or cause system failure is something this industry’s IT professionals deal with on a continuous basis.

Unfortunately for these employees, most IoT devices don’t include controls to protect their connected network from threats should an emergency scenario occur. Instead, organizations are forced to identify alternatives if they wish to increase security and reduce potential IoT risks. While 95% of healthcare executives are confident their practice is completely safe from cybersecurity threats, only 36% have access management policies and only 34% have a formal cybersecurity audit currently in place.

IoT network security is difficult because these devices don’t connect to just one location the way traditional business technologies do, which makes segmenting and isolating IoT traffic a complex task. Plus, few devices can even be configured to deny network access by default.

And, since network security is such a time- and resource-heavy task, it’s not uncommon for healthcare organizations to deploy more devices than they can implement and operate safely. Considering only 31% of the industry plans to train its employees on IoT security practices or establish formal IoT device policies this year, it’s no wonder hackers see these companies as unusually easy targets compared to most other industries.

That said, IoT’s business benefits and disruptive healthcare potential tend to outweigh these rather sizable risks in the minds of most industry decision-makers. If your organization is struggling to manage IoT security, here are a few tips you can use to improve program protections:

Keep track of all tech
Without an accurate inventory of all network-connected technologies, it’s impossible for healthcare organizations to completely safeguard IoT. If your company isn’t already maintaining one, make sure it starts keeping track of things like vendor name, model and serial number, version, physical location, support contacts or any other relevant data points that can help you quickly identify a device and minimize the damage it does in a worst-case scenario.

Additionally, this allows an IT team to stop infected devices from being used as pivot points — devices that an attacker uses to distribute a security threat across an entire corporate mobility environment.
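
As a rough illustration, here is a minimal sketch of what such an inventory record might look like. The fields mirror the data points listed above, and every value (vendor, model, location, contact address and so on) is invented for the example.

```python
from dataclasses import dataclass

@dataclass
class IoTDeviceRecord:
    vendor: str
    model: str
    serial_number: str
    firmware_version: str
    physical_location: str
    support_contact: str
    network_segment: str = "iot-vlan"  # knowing the segment helps cut off a compromised device

# Inventory keyed by serial number so a device can be identified quickly in an incident.
inventory: dict[str, IoTDeviceRecord] = {}

def register(device: IoTDeviceRecord) -> None:
    inventory[device.serial_number] = device

register(IoTDeviceRecord(
    vendor="Acme Medical",
    model="InfusionMonitor 3000",
    serial_number="AM3K-000123",
    firmware_version="2.4.1",
    physical_location="Ward 4, Room 12",
    support_contact="biomed-support@example.org",
))
```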

Pre-authorized security teams
When an IoT intrusion happens, IT needs to act fast if there’s any hope of protecting sensitive data. By pre-authorizing trained, specialized security experts to act, a healthcare organization can have its network’s most vulnerable devices and sensors pulled offline as soon as possible. If an attack happens, this policy also improves the odds of quarantining any malicious device activity.

Restricted access
Since IoT devices lack so many protections other mobile technologies have, it can be beneficial to manage them in a unique way. If that’s the case where you work, the easiest solution involves creating a separate, IoT-specific network. While this maximizes IoT network and program protections, it’s also not feasible for healthcare organizations that rely on data flows and integrations between multiple systems.

The more advanced (and effective) approach is a combination of encryption and network access restrictions. Using this method, enterprises can protect data in transit or at rest while ensuring only authorized devices can access networks and communicate across them.
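
As a minimal sketch of that combination, the snippet below wraps device connections in TLS with mutual authentication: traffic is encrypted in transit, and only devices presenting certificates issued by the organization’s own CA are allowed to connect. The certificate file names are hypothetical placeholders.

```python
import ssl

def build_iot_server_context(cert_file: str = "server.pem",
                             key_file: str = "server.key",
                             device_ca: str = "device-ca.pem") -> ssl.SSLContext:
    """Build a TLS context that encrypts traffic and requires client certificates
    chained to the organization's own device CA."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)  # encrypts data in transit
    ctx.load_verify_locations(cafile=device_ca)                # trust only our device CA
    ctx.verify_mode = ssl.CERT_REQUIRED                        # reject unauthenticated devices
    return ctx
```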

Have a plan
You’d be surprised how far a little IoT prep work can go. Most healthcare companies are so worried about keeping up with the latest technology that they forget to simultaneously prepare backup systems and devices. That makes it much more difficult to perform large-scale updates, deployments or any other activity that could potentially take devices out of employees’ hands for an extended period.

Knowing how to secure a device and manage it in the event of complete failure is also important because no technology is ever 100% safe. If a device or a network unexpectedly goes offline, enterprise leaders need to know how to react if they want to minimize the organization’s recovery time.

Improve visibility first
Advanced IoT devices are tremendous enterprise tools, but if you can’t uncover or interpret the insights and feedback they provide, they’re ultimately wasted technology. Without visibility, there’s no way to identify trends, uncover risks or know which countermeasures are best for a specific situation.

A network monitoring platform built for IoT is something every organization needs to have. By creating and utilizing one, an organization’s IT administrators can keep track of every connected device and corresponding network — including what state they’re in and whether there are any immediate threats that require action.

IoT is just beginning to make its mark on the healthcare industry. Moving forward, will these organizations overcome the technology’s greatest challenges, or will hackers and cybercriminals continue to take advantage of the confusion and complexity instead?



April 5, 2018  9:35 AM

Can technology reduce the impact of natural disasters?

Haden Kirkpatrick Profile: Haden Kirkpatrick
Data Analytics, Disaster planning, Internet of Things, iot, IoT analytics, IoT sensors, Sensors, Smart sensors, Weather, Wildfire

An onslaught of tornadoes in the Midwest, a barrage of hurricanes in the gulf, a drought in Montana and record-breaking fires walloping California — there has been a melee of natural disasters of late.

Since the National Oceanic and Atmospheric Administration (NOAA) began recording severe weather events in 1980, it has noted that the “U.S. has experienced a rising number of events that cause significant amounts of damage.”

In fact, 2017 was the costliest year on record for natural disasters in the United States: 16 different disasters, each causing at least a billion dollars in damage. NOAA reported that the final bill topped $300 billion, which obviously doesn’t include the mental and emotional costs.

While studying weather patterns is still important, the solution encompasses more than that. Early warning systems are key to mitigating the impact of natural disasters. Here’s how the tech industry can help us prepare for the immediate challenges of natural disasters and recover more quickly in their wake.

Saving homes from wildfires

Parched neighborhoods make perfect kindling for a wildfire, but smart sprinklers may prove effective in fending them off. Back in 2015, for example, an Australian man activated his sprinklers with his smartphone before a wildfire nearly swallowed his ranch.

It’s amazing that he was able to save his home from the palm of his hand, but he only managed it because friends and family alerted him to the fire. So what if a smart irrigation system could respond automatically when it detects, say, rising temperatures?

That’s what one engineer asked in the wake of the North Bay Fires, noting that installing such a system wouldn’t be so expensive.

In combination with non-flammable building materials, exterior sprinkler systems could be triggered by rising temperatures. How? Relatively inexpensive smart home sensors already on the market detect heat and humidity, and they could send an alert over Wi-Fi when temperatures near the home surpass a set threshold.
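
Here is a minimal sketch of that trigger loop, under stated assumptions: read_exterior_temp() and activate_sprinklers() are hypothetical stand-ins for whatever sensor and irrigation APIs a given smart home system exposes, and the threshold is an arbitrary example value.

```python
import random
import time

TRIGGER_TEMP_C = 60.0  # example threshold; a real system would be tuned to local conditions

def read_exterior_temp() -> float:
    # Placeholder: substitute a real sensor reading (e.g., received over Wi-Fi).
    return random.uniform(20.0, 80.0)

def activate_sprinklers() -> None:
    # Placeholder for the call into the irrigation controller.
    print("Exterior sprinklers ON")

def monitor(poll_seconds: int = 30) -> None:
    """Poll the exterior temperature and trigger the sprinklers once it crosses the threshold."""
    while True:
        if read_exterior_temp() >= TRIGGER_TEMP_C:
            activate_sprinklers()
            break
        time.sleep(poll_seconds)
```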

The fire detection system from Semtech, for example, uses real-time analytics, reporting and geolocation to detect the presence of smoke, gas or flames. Plus, it automatically notifies building owners in an emergency. It’s not so far-fetched that it might also work to trigger automatic response systems against an advancing blaze.

For greater capacity, such a system might also employ a pump to draw water from nearby koi ponds, swimming pools or standby cisterns and tanks. In the event of a power failure, solar-powered generators could keep the whole system working; existing generators can power a home’s water supply for several hours. What’s more, prices for generators have dropped dramatically, making them more accessible.

It’s also important to design homes that are fire-resistant in the first place. The Flex House is paving the way, using fire-resistant fiber siding and an ecosystem of smart home features that support cutting-edge water conservation to curb the threat of fires. Given its small square footage, the cost is expected to be only $125,000 to $150,000 per unit, and that may drop further once more prototypes are on the market.

Advanced emergency response systems

The cornerstone of emergency management is identifying a city’s vulnerable areas and setting up “triage zones” that prioritize aid and medical attention, especially when resources are scarce.

Thanks to advancing technology, decision-makers can better isolate areas of greatest damage to direct their efforts efficiently. Through machine learning, companies like One Concern are able to predict (with impressive accuracy) the timing and severity of a natural disaster. That way, emergency management personnel can take action before, during and after a catastrophe.

One Concern factors in “hyperlocal” data sets of both manmade and natural environments. That means they know a city’s data down to “the smallest rock,” as they put it. From there, they construct a high-resolution model providing minute-by-minute insights on an impending hazard, including a detailed analysis of its potential social and economic effects. The company’s platform offers a detailed plan of action tailored around that specific event and accrues data each time, helping it get even smarter.

Additionally, the integration of patient data and medical records across IoT platforms could help make triage zones more efficient, ensuring proper care is carried out in a timely manner if a natural disaster strikes. Semtech (mentioned above) is behind much of that endeavor. Similar to One Concern, Semtech provides in-depth analysis of catastrophic events to better predict when they’ll strike.

Quelling the quake

When a fault line slips or ruptures, seismic waves are emitted through the ground, creating an earthquake. Primary waves make the ground shake back and forth. Secondary waves cause it to churn up and down. What’s more, seismic waves need something to pass through — like buildings and city infrastructure.

What if technology could not only predict the magnitude of an earthquake, but also stop it in its tracks? Well, it’s not as big a what-if as you might think — and it’s actually been in the works for a few years already.

You’ve likely heard of shock absorbers on your car’s suspension, which slow the unwanted jolting of an uneven road. This technology has helped make buildings resistant to earthquakes, and now it might help reduce the effects of the earthquake itself.

In 2014, a team of researchers in France set out to test this idea. In a very controlled setting, they drilled holes in the ground — of specific size and spacing — to reflect the vibratory motions of ground movement. What they found was that the intensity of seismic waves dropped significantly in areas where holes were drilled.

To be sure, the experiment was limited to predictable conditions, and what makes earthquakes so unsettling is that they’re inherently unpredictable. Even so, their experiment — and several others like it — demonstrated that seismic dampening is possible with the right technology.

For now, the best emergency management plan means monitoring and preparedness. As environmental models and IoT sensors advance, and with them predictive modeling, we’ll not only know earlier when a disaster will strike, but also predict its impact more precisely. The result? More lives saved and life going back to normal as quickly as possible.



April 5, 2018  9:25 AM

PKI: Helping IoT device makers trust their supply chain

Nisarg Desai Profile: Nisarg Desai
identity, Identity management, Internet of Things, iot, IoT devices, iot security, PKI, Public-key infrastructure, security in IOT, Supply chain

We already know that public key infrastructure is going to be the future of securing IoT devices; this is because PKI can be implemented in a relatively lightweight fashion on different classes of devices. It will help to identify and secure devices by limiting the number of opportunities for bad actors to hack them.

PKI can also be used to secure IoT devices during the manufacturing process, across the supply chain, and throughout distribution and delivery. Every vertical market will be disrupted by IoT, and many of them will find PKI especially beneficial; healthcare and the smart electric grid are two examples.

How implementing PKI during the manufacturing process protects IoT device makers

With the IoT market being relatively new, it’s not uncommon for some device makers to be unfamiliar with the manufacturing process. Seldom does a device designer also manufacture their own product. In most cases, they usually end up working with an original design manufacturer (ODM), such as Flextronics, or an electronic manufacturing services (EMS) provider, like Foxconn or Celestica.

Once an IoT device is built, it is usually shipped by the manufacturer to the device maker or directly to customers. But some shady things can occur along the way: in the process of building all these devices, the ODM or EMS now has all the plans, blueprints and design specifications. That is a huge risk for a device maker. It would be relatively easy for a nefarious company or employee to take these plans and sell them on the black market for a much lower price than the device maker charges. In addition, certain geographies have looser or stricter controls on ownership of these design specifications, potentially creating more problems for the device maker. Making matters worse, it could be years before a device maker realizes something illegal has occurred. While there are checks and safeguards in place to protect IP, this is a real problem that smaller device makers need to contend with.

Fortunately, there are a number of ways an IoT device maker can mitigate such problems. For instance, you could incorporate a certificate into every device, which lets you control how many units get manufactured: your certificate authority (CA) provides only as many certificates as the number of devices you want built. Another step is to verify the source of a certificate request, i.e., a specific location, so the assigned ODM or EMS must present certain credentials to prove who is requesting the certificate. You can even gate access to these certificates based on user authentication and IP address.
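
A minimal sketch of that gating logic appears below. It is an illustration, not a CA product’s API: the operator IDs, factory network range and quota are invented, and in practice the approved CSR would be forwarded to whatever CA you actually use for signing.

```python
from ipaddress import ip_address, ip_network

ORDERED_UNITS = 10_000                                # devices contracted for this production run
ALLOWED_FACTORY_NET = ip_network("203.0.113.0/24")    # example range (RFC 5737 documentation block)
AUTHORIZED_OPERATORS = {"odm-line-7", "odm-line-8"}   # hypothetical manufacturing operator IDs

issued = 0

def approve_certificate_request(operator_id: str, source_ip: str, csr_pem: bytes) -> bytes:
    """Approve a device certificate request only if it comes from an authorized
    operator, originates inside the assigned factory network and the quota for
    this production order hasn't been exhausted."""
    global issued
    if operator_id not in AUTHORIZED_OPERATORS:
        raise PermissionError("unknown manufacturing operator")
    if ip_address(source_ip) not in ALLOWED_FACTORY_NET:
        raise PermissionError("request from outside the assigned factory network")
    if issued >= ORDERED_UNITS:
        raise RuntimeError("certificate quota for this order is exhausted")
    issued += 1
    # At this point the CSR would be forwarded to the CA for signing; returning it
    # unchanged keeps the sketch self-contained.
    return csr_pem
```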

In addition, using PKI enables you to verify, either during or after the manufacturing process, who is actually using the certificate. Ideally, that “who” should only be the device itself. This can be determined by having the device contact a trusted third-party server, known as a registration authority, which can be managed by a CA. The CA verifies that the certificate issued to and in use by the device chains to the correct PKI hierarchy, the one belonging to that specific device maker. The CA can also take the extra step of revoking the previous certificate on the IoT device and issuing a new one, a process called re-enrollment. Alternatively, two types of certificates can be used: a “birth” certificate that establishes the device’s identity, and an “operational” certificate that authorizes it to perform certain actions.
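
As a rough sketch of the chain check described above, the snippet below verifies that a device certificate was signed by the device maker’s issuing CA. It is a simplified, single-level stand-in for full path validation, it assumes an RSA-signed certificate, and the PEM inputs would come from wherever your registration authority stores them.

```python
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

def chains_to_device_maker(device_cert_pem: bytes, issuing_ca_pem: bytes) -> bool:
    """Return True if the device certificate was issued (signed) by the device maker's CA."""
    device_cert = x509.load_pem_x509_certificate(device_cert_pem)
    ca_cert = x509.load_pem_x509_certificate(issuing_ca_pem)

    # The issuer named on the device certificate should match the CA's subject.
    if device_cert.issuer != ca_cert.subject:
        return False

    # Verify the CA's signature over the to-be-signed portion of the device certificate.
    try:
        ca_cert.public_key().verify(
            device_cert.signature,
            device_cert.tbs_certificate_bytes,
            padding.PKCS1v15(),                      # assumes RSA with PKCS#1 v1.5 signatures
            device_cert.signature_hash_algorithm,
        )
        return True
    except Exception:
        return False
```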

By taking these steps, PKI enables IoT device makers to prevent overproduction and counterfeiting, ensuring only the right device gets the right certificate. Even if it is discovered later that the manufacturer was untrustworthy and built more devices than the original order called for, those extra devices would be unusable: their birth certificates would be revoked, so they could no longer obtain operational certificates.

Concerns like the ones mentioned above are a regular occurrence. In fact, in March, iPhone supplier Wistron was accused of using unauthorized components in its production of the iPhone 8 in China. Scenarios like this are exactly why it is a smart idea to implement PKI for IoT devices during the manufacturing process.

PKI for the supply chain

PKI turns out to be one of the best technologies for assuring authenticity of devices within the supply chain, including the verification of all the components within an IoT device. For example, when an IoT device maker designs a device, she uses components from different manufacturers. These components are sourced from all over the world, carried by distributors and assembled together with other components by an EMS or ODM.

Once these components arrive at a distributor, they could sit in a warehouse for an extended period. Eventually the device is manufactured and tested, after which sellers can provide it to the customer. Oftentimes these devices are installed by third-party installers and deployed in remote locations; the end customer or buyer may never physically see them.

The average IoT device takes a very complex route, changing ownership multiple times. Naturally, the seller wants to ensure the device is authentic and that its authenticity was maintained through every stage of the supply chain. That is a hard real-world problem to solve. Increasingly, we are hearing stories about breaches within supply chains, such as with Wistron. Manufacturers in different countries rely on suppliers that don’t necessarily follow best practices for security and privacy, so it is difficult to trust a component’s source.

As a result, some companies could end up with an IoT device containing both authentic and fake (or modified) components. If the device functions as intended, it may be difficult to know the truth, yet a nefarious actor could have embedded extra code in the device software that tracks and intercepts data, injected during the manufacturing of the device or of one of its components. This is yet another scenario PKI can help eliminate, or at least significantly reduce.

Identity is the foundation of security

In today’s era of hyper-consumerism, the thirst for new and innovative technology is insatiable. There has been a sharp rise in hardware-focused technology startups providing real-world solutions by building networking and smarts into devices. This is especially common on crowdfunding sites like Kickstarter and Indiegogo. However, the focus for these device designers and makers is on functionality, and security is seldom part of the plan. This is dangerous and can lead to problems down the line, as seen with recent IoT botnet attacks.

Starting on the right foot is important, and if we can get security integrated into a device right at its starting point, the manufacturing plant and the associated supply chain, then we have a strong foundation to build upon. Starting with a strong identity, we can bootstrap into other security functions like authentication and authorization. This shouldn’t be hard — and PKI makes this especially easy.


