IoT Agenda

August 1, 2017  1:57 PM

Internet of energy: Extracting value from data silos in utilities

e russell Profile: erussell
Artificial intelligence, Data Management, energy, Energy Consumption, Energy efficiency, Internet of Things, iot, IoT analytics, IoT data, Machine learning, Service providers, utilities

In the industrial world, and specifically the energy sector, the number of connected devices, sensors and machines is continuously growing, resulting in the internet of energy, or IoE. IoE can be broadly defined as the upgrading and automating of electricity infrastructures, making energy production cleaner and more efficient, and putting more power in the hands of the consumer.

Given the vast amount of data the energy sector generates and the increasing number of sensors added, it is the perfect environment for machine learning applications. Artificial intelligence (AI) excels at finding subtle patterns in data sets of all shapes and sizes, particularly under complex or changing conditions.

Although data within IoE is growing at exponential rates, much of that data is traditionally siloed across business units (generation, transmission and distribution, energy trading and risk management, and cybersecurity). Extracting the wealth of data from each silo and putting it to work is essential to improving the IoE experience and realizing the benefits of machine learning. Artificial intelligence capabilities can be incorporated to gain insight from all the data uniformly, allowing business units to transform into a collaborative system.

Generation: Prescriptive maintenance of turbines

Generation, the first major silo in the energy sector, is largely dependent on the work of turbines. Turbines consist of thousands of moving parts, and even the smallest disturbance can create major problems, causing unscheduled downtime, loss of power, safety concerns and other issues.

Applying AI and machine learning techniques to prevent unplanned downtime and catastrophic breakdowns is revolutionizing how utility companies operate. The standard approach, in which subject matter experts (SMEs) develop static, first-principle models, places a tremendous burden on organizations to maintain and update those models. Furthermore, their static nature means operators can only view the steady-state operation of turbines, whereas the most meaningful data comes from transient events like startups and coastdowns.

Transient conditions are where critical issues first materialize, but they are challenging to monitor because they occur over indeterminate lengths of time. Where a static model-based system is unable to solve this issue, an AI-based technology can. An artificial intelligence approach can start analyzing data and providing insights on day one, and continue to improve upon its own accuracy and effectiveness by learning from SME input.
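As a hedged illustration of the idea, the sketch below flags a sudden deviation during a simulated startup by comparing each reading to a trailing baseline. The data, window size and z-score threshold are all invented for illustration; a real monitoring product would learn far richer models and refine them with SME input.

```python
# Illustrative sketch: flag anomalous vibration readings during a turbine
# startup using a rolling mean/stddev baseline. Data and thresholds are
# made up, not taken from any real monitoring system.
from statistics import mean, stdev

def flag_transient_anomalies(readings, window=5, threshold=3.0):
    """Return indices of readings deviating more than `threshold`
    standard deviations from the trailing window's baseline."""
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Simulated startup: vibration ramps smoothly, with one sudden spike.
startup = [1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 9.0, 1.7, 1.8]
print(flag_transient_anomalies(startup))  # flags the spike at index 6
```

The point of the sketch is that the baseline is computed on the fly, so a ramping startup is not itself treated as abnormal; only a departure from the recent trend is flagged.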

Transmission and distribution: More than just smart meters

For the second silo, transmission and distribution, AI is able to tackle much larger problems. While smart meters and end-user control of home appliances have generated excitement, they do not represent the most challenging big data problems being solved by machine learning.

Three specific areas in transmission and distribution where AI is playing a key role are:

  1. Energy disaggregation
  2. Power voltage instability monitoring
  3. Grid maintenance

In these areas, the collection, ingestion and action upon the data have created efficiencies in expenses and operations for companies using machine learning and AI technologies, as well as for their customers.

Energy disaggregation requires machine learning because thousands of energy “signatures” must be analyzed to find patterns of usage. Analyzing those signatures can flag suspicious consumption values caused, for example, by physically or digitally manipulated devices, sophisticated thefts or meter malfunctions.
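To make the idea concrete, here is a deliberately naive sketch that flags meters whose totals fall far below the neighborhood median, a crude stand-in for real signature analysis. The meter names, readings and 0.5 cutoff are all assumptions for illustration.

```python
# Toy sketch only: flag meters whose total consumption deviates sharply
# from the neighborhood median, a crude proxy for the signature analysis
# described above. Meter IDs and readings are invented.
def flag_suspicious_meters(readings, factor=0.5):
    """Flag meters whose usage is below `factor` times the median total,
    a naive indicator of tampering or malfunction."""
    totals = {meter: sum(vals) for meter, vals in readings.items()}
    ordered = sorted(totals.values())
    median = ordered[len(ordered) // 2]
    return sorted(m for m, t in totals.items() if t < factor * median)

meters = {
    "meter_a": [10, 11, 12, 10],   # normal usage
    "meter_b": [9, 10, 11, 10],    # normal usage
    "meter_c": [1, 0, 1, 1],       # suspiciously low: possible tampering
}
print(flag_suspicious_meters(meters))  # prints ['meter_c']
```

A production system would compare full time-of-day profiles rather than totals, but the shape of the problem, many signatures compared against a learned norm, is the same.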

The second area, power voltage instability monitoring, faces an explosion of dynamic data surrounding minute instabilities in which human analysis falls short. Researchers can utilize machine learning techniques to identify voltage instabilities, thus preventing brownouts and blackouts on the grid.

The last area of transmission and distribution where AI is playing a key role is grid maintenance. While many companies are still struggling to use the data they are collecting, a machine learning algorithm can use the data or features to classify and ultimately predict failures well in advance. Because machine learning algorithms can automatically break features down into additional data and analyze them at machine speed, previously unseen correlations in the data are leading to new discoveries.

Cybersecurity: The modern battleground

The third major data silo in utilities is cybersecurity. The steady stream of attacks on critical infrastructure makes new cybersecurity methods vital. An AI offering can identify, categorize and remediate a variety of threats, including loss of personally identifiable information, zero-day malware and advanced persistent threat attacks.

To a mathematical algorithm, there is little difference between the aforementioned data and cybersecurity data. All input, regardless of source (a vibration sensor or a firewall log, for example), is simply a piece of information with unique patterns to an algorithm.

To combat the cyber front of industrial threats, an artificial intelligence product can automate the threat research process, prioritize threats based on confidence and display corroborating evidence to the analyst, significantly reducing both time to threat remediation and overall risk.

Energy trading and risk management

Energy trading and risk management is the final data silo in the energy sector. In the highly competitive and regulated utility business, there is a clear link between the company’s bottom line and forecast accuracy and reliability. If new techniques can provide more accurate forecasting, utilities can begin to offer better pricing to their customers.

AI techniques are providing insight into this process. With thousands of features from hundreds of sources, there are infinite ways to combine and correlate information. Looking for subtle, transient movements of price data on an hourly or even a second-by-second basis with millions of combinations is where AI excels.

Because utility companies need to buy oil, gas, coal, nuclear fuel and electricity, they are constantly at the mercy of volatile commodity prices. For this reason, utilities are using AI techniques to develop methodologies for market and credit risk aggregation.

With improvements in the sharing of data from data silos, the utilities industry can reap the wealth of new knowledge. From prescriptive maintenance to energy trading to cybersecurity, analytics will play an important role in how energy is produced and provided to consumers long into the future. As adoption increases, AI technologies will continue to learn and adapt, providing more value in the internet of energy.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

August 1, 2017  11:15 AM

Mirai and Amnesia: Early lessons in attacks against IoT, part one

Christopher Budd Profile: Christopher Budd
Botnet, cybersecurity, DDOS, Internet of Things, iot, security in IOT, WannaCry

Since the early days of the internet of things, those of us who work in the world of vulnerabilities and threats have been warning about the risks associated with IoT.

When the Mirai botnet attacks came in late 2016, many felt that IoT attacks were finally here and started looking to the past for parallels. We didn’t have to look far: Over sixteen years after the distributed denial-of-service attacks that took down Yahoo, Dell, E-Trade, eBay and CNN in February 2000, here was another massive DDoS attack.

These early attacks came at the beginning of what turned out to be years of large-scale attacks against PCs. So the logical question is: Does Mirai represent the same thing? Are IoT attacks here and are we looking at the beginning of another era of large-scale attacks?

At first glance, this would look to be the case. After all, one thing that enabled the large-scale PC attacks was the lack of truly effective patching against vulnerabilities. It’s notable that major attacks like Code Red, Nimda, Blaster, Sasser, Zotob and Conficker all exploited vulnerabilities for which patches were already available when the attacks hit. When we look at IoT, where in many cases vulnerable devices will never be patchable, let alone patched, it’s reasonable to think this problem will be even worse. Add to this the sheer scale of IoT compared to PCs in the early 2000s, and the conclusion that IoT attacks will look like those of the PC era seems not just reasonable but foregone.

And the specifics of the Mirai attacks seem to support this conclusion. One thing that made everyone take notice of Mirai was, again, the sheer scale. The Mirai attack against Brian Krebs’ site was clocked at up to 620 gigabits per second of network traffic, and a follow-on attack against French web host OVH peaked at 1.1 terabits per second. As the world was reeling from these attacks, the Mirai source code went public and everyone braced for the worst.

But then something funny, and important, happened.


In the months since Mirai, there have been no additional follow-on attacks. The security press has moved off IoT altogether, focusing in the spring and summer of 2017 on WannaCry and then Petya. You’d be excused if you happened to miss Mirai last fall and thought we were still waiting for IoT attacks to begin.

Much like the dog that didn’t bark in Conan Doyle’s “Silver Blaze” Sherlock Holmes story, the post-Mirai non-events tell us a lot about what the world of IoT attacks on the internet may look like. And it’s looking less dire than it did during the PC-era internet.

Check back to this column for part two in the series, which will take a closer look at the Amnesia botnet as another recent example of a large-scale IoT attack that offers lessons for securing the IoT.


July 31, 2017  3:58 PM

Hybrid OT: A new role emerging within industrial IoT companies

Jason Andersen Profile: Jason Andersen
IIoT, Industrial IoT, information technology, Internet of Things, iot, IT convergence, IT professional, IT workforce, Operational technology

In my conversations with industrial companies looking to start or accelerate their journey toward the industrial internet of things, I’ve begun to see a phenomenon among the ranks of industrial technologists that’s not all that different from Darwin’s theory of evolution. Adaptation is the key to this theory, something industrial technologists need to do well as their environment is changing around them.

In the past, there has been a clear divide between IT teams, which control the data center, and operational technology (OT) teams, which are responsible for the care and feeding of operational automation systems. These two distinct teams had different skill sets, backgrounds and priorities. Today, to bridge the gap that has traditionally separated the two, a new breed of what I like to call “hybrid OT” professionals is emerging. IT and OT responsibilities and skill sets are converging, making the individual who can do both a valuable technologist.

What is causing this shift? There are two big things I see driving this change:

New responsibilities breed new roles — As more computing power and data collection have made their way to the edge of industrial networks, a new combination of skills is required to manage these assets (historically the domain of OT), giving “birth” to the IT/OT hybrid. We saw a similar shift occur with the rise of cloud computing: developers struggled to get IT to respond to their needs, so they turned instead to public cloud services for answers. As developers took this responsibility of securing the IT infrastructure needed to run their applications in clouds upon themselves, the role of DevOps was born.

A generation ready for 21st century challenges — Many OT professionals who have been in the industry a long time are now approaching retirement, and a new generation is taking their place. This generation of younger digital natives is not intimidated by technology — they were, in fact, raised on it. They see the potential of IIoT and will look to realize it as they push intelligence out to the edge and leverage data and analytics in new ways.

The most forward-looking industrial enterprises are the ones that see the value in hiring professionals who are just as comfortable working with servers as they are with machine tools, packaging lines, pumps and valves. Enterprises actively recruiting these hybrid OT professionals are drawn to skill sets that can manage both IT and OT technologies. Whatever their background — IT, data science, industrial engineering — these individuals share a passion for the intersection of technology and industrial operations.

New expectations for the technology they use will also come along with the role. System availability has become an absolute necessity for business continuity and is something hybrid OT professionals will expect out of their systems and technology vendors. Similarly, these hybrid OT professionals see the value in the data produced at the edge and will lean on technology vendors to help make data protection a top priority for the enterprise.

When will we see this new breed emerge? The answer is that the evolution will happen soon — much more quickly than it occurs in the natural world. Over the next two to three years, I believe the industry will see a major influx of this hybrid breed.


July 31, 2017  12:09 PM

Drone delivery is buzzing — but where does it go from here?

Roei Ganzarski Profile: Roei Ganzarski
Amazon, delivery, drone, Drones, FedEx, Internet of Things, iot, IoT hardware, UPS

With the drone industry netting $8 billion last year and consumer spending on on-demand services topping $57 billion, the implementation and deployment of drone delivery fleets is becoming more of a reality each day. From Amazon’s futuristic drone delivery tower proposal to parachuting packages on the horizon, we are nearing a complete delivery transformation. While these innovations will need to be fine-tuned as the commercial drone industry flourishes, several factors, such as operational inefficiencies, continue to work against the Amazons and Ubers of the world, hindering supply chain innovation.

The mainstream drone delivery timeline

We have already seen the start of drone deliveries, but the practice likely won’t become truly widespread and accepted until 2025. Over the next seven years, we’ll see an overall increase in this new vehicle for deliveries, but a large portion of the test runs, followed by operational deployments, will be heavily focused in rural areas where the safety risks are smaller and the logistics much simpler to manage. Contrary to popular belief, dense metro areas present numerous challenges and risks — think traffic, privacy issues, power lines, high-rise buildings, sudden wind gusts and crowded streets below.

Instead of thinking about smart urban cities, we should shift our focus, at least for drones initially, to smart rural areas. It’s much more likely we’ll see drones used to deliver common goods like food and medication to remote homes and offices versus a busy suburb. Instead of a consumer driving over an hour one-way into town to pick up a prescription, he could choose to have it delivered by drone. Autonomous vehicles and drones could, and should, even partner for optimal efficiency while delivering packages — companies like UPS and Mercedes are already testing a self-driving van and drone combination for the ultimate rural delivery challenge solution. Meanwhile, cities will shift their attention to small and less risky sidewalk bots to help enable the “life on-demand” luxury in metro areas.

Implementation challenges

Before these services can become a reality, we need to start addressing the inherent challenges to implementation. While there are several concerns that need to be addressed, the ones we should be thinking about sooner rather than later relate to the safe deployment of these technologies. The obvious safety element everyone is looking at, which I am sure will indeed be addressed, is that of the single drone: How will it fly safely, avoid obstacles, carry its packages and so forth? However, it is the deployment of a fleet of drones I am more concerned about. Before they can become commonplace, live operational testing is mandatory to ensure we can deploy several drone fleets without overcrowding the airways — a scenario that could completely erode the potential benefits these technologies present, and moreover create a potentially unforeseen safety risk.

To adopt this technology and do it right, we need to start thinking about a higher-level network of connected drones — no matter the parent company. This means taking a look at how they should be deployed, managed and used so that not only individual consumers but society as a whole can experience the benefits of these technologies. Whenever a new method is introduced into everyday use, a great deal of strategy needs to go into deciding which services are implemented where.

Who will be the first adopters?

One benefit of drones is the low overhead they require compared to traditional delivery methods like trucks and airplanes. A large-scale delivery model requires an enormous amount of capital investment — drivers, trucks, space to store equipment, regional depots and much more — to be successful. This presents a substantial barrier to small competitors trying to enter the market, even if their model is more efficient than the delivery giants’. Drones, on the other hand, are less expensive to buy, store, maintain and operate, and because of this, we’ll likely see many small companies launch drone delivery services. Because of the technology’s flexibility, there is an opportunity for both corporate giants and mom-and-pop shops to operate successfully in this industry. This in itself poses an added risk: very large numbers of drones all flying around the same airspace in an uncoordinated fashion. Inefficiency and risk galore.

Companies like FedEx and UPS will also look to break into the drone business to stay competitive. As drone technology improves and new businesses emerge, drones will likely begin to replace some of the last-mile routes traditionally fulfilled by trucks and vans, increasing efficiency across the board. Just think: if one delivery in a driver’s route is three miles out of the way of every other stop, that’s six miles roundtrip for a single package. The time spent driving to the delivery destination and back is not only an inefficient use of the employee’s time, but a costly waste of company resources (assuming, of course, the package is small enough to be delivered by a drone instead). In the future, delivery vans will likely be equipped with drones to handle one-off package deliveries, optimizing the driver’s daily route while reducing operating costs. Combined with the great distribution networks and infrastructure already in place, this will give FedEx and UPS a huge leg up on potential competitors looking to enter the drone arena.
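The roundtrip arithmetic above can be sketched in a few lines. All of the cost figures here (cost per mile, average speed, driver wage) are illustrative assumptions, not industry data.

```python
# Back-of-the-envelope sketch of the out-of-the-way-stop arithmetic.
# Every number below is an illustrative assumption.
def detour_cost(one_way_miles, cost_per_mile, avg_speed_mph, wage_per_hour):
    """Cost of a single out-of-the-way stop: vehicle miles plus driver time."""
    roundtrip = 2 * one_way_miles          # out and back
    vehicle_cost = roundtrip * cost_per_mile
    time_cost = roundtrip / avg_speed_mph * wage_per_hour
    return round(vehicle_cost + time_cost, 2)

# A 3-mile detour (6 miles roundtrip) at an assumed $0.60/mile,
# 30 mph average speed and $20/hour driver wage:
print(detour_cost(3, 0.60, 30, 20))  # prints 7.6
```

Even under these modest assumptions, a few such detours per route per day add up quickly, which is the efficiency case for handing the one-off stop to a drone.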

Looking ahead: The future is buzzing

Look for Amazon to continue to push the envelope very publicly, but don’t be shocked to see Wal-Mart come out with a surprise from left field. It has the money; it just has to find the right focus and partner — I think it will. I also anticipate brand-new delivery companies emerging to try to conquer the space, only to eventually be bought out by delivery giants such as UPS and FedEx trying to keep pace with Amazon.

At the end of the day, there are two technologies that are going to be required to make this all happen, and happen successfully to the benefit of the consumer and service provider: a safe and effective drone at a price point that is affordable, and dynamic network-level, multimodal scheduling software that will enable the service provider to efficiently deploy a fleet of drones and integrate them with other resources.

Finally, don’t limit your thinking to delivery of only physical goods. Drones will also be used to deliver data. Insurance companies could rapidly deploy a drone to a car accident scene (especially that of a driverless car) or a home disaster to take photos from multiple directions and collect data in near real time, improving its accuracy. That same drone could rapidly deliver information to emergency services, such as police and fire departments, to improve treatment quality and survival rates.


July 28, 2017  1:15 PM

The internet of things and cybersecurity vulnerabilities

Brian Berger Profile: Brian Berger
#eHealth #Healthcare IOT #Wearables #wireless medical devices, Internet of Things, iot, iot security, Ransomware, security in IOT, smart home

The internet of things is defined as the interconnectivity or networking of devices, sensors and electronics that communicate through software. As IoT is layered onto the familiar computing infrastructure, the changes in data, access, connectivity and monitoring demand a fresh cybersecurity approach. We tend to add technology to the existing fabric of our lives without considering how the bad guys will attack. We are seeing IoT in our homes, automobiles, food safety, medical devices, critical infrastructure and manufacturing — just to name a few areas.

Let’s talk about our homes and us as consumers of IoT first. We have access to some cool and innovative technologies at home. A favorite is Amazon’s Alexa digital assistant. Alexa can turn on lights, change the temperature of a thermostat, change watering days and times on your irrigation controller, manage webcams, and turn the television on and off. All this is amazing, but it raises the question: Have we opened ourselves up to more vulnerabilities at home? The webcam vulnerability was widely illustrated in the distributed denial-of-service attack on Dyn in late 2016.

Medical devices are just as vulnerable now that blood pressure cuffs, glucometers, insulin pumps, pacemakers, ICU real-time monitors and many other devices are connected to the internet. Home healthcare uses wearables for status monitoring and medication reminders. All of these connected devices have been purpose-built for function, with limited security and data protection. We have seen hacks into insulin pumps that manipulate dosing. And as serious as an attack on an individual is, no less serious is the threat to the healthcare systems and records that can be accessed through these devices — all of it personally identifiable information.

Lastly, manufacturing sensors and devices are a common threat because they are unmanaged. As seen with Petya, NotPetya and WannaCry, unmanaged devices have been the target for spreading ransomware across networks. Attackers look for the easiest entry point, and the sensors of unmanaged IoT devices have become active targets. Manufacturing under government contracts has been a key target, and supply chain SMBs now have required guidance for compliance. Some of the most critical aerospace designs have been stolen through cyberattacks, with significant effects on our national security and on the economy as smaller manufacturers lose programs. In food safety, meanwhile, monitoring for and preventing agroterrorism is paramount to protecting our national food supply.

What should we do? The list of actions remains very similar. Make sure no devices are left at their default settings (i.e., change passwords) — a typical flaw in SMB and consumer devices alike. Verify all devices and sensors are managed and monitored. Properly segment your network — create an internal, a guest and an IoT network at a minimum. Other helpful elements of a cybersecurity program include updating firewalls, securing remote access, reviewing security configurations, applying operating system updates and patches, training staff members, and improving security policies and change control procedures.
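As a toy illustration of the first item, changing default passwords, the sketch below audits a made-up device inventory for known factory-default logins. The device names and the credential list are hypothetical, chosen only to show the shape of the check.

```python
# Hypothetical sketch: audit a device inventory for factory-default
# credentials. The inventory and the default list are invented.
DEFAULT_CREDENTIALS = {("admin", "admin"), ("admin", "password"), ("root", "root")}

def audit_default_passwords(inventory):
    """Return device names still using a known factory-default login."""
    return sorted(
        name for name, creds in inventory.items()
        if creds in DEFAULT_CREDENTIALS
    )

devices = {
    "lobby_webcam": ("admin", "admin"),      # never changed: flag it
    "hvac_sensor": ("ops", "Str0ng!Pass"),   # rotated credential: fine
    "loading_dock_plc": ("root", "root"),    # never changed: flag it
}
print(audit_default_passwords(devices))  # prints ['loading_dock_plc', 'lobby_webcam']
```

In practice the inventory would come from a device management system rather than a hardcoded dictionary, but the principle is the same: enumerate everything, then flag anything still wearing its factory settings.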


July 28, 2017  11:43 AM

Blockchain: IoT’s new best friend?

Ron Lifton Profile: Ron Lifton
Blockchain, Data-security, Distributed database, Internet of Things, iot, IoT applications, iot security

Has the internet of things won a new best friend in blockchain? Growing at a compound annual rate of 52%, capital markets applications for blockchain technology are expected to reach a value of $400 million by 2019. On top of that, by 2020 a fifth of IoT deployments will use blockchain services. With these statistics in mind, the answer to the question posed at the start of this article would seem to be “yes,” and you’d be hard-pressed to find anyone who would say “no.” Of course, it’s not quite that straightforward, but let’s first consider the many benefits blockchain can offer IoT before we address how it must be managed.

Using blockchain for IoT transactions and data sharing will ease data concerns, remove single points of failure, cut costs and, perhaps most importantly, streamline processes. Supported by service assurance, IoT and blockchain together will unlock incredible innovation potential in every industry where they are deployed.

The benefits are wide and varied. A drug prescribed to a patient can become visible to all relevant providers regardless of electronic health record compatibility. A connected car that pays tolls and parking automatically could also use barcode technology to open its trunk to receive a dropped-off package. A mobility-as-a-service station could offer transport to passengers and automatically collect payment for public transport, electric car charging, bike-sharing … the list is endless. Combining IoT devices and blockchain is a key, and what it can unlock is going to be the new reality.

Increased resistance means improved security

The key piece of information about blockchain is that its distributed database decentralizes ledgers by sharing a chain of transactions among multiple nodes. The blocks are publicly visible, but their contents can only be seen by organizations holding the correct encryption key. Because transactions must be authorized by multiple parties before acceptance, blockchain carries a high degree of trust.

In addition, transactions can only be added, never subtracted or altered. From a regulatory point of view this is very attractive: a chain of accountability is visible. It also means organizations subject to HIPAA, the EU’s Data Protection Directive and soon GDPR will adopt such technology as a way to improve their transparency.
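A minimal sketch can show why “added, never altered” holds: each block records a hash of its predecessor, so rewriting any past entry breaks verification. This is a teaching toy, not a production ledger; real blockchains add consensus, signatures and far more.

```python
# Teaching toy: an append-only hash chain. Each block stores the hash of
# its predecessor, so tampering with history is detectable.
import hashlib
import json

def add_block(chain, payload):
    """Append a block whose hash covers the payload and the previous hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain):
    """True only if every block still matches what its successor recorded."""
    return all(cur["prev_hash"] == prev["hash"]
               for prev, cur in zip(chain, chain[1:]))

ledger = []
add_block(ledger, "device A pays toll")
add_block(ledger, "device B logs reading")
print(verify(ledger))            # prints True: the chain is intact
ledger[0]["hash"] = "tampered"   # rewrite history...
print(verify(ledger))            # prints False: verification breaks
```

The regulatory appeal described above falls out of this structure: the chain of accountability is the chain of hashes, and any retroactive edit is self-evident.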

When it comes to IoT network deployments, blockchain can facilitate not only financial transactions, but also messaging between devices. By operating in line with embedded smart contracts, two parties can share data without anonymity being compromised. Although blockchain doesn’t solve every security concern and problem for devices that are IoT compatible, such as the hijacking of IoT devices in distributed denial-of-service botnets, it does help protect data from nefarious actors with malicious intent.

Given that IoT devices are projected to increase to astronomical levels — up to 24 billion by 2020 — traditional methods of handling network traffic, such as client/server, will eventually be far too cumbersome to be effective. The simplicity of distributed blockchain is what makes it brilliant. Supported by the growth of edge computing devices and 5G networks, this simplicity will enable faster and more efficient communication between autonomous devices without passing through any single point of failure. When it comes to records of IoT device function, blockchain will also have a part to play, making it possible for devices to communicate autonomously without a single centralized authority.

IoT, blockchain and service assurance

The challenge, then, comes in how this technology will be managed. Blockchain and IoT, like any digital transformation technology, will add a level of complexity to IT infrastructure that has never been seen before. This will include edge devices and servers that participate in the blockchain transactions, middleware for encryption and authentication, and virtual machines that distributed databases and applications will rely on. Even though these autonomous devices and additions will boost efficiency, and the improved availability and added security within could cut costs dramatically, it means that service assurance has become more necessary than ever before.

In an IoT and blockchain environment, load and latency can impact service delivery. And because blockchain is essentially a highly distributed database, assuring that service delivery becomes far more difficult. It demands a holistic, end-to-end visibility tool that delves into packet and session flow — across all load balancers, gateways, service enablers like DNS, networks, servers and databases, distributed or not, and all their interdependent components.

DNS is but one example here. The coming growth in IoT devices combined with blockchain will mean a surge of DNS requests and DNS-dependent services, with a severe impact on both service delivery and performance. In terms of business continuity, the ultra-low latency demanded of typical DNS services should concern businesses, and rightly so, because it directly affects assured IoT performance quality. If DNS performs badly, those IoT and blockchain services will also perform badly, and parts of a connected world that is ever more dependent on automation may come to a standstill.
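A service-assurance check of the kind described might summarize DNS lookup timings against a latency budget. The sample timings and the 100 ms budget below are illustrative assumptions, not measurements from any real tool.

```python
# Illustrative sketch: summarize DNS resolution timings against a latency
# budget, the sort of check a service-assurance tool might run. The sample
# timings and the 100 ms budget are assumptions for illustration.
def dns_health(timings_ms, budget_ms=100):
    """Report how many lookups breached the budget and the worst case."""
    breaches = [t for t in timings_ms if t > budget_ms]
    return {
        "lookups": len(timings_ms),
        "breaches": len(breaches),
        "worst_ms": max(timings_ms),
        "healthy": not breaches,
    }

# Simulated lookup timings (milliseconds) for an IoT gateway's DNS server:
sample = [12, 18, 9, 240, 15, 11]
print(dns_health(sample))
# prints {'lookups': 6, 'breaches': 1, 'worst_ms': 240, 'healthy': False}
```

In a real deployment the timings would be harvested from packet and session flow data; the point of the sketch is that even one 240 ms outlier is enough to mark the service unhealthy, which is exactly the kind of signal troubleshooting teams need surfaced automatically.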

Healthcare, manufacturing, energy distribution, transportation — all critical services — could be derailed by the slow performance of one device or by DNS problems. Losing control is avoidable, however, and the remedy leads back to the holistic end-to-end visibility mentioned earlier. IT teams will need visibility into DNS issues such as errors and busy servers; this will cut average troubleshooting time drastically and enable those teams to fix any issues that arise.

The combination of smart data, superior analytics and visibility into a network will allow IT professionals to understand the full context of service performance, and any DNS anomalies, that may be contributing to poor user experience, application performance and service delivery. This is the future of the IoT network, and it’s not a matter of “if,” it’s a matter of “here it comes, ready or not.”

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

July 27, 2017  4:28 PM

Security operations is broken, and AI can fix it

Greg Martin Profile: Greg Martin
ai, Artificial intelligence, cybersecurity, iot security, IT employees, Security, security in IOT

Every day, it seems, we read headlines about a new data breach or cyberattack. Then we talk about how to improve cybersecurity to prevent similar attacks from happening in the future. Chief among the issues to address is a lack of security personnel to fill vacant positions: How can we improve security if we don’t have the people to perform the work?

To uncover how to improve security, we must first consider that the way we perform security operations is broken. Security operations teams — often part of a centralized security operations center — are responsible for defending a given organization from the latest emerging threats. Teams of analysts monitor intelligence sources, including the news, social media, vendor intelligence partners and the FBI’s publicly available guidance, for information about potential new cyberattacks that might target their organization. We have supported these human analysts thus far with complex deployments of layered cybersecurity defenses, an approach that has proven largely unsuccessful.

One way security operations teams can improve their ability to identify and combat threats is by improving the speed with which they process and react to those threats. Introducing speed into today’s security environment requires artificial intelligence (AI).

Human scale does not meet today’s cybersecurity demand

Analysts must collect information, then quickly turn that data into something they can use for threat detection, response and remediation. Currently, this is a manual, human analyst-driven process that often takes too long to complete: Just 33% of respondents to the 2016 SANS Institute Incident Response Survey reported they were able to remediate events in less than one day.

By some estimates, the average data breach now costs organizations $4 million, a sure sign that closing the gap between event occurrence and remediation should be a priority. To close the gap, companies are investing more heavily in information security, especially in the areas of security testing and data loss prevention. Organizations are making these investments in technology and security infrastructure in part to make up for the lack of humans to fulfill security analyst roles.

Yet, even if enough human security analysts existed to fill the open positions, the work humans do cannot scale to deal with the number of attacks we see today. Security operations is at a tipping point. We do not have the capacity to process the volume of threats the average organization sees today. To keep pace, we must lean on machine learning technology and artificial intelligence.

AI’s maturity means it’s ready for security

Thanks to innovations in the last few years, AI is finally mature enough to apply to a security environment. Specifically, deep learning is far enough along in its development to push us past this tipping point. Deep learning allows computers to comb through large amounts of data and find abnormalities; in a cybersecurity environment, the abnormalities a deep learning algorithm detects represent potential threats. The progress we’ve made with deep learning is important — computers endowed with the technology can collect, analyze and disseminate information much faster than a team of human security analysts can.

Human brains aren’t designed to work through millions of computer log messages each day. Deep learning AI is designed to do just that. Via deep learning, security operations teams can teach machine learning algorithms to do what a security operations analyst would do, 100 or even 1,000 times faster. That 1,000-fold improvement is what operationalized speed looks like in a cybersecurity environment.
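Deep learning itself is far beyond a short example, but the underlying pattern — learn a baseline of normal behavior, then surface deviations for an analyst — can be sketched with a simple statistical stand-in. The log counts and threshold below are invented for illustration:

```python
import math

def flag_anomalies(hourly_counts, threshold=3.0):
    """Flag hours whose log-message volume deviates from the baseline.

    A z-score stand-in for what a trained model would do at far larger
    scale: establish 'normal', then flag outliers for investigation.
    """
    mean = sum(hourly_counts) / len(hourly_counts)
    variance = sum((c - mean) ** 2 for c in hourly_counts) / len(hourly_counts)
    std = math.sqrt(variance) or 1.0  # avoid divide-by-zero on flat data
    return [i for i, c in enumerate(hourly_counts)
            if abs(c - mean) / std > threshold]

# 24 hours of message counts; hour 13 shows a sudden spike
counts = [100, 98, 102, 97, 103, 99, 101, 100, 96, 104,
          99, 101, 98, 900, 102, 97, 100, 103, 99, 101,
          98, 102, 100, 97]
print(flag_anomalies(counts))  # [13]
```

A production system would replace the z-score with a model trained on far richer features (users, hosts, message content), but the shape of the workflow is the same.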

Organizations can start operationalizing speed today

Organizations can begin to operationalize speed in their security operations teams today. They must create an environment that supports and embeds artificial intelligence, specifically deep learning-enabled algorithms. The following recommendations detail steps the heads of security can take toward operationalizing speed via AI:

  • Develop a security big data lake. Organizations can train algorithms with existing data so that they will be better equipped to collect and analyze incoming threat data in the future.
  • Implement data fusion. Once security operations teams establish a data lake — a holding tank for collecting and storing threat data — they must also ensure this lake collects data from a number of different sources into one easily accessible place.
  • Hire a data scientist, or involve data scientists in the security operations team. While security operations teams must lean on AI to make their tasks more efficient, they will still need someone trained to analyze and understand data to help the team grasp what it is seeing.
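The data fusion step above can be sketched in miniature. The feed names and record format here are hypothetical; real feeds (STIX bundles, CSV exports, vendor APIs) would each need their own normalization step before landing in the lake:

```python
def fuse_threat_feeds(feeds):
    """Merge indicator records from several feeds, deduplicating on the
    indicator value and tracking every source that reported it."""
    fused = {}
    for source_name, records in feeds.items():
        for record in records:
            key = record["indicator"]
            entry = fused.setdefault(key, {"indicator": key,
                                           "type": record["type"],
                                           "sources": set()})
            entry["sources"].add(source_name)
    return fused

# Hypothetical feeds: a vendor intelligence feed and an FBI advisory
feeds = {
    "vendor_feed": [{"indicator": "198.51.100.7", "type": "ip"},
                    {"indicator": "evil.example", "type": "domain"}],
    "fbi_flash":   [{"indicator": "198.51.100.7", "type": "ip"}],
}
fused = fuse_threat_feeds(feeds)
print(fused["198.51.100.7"]["sources"])  # reported by both feeds
```

Corroboration across sources is itself a useful signal: an indicator reported by multiple independent feeds usually deserves higher priority.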

The lack of cybersecurity analysts, combined with the glut of data they must analyze, makes for a problem that scales beyond what humans alone can handle. Today’s security analysts spend too much time on events like common malware infections — events that should not require their time and energy — instead of the threats that matter. To alleviate this problem, security operations teams must improve the efficiency with which they identify and remediate threats. They cannot scale their speed to meet those threats without leaning heavily on artificial intelligence.

Security need only look at Netflix for inspiration

We believe organizations can bring the same AI innovations that allowed Netflix to revolutionize how we choose our entertainment to bear on security operations, where millions of security events and alerts need categorization and prioritization.

Once security analysts can investigate real threats, as opposed to sifting through endless streams of alerts to determine what is worth investigating, we’ll experience the improvements in security necessary to combat the multitude and complexity of threats we see today. Now, it is time to look forward to a bright future for those bold enough to take advantage of these developments and innovate in this new security paradigm.


July 27, 2017  1:31 PM

Thanks to IoT, care for farm animals is improving and evolving

Scott Amyx Profile: Scott Amyx
Agriculture, Animals, farm, farmers, farms, food, IIoT, Internet of Things, iot, IoT analytics, IoT data, IoT sensors, Sensors, Wearables

The emergence of the internet of things in recent years has fired the imagination of innovators, and the technology keeps finding innovative uses in new areas. One of those areas is care for farm animals.

With the aid of IoT, farm owners are maintaining more accurate data on their animals and increasing productivity. Using IoT is also laying the foundation for precision farming.

The need

Consumer preferences and demands are evolving, in turn forcing industries to evolve for the better. Regulations are set in place and keep evolving to ensure customer satisfaction. The agricultural industry has also experienced this evolution. Consumers today want to know the origin of their products: the state of the livestock that produce their milk, the condition of the animals at the time of slaughter, the medications the animals received and their effects and side effects, and so on.

Farmers, therefore, need to keep a meticulous record of the diet and medication of livestock. For example, farmers in Germany must maintain documentary details of the antibiotics they use (according to DART 2008; USDA 2011). Conventional means of acquiring and maintaining data are not efficient enough to do the job. To fulfill industry requirements and consumer demands, several variables need to be rigorously monitored, and IoT is emerging as a credible enabler of these efforts.

According to a 2013 United Nations projection, the world’s population will grow to approximately 9.6 billion by 2050. This implies that the demand for food, including food that originates from the agricultural industry and specifically from farm animals, will grow substantially. It is therefore imperative that the industry grow proportionally and that farm animals become more productive.

By 2050, the world’s demand for animal products is expected to reach roughly the following figures: 106 million tons of beef, 25 million tons of mutton, 143 million tons of pork, 181 million tons of poultry and 102 million tons of eggs.

How IoT addresses the needs

Sensor-equipped wearables now gather information from livestock to improve productivity and animal health. They acquire information about an animal’s behavior, health, injuries and medication regimen, along with statistics such as lactation and fertility. With this technology, data acquisition has become simpler and more effective, making extensive data processing possible. Scientists can improve medicine and diet regimens based on the collected information, and farmers know what to do and when to do it. For example, sensors deployed on farms inform farmers of failures such as a broken ventilation system; timely notification of such failures saves lives and prevents health issues in livestock.

IoT is addressing the demands of the agricultural industry in various ways, including the following:

  • Improving offspring care: Monitoring the health of offspring and the variables around them ensures healthy, productive livestock and reduces the number of animals lost to preventable causes.
  • Mitigating health hazards: Sensors can detect potential health hazards, such as toxic gases that build up when ventilation systems are inadequate or failing. This helps maintain livestock health, and the result is an overall increase in productivity.
  • Round-the-clock animal tracking: Continuous tracking makes it easy to locate animals on larger pastures and farms.
  • Leveraging fertility windows: Each animal has its own season and a specific window in which maximum results are achievable. Sensors monitor livestock health and conditions so that window is not missed.
  • Enhancing lactation: Sensors detect which cow needs a specific diet to improve its milk production and notify farmers of the best time to milk each cow.
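Much of the monitoring above reduces to comparing sensor readings against calibrated safe ranges. The thresholds and field names in this sketch are hypothetical; a real deployment would calibrate them per species and per barn:

```python
# Hypothetical safe-range thresholds for a barn monitoring system
THRESHOLDS = {
    "ammonia_ppm":     25.0,  # toxic-gas buildup suggests ventilation trouble
    "temperature_c":   30.0,
    "airflow_m3_min":   2.0,  # below this, a ventilation fan may have failed
}

def check_barn(readings):
    """Return alert messages for any reading outside its safe range."""
    alerts = []
    if readings["ammonia_ppm"] > THRESHOLDS["ammonia_ppm"]:
        alerts.append("ammonia high: check ventilation")
    if readings["temperature_c"] > THRESHOLDS["temperature_c"]:
        alerts.append("temperature high")
    if readings["airflow_m3_min"] < THRESHOLDS["airflow_m3_min"]:
        alerts.append("airflow low: possible fan failure")
    return alerts

print(check_barn({"ammonia_ppm": 32.0, "temperature_c": 24.5,
                  "airflow_m3_min": 0.4}))
```

In practice these checks would run on a gateway close to the sensors, so an alert still fires even when the farm's internet connection is down.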

Precision farming

The inclusion of IoT is adding an exciting new dimension to agriculture and helping it evolve to cater to the growing demand for food. Monitoring many small variables enables researchers and farmers to tend efficiently to the needs of their livestock, laying the foundation of precision farming. By addressing minute needs before they develop into big challenges, the system becomes more efficient and productivity steadily increases to meet the ever-growing demand.


July 27, 2017  11:37 AM

How data is gaining value

Gordon Haff Profile: Gordon Haff
Data, Data Analytics, Google, Internet of Things, iot, IoT analytics, IoT data, Kubernetes, Machine learning, Open source

Historically, much of the value that vendors associated with software was in the algorithms and the code. Well, that and the lock-in created by the dominant market share of proprietary software products and their proprietary formats. Sure, many companies build proprietary technologies on top of open source code. This sort of arrangement is especially prevalent in the public cloud world. Because cloud providers don’t distribute their products in the traditional sense, licenses impose fewer restrictions on how they can use open source software as part of their technology.

However, some public cloud providers also contribute to open source projects; Google’s contributions to Kubernetes and TensorFlow are cases in point. Open source projects with strong communities have demonstrated an effectiveness as a development model that’s impossible to ignore.

TensorFlow is a particularly interesting case in the context of IoT and machine learning. Google describes it as “an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them.” Open sourced by the Google Brain team, it reached version 1.0 earlier this year.
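The data-flow-graph model Google describes can be illustrated without TensorFlow itself. In this toy Python sketch (not TensorFlow's actual API), nodes are operations and the values flowing along edges stand in for tensors:

```python
class Node:
    """A node in a toy data flow graph: an operation plus its inputs."""
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

    def evaluate(self):
        # Edges "carry" values: each input node is evaluated first,
        # then this node's operation consumes the results.
        return self.op(*(n.evaluate() for n in self.inputs))

# Build the graph for (a + b) * c, then run it -- analogous to
# defining a TensorFlow graph and then executing it in a session.
a = Node(lambda: 2.0)
b = Node(lambda: 3.0)
c = Node(lambda: 4.0)
add = Node(lambda x, y: x + y, a, b)
mul = Node(lambda x, y: x * y, add, c)
print(mul.evaluate())  # 20.0
```

Separating graph construction from execution, as TensorFlow does, is what lets the runtime optimize and distribute the computation across CPUs, GPUs and clusters.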

Why interesting?

Why companies choose to open source

Consider that you might reasonably think that a lot of Google’s (and, more broadly, Alphabet’s) IP is wrapped up in algorithms, software and general know-how around artificial intelligence. Or, really, ways to extract insights, deliver results or take actions based on data — whether that means displaying a personalized ad or enabling a car to drive autonomously. TensorFlow would seem to be squarely in the domain of these “crown jewels.” And yet here it is open sourced.

In part, this reflects a wider pattern around open source generally. You may write (and tightly control) a lot of internal code relevant to your business. But you can also benefit from opening up related software to a wider pool of developers and users. It’s not either/or.

Thus, you have financial institutions cooperating on blockchain, messaging and other technologies while also holding back plenty of proprietary trading algorithms. TensorFlow likewise represents a small slice of Google’s machine learning research.

It’s also about the data

However, there’s something else going on too. With the almost countless petabytes of data that will be generated by IoT systems and connected devices more broadly, value is shifting from code to data. Of course, it’s far from trivial to figure out how to do useful things with the data you collect. But to the degree that an organization controls and owns data, it may choose to focus on effectively monetizing that data while maximizing the community and development velocity of the software.

We see this happening elsewhere as well. Baidu has open sourced autonomous driving technology. The storyline is that this is a way to level the playing field against other vendors taking a more traditional proprietary approach. But it also appears as if the company views the data it is collecting as a competitive differentiator that it doesn’t plan to release.

The new data-driven services

We see the value of data in many of the new services organizations are starting to deliver. For example, as noted in an MIT Sloan Management Review article, “GE wants to go beyond helping its customers manage the performance of individual GE machines to managing the data on all of the machines in a customer’s entire operation.” Other services can include optimizing preventative maintenance schedules on jet turbines and other industrial machinery.

Huge data lakes that aren’t curated or properly analyzed aren’t a benefit, of course. In fact, data that’s not used intelligently can have a negative ROI because of its collection, storage and management costs. But when data can be effectively used to guide actions and gain useful insights, it’s an increasingly large part of a technology company’s value. (And almost every organization is a technology company today.)


July 26, 2017  3:57 PM

Using artificial intelligence for forgery: Fake could be eerily real

Scott Amyx Profile: Scott Amyx
ai, Artificial intelligence, criminals, IIoT, Internet of Things, iot, Neural network, privacy, Security

Artificial intelligence is rapidly finding application in a myriad of fields, enhancing both the pace and quality of work. The tasks performed by AI are evolving so rapidly that some scientists already fear the rise of machines. That might be far-fetched, but AI does bring along some genuine areas of concern, primarily because it has become a powerful tool that simplifies high-skill tasks.

AI is at the disposal of anyone who wants to perform a task that requires extensive training without any prior experience. Analytics, big data and machine learning help us analyze vast amounts of information and use it to predict future outcomes. They can, however, also be used to mislead, forge and deceive.

Audio and video forgery capabilities are making astounding progress thanks to a boost from AI, which is enabling some effective but downright scary new tools for manipulating media. It therefore holds the power to alter forever how we perceive and consume information. The time will come when people will struggle to know whom and what to trust. Even today, we live in a world of Photoshop, CGI and AI-powered selfie filter apps. The internet democratized knowledge by enabling free but unregulated and unmonitored access to information. As a result, the floodgates opened to all types of information, ushering in a staggering amount of rumors and lies.

Criminals are using this technology to their benefit. Readily available tools can create high-quality fake videos that can easily fool the general population. In essence, using AI to forge videos is transforming the meaning of evidence and truth in journalism, government communications, testimony in criminal justice and national security.

Audio forgery

Lyrebird is a deep learning AI startup based in Montreal that synthesizes surprisingly realistic-sounding speech in anyone’s voice with a one-minute recording.

Creative software giant Adobe has been working on a similar technology, VoCo, which it has labeled “Photoshop for audio.” The software requires a 20-minute audio recording of someone talking. The AI analyzes it, figures out how that person talks and then learns to mimic the speaking style. Type anything, and the software will read your words in that person’s voice.

Google’s WaveNet provides similar functionality. It requires a much bigger data set than Lyrebird or VoCo, but it sounds creepily real.

MIT researchers are also working on a model that can generate sound effects for an object being struck in a silent video. The generated sound is realistically close to what the object sounds like when struck in real life. The researchers envision a future version automatically producing sound effects good enough for use in movies and television.

With such software, it will become easy to record something controversial in anyone’s voice, rendering voice-based security systems helpless. Phone calls could be spoofed; no one will be exactly sure it is you on the other end of the line. At the current pace of progress, realistic audio forgeries may be good enough to fool the untrained ear within two to three years, and good enough to fool forensic analysis within five to 10 years.

Visual forgery

Tom White at Victoria University School of Design created a Twitter bot called “SmileVector” that can make any celebrity smile. It browses the web for pictures of faces and then it morphs their expressions using a deep-learning-powered neural network.

Researchers at Stanford and various other universities are also developing astonishing video capabilities. Using an ordinary webcam, their AI-based software can realistically change the facial expression and speech-related mouth movements of an individual.

Pair this up with any audio-generation software and it becomes possible to deceive anyone, even over a video call. Someone could also make a fake video of you doing or saying something controversial.

Jeff Clune, an assistant professor at the University of Wyoming, and his team at the Evolving AI Lab are working on image recognition in reverse, adapting neural networks trained for object recognition to generate synthetic images from a text description alone. The neural network is trained on a database of similar pictures; once it has gone through enough of them, it can create pictures on command.

A startup called Deep Art uses a technique known as style transfer, which uses neural networks to apply the characteristics of one image to another, to produce realistic paintings. A Russian startup refined the technique in a mobile app named Prisma, which allows anyone to apply various art styles to pictures on their phones. Facebook also unveiled its version of this technique, adding a couple of new features.

Other work being done on multimedia manipulation using artificial intelligence includes the creation of 3D face models from a single 2D image, changing the facial expressions of someone on video using the Face2Face app in real time and changing the light source and shadows in any picture.

Identity theft

A team of researchers at University College London developed an AI algorithm titled “My Text in Your Handwriting,” which can imitate any handwriting. This algorithm only needs a paragraph’s worth of handwriting to learn and closely replicate a person’s writing.

Luka, an AI-powered memorial chatbot, also aspires to mimic real-life people. It can learn everything about a person from his or her chat logs and then allow the person’s friends to chat with that digital identity long after he or she dies. The chatbot, however, could also be used before a person dies, effectively stealing that person’s identity.

Socio-political impact

A study by the Computational Propaganda Research Project at the Oxford Internet Institute found that half of all Twitter accounts regularly commenting about politics in Russia were bots. Secret agents control millions of botnet social media accounts that tweet about politics in order to shape national discourse. Social media bots could even drive coverage of fake news by mainstream media and even influence stock prices.

Imagine those agents and botnets also armed with artificial intelligence technology. Fake tweets and news backed by real-looking HD video, audio, handwriting specimens and government documents are eerily scary. Not only is there the risk of falsehood being used to malign the honest, but the dishonest could misuse it in their defense.

Before the invention of the camera, recreating a scene in a court of law required witnesses and various testimonies. Eventually, photographs began assisting the witnesses. But later, with the advent of digital photography and the rise of Photoshop, photographs stopped being admissible as reliable evidence. Right now, audio and video recordings are admissible evidence, provided they are of a certain quality and not edited. It’s only a matter of time before the courts refuse audio and video evidence too, however genuine it might seem. The AI-powered tool that imitates handwriting could allow someone to manipulate legal and historical documents or create false evidence for use in court.

Countering the rise of forgery

With the potential misuse of artificial intelligence, the times ahead do indeed seem challenging. But the beautiful thing about technology is that you can always expect solutions to any problem.

Blockchain, the technology securing cryptocurrencies, promises a cybersecurity solution for the internet of things and offers one possibility for countering forgery. With the widespread use of IoT and advancements in embedded systems, it may be possible to design interconnected cameras and microphones that use blockchain technology to create an unimpeachable record of when a video recording was created. Photos can be traced back to their origin by their geotags.
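The idea can be sketched independently of any particular blockchain. In this minimal illustration the ledger is just an append-only Python list standing in for a distributed ledger; in a real system each camera would anchor its digests on-chain at capture time so no single party could rewrite the record:

```python
import hashlib
import time

# Stand-in for a blockchain: an append-only log of (timestamp, digest)
# records. Tampering with a registered recording changes its digest,
# so the altered copy no longer matches any ledger entry.
ledger = []

def register_recording(data: bytes) -> str:
    digest = hashlib.sha256(data).hexdigest()
    ledger.append({"timestamp": time.time(), "sha256": digest})
    return digest

def verify_recording(data: bytes) -> bool:
    """True only if this exact byte stream was registered earlier."""
    digest = hashlib.sha256(data).hexdigest()
    return any(entry["sha256"] == digest for entry in ledger)

original = b"...raw video bytes captured by the camera..."
register_recording(original)
print(verify_recording(original))                 # True
print(verify_recording(original + b" tampered"))  # False
```

Note that this proves only integrity and time of registration, not authenticity of content; a forger could still register a fake at creation time, which is why provenance schemes pair hashing with signed, tamper-resistant capture hardware.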

The Art and Artificial Intelligence Laboratory at Rutgers University is currently developing a neural network that can appreciate and understand art and the subtle differences within a drawing. It uses machine learning algorithms to analyze images, counting and quantifying different aspects of what it observes, and processes that information through an artificial neural network to recognize visual patterns in the artwork.

Similarly, neural networks can be trained to detect forged documents, historical evidence and currency notes. They can also identify fake identities and other bots on the internet by observing their behavior patterns and conflicting IP addresses. Forged video and audio could be compared across various dates and platforms to identify their origin.

In addition, regulatory and procedural reforms are required to control this menace.

Even though the audio and video manipulation tools aren’t entirely revolutionary, they no longer require professionals and powerful computers. We can’t stop criminals from getting their hands on such tools. If anything, making these tools available to everyone just to show people what’s happening with AI will make the public aware of the power of artificial intelligence — and hence, aware of the easily forgeable nature of our media.

