In my conversations with industrial companies looking to start or accelerate their journey toward the industrial internet of things, I’ve begun to see a phenomenon among industrial technologists that’s not unlike Darwin’s theory of evolution. Adaptation is the key to that theory, and it’s something industrial technologists must do well as the environment changes around them.
In the past, there has been a clear divide between IT teams — which control the data center — and operational technology (OT) teams — which are responsible for the care and feeding of operational automation systems. These two distinct teams had different skill sets, backgrounds and priorities. Today, to bridge the gap that has traditionally separated the two, a new breed of what I like to call “hybrid OT” professionals is emerging. IT and OT responsibilities and skill sets are converging, making the individual who can do both a valuable technologist.
What is causing this shift? There are two big things I see driving this change:
New responsibilities breed new roles — As more computing power and data collection have made their way to the edge of industrial networks, a new combination of skills is required to manage these assets (historically the domain of OT), giving “birth” to the IT/OT hybrid. We saw a similar shift occur with the rise of cloud computing: developers struggled to get IT to respond to their needs, so they turned instead to public cloud services for answers. As developers took it upon themselves to secure the IT infrastructure needed to run their applications in the cloud, the role of DevOps was born.
A generation ready for 21st century challenges — Many OT professionals who have been in the industry a long time are now approaching retirement, and a new generation is taking their place. This generation of younger digital natives is not intimidated by technology — they were, in fact, raised on it. They see the promise of IIoT and will look to realize it as they push intelligence out to the edge and leverage data and analytics in new ways.
The most forward-looking industrial enterprises are the ones that see the value in hiring professionals who are just as comfortable working with servers as they are with machine tools, packaging lines, pumps and valves. Enterprises actively recruiting these hybrid OT professionals are drawn to skills that will be valuable in managing both IT and OT technologies. Whatever their background — IT, data science, industrial engineering — these individuals share a passion for the intersection of technology and industrial operations.
New expectations for the technology they use will also come with the role. System availability has become an absolute necessity for business continuity, and hybrid OT professionals will expect it from their systems and technology vendors. Similarly, they see the value in the data produced at the edge and will lean on technology vendors to make data protection a top priority for the enterprise.
When will we see this new breed emerge? The answer is that the evolution will happen soon — much more quickly than it occurs in the natural world. Over the next two to three years, I believe the industry will see a major influx of this hybrid breed.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
With the drone industry netting $8 billion last year and consumer spending for on-demand services topping $57 billion, the implementation and deployment of drone delivery fleets is becoming more of a reality each day. From Amazon’s futuristic drone delivery tower proposal to parachuting packages on the horizon, we are nearing a complete delivery transformation. While these innovations will need to be fine-tuned as the commercial drone industry flourishes, several factors, such as operational inefficiencies, continue to work against the Amazons and Ubers of the world, hindering supply chain innovation.
The mainstream drone delivery timeline
We have already seen the start of drone deliveries, but they likely won’t become truly widespread and accepted until 2025. Over the next seven years, we’ll see an overall increase in this new delivery vehicle, but a large portion of the test runs, followed by operational deployments, will be heavily focused in rural areas, where the safety risks are smaller and the logistics are much simpler to manage. Contrary to popular belief, dense metro areas present numerous challenges and risks — think traffic, privacy issues, power lines, high-rise buildings, sudden wind gusts and crowded streets below.
Instead of thinking about smart cities, we should shift our focus, at least for drones initially, to smart rural areas. It’s much more likely we’ll see drones deliver common goods like food and medication to remote homes and offices than to a busy suburb. Instead of driving over an hour one way into town to pick up a prescription, a consumer could choose to have it delivered by drone. Autonomous vehicles and drones could, and should, even partner for optimal delivery efficiency — companies like UPS and Mercedes are already testing a self-driving van and drone combination as a solution to the rural delivery challenge. Meanwhile, cities will shift their attention to small, less risky sidewalk bots to help enable the “life on-demand” luxury in metro areas.
Before these services can become a reality, we need to start addressing the inherent challenges to implementation. While there are several concerns that need to be addressed, the ones we should be thinking about sooner rather than later relate to the safe deployment of these technologies. The obvious safety element everyone is looking at, which I am sure will indeed be addressed, is that of the single drone: How will it fly safely, avoid obstacles, carry its packages and so forth? However, it is the deployment of a fleet of drones I am more concerned about. Before they can become commonplace, live operational testing is mandatory to ensure we can deploy several drone fleets without overcrowding the airways — a scenario that could completely erode the potential benefits these technologies present, and moreover create a potentially unforeseen safety risk.
To adopt this technology and do it right, we need to start thinking about a higher-level network of connected drones — no matter the parent company. This means taking a look at how they should be deployed, managed and used so that not only individual consumers but society as a whole can experience the benefits of these technologies. Whenever a new method is introduced into everyday use, a great deal of strategy needs to go into deciding which services are implemented where.
Who will be the first adopters?
One advantage drones offer is the low overhead required compared with traditional delivery methods like trucks and airplanes. A large-scale delivery model requires enormous capital investment — drivers, trucks, space to store equipment, regional depots and much more — to be successful. This presents a substantial barrier to small competitors trying to enter the market, even if their model is more efficient than the delivery giants’. Drones, on the other hand, are less expensive to buy, store, maintain and operate, and because of this, we’ll likely see many small companies launch drone delivery services. The technology’s flexibility gives both corporate giants and mom-and-pop shops an opportunity to operate successfully in this industry. That in itself poses an added risk: very large numbers of drones all flying around the same airspace in an uncoordinated fashion. Inefficiency and risk galore.
Additionally, companies like FedEx and UPS will look to break into the drone business to stay competitive. As drone technology improves and new businesses emerge, it’s likely drones will slowly begin to replace some of the last-mile routes traditionally fulfilled by trucks and vans, increasing efficiency across the board. Just think: if one delivery in a driver’s route is three miles out of the way of every other stop, that’s six miles round trip for a single package. The time spent driving to the delivery destination and back is not only an inefficient use of the employee’s time, but a costly waste of company resources (assuming, of course, the package is small enough to be delivered by a drone instead). In the future, delivery vans will likely be equipped with drones to handle one-off package deliveries, optimizing the driver’s daily route while reducing operating costs. Combined with the extensive distribution networks and infrastructure they already have in place, FedEx and UPS will have a huge leg up on potential competitors looking to enter the drone arena.
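The round-trip arithmetic above can be sketched in a few lines. The per-mile costs here are purely illustrative assumptions, not industry figures:

```python
# Back-of-the-envelope sketch of the out-of-the-way delivery example above.
# Both per-mile cost figures are illustrative assumptions, not industry data.

TRUCK_COST_PER_MILE = 1.50   # assumed all-in cost of running a delivery van
DRONE_COST_PER_MILE = 0.10   # assumed marginal cost of a small delivery drone

def detour_savings(one_way_miles: float) -> float:
    """Cost saved by handing one out-of-route stop to a van-launched drone."""
    round_trip = 2 * one_way_miles
    return round_trip * (TRUCK_COST_PER_MILE - DRONE_COST_PER_MILE)

# The three-mile detour from the text: six miles round trip.
print(detour_savings(3.0))
```

Multiplied across thousands of routes per day, even a few dollars per detour adds up quickly.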
Looking ahead: The future is buzzing
Look for Amazon to continue to push the envelope very publicly, but don’t be shocked to see Wal-Mart come out with a surprise from left field. It has the money; it just has to find the right focus and partner — and I think it will. I also anticipate brand-new delivery companies emerging to try to conquer the space, only to eventually be bought out by delivery giants such as UPS and FedEx trying to keep pace with Amazon.
At the end of the day, there are two technologies that are going to be required to make this all happen, and happen successfully to the benefit of the consumer and service provider: a safe and effective drone at a price point that is affordable, and dynamic network-level, multimodal scheduling software that will enable the service provider to efficiently deploy a fleet of drones and integrate them with other resources.
Finally, don’t limit your thinking to the delivery of physical goods. Drones will also be used to deliver data. Insurance companies could rapidly deploy a drone to a car accident scene (especially that of a driverless car) or a home disaster to take photos from multiple directions and collect data in near real time, increasing its accuracy. That same drone could rapidly relay information to emergency services, such as police and fire departments, improving treatment quality and survival rates.
The internet of things is defined as the interconnectivity or networking of devices, sensors and electronics that communicate through software. IoT changes data, access, connectivity and monitoring in ways the familiar computing infrastructure does not, and those changes demand a rethought cybersecurity approach. We tend to add technology to the existing fabric of our lives without considering how the bad guys will attack. We are seeing IoT in our homes, automobiles, food safety, medical devices, critical infrastructure and manufacturing — just to name a few areas.
Let’s talk first about our homes and ourselves as consumers of IoT. We have access to some cool and innovative technologies at home. A favorite is Amazon’s Alexa digital assistant. Alexa can turn on lights, change a thermostat’s temperature, change watering days and times on an irrigation controller, manage webcams, and turn the television on and off. All of this is amazing, but it raises a question: Have we opened ourselves up to more vulnerabilities at home? Webcam vulnerability was widely illustrated in the distributed denial-of-service attack on Dyn in late 2016.
Medical devices are just as vulnerable, now that blood pressure cuffs, glucometers, insulin pumps, pacemakers, ICU real-time monitors and many others are connected to the internet. Home healthcare uses wearables for status monitoring and medication reminders. All of these connected devices have been purpose-built for function, with limited security and data protection. We have seen hacks into insulin pumps that manipulate dosing. As serious as an attack on an individual is, the broader threat is to the healthcare systems and records — full of personally identifiable information — that can be accessed through these devices.
Lastly, manufacturing sensors and devices are a common attack vector because they are unmanaged. As seen with Petya, NotPetya and WannaCry, unmanaged devices have been the target for spreading ransomware across networks. Attackers look for the easiest entry point, and the sensors of unmanaged IoT devices have become active targets. Manufacturing under government contracts has been a key target, and supply chain SMBs now have required guidance for compliance. Some of the most critical aerospace designs have been stolen through cyberattacks, with significant effects on national security as well as on the economy through programs lost by these smaller manufacturers. In food safety, monitoring for and preventing agroterrorism is paramount to protecting the national food supply.
What should we do? The list of actions remains very similar:
- Make sure no device is left at its default settings (i.e., change passwords) — a typical flaw in SMB and consumer devices alike.
- Verify all devices and sensors are managed and monitored.
- Properly segment your network — create an internal, guest and IoT network at a minimum.
Other helpful elements of a cybersecurity program include updating firewalls, securing remote access, reviewing security configurations, applying operating system updates and patches, training staff members, improving security policies and tightening change control procedures.
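As a toy illustration of auditing against the first items on that list, the sketch below checks a device inventory for default credentials and missing network segmentation. The inventory format, field names and credential list are invented for the example:

```python
# Hypothetical device-inventory audit against two checklist items:
# default credentials and network segmentation. All names are illustrative.

DEFAULT_CREDENTIALS = {("admin", "admin"), ("admin", "password"), ("root", "root")}

inventory = [
    {"name": "lobby-cam", "user": "admin", "password": "admin", "segment": "corp"},
    {"name": "hvac-ctrl", "user": "ops", "password": "S3gm3nt!", "segment": "iot"},
]

def audit(devices):
    findings = []
    for d in devices:
        # Checklist item 1: device still using factory-default credentials.
        if (d["user"], d["password"]) in DEFAULT_CREDENTIALS:
            findings.append(f"{d['name']}: default credentials still set")
        # Checklist item 3: IoT device sitting on the internal network.
        if d["segment"] not in ("iot", "guest"):
            findings.append(f"{d['name']}: not on a segmented IoT/guest network")
    return findings

for finding in audit(inventory):
    print(finding)
```

A real audit would pull the inventory from an asset-management system rather than a hardcoded list, but the checks themselves stay this simple.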
Has the internet of things won a new best friend in blockchain? Growing at a compound annual rate of 52%, capital markets applications of blockchain technology are expected to reach a value of $400 million by 2019. On top of that, by 2020 a fifth of IoT deployments will use blockchain services. With these statistics in mind, the answer to the question posed at the start of this article would be “yes,” and you’d be hard-pressed to find anyone who would say “no.” Of course, it’s not quite that straightforward, but let’s first consider the many benefits blockchain can offer to IoT before we address how it must be managed.
Using blockchain for IoT transactions and data sharing will ease data concerns, remove single points of failure, cut costs and, perhaps most importantly, streamline processes. When supported by service assurance, IoT and blockchain will unlock incredible innovation potential in every industry where they are deployed.
The benefits are wide and varied. A drug prescribed to a patient can become visible to all relevant providers, regardless of electronic health record compatibility. A connected car that pays tolls and parking automatically could also use barcode technology to open its trunk to receive a dropped-off package. A mobility-as-a-service station could offer transport to passengers and automatically collect payment for public transport, electric car charging, bike sharing … the list is endless. Blockchain is a key for IoT devices, and what it can unlock is going to be the new reality.
Increased resistance means improved security
The key thing to understand about blockchain is that its distributed database decentralizes the ledger by sharing a chain of transactions among multiple nodes. The blocks are publicly visible, but their contents can be read only by organizations holding the correct encryption key. Because transactions must be authorized by multiple parties before they are accepted, blockchain carries a high degree of trust.
In addition, transactions can only be added, never removed or altered. From a regulatory point of view this is very attractive: a chain of accountability is always visible. It also means organizations subject to HIPAA, the EU’s Data Protection Directive and, soon, GDPR will adopt such technology as a way to improve their transparency.
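The append-only property can be illustrated with a toy hash chain. This sketch omits everything that makes a real blockchain (consensus, distribution, multi-party authorization) and shows only why altering a past entry is detectable:

```python
# Toy append-only ledger: each block stores the hash of its predecessor,
# so changing any past entry invalidates every later link. A sketch of the
# tamper-evidence idea only, not a real blockchain.
import hashlib
import json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append(chain, transaction):
    prev = block_hash(chain[-1]) if chain else "0" * 64  # genesis marker
    chain.append({"prev": prev, "tx": transaction})
    return chain

def verify(chain):
    # Every block must reference the hash of the block before it.
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append(chain, "connected car pays toll")
append(chain, "pump logs insulin dose")
print(verify(chain))           # intact chain verifies: True
chain[0]["tx"] = "tampered"    # rewriting history breaks the links
print(verify(chain))           # False
```

Regulators like this property for exactly the reason the code shows: the chain of accountability cannot be quietly rewritten.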
When it comes to IoT network deployments, blockchain can facilitate not only financial transactions, but also messaging between devices. Operating through embedded smart contracts, two parties can share data without compromising anonymity. Blockchain doesn’t solve every security problem for IoT-compatible devices — the hijacking of IoT devices into distributed denial-of-service botnets, for example — but it does help protect data from malicious actors.
Given that IoT devices are projected to grow to as many as 24 billion by 2020, traditional methods of handling network traffic, such as the client/server model, will eventually be far too cumbersome to be effective. The simplicity of the distributed blockchain is what makes it brilliant. Supported by the growth of edge computing devices and 5G networks, this simplicity will enable faster and more efficient communication between autonomous devices without passing through any single point of failure. Blockchain will also have a part to play in records of IoT device function, making it possible for devices to communicate autonomously without a single centralized authority.
IoT, blockchain and service assurance
The challenge, then, comes in how this technology will be managed. Blockchain and IoT, like any digital transformation technology, will add a level of complexity to IT infrastructure that has never been seen before: edge devices and servers participating in blockchain transactions, middleware for encryption and authentication, and the virtual machines that distributed databases and applications rely on. Even though these autonomous additions will boost efficiency, and the improved availability and added security could cut costs dramatically, they also make service assurance more necessary than ever.
In an IoT and blockchain environment, load and latency can impact service delivery. And because blockchain is essentially a highly distributed database, assuring that service delivery becomes far more difficult. It requires a holistic, end-to-end visibility tool that delves into packet and session flow — across all load balancers, gateways, service enablers like DNS, networks, servers and databases, distributed or not — and all their interdependent components.
DNS is but one example. The coming growth in IoT devices combined with blockchain will mean a surge in DNS requests and DNS-dependent services, with a severe impact on both service delivery and performance. In terms of business continuity, the need for ultra-low latency from typical DNS services should concern businesses, and rightly so, because DNS delays directly affect IoT performance quality. If DNS performs badly, those IoT and blockchain services will also perform badly — and parts of the connected world that are becoming ever more dependent on automation may come to a standstill.
Healthcare, manufacturing, energy distribution, transportation — all critical services — could be derailed by the slow performance of one device or by DNS problems. Losing control is avoidable, however, and the answer leads back to the holistic end-to-end visibility mentioned earlier. IT teams will need visibility into DNS issues such as errors and busy servers; this will cut average troubleshooting time drastically and enable those teams to fix issues as they arise.
The combination of smart data, superior analytics and visibility into a network will allow IT professionals to understand the full context of service performance, and any DNS anomalies, that may be contributing to poor user experience, application performance and service delivery. This is the future of the IoT network, and it’s not a matter of “if,” it’s a matter of “here it comes, ready or not.”
Every day, it seems, we read headlines about a new data breach or cyberattack. Then we talk about how to improve cybersecurity to prevent similar attacks from happening in the future. Chief among the issues to address is a lack of security personnel to fill vacant positions: How can we improve security if we don’t have the people to perform the work?
To uncover how to improve security, we must first acknowledge that the way we perform security operations is broken. Security operations teams — often part of a centralized security operations center — are responsible for defending a given organization from the latest emerging threats. Teams of analysts monitor intelligence sources, including the news, social media, vendor intelligence partners and the FBI’s publicly available guidance, for information about potential new cyberattacks that might target their organization. We have supported these human analysts thus far with a complex deployment of layered cybersecurity defenses. This approach has proven fairly unsuccessful.
One way security operations teams can improve their ability to identify and combat threats is by improving the speed with which they process and react to those threats. Introducing speed into today’s security environment requires artificial intelligence (AI).
Human scale does not meet today’s cybersecurity demand
Analysts must collect information then quickly turn that data into something they can use — threat detection, response and remediation. Currently, this process is a manual, human analyst-driven activity that often takes too much time to complete: Just 33% of respondents to the 2016 SANS Institute Incident Response Survey reported they were able to remediate events in less than one day.
By some estimates, the average data breach now costs organizations $4 million, a sure sign that closing the gap between event occurrence and remediation should be a priority. To close the gap, companies are investing more heavily in information security, especially in the areas of security testing and data loss prevention. Organizations are making these investments in technology and security infrastructure in part to make up for the lack of humans to fulfill security analyst roles.
Yet, even if enough human security analysts existed to fill the open positions, the work humans do cannot scale to the number of attacks we see today. Security operations is at a tipping point. We do not have the capacity to process the volume of threats the average organization sees today. To keep pace, we must lean on machine learning technology and artificial intelligence.
AI’s maturity means it’s ready for security
Thanks to innovations over the last few years, AI is finally technically mature enough to apply in a security environment. Specifically, deep learning is far enough along in its development to push us past this tipping point. Deep learning allows computers to work through large amounts of data and find abnormalities; in a cybersecurity environment, the abnormalities a deep learning algorithm detects represent potential threats. The progress we’ve made with deep learning matters because computers endowed with the technology can collect, analyze and disseminate information much faster than a team of human security analysts can.
Human brains aren’t designed to work through millions of computer log messages each day. Deep learning AI is designed to do just that. Security operations teams can use deep learning to accelerate what a security operations analyst would do by a factor of 100 or even 1,000 in time and efficiency. That 1,000-fold improvement is what operationalized speed looks like in a cybersecurity environment.
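As a drastically simplified stand-in for the deep learning described above, the sketch below scores log messages by how rare their "template" is in recent history. A real system would use a learned model, but the shape of the task — millions of messages in, a handful of anomalies out — is the same:

```python
# Simplified stand-in for log anomaly detection: score each message by how
# rare its template is. Real systems use learned models; this is the idea only.
from collections import Counter
import re

def template(line):
    # Replace numeric tokens so messages differing only in values collapse
    # into one template (e.g., all logins by the same user from any IP).
    return re.sub(r"\b(?:\d+|0x[0-9a-f]+)\b", "<*>", line.lower())

history = [
    "user alice logged in from 10.0.0.5",
    "user bob logged in from 10.0.0.7",
    "user alice logged in from 10.0.0.5",
    "disk usage at 71 percent",
]
counts = Counter(template(l) for l in history)
total = sum(counts.values())

def anomaly_score(line):
    # Rare or never-before-seen templates score close to 1.0.
    return 1.0 - counts.get(template(line), 0) / total

print(anomaly_score("user alice logged in from 10.0.0.99"))  # familiar: 0.5
print(anomaly_score("firmware checksum mismatch on boot"))   # novel: 1.0
```

A deep learning model replaces the hand-written `template` function with representations it learns itself, which is what lets it generalize across log formats no human has annotated.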
Organizations can start operationalizing speed today
Organizations can begin to operationalize speed in their security operations teams today. They must create an environment that supports and ingrains artificial intelligence, specifically deep learning-enabled algorithms. The following recommendations detail steps the heads of security can take toward operationalizing speed via AI:
- Develop a security big data lake. Organizations can train algorithms with existing data so that they will be better equipped to collect and analyze incoming threat data in the future.
- Implement data fusion. Once security operations teams establish a data lake — a holding tank for collecting and storing threat data — they must also ensure this lake collects data from a number of different sources into one easily accessible place.
- Hire a data scientist, or involve data scientists in the security operations team. While security operations teams must lean on AI to make their tasks more efficient, they will still need someone trained to analyze and understand data to help the team grasp what it is seeing.
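Following the list above, a minimal picture of data fusion is normalizing records from differently shaped feeds into one common schema inside the lake. The feed formats and field names here are hypothetical:

```python
# Sketch of "data fusion": two threat feeds with different shapes are
# normalized into one schema in the data lake. Formats are hypothetical.

feed_a = [{"ioc": "198.51.100.7", "kind": "ip", "seen": "2017-09-01"}]
feed_b = [{"indicator": "evil.example.com", "type": "domain",
           "first_seen": "2017-09-02"}]

def normalize_a(rec):
    # Feed A uses its own field names; map them onto the common schema.
    return {"indicator": rec["ioc"], "type": rec["kind"],
            "first_seen": rec["seen"], "source": "feed_a"}

def normalize_b(rec):
    # Feed B already matches the schema; just tag its provenance.
    return {**rec, "source": "feed_b"}

lake = [normalize_a(r) for r in feed_a] + [normalize_b(r) for r in feed_b]
for entry in lake:
    print(entry["indicator"], entry["source"])
```

Once every source lands in the same shape, the algorithms trained on the lake never need to know which feed a record came from.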
The lack of cybersecurity analysts, combined with the glut of data they must analyze, makes for a problem that scales beyond what humans alone can handle. Today’s security analysts spend too much time on events like common malware infections — events that should not require their time and energy — instead of the threats that matter. To alleviate this problem, security operations teams must improve the efficiency with which they identify and remediate threats. They cannot scale their speed to meet those threats without leaning heavily on artificial intelligence.
Security need only look at Netflix for inspiration
We believe organizations can bring the AI innovations that allowed Netflix to revolutionize how we choose our entertainment to security operations — to the millions of security events and alerts that need categorization and prioritization.
Once security analysts can investigate real threats, as opposed to sifting through endless streams of alerts to determine what is worth investigating, we’ll experience the improvements in security necessary to combat the multitude and complexity of threats we see today. Now, it is time to look forward to a bright future for those bold enough to take advantage of these developments and innovate in this new security paradigm.
The emergence of the internet of things in recent years has fired the imagination of innovators. IoT is increasingly finding innovative use in new areas, thereby opening new possibilities. One of the areas that IoT technology is now finding use in is care for farm animals.
With the aid of IoT, farm owners are maintaining more accurate data on their animals and increasing productivity. Using IoT also lays the foundation for precision farming.
Consumer preferences and demands are evolving, in turn forcing industries to evolve for the better. Regulations are set in place and keep evolving to ensure customer satisfaction. The agricultural industry has experienced this evolution too. Consumers today want to know the origin of their products — the state of the livestock that produce their milk, the condition of animals at the time of slaughter, the medication the animals received, its effects and side effects, and so on.
Farmers, therefore, need to keep a meticulous record of the diet and medication of livestock. For example, farmers in Germany must maintain documentary details of antibiotics used (according to DART 2008; USDA 2011). Conventional means of acquisition and maintenance of data are not efficient enough to do the job. To fulfill industry requirements and consumer demands, several variables need to be rigorously monitored. IoT is emerging as a credible enabler towards these efforts.
According to a 2013 United Nations projection, the world’s population will grow to approximately 9.6 billion by 2050. This implies that the demand for food, including the kind that originates from the agricultural industry and specifically from farm animals, will grow substantially. It is therefore imperative that the industry grow proportionally and that farm animals become more productive.
By 2050, the world’s demand for meat is expected to reach around the following figures: 106 million tons of beef, 25 million tons of mutton, 143 million tons of pork, 181 million tons of poultry and 102 million tons of eggs.
How IoT addresses the needs
Sensors are now added to wearables to gather information from livestock to improve productivity and animal health. Information is acquired about an animal’s behavior, health, injuries, medical regimen and similar statistics, such as lactation and fertility. With the inclusion of technology, data acquisition has become simpler and more effective, and extensive data processing has become possible. Scientists can collect data and improve medicine and diet regimens based on it. Farmers know what to do and when to do it. For example, sensors deployed on farms inform farmers of system failures, such as a ventilation breakdown. Timely notification of possible failures saves lives and prevents health issues in livestock.
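At its simplest, the failure notification described above reduces to a threshold check on sensor readings. The gas limit and readings below are illustrative assumptions, not agricultural guidance:

```python
# Sketch of a ventilation-failure alert: gas readings above a threshold
# trigger notifications. Limit and readings are illustrative assumptions.

AMMONIA_PPM_LIMIT = 25  # assumed safe ceiling for the barn

def check_readings(readings):
    """Return alert strings for any sensor breaching the limit."""
    return [
        f"ALERT: {sensor} reads {ppm} ppm (limit {AMMONIA_PPM_LIMIT})"
        for sensor, ppm in readings.items()
        if ppm > AMMONIA_PPM_LIMIT
    ]

readings = {"barn-north": 12, "barn-south": 31}  # south fan may have failed
for alert in check_readings(readings):
    print(alert)
```

A production system would add trend detection and push the alerts to a farmer's phone, but the core logic is exactly this comparison, run continuously.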
IoT is addressing the demands of the agricultural industry in various ways, including the following:
- Improving offspring care: Monitoring the health of offspring, and the variables around them, helps ensure healthy and productive livestock and reduces the number of lives lost to those variables.
- Mitigating health hazards: Sensors can detect potential health hazards such as toxic gases that might result from incompetent or failing ventilation systems. This also helps maintain livestock health. The result is an overall increase in productivity.
- Round-the-clock animal tracking: This makes it easier to locate animals on larger pastures and farms.
- Leveraging fertility windows: Each animal has its own season and a specific window in which maximum results are achievable. Sensors monitor livestock health and conditions so that window is not missed.
- Enhancing lactation: Sensors detect which cow needs a specific diet to improve its milk production, and they also indicate the best time to milk each cow.
IoT is adding an exciting new dimension to agriculture and helping it evolve to meet growing demand for food. Monitoring many small variables enables researchers and farmers to tend efficiently to the needs of their livestock, laying the foundation of precision farming. By addressing minute needs before they develop into big challenges, the system becomes more efficient and productivity steadily increases to meet ever-increasing demand.
Historically, much of the value that vendors associated with software was in the algorithms and the code. Well, that and the lock-in created by the dominant market share of proprietary software products and their proprietary formats. Sure, many companies build proprietary technologies on top of open source code. This sort of arrangement is especially prevalent in the public cloud world. Because cloud providers don’t distribute their products in the traditional sense, licenses impose fewer restrictions on how they can use open source software as part of their technology.
However, some public cloud providers also contribute to open source projects. The contributions by Google to Kubernetes and TensorFlow are cases in point. Open source projects with strong communities have demonstrated their effectiveness as a development model — an effectiveness that’s impossible to ignore.
TensorFlow is a particularly interesting case in the context of IoT and machine learning. Google describes it as “an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them.” Open sourced by the Google Brain team, it reached version 1.0 earlier this year.
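The quoted description — nodes as operations, edges carrying data between them — can be mirrored in a few lines of plain Python. This is not TensorFlow's API, just a toy of the data-flow-graph idea it describes:

```python
# Toy data flow graph: nodes are operations, edges carry the data flowing
# between them. An illustration of the concept, not TensorFlow's API.

class Node:
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

    def run(self):
        # Evaluate upstream nodes first, then apply this node's operation.
        return self.op(*(n.run() for n in self.inputs))

def const(value):
    return Node(lambda: value)

# Build the graph for (a + b) * c; nothing computes until run() is called.
a, b, c = const(2), const(3), const(4)
add = Node(lambda x, y: x + y, a, b)
mul = Node(lambda x, y: x * y, add, c)
print(mul.run())  # 20
```

Declaring the graph separately from running it is what lets a framework like TensorFlow schedule the operations across CPUs, GPUs and distributed workers.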
Why companies choose to open source
You might reasonably think that a lot of Google's (and, more broadly, Alphabet's) IP is wrapped up in algorithms, software and general know-how around artificial intelligence. Or, really, in ways to extract insights, deliver results or take actions based on data, whether that means displaying a personalized ad or enabling a car to drive autonomously. TensorFlow would seem to sit squarely among these "crown jewels." And yet here it is, open sourced.
In part, this reflects a wider pattern around open source generally. You may write (and tightly control) a lot of internal code relevant to your business. But you can also benefit from opening up related software to a wider pool of developers and users. It’s not either/or.
Thus, you have financial institutions cooperating on blockchain, messaging and other technologies while also holding back plenty of proprietary trading algorithms. TensorFlow likewise represents a small slice of Google’s machine learning research.
It’s also about the data
However, there's something else going on too. With the almost countless petabytes of data that will be generated by IoT systems and connected devices more broadly, value is shifting from code to data. Of course, it's far from trivial to figure out how to do useful things with the data you collect. But to the degree that an organization controls and owns data, it may choose to focus on effectively monetizing that data while maximizing the community and development velocity of the software.
We see this happening elsewhere as well. Baidu has open sourced autonomous driving technology. The storyline is that this is a way to level the playing field against other vendors taking a more traditional proprietary approach. But it also appears as if the company views the data it is collecting as a competitive differentiator that it doesn’t plan to release.
The new data-driven services
We see the value of data in many of the new services organizations are starting to deliver. For example, as noted in an MIT Sloan Management Review article, “GE wants to go beyond helping its customers manage the performance of individual GE machines to managing the data on all of the machines in a customer’s entire operation.” Other services can include optimizing preventative maintenance schedules on jet turbines and other industrial machinery.
Huge data lakes that aren’t curated or properly analyzed aren’t a benefit, of course. In fact, data that’s not used intelligently can have a negative ROI because of its collection, storage and management costs. But when data can be effectively used to guide actions and gain useful insights, it’s an increasingly large part of a technology company’s value. (And almost every organization is a technology company today.)
Artificial intelligence is rapidly finding application in a myriad of fields, enhancing both the pace and quality of work. The tasks performed by AI are evolving so rapidly that some scientists already fear the rise of the machines. That might be far-fetched, but AI does bring along some genuine areas of concern, primarily because it has become a powerful tool that simplifies high-skill tasks.
AI is at the disposal of anyone who wants to perform a task that would otherwise require extensive training, without any prior experience. Analytics, big data and machine learning help us analyze vast amounts of information and use it to predict future outcomes. They can, however, also be used to mislead, forge and deceive.
Audio and video forgery capabilities are making astounding progress, thanks to a boost from AI. It is enabling some effective but downright scary new tools for manipulating media, and it therefore holds the power to alter forever how we perceive and consume information. A time will come when people struggle to know whom and what to trust. Even today, we live in a world of Photoshop, CGI and AI-powered selfie filter apps. The internet democratized knowledge by enabling free but unregulated and unmonitored access to information. As a result, the floodgates opened to all types of information, ushering in a staggering amount of rumors and lies.
Criminals are already utilizing this technology to their benefit. Readily available tools can create high-quality fake videos that easily fool the general population. Using AI to forge videos is transforming the very meaning of evidence and truth in journalism, government communications, testimony in criminal justice and national security.
Creative software giant Adobe has been working on a similar technology, "VoCo," which it has labeled "Photoshop for audio." The software requires a 20-minute audio recording of someone talking. The AI analyzes it, figures out how that person talks and then learns to mimic the speaking style. Just type anything, and the software will read your words in that person's voice.
Google's WaveNet provides similar functionality. It requires a much bigger data set than Lyrebird or VoCo, but it sounds creepily real.
MIT researchers are also working on a model that can generate sound effects for a silent video of an object being struck. The generated sound is realistically similar to the sound the object would make when hit in real life. The researchers envision a future version automatically producing sound effects good enough for use in movies and television.
With such software, it will become easy to record something controversial in anyone’s voice, rendering voice-based security systems helpless. Telephonic calls could be spoofed. No one will be exactly sure if it is you on the other end of the phone. At the current pace of progress, it may be within two to three years that realistic audio forgeries are good enough to fool the untrained ear, and only five to 10 years before forgeries can fool forensic analysis.
Tom White at Victoria University School of Design created a Twitter bot called “SmileVector” that can make any celebrity smile. It browses the web for pictures of faces and then it morphs their expressions using a deep-learning-powered neural network.
Researchers at Stanford and various other universities are also developing astonishing video capabilities. Using an ordinary webcam, their AI-based software can realistically change the facial expression and speech-related mouth movements of an individual.
Pair this up with any audio-generation software and it’s easy to potentially deceive anyone, even over a video call. Someone can also make a fake video of you doing or saying something controversial.
Jeff Clune, an assistant professor at the University of Wyoming, and his team at the Evolving AI Lab are working on image recognition in reverse, adapting neural networks trained on object recognition to generate synthetic images from a text description alone. The neural network is trained on a database of similar pictures; once it has seen enough of them, it can create new pictures on command.
A startup called Deep Art uses a technique known as style transfer — which uses neural networks to apply the characteristics of one image to another — to produce realistic paintings. A Russian startup perfected its code developing a mobile app named Prisma, which allows anyone to apply various art styles to pictures on their phones. Facebook also unveiled its version of this technique, adding a couple of new features.
Other work being done on multimedia manipulation using artificial intelligence includes the creation of 3D face models from a single 2D image, changing the facial expressions of someone on video using the Face2Face app in real time and changing the light source and shadows in any picture.
A team of researchers at University College London developed an AI algorithm titled “My Text in Your Handwriting,” which can imitate any handwriting. This algorithm only needs a paragraph’s worth of handwriting to learn and closely replicate a person’s writing.
Luka, an AI-powered memorial chatbot, also aspires to create bots that mimic real-life people. It can learn everything about a person from his or her chat logs, and then let the person's friends chat with that digital identity long after he or she dies. The chatbot, however, could also be deployed while a person is still alive, effectively stealing that person's identity.
A study by the Computational Propaganda Research Project at the Oxford Internet Institute found that half of all Twitter accounts regularly commenting about politics in Russia were bots. Secret agents control millions of botnet social media accounts that tweet about politics in order to shape national discourse. Social media bots could even drive coverage of fake news by mainstream media and even influence stock prices.
Imagine those agents and botnets armed with artificial intelligence technology. Fake tweets and news backed by real-looking HD video, audio, written specimens and government documents are an eerily scary prospect. Not only is there the risk of falsehoods being used to malign the honest, but the dishonest could dismiss genuine evidence as fake in their own defense.
Before the invention of the camera, recreating a scene in a court of law required witnesses and various testimonies. Soon, photographs began assisting alongside the witnesses. But later, with the advent of digital photography and the rise of Photoshop, photographs stopped being admissible as reliable evidence on their own. Right now, audio and video recordings are admissible evidence, provided they are of a certain quality and not edited. It's only a matter of time before courts refuse audio and video evidence too, however genuine it might seem. The AI-powered tool that imitates handwriting could allow someone to manipulate legal and historical documents or create false evidence for use in court.
Countering the rise of forgery
With the potential misuse of artificial intelligence, the times ahead do indeed seem challenging. But the beautiful thing about technology is that you can usually expect solutions to follow problems.
Blockchain, the technology securing cryptocurrencies, is promising to provide a cybersecurity solution to the internet of things, offering one possibility to counter forgery. With the widespread use of IoT and advancements in embedded systems, it may be possible to design interconnected cameras and microphones that use blockchain technology to create an unimpeachable record of the date of creation of video recordings. Photos can be traced back to the origin by their geotags.
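The core mechanism behind that idea can be sketched briefly. The example below is a hypothetical simplification, not any real blockchain product: a camera appends each recording's hash to a tamper-evident chain of records, so that any later alteration of a recording (or of its timestamp) breaks verification. Real provenance systems add distributed consensus, signatures and much more.

```python
# Hypothetical sketch of hash-chained media provenance: each block commits
# to the previous block's hash, the recording's content hash and a
# timestamp, making after-the-fact edits detectable.
import hashlib
import json

def make_block(prev_hash, media_bytes, timestamp):
    record = {
        "prev": prev_hash,
        "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "timestamp": timestamp,
    }
    # The block's own hash covers all of its fields.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record

def verify_chain(chain):
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False  # block contents were altered
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False  # chain linkage is broken
    return True

chain = [make_block("genesis", b"frame-data-1", 1000)]
chain.append(make_block(chain[-1]["hash"], b"frame-data-2", 1001))
print(verify_chain(chain))  # True

# Altering a recording's hash after the fact breaks verification.
chain[0]["media_sha256"] = "forged"
print(verify_chain(chain))  # False
```

The point is that forging a recording would require rewriting every later block, which is exactly what a distributed ledger is designed to prevent.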
The Art and Artificial Intelligence Laboratory at Rutgers University is currently developing a neural network that can appreciate and understand art and the subtle differences within a drawing. It uses machine learning algorithms to analyze images, counting and quantifying different aspects of what it observes; an artificial neural network then processes this information to recognize visual patterns in the artwork.
Similarly, neural networks can be trained to detect forged documents, historical evidence and currency notes. They can also identify fake identities and other bots on the internet by observing their behavior patterns and clashing IP addresses. Forged video and audio could be compared across dates and platforms to identify their origin.
In addition, regulatory and procedural reforms are required to control this menace.
Even though the audio and video manipulation tools aren’t entirely revolutionary, they no longer require professionals and powerful computers. We can’t stop criminals from getting their hands on such tools. If anything, making these tools available to everyone just to show people what’s happening with AI will make the public aware of the power of artificial intelligence — and hence, aware of the easily forgeable nature of our media.
Industry 4.0 is here, marking the start of a new era of industrialization in which machines move toward autonomy. For asset-intensive enterprises, it is imperative to keep watch on processes such as material production, overall equipment efficiency and asset management. The growing adoption of sensors, edge devices and artificial intelligence is a response to the rising cost of managing assets. Experts have predicted that businesses will invest nearly half a trillion dollars in asset management by the end of the next decade.
Here is where machine learning comes in handy. Though its path is marred with numerous challenges, it still commands a better success rate than any of its contemporary solutions. It might sound strange, but researchers are spending billions on the research and development of machine learning technology capable of aiding industrial asset management. Some top scientists have spent more than three years simply developing a machine learning algorithm, and to date, the success rate of machine learning technologies clocks in below 10%.
Dead investment or exemplary foresight?
In all the commotion, the main question that arises is, "Why is the industry still pursuing such a venture?" The answer lies in machine learning's capabilities. Companies face the challenge of ensuring that their machines run at heightened efficiency levels, and for this they need the ability to monitor their assets remotely. IIoT offerings have made a certain degree of success possible, but machine learning holds the key.
Data is the future
Big data has taken off at a rate few expected, and this might be the catalyst for machine learning's success. Industrial assets are tough to manage, but automation and the use of data through IIoT have provided a new pathway toward better asset management. Increased human intervention, meanwhile, adds the additional challenge of human error. Machine learning promises to do away with all of this, as it uses machine-generated data for the benefit of those very machines, ensuring optimum asset management at all times.
Having a feedback cycle with human intervention is not only time-consuming, but it also increases the chance of errors through miscommunication. Instead, if a machine itself can analyze data and provide alerts at the right moment, the issue of asset management is sorted for good. The machine learns, analyzes and adapts itself to ensure maximum output is realized every single time. Using algorithms, machine learning allows enterprises to unlock hidden insights from their asset data. For instance, a forecast of an asset failure can help in scheduling preventive maintenance of a yet-to-fail asset. Such machine learning-driven predictive analytics software can enable enterprises to make fully vetted and well-timed decisions toward improved asset management.
Though many machine learning algorithms have been around for years, the ability to apply complex calculations to big data automatically, and ever faster, is the latest development. These technologies can help an enterprise grow and yield substantial profits through optimum use of the resources at hand. A few of the machine learning algorithms being applied widely include linear regression, decision trees, logistic regression, random forests, the naive Bayes classifier, neural networks and gradient boosting.
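To ground the failure-forecasting idea, here is a minimal sketch using the simplest algorithm on that list, linear regression, fitted from scratch in pure Python. The sensor readings, failure threshold and the linear-wear assumption are all hypothetical; real predictive maintenance models are far more sophisticated.

```python
# Illustrative sketch: fit a simple linear regression (ordinary least
# squares) to hypothetical vibration readings from a wearing bearing,
# then project when the reading will cross a failure threshold.

def fit_line(xs, ys):
    # Ordinary least squares for y = slope * x + intercept.
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Hypothetical weekly vibration readings drifting upward as the part wears.
weeks = [0, 1, 2, 3, 4, 5]
readings = [1.0, 1.2, 1.4, 1.6, 1.8, 2.0]

slope, intercept = fit_line(weeks, readings)

FAILURE_THRESHOLD = 3.0  # assumed level at which the asset needs service
weeks_to_threshold = (FAILURE_THRESHOLD - intercept) / slope
print(round(weeks_to_threshold, 1))  # projected week to schedule maintenance
```

A forecast like this is what lets maintenance be scheduled before the asset fails, rather than after.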
The following are a few of the advantages of machine learning in asset management:
- Highest uptime/runtime and improved machine performance
- Constant machine health monitoring
- Advanced analytics of all assets drilled down to various levels such as machine, plant, facility, and so on
- Reduced consumption of raw materials and resources such as air, water, heat and electricity
- Facilities and operator performance monitoring
- Live alerts, reports and detailed data logs for each instance
In conclusion, some industry experts have dismissed machine learning as unrealistic, even absurd, but the promise of improving our future and simplifying lives through better asset management is what keeps the spark alive.
Read how a utility service provider and meter manufacturer leverages Azure Machine Learning to remotely monitor its IoT-based smart water meters. Notably, the company reduced water consumption by more than 30% by effectively managing meter failures and water leakages.
The demand for and technology of IoT devices are rapidly moving beyond the consumer market and landing squarely in the medical electronics, IT/enterprise, industrial and military markets. And those IoT markets are experiencing dramatic growth.
However, those markets are highly demanding when it comes to high reliability. IoT devices in these instances must be reliable to the point where there can be no flaws or failures, and definitely no latent failures — meaning the IoT device shouldn’t fail once it’s being used in the field.
How do you maintain IoT device high reliability especially at the printed circuit board (PCB) level?
Remember, as I said in an earlier blog, most IoT PCBs are not conventional printed circuits. Rather, in most cases, they are small combinations of rigid and flexible circuits or flexible circuits alone. It can be argued that some IoT products may be larger and, in those cases, more of a conventional rigid PCB is used. But in a majority of cases, IoT products are small and call for the rigid-flex or flex circuitry as the basic PCB.
Vias take on special significance for IoT rigid-flex and flex circuit reliability. Vias are tiny drilled openings that create electrical or power connections between a circuit's multiple layers. Via placement is of paramount importance in maintaining high reliability: flex circuitry has a bending curvature and radius, which can weaken vias over long periods of time.
The general rule of thumb for IoT circuits is for these drill holes to be as small as possible due to limited space. Sticking to five to six mil finished drill hole size is a good compromise. Going below four mils, like drilling at three mils, calls for laser drilling, which is more time-consuming and costly. Going higher, for example at six to seven mil via hole drilling, means too much valuable real estate is consumed, and that’s not a good move for a small IoT PCB design.
Keeping smallness in mind, those IoT rigid-flex and flex circuits have limited space. Components placed on those small circuits are miniscule as well.
For instance, capacitors are part of an IoT circuit’s electronics. The trend has been to capsulize capacitors into increasingly smaller packaging like the 01005 package, which is a challenge in terms of placement and inspection. This means the IoT rigid-flex and flex circuit assembly house must have sufficient experience with its various processes so it can effectively produce highly reliable products. In particular, savvy assembly houses must have extensive knowhow about solder joints for IoT circuits since devices and surface mount pads are so miniscule.
A misstep during the pick-and-place and reflow process can create latent failures as mentioned earlier. So, solder printing, placement of components and reflow temperature have to be accurately dialed-in and maintained.
Printing, in particular, takes on special meaning. If overprinting is inadvertently performed, shorts may result between the leads or balls of a micro BGA, QFN or flip chip packaged device. A short or solder bridge is a solder extension from one lead or ball to the next and forms a short circuit.
However, if enough paste isn’t dispensed on the extremely small surface mount pads, there will be a shortage of solder paste, thereby creating an open (a broken interconnect of a component’s package) in the worst case scenario or big voids in the best case scenario.
Also, one has to be wary about using through-hole components in an IoT design. One reason is that they lack the compact form factor these designs demand. They also have a high probability of adversely affecting reliability. Through-hole technology is older and was routinely used for earlier conventional PCBs before surface mount technology came into prominence. Still, there can be some cases where through-hole components are used for IoT products.
Through-hole components are larger compared to similar ones in surface mount packaging such as micro BGAs, CSPs and QFNs. Thus, through-hole components placed on small rigid-flex and flex circuits during the pick-and-place assembly operation will pose difficulties. In effect, they create the probability of reliability issues, again due to the bending and curve radius that are associated with flex circuits.
In summary, distancing an IoT design from using through-hole components is a good idea. Also, printing consistency is highly critical, as well as proper solder joint consistency. These key steps are the foundation of high reliability, which is absolutely required in specific IoT medical, industrial and military/aerospace applications.