I think everyone is now familiar with cloud computing, but have you ever heard of edge or fog computing? I hadn’t until recently, but it turns out to be a new way to use computing resources, especially to enable your digital transformation process for IoT devices.
With the advent of IoT, more and more data is being generated by sensors in the field (often quite literally). In some use cases, we want to do a lot of processing locally. Take, for instance, the sensors on a modern car. Processing of that data needs to happen locally; with the amounts of data involved, it would be impractical to send all of it to a centralized location. Add to that the fact that the action to be taken is local: think of a distance sensor on the front of the car that starts registering a diminishing distance to the car ahead. The action, of course, is to brake.
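The kind of local decision described above can be sketched as a simple threshold check running on the device itself, with no round trip to a central server. All class names, thresholds and units below are illustrative, not taken from any real automotive system:

```python
# Illustrative sketch of an edge-side safety check: the decision to brake
# is made locally, without waiting on any centralized service.
# Class names, thresholds and units are hypothetical.

class DistanceSensor:
    """Simulated front-facing distance sensor (readings in meters)."""
    def __init__(self, readings):
        self._readings = iter(readings)

    def read(self):
        return next(self._readings)

def closing_fast(prev, curr, threshold_m_per_tick=5.0):
    """True if the gap to the car ahead shrinks faster than the threshold."""
    return (prev - curr) > threshold_m_per_tick

def monitor(sensor, ticks):
    actions = []
    prev = sensor.read()
    for _ in range(ticks):
        curr = sensor.read()
        if closing_fast(prev, curr):
            actions.append("BRAKE")   # act locally, immediately
        else:
            actions.append("cruise")
        prev = curr
    return actions

sensor = DistanceSensor([50.0, 48.0, 40.0, 39.0])
print(monitor(sensor, 3))  # ['cruise', 'BRAKE', 'cruise']
```

The point of the sketch is latency: the decision loop touches only local state, which is exactly what makes edge processing attractive for this class of problem.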
Fog computing, fog networking or fogging is an architecture that uses edge devices to carry out a substantial amount of computation. The National Institute of Standards and Technology (NIST) defines fog computing as follows:
Fog computing is a layered model for enabling ubiquitous access to a shared continuum of scalable computing resources. The model facilitates the deployment of distributed, latency-aware applications and services, and consists of fog nodes (physical or virtual), residing between smart end devices and centralized (cloud) services.
The NIST document is a worthwhile read and will give you more insight into the concepts of fog computing. For me, it opened up a whole new set of concepts, including mist computing (I am not kidding). But what I'd like to do, rather than going into all the technical details, is discuss a use case of these new paradigms and what they mean for you!
Everyone can be a provider
You can see it as an alternative to cloud providers like Amazon, Microsoft or Google. Rather than a centralized, massive cloud computing platform, you are now creating computing resources at what is called the "edge" of the network. You can add your computing resources to a network and start making some money with spare computing capacity that you might have. Together, these devices at the edge of the network make up a massive computing platform, very much like the grid computing efforts at the beginning of the 21st century. One company that offers such a service is SONM.
SONM offers general-purpose, cloud-like computing services (IaaS, PaaS) based on fog computing as a back end. Computing power suppliers (hosts) all over the world can contribute their computing power to the SONM marketplace. Users will, according to SONM, get cheaper computing power compared to cloud providers. The system SONM is offering is Linux- and Docker-based, with payment based on Ethereum smart contracts.
Although SONM says it is for general-purpose computing, some of the examples it gives are for very specific use cases — for instance, training models in the case of machine learning, and video rendering. There are also examples closer to the nature of fog computing (the edge) — for instance, video distribution and content distribution networks.
Would you use it?
The question is, of course, would you use it? Or even would I use it? At this moment, it’s hard to say — we get an idea about what fog computing is, but the devil is always in the details. As a company, we use cloud computing for the services we offer to clients, and we pay for what we use. We hardly have any spare computing resources that we can add to such a market — and I’m not sure if we would want to if we did.
The world is changing, and we see more and more IoT devices arising, along with the demand for edge processing. As part of a digital transformation strategy, fog computing, be it in the form that SONM is offering or not, could very well be part of the roadmap. Additional research on our side is necessary to see whether it would be beneficial to us as an alternative to cloud computing, or even to identify any computing use cases for our clients. But this doesn't mean that fog computing is useless; it just means that we haven't fully identified the use cases.
If you do have some spare computing power and want to recover some of the cost associated with it, look into SONM or a similar offering; if you are on a shoestring budget, cheaper computing resources might be beneficial. So, it all depends on your use case.
What do you think? Is fog computing hype? A trend? A fad? Let me know in the comments!
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
Preparing for battle: Stopping IoT attacks
With Gartner predicting a world filled with more than 20 billion IoT devices by 2020, we’re becoming more connected by the minute. All of these devices and sensors will improve and enhance many aspects of life, but they also present risks when the devices are infiltrated by hackers.
Many IoT devices are built without even a minimum of security controls, so they're exposed and vulnerable from the outset. There's also a lack of standards across devices, which prevents enforcing uniform security settings or establishing universal security parameters. Poor patch management is another area of concern, as many IoT device manufacturers leave it to end users to update devices and obtain the latest patches.
For an example of the risks, consider the infamous Mirai botnet, whose distributed denial-of-service attacks knocked many internet services offline. Mirai targeted routers and other connected devices that used default user names and passwords, and quickly spread to infect millions of devices. Shutting down botnets such as Mirai is difficult because it's next to impossible to "lock out" infected machines from internet access, and finding and prosecuting the botnet's creators is very challenging. IT teams and individual users are frequently unaware that a device has been taken over by a botnet, so advanced monitoring and prevention tools are recommended to help stop such intrusions proactively, before damage occurs.
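One of the simplest defenses against Mirai-style attacks is basic credential hygiene. A trivial check in that spirit can be sketched as follows; the default-credential list and device records are made up for illustration:

```python
# Sketch of a hygiene check against Mirai-style attacks: flag devices
# still using factory-default credentials. The defaults list and the
# device records are illustrative, not from any real product.

FACTORY_DEFAULTS = {("admin", "admin"), ("root", "root"), ("admin", "1234")}

devices = [
    {"id": "cam-01", "user": "admin", "password": "admin"},
    {"id": "cam-02", "user": "ops", "password": "s7r0ng-pass"},
]

at_risk = [d["id"] for d in devices
           if (d["user"], d["password"]) in FACTORY_DEFAULTS]
print(at_risk)  # ['cam-01']
```

A real monitoring tool would of course do far more, but even this level of automated auditing would have blunted Mirai's original spread.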
For consumer devices, there’s a focus on features such as speed, image quality and value of the data, but consumers don’t typically ask for robust security features from their IoT devices. Companies that develop and deploy these devices should push for better security controls and begin to talk to consumers about the need for improved protections.
Understand the risks
The risks with IoT are fundamentally dynamic because IoT itself is adjusting and growing at such a rapid pace. This growth and sophistication bring with them parallel interest in attacking IoT for financial gain or simply to cause disruption. Companies should carefully review the legal obligations that come with IoT devices, especially when sensors and other IoT components are used in settings that could potentially cause physical harm. Self-driving cars and connected factories are just two examples of IoT-based environments where a hacking incident could result in death, not just inconvenience. However, this does not mean there should be complacency in the security protections afforded to wearables or other sensors that aren't managing life-threatening situations. IoT devices of all kinds are producing data, and it is companies' responsibility to manage and protect that data.
To combat the challenges with IoT, companies should perform risk assessments to fully understand where they might be exposed and how they can remedy those risks. They need a log of every connected device on their network, along with a way to automate patching and updating.
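The device log and automated patching described above can be sketched as a minimal inventory that flags anything running outdated firmware. Device types, IDs and version numbers here are hypothetical:

```python
# Minimal sketch of a connected-device inventory that flags devices
# running outdated firmware, so patching can be automated rather than
# left to end users. All names and versions are made up.

LATEST = {"camera": (2, 1), "thermostat": (1, 4)}   # latest known firmware

inventory = [
    {"id": "cam-001", "type": "camera", "firmware": (2, 1)},
    {"id": "cam-002", "type": "camera", "firmware": (1, 9)},
    {"id": "therm-01", "type": "thermostat", "firmware": (1, 4)},
]

def needs_patch(device):
    # Tuple comparison gives us version ordering for free: (1, 9) < (2, 1)
    return device["firmware"] < LATEST[device["type"]]

stale = [d["id"] for d in inventory if needs_patch(d)]
print(stale)  # ['cam-002']
```

In practice the "latest" table would come from vendor feeds and the flagged devices would be queued for an automated update, but the core bookkeeping is this simple.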
Managing the devices and the data
Corporate IT must consider the security needs of provisioning and authenticating IoT devices throughout the company. This includes accounting for the role and location of all of these devices, along with details on updates and patches. The actual data sent between the devices and the network must also be protected. Many companies rely on IoT-derived data to make impactful decisions, so the integrity and security of the data is supremely important. Considerations should include how the data is protected at rest and in transit, and if tools such as encryption should be used to render stolen IoT data unusable. Complex IoT deployments warrant the use of device manager platforms that allow users to control devices remotely, update firmware and control authentication for every device.
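One concrete way to protect data integrity in transit, as discussed above, is to authenticate each payload so tampering is detectable on receipt. A minimal sketch using Python's standard library follows; the shared key and payload fields are illustrative, and a real deployment would also encrypt the payload:

```python
import hashlib
import hmac
import json

# Sketch: sign each sensor payload with an HMAC so the receiving side
# can detect tampering in transit. The key and fields are illustrative.

KEY = b"per-device-shared-secret"   # hypothetical provisioning secret

def sign(payload: dict) -> str:
    # Canonical serialization so sender and receiver hash identical bytes.
    body = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(KEY, body, hashlib.sha256).hexdigest()

def verify(payload: dict, tag: str) -> bool:
    # compare_digest avoids leaking information through timing.
    return hmac.compare_digest(sign(payload), tag)

reading = {"device": "sensor-42", "temp_c": 21.5}
tag = sign(reading)
print(verify(reading, tag))                               # True

tampered = {"device": "sensor-42", "temp_c": 99.9}
print(verify(tampered, tag))                              # False
```

This covers integrity only; confidentiality at rest and in transit would layer encryption (e.g., TLS on the wire) on top of the same per-device key material.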
IoT deployments are growing by orders of magnitude, and the number and complexity of attacks will follow suit. Companies should demand better products from device manufacturers, with requests for automated updating and patching and the closing of known security flaws before devices go to market. Firms that deploy IoT devices should also anticipate more stringent data management from various governments, and know they’ll need to improve how IoT data is collected, stored and transmitted. Improved visibility into IoT, enhanced devices and a security-focused mindset will all need to come together if companies want to use insights from IoT while also thwarting attacks.
IoT is no longer a nascent technology, and with its adolescence comes growing pains. Fittingly, there has been much talk lately in the IoT world about stalled implementations or pilots that are stuck in purgatory. From these missteps, one thing is clear: To successfully implement, advance and maintain an IoT project or implementation, an organization must invest in developing policies and guidelines that steer the work. More substantial upfront work and additional forethought into the issues and ramifications of connecting networks and devices that may have never aligned in the past can help alleviate many of the problems organizations face as they advance their efforts in this space.
As with any modern technology, different components of the IoT ecosystem transition across the maturity curve at their own pace. Many Fortune 500 companies, small and medium-sized businesses and government entities are aiming to transform their business, develop new markets and optimize operations with IoT practices. There is a consistent demand to create a reusable and repeatable framework that can enable this realization while continually accommodating evolving business needs and revolutionizing technology into a broad IoT ecosystem in manufacturing, retail, public safety, government, transportation, education and other segments.
The value proposition of IoT is being enhanced continuously with the rapid pace of innovation and exponential growth of connected devices, diversity of networks, platforms that manage the complete system, data analytics, data lakes, data ponds, security services, mobile applications and end-user services. However, an IoT ecosystem also carries inherent risks due to the evolving nature and sensitivity of data.
The IoT ecosystem is complex, ever-evolving and solving new business needs daily. This complexity is driving a need for guidelines that enable a successful and responsive IoT practice within an enterprise. The Midwest IoT Council recently developed a policy framework which can serve as a guide for any business or government entity providing or using IoT products and systems.
There are four key areas where it’s imperative for organizations to establish guidelines and policies:
- Governance — Effective governance policies strike a thoughtful balance between participation, accountability, access, privacy, coherence and safety without constricting innovation and creativity;
- Security — It’s essential to build a system with security in mind from the start, rather than including security in hindsight, with focus and consideration on the protection of the public and resilience to attacks;
- Data management — Any IoT system should be open and transparent about the ownership and retention of data, any transfers of data and the chain of custody of data; and
- Privacy — IoT deployments must protect and respect the privacy of individuals and the confidentiality of the enterprise/agency; there is always a balance to be struck between transparency and access rights.
While the specifics of an IoT policy will vary between industries and organizations, the presence of a plan itself that is readily accessible and widely understood should not. As IoT enters its second decade, it remains an area of tremendous promise for many industries. However, taking steps to ensure that proper procedures and parameters are in place before taking the plunge is essential in any effective and transformative IoT implementation.
As IoT systems become more ubiquitous, there is a natural weeding out of the good ideas and the value propositions. Beyond the simplest applications, the highest-value opportunities involve increasingly complex systems. For example, smart cities will feature IoT-connected surveillance, automated transportation, smarter energy management systems and environmental monitoring. In healthcare, IoT monitoring of vital signs will help patients avoid infections and assist in earlier care of medical conditions. All these IoT systems need to be implemented in a manner highly cognizant of and protected from evolving security concerns.
Complex IoT technologies require more attention to product design details. Common mistakes companies make when they embark on an IoT product design project can delay or even cripple implementation.
Here are six IoT design mistakes to avoid for successful IoT product development.
1. Connect or not?
Technology adds a cost layer to traditional non-tech-oriented products. In particular, adding communication technology can incur both non-recurring and monthly recurring costs. While it is "de rigueur" these days to want to create new IoT products or add an IoT technology layer to existing products, it is important to understand the business case and value. Adding this layer means embedding cost into the product, possibly with monthly subscription costs, as well as an initial and continuing stream of expenditures on product development and lifecycle support. Without a clear rationale of business value, the product will flounder. Solid research conducted before embarking on a project will inform your decision and determine whether the system you are considering makes business sense.
2. Pick the right platform
When adding intelligence to a product that wasn’t connected before, many startups select hobbyist-grade boards. The trouble is that these developer platforms are not suitable for large-scale deployment. If the device proves successful and starts generating serious demand, production can’t scale because you can’t source thousands of that type of hobbyist board. Off-the-shelf platforms are useful for proof-of-concepts and as platforms for software developers, but do not confuse these POC systems with those that are production-ready. Any experienced hardware developer who has been creating production-volume products will know a development system is not a high-volume extensible platform. You should only source components and modules for your product that will be available and appropriately costed now and in the future.
3. Don’t forget regulatory impact
Regulatory testing is another important part of any IoT product design effort. Regulatory requirements and certifications must be factored into the design. Because they are connected, IoT products must be tested for radiated emissions and susceptibility. If they plug into an outlet, conducted emissions and susceptibility could come into play. Additionally, cellular carriers must perform testing to certify your product for deployment on their infrastructure. Depending on how you implement the cellular technology, this step could take months and be very costly. Selecting components that are pre-certified will drastically cut down the time and expense. Pre-certified parts are more expensive, but they radically reduce the headaches involved with getting certifications later. Of course, the size of pre-certified modules can be an issue in highly dense devices, and this can force the designer into a more fundamental design, invoking the higher cost and longer development/certification cycle.
Lastly, where and when do you plan to sell? It is not practical, particularly for a startup, to expect ubiquitous worldwide sales of fully certified devices on the same calendar date. While most standards for safety, communications and cellular certification are similar, pick your target countries carefully and think about international rollouts over a period of time. Many countries are grouped under a common set of standards, but there are outlier countries with unique standards which, while similar, are not identical to the more widely used ones.
4. Security is job one
Security needs to be baked into your IoT product design process, not added on as an afterthought. It’s a must-have, not simply a nice-to-have. The number of connected devices is astounding. Already there are more connected devices than people on the planet, according to Norio Nakajima, an executive vice president at Murata Manufacturing Co., Ltd. By 2020, there will be 50 billion connected devices, outnumbering people by more than 6 to 1. The potential for a breach is enormous, and the results could be devastating. Bad guys often scan for poor or misconfigured security. Consider end-to-end security mechanisms, end-to-end data encryption, access and authorization control, and activity auditing. A security chain is only as strong as the weakest link. Low-end and poorly protected IoT endpoints are a frequent point of entry for attacks when they are not carefully and intentionally secured.
5. Get a top product development team
While many startups spend a lot of time researching the contract manufacturers (CM) that will handle product assembly, they don't do their due diligence when selecting a product designer. Sometimes a CM will offer the services of its staff design engineers at very low or subsidized cost to the startup, often in order to lock in the manufacturing rights to the product. Unless you are a tech giant willing to place initial orders for hundreds of thousands of units, do not expect a high-quality CM design and engineering team to be responsive to your needs. Nothing burns through a budget (and time) as quickly as having to redesign a product that does not meet the required functionality or, worse yet, gets deployed with poor quality. Such problems can destroy your company. Instead, look for an independent product design firm that can be your full partner in development.
6. Design for maximum user productivity
Ultimately, the person using the device is directing its activity, whether it’s connected to other devices via the internet or not. Sometimes there is no Wi-Fi, so it makes sense for a product to have some functionality even when it isn’t able to connect to other systems or devices. By making the device usable when disconnected, you’ve created a product in which the user is in control and can remain productive even in zones of poor connectivity. That’s important for some potential customers who may be uneasy with all the connectivity inherent in IoT products.
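The offline usability described above usually comes down to store-and-forward buffering: queue readings locally while the device is disconnected and flush them when connectivity returns. A minimal sketch, with all names invented for illustration:

```python
from collections import deque

# Sketch of a store-and-forward buffer: readings queue locally while
# the device is offline and flush when connectivity returns, so the
# user stays productive in zones of poor connectivity.

class BufferedUplink:
    def __init__(self, send, max_items=1000):
        self._send = send                       # may raise ConnectionError
        self._queue = deque(maxlen=max_items)   # drop oldest when full

    def submit(self, reading):
        self._queue.append(reading)
        self.flush()

    def flush(self):
        while self._queue:
            try:
                self._send(self._queue[0])
            except ConnectionError:
                return                          # still offline; retry later
            self._queue.popleft()               # only drop once delivered

# Demo with a fake network that comes up partway through.
sent = []
online = {"up": False}

def send(reading):
    if not online["up"]:
        raise ConnectionError
    sent.append(reading)

uplink = BufferedUplink(send)
uplink.submit({"t": 1})     # offline: buffered
uplink.submit({"t": 2})     # still offline
online["up"] = True
uplink.flush()
print(sent)  # [{'t': 1}, {'t': 2}]
```

Note that a reading is removed from the queue only after a successful send, so a connection drop mid-flush loses nothing.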
While the progress of complex IoT devices is exciting, it is important to remember the basics as well. You need a sound business case, as with any investment. Solid project management is just as important as avoiding the above mistakes when shepherding a leading-edge technology device from inception to the manufacturing floor. Selecting the right engineers for the design team, who have technical as well as communications skills, is also critical to success. Finally, staying within budget parameters and meeting deadlines ensures the plan will be completed successfully, increasing the chances of the business’s success.
The internet of things isn’t an entirely new concept. Long before smart fridges and Amazon Alexa began dominating conversations, connected devices were augmenting human experiences in industrial settings. For decades, embedded systems using proprietary frameworks and protocols have provided the ability for previously “dumb” devices to share operational attributes and statistics. Aircraft manufacturers, for instance, have used this technology within aircraft engines to determine when an aircraft needed preventative care. Engines would signal when the threshold of takeoff and landing cycles was reached and maintenance was required.
As technology has evolved, connected devices and communication have become pervasive, yet must still coexist with legacy systems. In manufacturing and other industrial settings, we often see a modernization of older technology — a cohabitation of new and old. All over the world, there are factories running totally viable, essential machinery that was built 50 to 100 years ago, yet this equipment is being monitored by new connected sensors developed in the past few years. These systems send predictive analytics to the plant owners, notifying them that a part or tool is overheating or may be running slower than it should be, indicating a repair needs to take place or that a new part needs to be ordered soon. In this way, modern IoT capabilities are helping maintain the longevity of the older machines, enabling plant operators to be more efficient in their use of the machines, and improving safety conditions for both workers and end users.
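The kind of trend check such a retrofit sensor might run can be sketched very simply: flag a part as overheating when recent temperatures drift well above the long-run baseline. The function, thresholds and readings below are illustrative:

```python
# Illustrative sketch of a retrofit sensor's trend check: flag a part
# as overheating when the recent mean drifts well above the baseline.
# Window size, margin and readings are made up.

def overheating(readings, window=3, margin_c=5.0):
    """True if the mean of the last `window` readings exceeds the
    mean of everything before them by more than `margin_c`."""
    if len(readings) <= window:
        return False                      # not enough history yet
    recent = readings[-window:]
    baseline = readings[:-window]
    return (sum(recent) / window) - (sum(baseline) / len(baseline)) > margin_c

history = [60.0, 61.0, 60.5, 61.2, 70.0, 71.5, 72.0]
print(overheating(history))  # True
```

Real predictive-maintenance systems use far richer models, but even a threshold on a rolling mean like this is enough to trigger the "order a new part soon" notifications described above.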
These connections are invaluable when it comes to safety and oversight. In the automotive industry, today's connected vehicles provide manufacturers with a new level of data on recalls and part issues. Snapshot data can be pulled from the vehicle showing what was happening in the system at the date and time of an event and used to reveal anomalies. That provides much richer context about the specific engine in question, and it can be compared with data from the millions of other engines made over the years. Analyzing this information can reveal interesting trends and identify opportunities to improve quality, such as the durability of parts down to the mile.
While industrial IoT brings with it incredible opportunities for quality control and improvement, one area that still needs to be addressed is the standardization of connected devices. The ability to access the data on each individual machine or device is helpful, but the ability to have these machines communicate with one another and report back to those monitoring them is where the real value is found.
The idea of standardizing communication among connected systems isn't new. Numerous protocols have existed for years under the guise of embedded networking — Zigbee and CAN-BUS being two. None of these networks was ever as pervasive as the internet of things, however, and they were largely proprietary to each company or device. The proliferation of IoT provides an opportunity to broadcast specific operational information about machinery, equipment and other resources that may not have been as widely connected. The ability of these systems to talk to one another in this way — providing contextual information and reporting issues — will be paramount to the success of IoT. For example, a sensor on a farm that registers a drop in temperature and an increase in humidity may indicate to the connected sprinkler system that a scheduled afternoon watering is not needed because of impending rain, and alert a monitoring system accordingly.
To fully realize the benefits of industrial IoT, we will need a protocol around how devices talk to one another. One possible approach takes a page from the DNS playbook in standardizing naming conventions. In our factory example, the IPv6 address of a sensor or actuator in a piece of machinery would have a human-friendly name that is specific to that network — essentially taking the DNS paradigm and topology that is found within corporate IT networks interacting with the internet at large. Naming these connected “things” on the same network enables them to recognize and communicate with one another. Internal DNS, therefore, will also play a monumental role in managing the naming and access to these connected resources as we move to a fully connected world.
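The naming idea above can be sketched as a small DNS-style table mapping human-friendly, network-local names to device IPv6 addresses. The zone name and addresses below are invented for illustration:

```python
import ipaddress

# Sketch of DNS-style naming for IoT devices: map human-friendly,
# network-local names to IPv6 addresses. Zone and addresses are made up.

ZONE = {
    "press-3.floor-1.factory.example": "fd00::a1",
    "sensor-7.floor-1.factory.example": "fd00::b2",
}

def resolve(name):
    """Return the device's IPv6 address, or None if the name is unknown."""
    addr = ZONE.get(name)
    return ipaddress.IPv6Address(addr) if addr else None

print(resolve("press-3.floor-1.factory.example"))  # fd00::a1
print(resolve("nonexistent.factory.example"))      # None
```

A production internal DNS would add delegation, caching and access control on top, but the core abstraction, a name that survives device replacement while the address behind it changes, is exactly this lookup.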
With billions of devices coming online in the next few years, having the naming structure in place to accommodate the eventual shift toward IPv6 addressing will be paramount to the success of all parts of the internet, regardless of the size or significance of a connected device. As it stands today, businesses are accustomed to the domain name system as the gateway to all connected devices and services on the internet. The advent and expansion of IoT in all forms will merely add to the number of devices that can be addressed in this fashion. While the Internet Engineering Task Force and the Internet Corporation for Assigned Names and Numbers will ultimately have the last say, DNS will play a vital role in these systems, as it does today, on an even larger scale.
Spend some time in a modern industrial enterprise and you’ll see a transformation underway: The network edge is getting smarter as the industrial internet of things continues to take shape. This transformation is changing the way data is used — and how it is valued.
Initially, edge systems were deployed simply to automate a process. The data generated by SCADA systems had a very narrow use, usually to alert operators of an impending issue that required attention (e.g., the pressure in a tank reaching a predetermined critical threshold). Operators would assess the information using their expertise and experience and execute a process to respond to the condition.
An emerging trend is for edge systems to provide the intelligence to make these decisions in real time, taking the human out of the equation. The data generated at the edge — data produced by sensors and automation systems throughout the plant or production process — is like the “sensory input” for these decision-making systems. Just as our five senses help us monitor and respond to changes in our environment, so it is with the intelligent industrial environment.
But the data generated at the edge isn’t just critical for real-time decision-making. The real value is in how the data collected can be analyzed over time, and the capability to identify trends and benchmarks. For example, we recently worked with a company that manages office buildings in Asia that realized it had 30 years of data gathered from 30,000 elevators. What insights could analyzing this data reveal for optimizing elevator control systems, maintenance or energy usage? Frankly, the company isn’t sure yet. But it is determined to find out.
This represents a significant shift in mindset from automation data being viewed as something of only passing importance to an asset of potentially strategic importance. Data generated by sensors and automation systems is now viewed as critically important. And that changes the way you think about the systems entrusted to store and analyze the data, and the risk to that data.
This changing view of data value demands an expanded view of how to protect the integrity of that data. It is no longer just about system availability. That’s critically important, but there’s more to it. It also means ensuring the data is secure and compliant. It means making sure that the applications processing the data are performing properly and that the information is not being corrupted. It means ensuring that the data is flowing to the real-time analytics engines with acceptable latency and that connectivity is maintained with proper security.
IIoT is changing the industrial enterprise in significant ways, including transforming the value of data and the ways it is used. Adopting a new mindset when considering how that data is stored and handled — from the sensors gathering data to the systems using it to the repositories where it is stored — is a critical success factor for creating an edge computing environment that is as safe and resilient as it is intelligent.
It’s a fascinating time to be a creative: Traditional crafts and skills are coming back into vogue at the same time that hundreds of novel uses for technology are being pioneered and made available to everyone. As the digital and physical worlds converge, blurring the line between old-school craft and new-school tech, there are a lot of exciting possibilities for physical makers to expand their horizons by learning how to integrate or augment their work with digital tools.
The most obvious way to merge the digital and physical is by 3D printing, which literally takes a digital creation and brings it to physical form. If you’re unfamiliar with how it works, a 3D printer deposits small amounts of a physical substrate (like plastic or metal) much the same way that a traditional printer deposits ink. The 3D printer, however, repeats this process over many layers, allowing the creation of a physical object in three dimensions.
Existing makers can integrate the technology into their own practice — imagine, for instance, being able to model and print a prototype of a project before committing to rendering it fully in your chosen material. A sculptor, for instance, may want to model their creation in inexpensive plastic before carving it in an expensive medium like stone or casting it in metal.
Even if you don’t have any intention of producing a physical model using 3D printing, learning to use digital modeling software can save you huge amounts of time and material waste in the shop. Knowing exactly what you’re going to make, how it fits together and, perhaps most crucially, having precise measurements generated by the computer, makes every aspect of your craft easier.
This is especially true for industrial and product designers who may be working on complex shapes, products with many parts or working in a huge variety of media. Planning your project from start to finish and generating a 3D model takes a lot of the hassle out of the process and gives you images you can share with clients for approval or as a part of a digital portfolio.
Automation and microcontrollers
Another technology that's becoming increasingly common is the use of sensors, microcontrollers and tiny computers (like Raspberry Pi) to create interactive objects and add smart functionality to traditional crafting. Computer-controlled movement and automation of objects is not new to the world of theater, for instance. But the development of relatively powerful, relatively low-cost options like Arduino and Raspberry Pi is making this kind of technology available to makers on a budget, no matter what your field.
Learning a little about how to use these technologies will help you expand your creative palette and open up a new world of possibilities in your craft.
Augmented reality — the layering of digital images over a view of the physical world (as on a smartphone, for example) — provides a lot of interesting possibilities for all kinds of makers. While visual artists and illustrators may be familiar with software for illustrating or drawing on the computer, augmented reality technology offers a whole new way for them to interact with the physical world or to work on custom projects with other makers. A mixed-media artist could, for instance, produce a sculpture, collage or other physical work that can then be overlaid with an illustrated component when viewed through a smartphone app.
Installation artists could similarly work with their venue to create interactive elements available in augmented reality. Some designers and programmers are already using augmented reality to increase the immersiveness of tabletop games.
If you’re a sculptor, a welder or a designer of any kind, there may be digital tools that can take your trade or hobby to the next level. That could mean creating miniature scale prototypes, or enabling automation and moving parts by using a microcontroller like Arduino. The best thing about tools is that they’re not an end product — they make new ideas possible and enable you to pursue them.
When you think of hacking, you might think of viruses or ransomware attacks where computers are unable to operate unless a ransom is paid in bitcoin to an anonymous cybercriminal. In the manufacturing sector, the reality isn’t quite so public.
First, the practical physical realities of the industrial landscape are challenging. With industrial devices, an exploit of a single, simple software vulnerability can have serious consequences. Depending on the actual setup and security posture of the target, smart factory attacks could severely affect critical goods, risk lives on the factory floor and generate massive financial damage.
Second, 2017 research by Verizon revealed that while across all industries most cyberattacks are opportunistic, 86% of attacks in manufacturing are targeted. Almost half (47%) of breaches involve theft of intellectual property (IP) to gain competitive advantage, with trade secrets the most common data type breached in manufacturing companies.
Of course, it’s hard to get accurate figures about the incidence of such attacks, as few companies are willing to disclose breaches on public record. In 2016, the European steel conglomerate ThyssenKrupp confirmed it was the victim of a significant cyberattack that the company believes was carried out in connection with industrial espionage, with the attackers reportedly looking to steal trade secrets from the company. ThyssenKrupp stated on its website, “According to our analyses, the aim was essentially to steal technological know-how and research from some areas of business area industrial solutions. There have been no signs of sabotage and no signs of manipulation of data and applications or other sabotage.”
The robots revolt
Beyond the risk of IP theft, last year Trend Micro security researchers released research showing how easily factory and industrial robots could be hacked and used for malicious purposes. Industrial robots are mechanical multi-axis “arms” used in modern industries for automating operations such as welding, packaging, food processing or die casting. They consequently play a key role in Industry 4.0 initiatives focused on automation and smart factories. Many robots from manufacturers including Kawasaki, Fanuc and Yaskawa were found to be vulnerable, enabling hackers to make changes that alter the way they operate. The researchers explained:
By leveraging the remote code execution vulnerability, we modified the control loop configuration files, which are naively obfuscated and thus easily modifiable. In particular, we changed the proportional gain of the first joint’s PID controller, setting it to 50% of its original value. Then we programmed the robot to perform a straight horizontal movement. The trajectory of the end effector projected on the horizontal plane was notably altered. Although the maximum difference between the position under normal conditions and under attack is small (less than 2mm), according to the specific machining that the robot is performing, it can be enough to destroy the workpiece.
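To make the effect of that attack concrete, here is a toy Python sketch (not the researchers’ code; every constant is invented) of a single joint driven toward a setpoint by proportional control, comparing the nominal gain against one tampered down to 50%:

```python
# Toy model only: a joint position x driven toward a setpoint by proportional
# control, x' = kp * (setpoint - x). All values here are invented.

def simulate(kp, setpoint=1.0, steps=50, dt=0.01):
    """Integrate the closed loop and return the final position."""
    x = 0.0
    for _ in range(steps):
        x += kp * (setpoint - x) * dt
    return x

normal = simulate(kp=20.0)     # nominal proportional gain
attacked = simulate(kp=10.0)   # gain tampered to 50% of its value
deviation = normal - attacked  # small, but enough to ruin precise machining
```

Even in this toy loop, the tampered gain leaves the joint measurably short of its target after the same number of control steps, which is exactly the class of subtle deviation the researchers describe.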
Beyond simply altering machines, researchers were also able to inject faults and micro-defects into the workpiece with the potential to control a robot, damage its parts or even cause injuries to people who work in close collaboration with it by disabling or substantially altering safety devices. Additionally, the Trend Micro Forward-Looking Threat Research Team found tens of thousands of industrial devices residing on public IP addresses which could include exposed industrial robots, further increasing the risk that an attacker can access and hack them. Trend Micro has duly contacted manufacturers in response to its findings.
How to keep factories safe
It’s easy to forget that even the most automated factories are still managed by (often fallible) human beings.
Organizations can mitigate potential vulnerabilities by:
- Educating staff members on the basics of cybersecurity and risk management, including how to identify suspicious emails and what to do if they receive one;
- Maintaining a complete asset inventory. You can’t protect what you don’t know about, so collecting a complete and accurate inventory of all systems, software and critical assets within your environment is critical;
- Implementing multifactor access controls, data security, intrusion prevention, firewall and spam filtering from respected and established vendors;
- Having a clear policy and practice on BYOD. At a recent Berlin conference, a speaker recounted how a virus introduced by an intern’s BYOD USB stick infected connected machinery and brought the factory floor to a standstill. It’s not uncommon for virus-infected USB sticks to be distributed at tech events, even dropped strategically at bars and coffee shops frequented by factory staff as a means to gain access;
- Knowing where data is stored. Most organizations have an incomplete understanding of where their sensitive data resides. Security teams need to ensure sensitive data is not stored in unauthorized locations, with policies implemented to ensure maximum protections for the most critical systems and information;
- Employing services such as Shodan, a search engine for IoT that allows users to find devices that are publicly accessible on the internet, including those that may be vulnerable to hackers; and
- Maintaining validated, tested, reliable backups that include off-site and off-line copies.
Ultimately, for security to be robust, reliable and effective, it needs appropriate investment in time, money and energy, appropriate skilling and training of staff, and an effective action plan in place if and when an attack occurs. Attacks are inevitable and now a matter of when, not if. However, their severity, frequency and impact can be greatly reduced through good workplace practices.
In the physical world, each person is unique, with their own set of relationships, personal preferences, financial profile, physical characteristics, past behaviors, future plans and so on — the attributes that make up their identity.
Being able to recognize each customer’s unique identity makes it possible for companies to do business with them — to know what kind of services to provide and recommend, charge and track payments accurately, measure and enhance satisfaction, and provide the kind of continuity that delivers optimal value for customers and providers alike.
Digital identity is the extension of this concept into the digital realm — and it’s central to modern connected life. The ability to recognize and manage individual customer identities effectively is the foundation of:
- Trust, as companies safeguard each customer’s personal information and use it with consent for their benefit.
- Consistency, by harmonizing identities and connecting user identity records across organizations and industries.
- Experience, making it possible for companies to know their customers, personalize services, simplify online interactions and increase satisfaction.
- Privacy, allowing customer transparency and choice about what, where and how their personal data is used.
- Security, helping companies protect against identity fraud, hacks and breaches.
- Innovation, as companies use identity across industries to capitalize on synergies and deliver new and dynamic connected experiences powered by context.
Most fundamentally, digital identity makes it possible to take a customer-centric approach to business. By building trusted relationships and delivering more personalized and consistent experiences, companies can improve customer retention, strengthen their brand, increase their share of wallet and achieve competitive differentiation.
So, digital identity is the ultimate “vehicle for success” that must underpin the new mobility. To have a clearer understanding of that role, it’s helpful to review a few of its core concepts.
The fundamentals of digital identity
Digital identity can apply to things as well as to people. This is important to keep in mind in our world of connected devices and things. Just as businesses and systems need to know who they’re interacting with, a thing (such as a connected car) needs to be able to recognize another thing (such as another car, a charging station, a drive-through payment terminal, a tollbooth, etc.) to enable secure new mobility functions and experiences.
Authentication is simply the trusted recognition of the user’s digital identity: Who is this? Is it really who he claims to be?
Authorization goes one step further: Based on their authenticated digital identity, what should this person be allowed to do? What applications and data should he be able to access based on factors such as his business role or relationship, customer subscriptions, account status, current scenario and so on?
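As a rough illustration of the split between the two checks (all usernames, roles and permissions below are invented, and real systems store salted password hashes rather than plaintext), the pair can be sketched in a few lines of Python:

```python
# Illustration only: real systems store salted password hashes, never plaintext.
USERS = {"alice": {"password": "s3cret", "role": "fleet_admin"}}
PERMISSIONS = {
    "fleet_admin": {"view_telemetry", "update_firmware"},
    "driver": {"view_telemetry"},
}

def authenticate(username, password):
    """Authentication: is this really who they claim to be?"""
    user = USERS.get(username)
    return user is not None and user["password"] == password

def authorize(username, action):
    """Authorization: given who they are, what may they do?"""
    role = USERS.get(username, {}).get("role")
    return action in PERMISSIONS.get(role, set())
```

Here `authenticate` only establishes identity; `authorize` then consults the role attached to that identity, which mirrors how the two concerns are kept separate in practice.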
Single sign-on simplifies the customer journey by allowing customers to log in once for access to all of your applications that they’ve signed up for, rather than having to log in application-by-application. Frictionless login across applications isn’t just convenient; it’s also fast becoming an industry standard. Meeting this expectation is increasingly important for maintaining a brand’s credibility and trustworthiness.
Federation extends single sign-on beyond your organization to encompass your ecosystem partners as well. In addition to making life easier for customers, federation positions your company as a trusted identity provider and go-to access point for a broad range of content and services.
Multifactor authentication is, as its name suggests, simply the use of multiple factors to authenticate who someone or something is. It typically uses a combination of identity types such as something they know (e.g., a password), something they have (e.g., a key fob or an iPhone app) and something they are (e.g., biometrics, such as a thumbprint or retina pattern).
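The “something you have” factor is often a one-time password shown by a phone app or key fob. A minimal sketch of the standard HOTP algorithm (RFC 4226), on which time-based codes are built, looks like this in Python:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226): the code a token app displays."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                    # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Using the RFC's published test secret; time-based (TOTP) codes simply derive
# the counter from the clock instead of incrementing it per use.
print(hotp(b"12345678901234567890", 0))  # "755224" per RFC 4226 test vectors
```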
Privacy is critical. Personal digital data is precious — customers have to be able to trust you with theirs. As the number of connected devices and things grows, companies must be able to secure the user experience wherever and however services are used, tailor it to the customer’s data-sharing preferences and ensure that their data is never used in a way they haven’t approved.
Security becomes more challenging all the time — and more important. As consumers become more mobile and do more online in more ways, businesses need to ensure continuous protection not just at login, but throughout each digital session. This includes responding to threats in context by asking for additional identity verification when something unusual takes place, like a resource request from an unfamiliar location or device.
From IAM to CIAM
Initially, digital identity was used primarily as a way for businesses to control access to their systems by their own employees. Based on your digital identity, verified through your username and password, you would be granted access to the appropriate applications and data for your role. By the same token, you would also be prevented from accessing applications and data that you shouldn’t, aiding privacy and security.
Digital identity also makes it possible to track your behavior over time, helping companies meet requirements for auditing, regulatory compliance, internal security and the like. Within the tech industry, technologies to manage digital identity fall into the identity and access management (IAM) category.
Digital identity has now expanded to encompass personalization and quality of experience as well. As any successful business knows, the better you know your customer, the better service you can provide, helping you drive loyalty, growth and revenue. The personal information customers share with you to establish an identity with your organization, complemented with personal data from additional sources, helps you understand their individual needs more fully.
This in turn helps you cross-sell, upsell and deliver more personalized experiences. Of course, security and control remain paramount as well. Reflecting the customer-centric orientation of this way of thinking about digital identity, this technology category is called customer identity and access management (CIAM).
How digital identity adds value for new mobility
There are many ways the tools we use to provide and protect a secure digital identity can add value to the present-day development trends in connected and autonomous cars. For example:
Personalization and services: Feature on demand
For the most part, today’s cars are personalized during the purchasing process — not afterwards. If buyers subsequently wish they’d opted for more horsepower, matrix-LED lights or additional connectivity or GPS features, their only option is to try their luck with expensive after-sales projects. With digital identity, both owned and shared connected cars can allow flexible personalization of their software-enabled features on either a per-ride or ongoing basis. The identity of the user is linked with the identity of the car to sync the user’s preferences with the car’s configuration and trigger the corresponding monetization processes.
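A hypothetical sketch of that linkage (the feature names, user IDs and entitlement store below are all invented for illustration) might look like:

```python
# Hypothetical feature-on-demand sketch: entitlements purchased against a
# user's digital identity are merged into the car's active configuration.
ENTITLEMENTS = {"user-42": {"matrix_led", "extra_horsepower"}}

def active_features(user_id: str, base_features: set) -> set:
    """Features the car enables once this user's identity is linked to it."""
    return base_features | ENTITLEMENTS.get(user_id, set())

# Linking "user-42" to a shared car unlocks their purchases for the ride,
# while an unknown identity gets only the car's built-in features.
```

The same lookup is where the monetization hook would live: a per-ride entitlement simply expires when the identity is unlinked from the car.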
Identity for privacy and compliance
Some connected car capabilities raise delicate issues for user privacy. As part of predictive maintenance, a car’s ECUs may push alarm messages to the carmaker’s back end to signal a problem with the engine, gearbox or brakes. This message can include driven speeds, gear and RPM history, and geographical locations. And there’s the catch: A driver or user may appreciate the alert that there’s something wrong with the car and where to find the next garage, but may not necessarily want to share information about how the car is being used. The carmaker needs a way to let users and drivers choose which data to share — a preference that can be linked to their digital identity.
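One way to picture that choice (the field names and consent store here are invented for illustration) is a consent filter applied before telemetry ever leaves the car:

```python
# Illustrative consent filter: only fields this identity has opted into are
# forwarded to the carmaker's back end. All names here are made up.
CONSENT = {"user-42": {"engine_fault_codes", "nearest_garage_location"}}

def filter_telemetry(user_id: str, payload: dict) -> dict:
    """Drop any field the user has not consented to share."""
    allowed = CONSENT.get(user_id, set())
    return {key: value for key, value in payload.items() if key in allowed}

alert = {
    "engine_fault_codes": ["P0301"],
    "nearest_garage_location": "52.52,13.40",
    "speed_history": [142, 150, 133],  # never opted in, so never leaves the car
}
```

The fault code and garage location still reach the back end, while the usage history stays on the vehicle because that identity never opted into sharing it.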
Connected car security and safety
A modern car’s functions and features are controlled by upwards of 100 complex ECUs whose interaction is critical for the safety of the passenger and of the car. Equipping each ECU with its own unique and secured digital identity makes it possible for these control units to identify themselves to each other when sending messages, helping prevent hackers from injecting malicious messages to cause malfunctions.
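A bare-bones sketch of that idea uses an HMAC tag as the message’s proof of identity. This is illustrative only: the key below is invented, and production ECUs would hold keys in hardware security modules and add replay counters.

```python
import hashlib
import hmac

# Sketch only: a shared key stands in for an ECU's provisioned identity key.
ECU_KEY = b"per-ecu-secret-provisioned-at-manufacture"

def sign(message: bytes) -> bytes:
    """Tag a frame so the receiving ECU can verify who sent it."""
    return hmac.new(ECU_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Reject frames whose tag doesn't match: likely injected or altered."""
    return hmac.compare_digest(sign(message), tag)

frame = b"brake_pressure=80"
tag = sign(frame)
# A frame altered in transit fails verification, so the receiving control
# unit can discard it instead of acting on it.
```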
These examples show how much even today’s connected cars depend on secure identification of different parts of the cars to each other, as well as of the entire car to its driver or owner, to ensure both a good user experience and the protection of the data being generated during each ride. Designing digital identity and its corresponding tools into the vehicle from the very beginning provides a vital backbone for security, privacy and monetization.
Identity use cases in new mobility
As the industry moves beyond connected cars to fully realized new mobility services, federated digital identities will play an increasingly important role, as illustrated in this next set of examples.
Bring the end user’s digital life to a connected car
One of the most important targets of the industry is to bring the digital life of the user into a connected car — to enable the same set of services during physical mobility as at home or in the office. To make this simple and frictionless, carmakers need to provide a version of single sign-on into the “car as a service,” linking the authenticated sessions of various digital services to the car for the duration of the journey. Digital identity will provide the mechanism for this seamless yet secure experience.
Enable the best user experience for car-sharing or ride-hailing services
User experience is a prime factor in people’s willingness to use shared vehicles rather than their own personal car. Federated mobility services will allow people to handle every part of the journey using a single app, from summoning their vehicle of choice to payment at their destination, with streaming media, GPS and other connected services along the way. The same app even works across fleet providers — no more separate apps for each car-sharing or ride-hailing service.
Make the connected car interact with the smart city
The examples above illustrate the links between users, services and preferences. As a next step, the car and the driver need to securely interact with the infrastructure of a smart city, such as identifying the car and payment at the charging station, autonomous parking, tolls and so on. Here, digital identity goes beyond the relationship between the car and the driver to manage the interaction of the car and driver with the world around them based on secure authentication.
As we’ve seen, digital identity is more than just a mechanism to secure and authenticate cars and devices; it’s also a foundational tool for enabling the entire new mobility and smart city ecosystem. Service providers offering digital experiences from the connected car to payment can collaborate to deliver new mobility services which are solidly built upon the security, trust and interoperability of digital identities across business domains.
That’s vital in building the most critical element of new mobility adoption: the ultimate trust and confidence of new mobility users.
With the soaring number of mobile and IoT devices and the onslaught of new apps that businesses are faced with on their wireless networks, it is time for innovation that can help IT scale and meet these new requirements. Thankfully, AI and modern cloud platforms built with microservices are evolving to meet these needs, and more and more businesses are realizing that AI is a core component of a learning, insightful WLAN. AI can help bring efficiency and cost savings to IT through automation while providing deep insights into user experience and service-level enforcement. It can also enable new location-based services that bring enormous value to businesses and end users.
At the core of a learning WLAN is the AI engine, which provides key automation and insight to deliver services like Wi-Fi assurance, natural language processing-based virtual network assistants, asset location, user engagement and location analytics.
There are four key components to building an AI engine for a WLAN: high-quality data; structuring and classifying that data; data science; and insight. Let’s take a closer look.
Just as wine is only as good as its grapes, the AI engine is only as good as the data gathered from the network, applications, devices and users. To build a great AI platform, you need high-quality data — and a lot of it.
To address this well, one needs to design purpose-built access points that collect pre- and post-connection states from every wireless device. They need to collect both synchronous and asynchronous data. Synchronous data is the typical data you see from other systems, such as network status. Asynchronous data is also critical, as it gives the user state information needed to create user service levels and detect anomalies at the edge.
This information, or metadata, is sent to the cloud, where the AI engine can then structure and classify this data.
Next, the AI engine needs to structure the metadata received from the network elements in a set of AI primitives.
The AI engine must be programmed with wireless network domain knowledge so that the structured metadata can then be classified properly for analysis by the data science toolbox and ultimately deliver insights into the network.
Various AI primitives, structured as metrics and classifiers, are used to track the end-to-end user experience for key areas like time to connect, throughput, coverage, capacity and roaming. By tracking when these elements succeed, fail or start to trend in a direction, and determining the reason why, the AI engine can give the visibility needed to set, monitor and enforce service levels.
Once the data has been collected, measured and classified, the data science can then be applied. This is where the fun begins.
There are a variety of techniques that can be used, including supervised and unsupervised machine learning, data mining, deep learning and mutual information. They are used to perform functions like baselining, anomaly detection, event correlation and predictive recommendations.
For example, time-series data is baselined and used to detect anomalies, which is then combined with event correlation to rapidly determine the root cause of wireless, wired and device issues. By combining these techniques together, network administrators can lower the mean-time-to-repair issues, which saves time and money and maximizes end-user satisfaction.
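In outline (with fabricated sample data), the baseline-then-flag step might look like this:

```python
import statistics

# Minimal sketch of baselining plus anomaly detection: learn the mean and
# spread of a metric, then flag new samples far outside it. Data is invented.

def find_anomalies(history, new_samples, sigmas=3.0):
    """Return samples more than `sigmas` standard deviations from baseline."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    return [x for x in new_samples if abs(x - mean) > sigmas * stdev]

baseline = [1.1, 0.9, 1.0, 1.2, 0.8, 1.0, 1.1, 0.9]   # seconds to connect
anomalies = find_anomalies(baseline, [1.0, 1.3, 9.5])  # only 9.5 s is flagged
```

Once a point is flagged, correlating it with wired, wireless and device events is what narrows the search down to a root cause.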
Mutual information is also applied to Wi-Fi service levels to predict network success. More specifically, unstructured data is taken from the wireless edge and converted into domain-specific metrics, such as time to connect, throughput and roaming. Mutual information is applied to the service-level enforcement metrics to determine which network features are the most likely to cause success or failure as well as the scope of impact.
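For the curious, mutual information can be estimated directly from raw counts. The sketch below (with a fabricated toy dataset) scores how predictive a discrete network feature is of connection success:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X; Y) in bits, estimated from paired observations of X and Y."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(
        (c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

# Fabricated example: the outcome is perfectly determined by which AP was
# used, so the feature carries the maximum 1.0 bit about success or failure.
aps = ["ap1", "ap1", "ap2", "ap2"]
success = [True, True, False, False]
```

Ranking features by this score is one simple way to surface which network element is most associated with failures, and for how many clients.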
In addition, unsupervised machine learning can be used for highly accurate indoor location. For received signal strength indicator (RSSI)-based location systems, a model is needed that maps RSSI to distance, often referred to as the RF path loss model. Typically, this model is learned by manually collecting data, a process known as fingerprinting. But with AI, path loss can be calculated in real time using machine learning by taking RSSI data from directional BLE antenna arrays. The result is highly accurate location data that doesn’t require manual calibration or extensive site surveys.
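A common form of that model is the log-distance path loss model. The hedged sketch below shows the RSSI-to-distance mapping with example constants; a learning system would estimate the reference power and path loss exponent from live data instead of manual fingerprinting.

```python
# Log-distance path loss sketch: RSSI = rssi_at_1m - 10 * n * log10(distance).
# The reference power and exponent below are illustrative example values.

def rssi_to_distance(rssi_dbm, rssi_at_1m=-45.0, path_loss_exp=2.0):
    """Invert the path loss model to estimate distance in metres."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exp))

# At the reference power the estimate is 1 m; a reading 20 dB weaker implies
# roughly 10 m under this exponent.
```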
AI-driven virtual assistants
The final component of the AI engine is a virtual assistant that delivers insights to the IT administrator as well as feeds that insight back into the network itself to automate the correction of issues, ultimately becoming a “self-healing network.”
The use of a natural language processor is critical to simplify how administrators extract insights from the AI engine, without needing to hunt through dashboards or command-line interfaces as they must with legacy systems that lack AI. This can drive up the productivity of IT teams while delivering a better user experience for employees and customers.
Wireless networks are more business-critical than ever, yet troubleshooting them is more difficult every day due to an increasing number of different devices, operating systems and applications. AI engines are a must-have for businesses that need to keep up with soaring numbers of new devices, things and apps in today’s connected world.