The IoT landscape is rapidly evolving, and these “things” are becoming countless. A few years ago, it would have been hard to believe that medical devices would have network-connected capabilities that track our health, monitor recovery progress and save lives across every age group and demographic. This brings us to the internet of medical things (IoMT), a marketplace that covers a variety of applications, including patient monitoring, telemedicine and other devices with internet connectivity. As we embrace the benefits IoMT brings, we must also ask essential questions about protecting patients: Where is their health data being stored, and is it safe?
Such devices can react in real time to relay critical information to the doctors, first responders and caregivers who are saving lives and improving health outcomes and patient experiences. However, according to the 2019 Thales Data Threat Report-Healthcare Edition, the healthcare industry experiences the highest rate of attack of any industry studied.
Incredible health benefits, but also tech risks
It is clear that IoMT positively impacts healthcare providers and the lives of patients. Patients suffering from chronic diseases can avoid frequent visits to the doctor through remote patient monitoring. Everyday sick visits can turn into convenient video chats. Doctors can give their patients faster and more accurate diagnoses. Wearable devices can detect potential heart problems. While these examples demonstrate freedom, accessibility and a painless experience for patients, IoMT technology surprisingly has the potential to cause some real “pain”.
Hospitals may assume patient data is being protected in the cloud, but according to the same study, 100% of healthcare organizations — more than any other industry sector — are collecting, storing and sharing sensitive data within digital transformation technologies, while fewer than 38% are encrypting data within these environments.
One significant benefit of connected devices is the capability to collect and store a large volume of information, enabling doctors to access patient health data in real time and increasing the accuracy of diagnosis and spotting of trends. Unfortunately, data collection and storage can bring increased vulnerability around privacy and security. The range of possibilities for IoMT seems infinite, but to take advantage of them, the security of connected medical devices and related applications must be implemented thoughtfully to ensure data attacks and misuse are avoided.
While IoMT significantly improves healthcare, there are staggering numbers that indicate healthcare organizations are failing to implement good data security practices, putting themselves in danger of non-compliance and putting patients in danger of becoming victims of fraud. When sensitive patient data is compromised — intentionally or otherwise — medical records can be sold on the dark web for upwards of $1,000 per record, according to Experian. Unlike a credit card hack, where the bank can shut down the account and provide the consumer with a new credit card number, this healthcare data is out there for good, with a shelf life longer than dried beans.
Where do we go from here?
As data breaches reach an epidemic level, healthcare leaders do not need to choose whether or not to implement IoMT technologies within their business. Instead, they must be sure to check two things off their to-do list:
Partner with the right companies. Developers and the hospitals that implement these technologies must consider integrating key security features that protect the device and patient from encountering any malicious activity. Nowadays, every business is inclined to function as a technology company when it comes to implementing IoT and security. In a previous blog, we found that less than half of companies, 48%, could detect whether any of their IoT devices had been breached. Breach detection and mitigation are especially crucial for the healthcare sector, so businesses must partner with the right security companies that can help ensure safe data storage, compliance and security protection features.
Meet security compliance regulations and educate patients. It is important for healthcare providers to not only confirm that their collection and use of data is HIPAA compliant but also ensure healthcare practitioners are explaining to patients the privacy issues and security risks that come along with IoMT devices. In addition, personally identifiable information is increasingly becoming a hot button for consumers at large. A prime example is California’s Consumer Privacy Act. Privacy will continue to be a focus for legislators over time, so it is imperative for healthcare organizations to understand regulatory mandates and compliance issues and how those impact their IoMT strategy.
The world of digital transformation is upon us, and our healthcare providers may need a shot in the arm to safeguard IoMT, because an apple a day won’t keep a data breach away.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
With the holiday season right around the corner, the need for smart technology solutions is essential. The influx of products developed, managed, qualified and shipped between October and December each year is vast, and manufacturers are always looking for ways to streamline efficiencies and drive down costs.
As such, it’s becoming increasingly important for companies to automate the tracking process within manufacturing facilities and plants, especially during peak shipment times such as the holiday season. Depending on the size of the organization, there may be hundreds of thousands of products being produced, which then are being packaged and shipped throughout the world.
This is a highly complex procedure that requires precision and accuracy across the entire manufacturing process. If an aspect of the process fails and a crate is mislabeled and moved to the incorrect shipping truck, a store might not receive an entire shipment of the desired product, while another location receives a double shipment of the same product. This mix-up not only causes challenges for the retailer, but it’s a costly mistake for the distribution plant as the product now needs to be reshipped via a rush order to its real destination.
Typically, this process and management falls to the warehouse workers who manually track assets throughout expansive facilities and follow goods while they are out for shipping. However, we’re increasingly witnessing organizations turning to technology and connectivity solutions for tracking, which is helping to ensure better productivity and cost savings.
For example, a France-based company, Ineo Sense, has created and implemented Clover-Core, a series of LoRa-enabled sensor products for smart asset tracking in manufacturing settings. By integrating wireless connectivity via the LoRaWAN protocol, product managers and engineers across a warehouse setting can effectively monitor the usage, functionality, current status and location of expensive manufacturing equipment and shipment assets in real time. That means staff can remotely monitor their global operations and assets produced from one central location.
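To make the idea of LoRaWAN asset tracking concrete, here is a minimal sketch of how a network application might decode a tracker's compact binary uplink into usable status and location fields. The payload layout, field names and scaling factors below are invented for illustration and are not Ineo Sense's actual frame format:

```python
import struct

def decode_uplink(payload: bytes) -> dict:
    """Decode a hypothetical 11-byte asset-tracker uplink:
    status flags (1 byte), battery % (1 byte), latitude and
    longitude as signed 32-bit ints scaled by 1e5, and
    temperature in 0.5 C steps (1 signed byte)."""
    status, battery, lat_raw, lon_raw, temp_raw = struct.unpack(">BBiib", payload)
    return {
        "moving": bool(status & 0x01),
        "battery_pct": battery,
        "lat": lat_raw / 1e5,
        "lon": lon_raw / 1e5,
        "temp_c": temp_raw * 0.5,
    }

# Simulate one uplink frame from a tracker near Paris.
frame = struct.pack(">BBiib", 0x01, 87, 4885280, 225411, 43)
print(decode_uplink(frame))
```

Keeping the frame this small matters because LoRaWAN payloads are limited to a few dozen bytes at the slower data rates, which is what makes the protocol suited to battery-powered trackers reporting over long ranges.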
The holiday season is a very challenging time for retailers with many companies relying heavily on the revenue generated in this short window of time to reach their year-end sales goals. That said, IoT devices can seamlessly be installed into an existing manufacturing plant to help mitigate potential error and streamline the process. All consumers want the ability to purchase their holiday gifts in time for the season’s close, and manufacturers can do their part by ensuring the product is delivered on time.
Developing applications and services for the connected car market can be a hard path to navigate. The car data marketplace is a platform that enables third-party mobility services to integrate with personalized car data, which then promotes these services to users whose driving and car-owning experiences can be dramatically improved by them.
Unlike mobile app developers, who have marketplaces such as Google Play and the Apple App Store and access to retrievable data from handheld devices for their products, the connected car market is still finding its feet in terms of accessing personalized vehicle data with a customer’s consent and promoting the available services to users.
The rise in third-party mobility services
When we talk about third-party mobility services, we are referring to applications that seek to assist or enhance the user’s experience of driving, riding in, charging or owning a vehicle. These applications might be built to work with a user’s phone or the cloud.
Carmakers themselves have created their own applications and services for their customers to use specifically within their own vehicles, but over the last five years we have seen a significant rise in mobility services designed and built by companies and developers who are not affiliated with carmakers. Essentially, these companies or individuals are external operators working independently with their own business plans and objectives and building their own user bases.
However, because of the nature of their services, these third parties require access to data from their users’ vehicles. Data such as a person’s driving style, vehicle model and location enables them to tailor the experience of using their service to that specific person. We can be sure that a tailored, personalized service is going to be far more valuable to a user than a generic one.
It’s for this reason that third-party mobility services need to make some sort of agreement with a carmaker. They must get access to the personalized data of the carmaker’s customers which they would otherwise not be privy to. Without this data, the applications they want to offer have significantly less value.
But how do third-party services work with multiple carmakers and car models simultaneously? This is where a car data marketplace comes in. Thanks to car data marketplaces, third parties no longer have to make separate agreements — or integrations — with multiple carmakers, but can instead offer their services to multiple users via a streamlined portal.
What is a car data marketplace?
A car data marketplace — also sometimes known as a vehicle data platform — works between the two parallel ecosystems of carmakers on one side and developers and third-party operators on the other. Essentially, the marketplace works as a data broker. A third-party service can request to connect to the vehicle data it needs to operate via the car data marketplace, which checks the request for quality, relevance and security before approving the connection to vehicle data or suggesting improvements.
The car data marketplace can verify applications and services because it already has contracts in place with carmakers. In addition, the car data marketplace will have a marketplace API, which means third-party services only have to integrate once with this API to potentially connect with car data from multiple carmakers. Thanks to this API, third parties do not have to make different arrangements or technical integrations with numerous carmakers, or work with different hardware or software, saving significant amounts of time and potential complications.
Where do carmakers come in?
As we just mentioned, the carmakers will be working directly with the car data marketplace, not with the third-party services themselves. It will be the car data marketplace which negotiates individual contracts and costs with the carmakers.
All financial transactions go through the marketplace, which protects both sides and keeps everything transparent and fair. The marketplace is also where applications and services will be approved or rejected. If a service is approved, it will then connect to the car data. If a service is rejected, the team that built it will be offered advice on improving their offering to ensure future success.
The neutral server
But how do these things work from a technical perspective? For carmakers to share personalized car data from their customers with third-party mobility services, they need to use a neutral server.
The beauty of a neutral server is that it can enable third-party applications and services to have access to vehicle data without those third parties needing to come to their own unique agreement with a carmaker. It is an independent intermediary that is engaged by the OEMs, and not owned by them. These servers are neutral because they are not subsidiaries of the vehicle manufacturers, but instead wholly independent of them, both financially and operationally.
A neutral server enables more customer choice. Not only can car users work with a carmaker’s range of services, they can work with virtually any other service provider they want. It frees up the movement of the third parties too, as they are no longer tied to one carmaker. With the neutral server, third parties can access multiple carmakers with one simple integration.
The carmakers generally broker data to third parties within a specified and agreed scope, and the neutral server protects the direct visibility of third-party business models from carmakers. Another important feature of the neutral server is that it can provide compatibility between third-party services and carmakers that use different technology interfaces.
Certain companies provide such APIs. For example, High Mobility provides a carmaker-agnostic Auto API, which enables third parties to potentially work with any carmaker after a single integration, irrespective of the system it is using. This significantly reduces integration time and complexity.
How do third-party services connect to vehicle data?
The vehicle data market is still new, both for data providers and third parties. At this stage, what’s key is for third parties to figure out what kind of data they need from the vehicle. Is it personalized data that would be most valuable to their product? Or is anonymized data in bulk quantity for big data-related services that they are looking for? These third parties will also need to think about what their required data update rate is. Is once a day enough to inform their applications, or does it need a data update every five minutes to perform at its best?
Secondly, third parties need to know which providers are offering the specific vehicle data that they are looking for. In the current climate, different vehicle data platforms generally have different agreements with carmakers and several data product bundles or offerings for those looking to integrate data into their applications. It is worthwhile for third-party services to spend some time researching what these different product offerings consist of and their pricing in order to have a good understanding of which vehicle data marketplace is the right one for their product.
Another factor influencing whether an application or service may choose to access vehicle data via one car data marketplace over another is the type of integration tools which that vehicle data platform offers. These tools could be SDKs, a testing environment, vehicle emulators or tutorials. Some car data marketplaces may offer substantial technical support, while others may not. Depending on the needs and level of experience of the third-party service, these additional features will make some vehicle data platforms more attractive than others.
Finally, the pricing model used by the car data marketplace will affect how many developers and third-party services access data via that platform. Easy to understand and transparent pricing is likely more attractive to third parties who are wary of being caught out by unexpected or hidden costs. Small companies will also want to calculate their costs in advance and work out how these costs will grow as they increase the number of vehicles they connect to. A data marketplace that is transparent and clear about how its pricing works is likely to generate loyal and repeat customers.
When a third-party service has chosen which data marketplace it wishes to work with, it will then need to integrate with that marketplace’s standardized API. This standardized API will enable it to connect to multiple carmakers by seeking verification to connect to their customers’ vehicles. The integration of the marketplace’s API is likely to involve entering a data contract with that marketplace. Once that contract is signed, the third-party service should then be free to submit their applications to the many different carmakers who are also in an agreement with that marketplace.
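To illustrate what "integrate once with the marketplace's standardized API" might look like in practice, here is a minimal sketch of a client building one data request. The base URL, endpoint path, field names and token handling are all hypothetical, not the interface of any real vehicle data platform:

```python
import json
import urllib.request

# Hypothetical marketplace base URL for illustration only.
MARKETPLACE = "https://marketplace.example.com/v1"

def build_request(access_token: str, vin: str,
                  properties: list[str]) -> urllib.request.Request:
    """Build one standardized vehicle-data request. Because the
    marketplace API is uniform, the same call shape works for every
    carmaker the marketplace has a contract with."""
    url = f"{MARKETPLACE}/vehicles/{vin}/data"
    body = json.dumps({"properties": properties}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("token-123", "WVWZZZ1JZ3W386752",
                    ["odometer", "charge_state"])
print(req.full_url)
```

The point of the sketch is the single integration surface: the third party never builds per-carmaker clients, it only changes the vehicle identifier and the properties it is contractually allowed to request.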
One of the hottest topics these days, along with the creation of IoT solutions, is the use of edge computing devices. As with many of the latest technology trends, it is important to dig past the hype to understand what is genuinely new and determine the value it provides. Simply implementing the latest buzzword technology only makes sense when it creates business value, which is defined by utility and appeal to the end user.
Edge computing is the implementation of information processing performed at the IoT device level. This is an alternative to cloud computing, though the two can coexist within the same system. In cloud computing, data is transmitted from an IoT device and the information is then processed in cloud-based servers.
This is similar to the older practice of users connecting to large, centralized computers via less intuitive connected terminals. We are currently transitioning to a world of connected smart devices, where some processing power has shifted toward decentralized edge devices that still connect to centralized servers in the cloud. At the end of the day, the IoT world will look a lot like the general-purpose computer world, where edge and cloud-based computing coexist.
When getting to know edge computing, it’s important to evaluate the reasons for — and potential benefits of — shifting computation from the cloud to the edge. Here are a few reasons why you might want to consider adopting edge computing:
Managing use of bandwidth
Depending on the application, the amount of data sent from the IoT edge device can be substantial. The need to transmit massive amounts of data in real time can drive the selection of radio technology, which has a further impact on product cost, size and power. Depending on the selected radio technology, there can also be a substantial impact on the cost associated with data transmission and fees from carriers. In an edge-optimized application, data can be reduced or compressed on the edge device before transmission, which lowers the bandwidth requirements.
There is no question that transmitting vast amounts of data can flood a network or affect the real-time availability of data. By preprocessing or compressing data, you can remove a certain amount of traffic from the network, thereby reducing latency.
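As a minimal sketch of that preprocessing step, an edge node might collapse a window of raw sensor samples into one summary record before transmitting, so a handful of numbers cross the WAN instead of every reading. The record layout and window size below are illustrative assumptions, not a prescribed scheme:

```python
import statistics

def summarize_window(samples: list[float]) -> dict:
    """Collapse a window of raw sensor samples into a single summary
    record, trading per-sample detail for a large bandwidth saving."""
    return {
        "n": len(samples),
        "min": min(samples),
        "max": max(samples),
        "mean": round(statistics.fmean(samples), 2),
    }

# 600 raw readings (e.g., one per second for 10 minutes) become
# one four-field record on the uplink.
window = [20.0 + 0.01 * i for i in range(600)]
print(summarize_window(window))
```

Whether a summary like this is acceptable depends on the application: an anomaly detector may still need raw samples locally, but the cloud dashboard often only needs the aggregate.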
Depending on the IoT implementation of edge computing, the individual edge node or mesh of edge nodes can reduce dependency on wide area connectivity. If the WAN goes down, the edge devices can still provide useful data to mobile devices or over machine-to-machine links without the need for wider connectivity. Of course, edge computing that depends on importing cloud-based data would be hampered by the loss of wide area connectivity, but edge computing enables remote IoT devices to provide a potentially reduced subset of information in the absence of cloud access.
The reality is that in the world of IoT solutions, there will not be a one-dimensional shift to edge computing. Applications that make the most sense for edge technology will be those leveraging benefits from the technology, but those also taking advantage of cloud-based infrastructures. In doing so, the entire IoT solution will utilize the inherent advantages of cloud-based systems, such as the large and vast data structures, high computational power, data redundancy and reliability, while also managing the issues that can be mitigated with edge computing.
It is up to the system’s solution architect to assess the right model for any IoT solution. If the data to be transmitted is small or infrequent, the WAN connectivity is robust and reliable, and the application requires connectivity to cloud to provide any useful value, then the need for edge computing is greatly reduced. Given the drawbacks of edge computing in hardware cost, size and battery capacity, one should carefully assess the requirements of the full solution needed to implement an IoT solution.
Trends come and go, but if you don’t have the skill to adapt to change, you’d better prepare yourself for downfall. It doesn’t matter if you run a small business or a large-scale company, the ability to adapt is what determines your company’s success. This, however, doesn’t mean you need to embrace everything that’s new. Instead, you need to be able to identify what’s relevant for your company and what’s necessary to scale up your business.
Some companies make the mistake of believing that once they’ve attracted customers, their customer loyalty will last forever. But consumers are savvier than ever before, and companies should get savvier, too. If something worked well for your business in the past, it may not work today. Modern-day organizations and entrepreneurs need to spot emerging trends and implement them correctly into their business operations. And you don’t need to be the world’s top CEO to make this possible. All you need to do is step up your trendwatching game.
What is trendwatching?
Trendwatching is essential for any enterprise and industry today. The term refers to the ability to identify relevant trends and analyze their impact on business and society in general. It can include analyzing tech trends, social trends and market trends. For instance, a hardware firm with a customer base composed mostly of IT professionals, who are known for being increasingly interested in IoT projects, could drive revenue by developing new smart home devices. This would enable the company and its buyers to enter the smart home market that’ll be worth $53.45 billion by 2022, according to the Statista report “Global smart home market revenue 2016 to 2022.” But the growth potential is actually much bigger. The smart home is a niche market within the massive IoT industry that’ll grow to $1.6 trillion by 2025, becoming one of the major actors in the tech field, according to the “Global IoT market size 2017 to 2025” report on Statista. And if sustainability plays a huge role in customers’ decisions, a company can appeal to its target audience through the development of sustainable products, or by incorporating sustainability into its business. The key element of trendwatching is determining which trends will impact your business the most and acting accordingly.
How to become a trendwatcher
Since trendwatching is in high demand, the number of companies and individuals specializing in trend analysis and prediction is increasing. Companies, such as The Trendwatching Academy, offer professional trendwatching services to those who want to future-proof their business by learning how to spot, validate and apply consumer trends. With such insight, companies can shape new ideas and develop new business models.
Even higher education institutions — such as Fontys University of Applied Sciences in the Netherlands, which offers a minor in trendwatching — have recognized the importance of trendwatching courses. Trendwatching programs aim to teach people to trace, analyze and interpret trends, as well as understand and visualize existing trends.
Trendwatching done right
Over the past few years, IT professionals who have spotted key trends were able to rise further and faster in their careers compared to others. Knowing how to recognize tech changes is becoming ever more important. Take, for instance, the impact of AI-driven software development. It’s becoming possible to infuse AI capabilities into applications without the involvement of data scientists thanks to new software tools. Even the development of AI-enhanced solutions is becoming automated with the rise of augmented analytics, automated testing, automated code generation and automated solution development. This will enable more people to develop applications. Developers will eventually be able to develop AI independently with predefined models delivered as a service, replacing the role of data scientists. Understanding trends like this one can help professionals and companies to adjust to the new business environment and keep growing.
Today’s business world is rapidly changing with the emergence of new technology, and growing consumer demands make it even more dynamic. To remain competitive, companies need to be able to identify and analyze new trends. With a proper implementation of trendwatching, companies of all sizes can turn trends into opportunities.
The first weekend of October brought the first glorious days of fall temperatures to North Carolina. After breaking historical precedent with a record temperature of 100 degrees on Oct. 3, 2019, we finally got a break. Everyone complained about the unusual weather, the blistering heat and especially the lack of rain, which is top of mind across the farming community.
Faced with the concerning impact of global climate change combined with the need to feed a global population expected to reach 9.1 billion people in 2050, the pressure is on. Farmers need to increase food production 70% compared to 2007 levels to meet the needs of the larger population, according to a report from the Food and Agriculture Organization of the United Nations, and they need to do this in a sustainable and profitable way.
Is the promise of agri-tech out of farmers’ reach?
Satellite imagery and better weather forecasting models are having a positive impact for farmers. But interestingly, according to Dan McCaffrey, VP of Analytics at The Climate Corporation, “Research shows that outside of climate and weather, two-thirds of variables in the food growing cycle are controllable factors, such as plant population, soil preparation or previous crops.”
More and more farms are taking advantage of new agricultural equipment, including auto-guided tractors, combines, tillers, robotic sprayers and weeding robots, to automate and optimize their activities. But the key to immediate productivity improvements is agri-tech, which combines all the data available from the sensors built into the tractors and tillers with weather forecasting data and each individual acre’s historical data and chemical makeup. This is a significant and frustrating challenge for the farming community, given that home-built analytics platforms leveraging Kafka data streaming; extract, transform and load tools; geospatial analytics; and machine learning are not on the local farmer’s roadmap.
Analytics can make a positive impact on farmers’ yield
IoT can be used to improve farming practices using analytics platforms, such as The Climate FieldView platform. Hardware devices directly in farming equipment — such as tractors, combines, liquid applicators, and planters — can capture machine and field data from IoT sensors as farmers traverse their fields. An analysis of sensor data combined with weather, geospatial and satellite data can identify the optimal yield scenarios for that farm or field. The key to agricultural success is not to hope for the perfect weather season but, instead, to ensure that all factors within a farmer’s control are optimized.
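As a toy version of that kind of analysis — with field names, units and thresholds invented purely for illustration, not taken from The Climate FieldView platform — a simple rule might join per-acre soil sensor readings with a rain forecast to flag the acres worth irrigating:

```python
def acres_to_irrigate(readings: list[dict], rain_forecast_mm: float,
                      moisture_threshold: float = 30.0) -> list[str]:
    """Flag acres whose soil moisture is below the threshold when
    little rain is forecast; with enough rain coming, irrigation
    can be skipped entirely, saving water and cost."""
    if rain_forecast_mm >= 10.0:
        return []
    return [r["acre"] for r in readings
            if r["soil_moisture_pct"] < moisture_threshold]

readings = [
    {"acre": "A1", "soil_moisture_pct": 22.5},
    {"acre": "A2", "soil_moisture_pct": 41.0},
    {"acre": "A3", "soil_moisture_pct": 28.0},
]
print(acres_to_irrigate(readings, rain_forecast_mm=2.0))
```

Real platforms replace the hard-coded threshold with models trained on historical yield, soil chemistry and geospatial data, but the shape of the decision — sensor data plus forecast in, per-acre action out — is the same.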
Feeding our world with valuable data from IoT
There are many opinions on the future of agriculture. Genetically modified organisms are a controversial topic but are considered by many corporate farming organizations to be the future of farming. Local farmers and most consumers do not agree. We want fresh and authentic blueberries in June and crisp, delicious apples paired with the beautiful autumn leaf season. Leveraging data to create the best possible circumstances for crop growth is imperative. The future of agricultural success is not dependent on genetics, but on analytics powered by IoT, provided to local farmers on their tablets or smartphones as they continue to feed the world.
There are no secrets to creating a smarter, safer and more comfortable connected building environment for employees, visitors and occupants. Smart technology deployed in strategic ways enables this type of environment, creating innovative, optimized, efficient and sustainable spaces that deliver connected experiences for the people who work there.
By establishing the desired outcomes, along with the “as is” baseline and the “to be” measurements such as key performance indicators, at the beginning of a project with input from all stakeholders involved, you can more easily choose building, business and vertical market systems that can help bring your vision to life.
Part of this process includes determining which technologies are needed to achieve this vision. With an outcome and metric-driven approach, all decisions remain focused on the end goal. This better enables an on-schedule and on-budget project that delivers measurable outcomes on day one.
Facilities that have existing technology in place can achieve goals by strategically integrating previously siloed building systems such as HVAC, security, lighting and building automation systems. For example, if your overall goal is to reduce energy use and increase operational efficiency, security, lighting and HVAC systems can be integrated with the building automation system to help control energy spend. When an occupant badges into their building, the security management system communicates with the building automation system, automatically enabling HVAC and lighting within the appropriate zone. When the occupant badges out of the building, the appropriate lighting and HVAC will shut down.
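Reduced to its essentials, the badge-in/badge-out integration above is an event handler: the security management system publishes badge events, and a small piece of glue logic maps each event to commands for the occupant's zone. The command strings, zone mapping and event fields below are illustrative assumptions, not any vendor's protocol:

```python
def handle_badge_event(event: dict, zone_map: dict) -> list[str]:
    """Translate a security-system badge event into building
    automation commands for the occupant's zone: enable HVAC and
    lighting on badge-in, set back and switch off on badge-out."""
    zone = zone_map.get(event["badge_id"])
    if zone is None:
        return []  # unknown badge: no automation action
    if event["direction"] == "in":
        return [f"hvac/{zone}/occupied", f"lighting/{zone}/on"]
    return [f"hvac/{zone}/setback", f"lighting/{zone}/off"]

zones = {"badge-042": "zone-3"}
print(handle_badge_event({"badge_id": "badge-042", "direction": "in"}, zones))
```

In a real deployment this mapping lives in the building automation system or an integration layer on top of it, and the commands travel over a building protocol such as BACnet rather than plain strings, but the occupancy-driven logic is the same.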
When designing a new building — or updating one with outdated systems — intelligent technology can be integrated to lay the groundwork for future improvements. For example, connected lighting provides a platform for future smart integrations, such as incorporating video surveillance or advanced analytics.
Another benefit to systems integrations is an increase in data supplied by your building’s main systems. Building leaders and management can analyze information such as occupancy flow or peak hours of operation to optimize their building’s processes for improved efficiency and comfort. Analytics stemming from technology integration enables managers to make informed business decisions that provide long-term impact.
Whether you are embarking on a new construction project, retrofitting an existing building or refreshing technology, creating a smart facility is possible. Simply focus on the ultimate function of the building and determine how you will measure success, then begin the process of determining how it’s built.
By integrating building and business systems onto a unified, intelligent infrastructure from the beginning, you can reduce lifecycle costs and operate more efficiently and sustainably, all while achieving the desired measurable outcomes for the occupants within. With the right plan, partners and technologies, any building can be smart.
A device management system supports features such as remote provisioning, software upgrades, command and control, configuration updates, state and health monitoring, diagnostics and debug.
In IoT and connected device systems, remote device management from a server or the cloud is one of the most critical features, but often also one of the most sidelined. Because the device management solution does not result in a tangible or visible feature of the system, product management often ignores it during system development. Later, when devices are rolled out in the field, the importance of device management becomes clear. For example, without a device management system in place, a critical software fix requires complicated manual procedures to upgrade the software on field-deployed IoT endpoints.
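With an agent in place, that same fix becomes a routine check-in rather than a manual procedure. The sketch below illustrates the idea; the server state shape, device IDs and version strings are all hypothetical:

```python
# Hypothetical device-side agent step: check in with the management
# server and apply any pending software update it has queued.

CURRENT_VERSION = "1.4.2"

def check_in(server_state, device_id):
    """Return the update the server has queued for this device, if any."""
    pending = server_state.get("pending_updates", {})
    return pending.get(device_id)

def apply_update(update):
    # Real firmware would verify a signature, write to a spare
    # partition and reboot; here we just adopt the new version string.
    return update["version"]

# Simulated server state: one device has an update waiting.
server_state = {"pending_updates": {"device-17": {"version": "1.5.0"}}}

update = check_in(server_state, "device-17")
version = apply_update(update) if update else CURRENT_VERSION
print(version)  # 1.5.0
```

Devices without a queued update simply keep running their current version, so the same loop serves both the routine case and the urgent fix.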
Autonomous mobile robots (AMRs) are increasingly internet connected. Indoor AMRs have access to Wi-Fi; outdoor AMRs have access to cellular. In some cases, connectivity is restricted to an on-premises LAN for additional security. Invariably, AMRs are connected.
One can look at connected AMRs as connected IoT devices on wheels, autonomously navigating the field they are deployed in. They need to be remotely managed. Typical device management features are required; however, as a result of the robot's autonomous mobility, there are some unique challenges and use cases that require specific add-on features for remote management. The key question that drives remote management solutions is: how does one remotely provision, diagnose and deliver software updates to AMRs in the field?
Similar requirements hold true for industrial robots with robotic arms. They have increasing numbers of sensors, multi-CPU configurations and connectivity. Although their motion is restricted to the configuration space surrounding them, there are a vast number of aspects that need to be remotely managed, including states, sensors, power or battery health, algorithms, software and firmware.
This article dives into typical requirements for remote device management of AMRs and industrial robots, and serves as a guide for product management teams of existing and new robot OEMs and software solution providers for AMRs and industrial robotics.
Key functional features that admins should remember for remote management of robots are:
- Provisioning and authentication. This involves robot on-boarding, registration and authentication with the remote management server.
- Command and control. Features such as factory reset or reboot.
- Configuration updates. Sending offline-generated revised site maps, making dynamic map edits specific to a site and updating other configuration parameters, which get fine-tuned for better operation over time.
- Monitoring and diagnostics. These allow log files to be captured on the robot and uploaded to the remote management server, along with real-time streaming of sensor data, algorithm data, event data and error logs for state monitoring and diagnostics. All of this data needs associated timestamps.
- Software updates and upgrades. In robots there are various host and secondary CPUs, connectivity modules, robot applications and update agents. All need to be updated for bug fixes, new features and more. Software updates and upgrades also include rollbacks or downgrades when a new update has created issues. Updates need to be done for various components, including the robot host CPU application, the agents which perform the updates, robot operating system (ROS) middleware, the host OS, microcontroller (MCU) firmware, Wi-Fi or cellular modules, computer vision with AI and machine learning models, and containers.
- Multi-tenant remote robot management system server with role-based access control. Manage a fleet of robots based on various roles and permissions set in the system. Fleets of robots themselves are organized in various groups with different permissions.
- Fleet management. The remote management features need to be provided for a group of robots, a group within a fleet of robots and multi-fleet management.
- Robot off-boarding, de-registration and disabling.
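The functional features above boil down to an on-robot agent that maps server messages to handlers. A minimal sketch, with a hypothetical message format and handler set:

```python
# Hypothetical remote-management agent: dispatch server commands
# (command and control, configuration updates, off-boarding) to handlers.

robot = {"registered": True, "config": {"max_speed": 1.0}, "rebooted": False}

def handle_reboot(params):
    robot["rebooted"] = True      # real agent would trigger a reboot
    return "ok"

def handle_config_update(params):
    robot["config"].update(params)  # e.g. a fine-tuned site parameter
    return "ok"

def handle_deregister(params):
    robot["registered"] = False   # off-boarding / disable
    return "ok"

HANDLERS = {
    "reboot": handle_reboot,
    "config_update": handle_config_update,
    "deregister": handle_deregister,
}

def dispatch(message):
    handler = HANDLERS.get(message["type"])
    if handler is None:
        return "unsupported"
    return handler(message.get("params", {}))

dispatch({"type": "config_update", "params": {"max_speed": 1.5}})
print(robot["config"]["max_speed"])  # 1.5
```

New features such as map pushes or log capture then become additional entries in the dispatch table rather than changes to the agent's core loop.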
In addition to the above functional features, there are many that are not part of the core functionality but instead focus on security, adaptivity, analytics, location, scheduling, monitoring, data storage and log data.
- Security. Security is the underlying theme for all the features. Without secure authentication and provisioning and a secure data exchange mechanism, there is just no way an end customer will trust a remote robot management solution.
- Adaptivity to connection quality. One specific issue which often arises for AMRs and industrial robots connected over wireless is the unpredictable quality of the wireless connection. The Wi-Fi or cellular link may not be reliable, signal strength will vary across the environment, and data consumption costs can be high over cellular. The remote robot management solution needs to handle these conditions as a managed service would, throttling data and retrying when there are connectivity issues. Software updates need to happen when there is minimal business impact.
- Analytics. Data from a fleet or an individual robot is analyzed based on data streamed to and stored on servers.
- Locate and search robots. Find a robot by its ID, customer, site, state or other attributes, and view its information, state, configuration and data.
- Schedule over-the-air jobs. Schedule jobs as campaigns and view the status of each job.
- Monitor various API calls.
- Visualize insights and data.
- Easy to navigate, responsive GUI.
- Playback of captured log and diagnostics data. In robot simulation environments, storing data in rosbag files — rosbag is a file format supported by ROS — allows data to be played back into ROS simulations with correct timestamps.
- Metadata-based search and remote management.
- Map upload, combination and push back. AMRs in the field may upload separate maps to the server after mapping operations; the maps are combined on the server and then pushed back to the robots.
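The adaptivity-to-connection-quality requirement above usually comes down to retrying transfers with exponential backoff so a flaky wireless link doesn't drop data. A sketch, with the `send` callable standing in for whatever transport the management agent actually uses:

```python
import time

def upload_with_backoff(send, payload, max_retries=5, base_delay=0.01):
    """Retry `send` with exponential backoff; `send` raises on failure."""
    for attempt in range(max_retries):
        try:
            return send(payload)
        except ConnectionError:
            # Back off 0.01s, 0.02s, 0.04s, ... before the next attempt.
            time.sleep(base_delay * (2 ** attempt))
    raise ConnectionError(f"link still down after {max_retries} retries")

# Simulated wireless link that fails twice, then succeeds.
attempts = {"n": 0}
def flaky_send(payload):
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("wireless dropout")
    return "ack"

result = upload_with_backoff(flaky_send, b"telemetry")
print(result)  # "ack", on the third attempt
```

A production agent would add jitter, a rate cap for cellular cost control, and a queue so that updates wait for windows of minimal business impact, but the backoff loop is the core of it.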
Associated with each robot's data are various types of metadata, including current location, site, customer and robot ID. Use this metadata to quickly search for AMRs and then perform remote management.
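Metadata search over a fleet is essentially attribute filtering. A sketch with a hypothetical in-memory registry (real systems would query a database, but the shape of the query is the same):

```python
# Hypothetical fleet registry: find robots by metadata such as
# customer, site and state before applying a remote-management action.

fleet = [
    {"id": "amr-01", "customer": "acme", "site": "warehouse-a", "state": "idle"},
    {"id": "amr-02", "customer": "acme", "site": "warehouse-b", "state": "error"},
    {"id": "amr-03", "customer": "globex", "site": "plant-1", "state": "idle"},
]

def find_robots(fleet, **criteria):
    """Return robots whose metadata matches every given attribute."""
    return [r for r in fleet
            if all(r.get(k) == v for k, v in criteria.items())]

# e.g. all of one customer's robots currently in an error state
errored = find_robots(fleet, customer="acme", state="error")
print([r["id"] for r in errored])  # ['amr-02']
```

The result set then becomes the target group for a remote action such as pulling logs or scheduling an update.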
Importance of interfacing to middleware
Remote management systems will invariably involve an agent which resides on the robot, typically on the host CPU. The agent should provide a flexible handshake with the robot application for all features of remote management.
ROS is a very popular middleware in robotics. ROS 1 is, and will remain, the de facto standard for a while, but ROS 2 is gaining adoption. Both provide publish/subscribe topics and services for command and control, over which the remote management agent needs to interface with the robot's primary application. There are robots in the field, and organizations developing new robots, that don't use ROS, so the ability to interface with a robot's native application is important. ROS provides a bridge feature to handshake between a non-ROS application and a ROS application.
Remote debugging AMRs and robotic arms in the field
AMRs and robotic arms’ motion is possible as a result of sensors and algorithms that utilize the raw sensor data. Sensors are susceptible to noise, intrinsic randomness and statistical variations.
Operators monitor live sensor data from AMRs, view it in the cloud or on a remote server, and record live data on the server side based on time parameters: for example, start capturing a selected set of sensors for the next 10 minutes, or show the last 10 minutes of chosen sensor and algorithm data.
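That "last 10 minutes" view is just a timestamp-window filter over recorded samples. A sketch with hypothetical sample records (timestamps in seconds):

```python
# Filter timestamped telemetry to a trailing time window, e.g.
# "show the last 10 minutes of a chosen sensor". Records are hypothetical.

samples = [
    {"t": 0,   "sensor": "odometry", "value": 0.0},
    {"t": 300, "sensor": "odometry", "value": 12.5},
    {"t": 650, "sensor": "pose",     "value": 3.1},
    {"t": 700, "sensor": "odometry", "value": 14.0},
]

def last_window(samples, sensor, now, window_s):
    """Return samples for `sensor` with timestamps in (now - window_s, now]."""
    return [s for s in samples
            if s["sensor"] == sensor and now - window_s < s["t"] <= now]

# Last 10 minutes (600 s) of odometry, as of t = 700.
recent = last_window(samples, "odometry", now=700, window_s=600)
print([s["t"] for s in recent])  # [300, 700]
```

This is also why every streamed record needs a timestamp attached at the source: without it, neither server-side windowed queries nor later playback line up correctly.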
The sensor data can include point clouds from depth cameras, GPS, estimated pose, dynamic linear and angular velocities, accelerations, various events, various motor parameters, live video streams from various cameras and odometry. Many of these get categorized under telemetry.
The same holds true for various algorithms that are continuously generating states and processed outputs. Algorithms for AMRs include localization, mapping, SLAM, computer vision and AI. Algorithms for industrial robotic arms include motion planning, inverse kinematics, computer vision, sensor fusion and more.
Challenges with streaming large raw sensor or algorithm processed data to a remote server
Bandwidth limitations and usage costs can be a problem, especially when streaming large amounts of raw sensor or processed data originating from lasers, depth and vision cameras, other sensors and algorithm-processed outputs. Capturing data on robots for later upload to a server can be a better approach. However, when a technician is remotely debugging robots, live streaming is unavoidable.
Some end customers are not comfortable with public cloud connectivity. On-premises data streaming, with features along similar lines to a cloud-deployed system, has high value.
Offline robots due to connectivity loss
Connectivity may not be available or possible in areas where industrial robots and AMRs are deployed. For example, AMRs can get deployed in remote sites — such as mining fields — where there is no cellular connectivity, and satellite connectivity is costly. Connectivity loss should not result in robot malfunction, and remote management should remain active, but in a different form. It is essential to support capturing logs, streaming diagnostics and algorithm data from the field robot, and delivering software updates via USB or local Ethernet LAN as part of remote management. Data captured on the robot can be pushed to the management server at a later point. There are limits to how much data can be stored locally on the robot, but maintaining even the last 10 seconds of data can make a big difference in diagnosing what went wrong when bad situations arise, such as when an AMR crashes or an industrial robot fails to pick and place.
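Keeping "even the last 10 seconds" under a local storage cap is naturally a bounded ring buffer. A sketch using Python's `collections.deque`; the window length and record shape are assumptions:

```python
from collections import deque

class BlackBox:
    """Keep only the most recent `window_s` seconds of timestamped records."""

    def __init__(self, window_s=10):
        self.window_s = window_s
        self.records = deque()

    def append(self, t, record):
        self.records.append((t, record))
        # Evict anything older than the trailing window.
        while self.records and self.records[0][0] < t - self.window_s:
            self.records.popleft()

    def dump(self):
        """Data to upload to the server once connectivity returns."""
        return list(self.records)

box = BlackBox(window_s=10)
for t in range(25):                 # 25 seconds of 1 Hz samples
    box.append(t, {"pose": t * 0.1})
print(len(box.dump()))              # 11: only t = 14..24 survive
```

Eviction on append keeps memory bounded regardless of how long the robot runs offline, and `dump()` gives exactly the window worth uploading, or handing to a crash investigator, when the link comes back.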
Black box for AMRs
With the large amount of data that can get captured locally for logging, diagnostics and later upload, there is an overall need for a black box similar to flight recorder black boxes. Cost is a large factor in having an independent black box which can survive crashes and, upon a crash, transmit beacon signals at periodic intervals to receivers used for remote monitoring. This will become increasingly important.
Remote management 24/7
Robots run 24/7 in the field, in indoor warehouses and outdoor environments. Having a strong remote robot management solution, combined with 24/7 managed services, gives peace of mind to the customer who has deployed a fleet of AMRs. The criticality of managed services should not be underestimated.
This article focused on priority features; there are other, non-functional features not covered here.
Do you remember that time when you bought an SUV, renowned for its auto safety, and then decided that there was no need to wear a seatbelt? Or that time you decided not to install a smoke alarm because there was already a fire extinguisher in the house?
These don’t sound like familiar scenarios because they’re impractical. When it comes to keeping the people and things we care about safe, we apply a range of safeguards against common threats. Where the impact of loss would be high, we avoid a single point of failure. In our personal lives, we commonly incorporate layers of protection against loss from house fires, automobile accidents and theft.
Whether protecting a home or a business, multiple and varied layers of protection work together to form a strong barrier between the things we care about, and the threats that endanger them. The practice of applying a variety of overlapping layers of defense to guard against and respond to a threat is what military strategists and information security experts alike call defense-in-depth. Layered defenses in security force would-be attackers to overcome multiple challenges to gain access to their target.
Defense-in-depth, a tried and true security approach
The effectiveness of any system’s defense-in-depth is about the power of and. To protect your home against loss from theft, you have a front door and a deadbolt lock and a burglar alarm and an insurance policy. You might even have a dog or motion-sensing lights. The burglar alarm’s alerts — or your barking pet — protect your home even if an intruder isn’t deterred by your lights coming on and manages to break through the deadbolt lock.
Similarly, in the corporate world, to protect data you can apply identity and access management controls to define who can access what parts of a network and information rights management to control access to sensitive files and documents and threat protection protocols to detect and investigate breaches, compromised identities and other malicious activities.
Despite being a historically proven security practice for both physical and information security, defense-in-depth and other basic security best practices are not commonly applied. There are countless companies making the same misstep. The majority of IoT devices continue to be based on simple processors that are — at best — retrofitted with a smattering of security measures that might give buyers peace of mind, but do little to slow attackers. This is accumulating risk, one unsecured chip at a time.
Connected devices continue to pose security risks
Each connected device that we bring into our home or workspace has the potential to open new attack surfaces and introduce vulnerabilities in increasingly personal areas of our lives. The devices and infrastructure we rely on to keep us safe will become increasingly connected: the carbon monoxide monitor that detects and alerts to poison in the air; remote vital sign monitoring devices that help medical providers keep tabs on patient health; and the home security systems used to monitor and alert homeowners to suspicious activity. Connected versions of these products are already on the market today.
The expanding volume of connected devices coupled with weekly headlines about hacked IoT devices is spiking corporate and consumer interest in more reliable device security. And while there are several nascent industry and regulatory efforts to drive common security standards for connected devices, there is not a unified understanding of the risk and how to solve for it. This leaves each manufacturer or vendor to choose their own approach. An IoT product might market long lists of security features, but — in most cases — the addition of multiple security features does not equate to defense-in-depth.
With most existing IoT devices failing to apply even basic principles of security, consumers, businesses and modern society are vulnerable to the increasing menace of cyberthreats. In the meantime, you can still take responsible steps to ensure that the products you bring to market are built on the foundations of defense-in-depth.
Device security for mass market deployments
In the early proof-of-concept stages of device design, you might not be thinking about holistic device security. But when shipping at scale, defense-in-depth security is a requirement that protects your customers and your bottom line. Developing a security practice to counter attackers requires an approach that accounts for an evolving threat landscape and is built on secure-by-design platforms. By leveraging the best practices of the brightest security minds, along with a variety of thoughtfully layered security mechanisms, you increase the chances that a connected device is hardened against the likely event of a breach.
IoT security mindset. A defense-in-depth device security practice starts with the mindset that every connected device, no matter its application or type, is built to defend against a comprehensive array of threats. Just because a connected device manages only the temperature of the water in a fish tank doesn’t mean its security approach should be any less intentional than that of a connected insulin pump that manages the release of insulin to a diabetic patient. In both scenarios, a device breach could not only lead to tampering with the device’s intended function, but also give access to the larger system it’s connected to. When you operate with the assumption that parts of a connected device will be breached, whether it’s a national security risk or just a rain monitor, it’s more likely that second, third and fourth layers of protection are developed to deter, challenge and confound an attacker that has found entry, and to minimize the impact of a breach.
Device security strategy. Identifying the likely attack path based on physical location, communication channels and device capabilities are key components of device security strategy. You might be able to do this yourself if you can build a team of experts with deep knowledge of the industry, dedicated bandwidth for 24/7 vulnerability monitoring and the ability to quickly code and release urgent security updates. However, the ongoing security talent shortage makes that approach almost impossible to implement and sustain for most organizations. An effective shortcut to a comprehensive security strategy is to build on secure-by-design platforms. Use of secured platforms helps to ensure that the critical security of your device is backed by the deep knowledge and expertise of an ecosystem of security professionals, and reliably developed, monitored and updated for threats with ongoing security improvements.
Device security mechanisms. An effective device security action plan includes multiple, interlocking layers of security features and techniques to address each potential threat. Just like defending a home against a thief might require a door and a lock and an alarm, we must apply a variety of defense types when securing a connected device. Using a combination of layered hardware and layered software and secure communication ensures a diverse and multilayered mix of security tactics for IoT devices.
Just as a list of many security features does not equal defense-in-depth, security features that aren’t activated by default can give a false sense of security for manufacturers and product developers. When security features are built in and activated by default, the opportunity for human error is minimized. You can help your team do the right thing by choosing solutions that make development of a secure product easy and automatic.
Now is the time to secure connected devices with the rigor of defense-in-depth. In the IoT era when devices are embedded in the fabric of our lives; when they entertain, inform and even protect our physical wellbeing, applying the power of and to device security is a proven path to securing the future of IoT.
IoT defense-in-depth characteristics
There are many published perspectives on IoT security. They range from baseline security requirements to prescriptive operational models. Regardless of the security principle, model or platform vendor your organization ultimately elects to adopt, this basic framework serves as a primer of defense-in-depth security as it relates to IoT.
As you upgrade your security practice, ask yourself and your team this: Do you follow a defense-in-depth approach or one with limited protections?
Connected robotic process automation is delivering huge productivity and efficiency gains across every global industry.
As it evolves, it will drive eye-widening levels of transformation. As connected robotic process automation’s (RPA) capabilities are enhanced, it will become faster and better, enabling organizations to innovate and keep pace with ever-changing technology shifts, market demands and threats from competitors. Ultimately, connected RPA will shape the future of work by enabling those who embrace an automated, digital workforce to do the things that only they can do.
Evolution with smart integration
Connected RPA is compelling as a transformational agent because it’s quick to implement and non-invasive to existing systems. Its digital workers read the user interface to interoperate with and orchestrate any third-party application, just as a user would. The universal enterprise integration capabilities built into connected RPA’s digital workers also enable them to consume intelligent automation skills through a broad ecosystem of complementary technologies, so that they can adapt and learn from humans or other systems.
Connected RPA gives business users who understand their organization’s operations the ability to simply create automated processes in a Visio-like designer, which are then used by an easy-to-control digital worker — using the same applications — to manage a task. These users can also innovate, creating new products or services by easily accessing leading-edge cloud, AI, cognitive and other capabilities through drag-and-drop intelligent automation skills from a digital exchange marketplace.
The digital exchange is a shop window for new and emerging technologies. It is a platform that puts powerful, pre-built connected RPA and AI capabilities into the hands of business leaders in the form of downloadable integrations and visual business objects. These assets connect and integrate digital workers, existing systems and processes to technology partners, creating a solid foundation of AI-enabled intelligent automation that’s scalable and sustainable.
The digital exchange works like an app store for intelligent automation, comprising hundreds of applications available for quick download to customers, resellers and technology partners. Companies like ABBYY, Appian, Google and others have collaborated to collectively post hundreds of assets, enabling these partners to bring their capabilities to numerous industries via the digital exchange.
Evolving capabilities of connected RPA
The digital exchange currently offers more than 150 Google APIs, so users can add intelligent automation capabilities to their digital workers on the fly. Integrating with emerging cognitive technologies enables connected RPA’s digital workers to emulate these more human-like skills:
Knowledge and insight. This is the ability of digital workers to harvest information from several data sources, understand it and deliver previously unattainable insights, which enables organizations to:
- Deploy natural language processing
- Gain new insights into customer behavior
- Leverage real-time analytics
- Report metrics
- Mine data for better understanding of processes
- Use data management to quickly deploy new programs
Visual perception. This is the ability to read, understand and contextualize visual information digitally, enabling organizations to:
- Leverage optical character recognition so digital workers can work with text just like humans
- Use natural language processing to allow digital workers to understand and interpret human language
- Instantly analyze and understand the meaning of digital images via computer vision
Learning. This is the ability to derive contextual meaning from datasets, as well as recognize process and workflow changes and adapt accordingly without human intervention, which enables organizations to:
- Leverage true machine learning to give digital workers the ability to learn without being programmed
- Prepare for the future as digital workers process information with a neural network paradigm
- Enable digital workers to model algorithms quickly
Planning and sequencing. This is the ability to optimally plan workflow and workload execution to deliver the best outcomes, which helps organizations to:
- Enable digital workers to instantly and intelligently manage workloads
- Let digital workers auto-scale as needed by business conditions
- Use automatic process mining to analyze business processes based on event logs
Problem solving. This is the ability to solve logic, business and system problems without intervention, enabling organizations to:
- Use automatic problem detection to ensure the highest levels of service
- Possess problem solving ability to increase productivity throughout all processes
- Achieve digital worker-enabled visualization to gain insight from data
Collaboration. This is the ability to communicate and complete tasks with people, systems and other digital workers, which enables organizations to:
- Use digital workers to reduce time to service customers and improve overall quality
- Empower employees to work with digital workers to elevate their roles and increase contributions
- Deploy chatbots to work with digital workers to autonomously service customers and escalate to humans when needed
Connected RPA adoption is being driven not just by the promise of greater cost savings and operational efficiencies, but many other criteria too. Key drivers include value creation, productivity increases, improving the customer experience, generating innovative opportunities and gaining more value from staff. Other notable outcomes include achieving higher quality operations, greater workforce agility and more actionable data for customer insights.
In fact, the different ways organizations are using connected RPA are remarkable and groundbreaking. Not only is this technology being used as a catalyst for organizations to enhance business operations, but also to reinvent themselves, making them highly competitive within the markets they serve.
Moving forward, organizations that employ connected RPA with AI and cognitive technologies will see this as providing a true foundation for enabling collaborative technology innovation to deliver real transformational change across their businesses. In fact, having both human and digital workers operating together, while seamlessly interacting with existing and new applications, will provide the foundation of the future workforce.