In the age of digital transformation, all executives should be asking themselves, “How will our business be disrupted?” Some businesses are taking a wait-and-see approach, often because they aren’t sure how digital technologies like IoT apply to them. This conservative approach is understandable: in the Industrial Revolution era, innovation typically followed product-based business models built on incremental innovation, adding new features and capabilities to products.
And in the past, value creation (profit) was a function of economies of industrial scale: mass production and the high efficiency of repeatable tasks. This old business model banks on customer loyalty and the assumption that customers will keep buying more products. The problem is that products are quickly being commoditized. In the new digital era, incremental innovation falls short and price becomes the only competitive advantage, leaving unprepared companies to play catch-up as they react to digital transformation emerging in their industry.
Today, the most innovative companies realize to stay competitive they must shift mindsets by looking at entirely new business and revenue models with connected devices and product lines. This means reimagining, reinventing and evolving how their businesses operate to develop new value creation. Leaders must innovate revenue models by integrating exponentially advancing digital technology to drive new revenue streams.
And, to ensure value creation with connected devices, IoT leaders need to make holistic, integrated customer service a top priority at every point along the customer lifecycle, from pre- to post-purchase. This positions customer service as a key differentiator that can drive enhanced revenue and profitability. In fact, with over 20 billion connected things expected by 2020, connected products not only enable proactive rather than reactive service, but also provide some of the best opportunities to create new revenue streams and business models.
To do this, business leaders must clearly understand the impact of integrating IoT data from connected devices with customer service applications. Combining these new technologies with customer service creates value by enabling the development of new business models and revenue streams.
In a connected world, products can add new revenue streams after the initial product sale, including value-added services, subscriptions and apps, which can easily exceed the initial purchase price. The result? A shift in profit model paradigms: rather than simply selling more products to the same customer, enabling a recurring revenue stream with existing customers can be much more lucrative. Businesses can not only vastly increase the size of the business, but also stop their competition in its tracks.
Take Samson Rope, for example. This 140-year-old company supplies rope for marine, mining, forestry and rescue. It is implementing a high-tech rope threaded with IoT sensors to monitor its condition to know when it needs replacing. Samson services 8,000 lines of rope throughout the life of the product. And according to the company’s director of IT, Dean Haverstraw, combining IoT and field service capabilities will help the company identify when maintenance is needed, creating entirely new revenue streams for their business.
This type of proactive service enables brands to build enhanced trust and loyalty with customers. How might this work? Consider a product that sells 300,000 units per year at $1,000 each. At just 1% of the purchase price per month in service revenue, this product could generate $36 million of high-margin, recurring revenue annually. Integrating IoT and field service with customer service provides a clear path to move customer service out of the old paradigm of cost center and into that of profit center.
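The arithmetic behind that figure is easy to verify. A short calculation, using only the article's illustrative numbers (not data from any real product):

```python
# Recurring-revenue math for the worked example above.
# All figures are the article's hypothetical numbers.
units_per_year = 300_000
unit_price = 1_000            # dollars per unit
service_rate = 0.01           # 1% of unit price, charged per month

monthly_service_fee = unit_price * service_rate          # $10 per unit per month
annual_recurring = units_per_year * monthly_service_fee * 12

print(f"${annual_recurring:,.0f} per year")  # prints: $36,000,000 per year
```

A 1% monthly service fee on a $1,000 product is only $10 per unit, yet across 300,000 units it compounds into the $36 million figure cited above.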
The takeaway? It’s not a matter of if, but when your industry and business model will be disrupted by new, service-oriented, recurring-revenue digital business models. The next question to ask yourself is, “Will you sit back and be disrupted, or be the disruptor in your industry?”
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
Supply chain automation has been the primary focus of many companies for well over a decade. While the production line, storage protocols and loading procedures have advanced at lightning speeds, the decision-making process has remained more or less the same.
A small number of issues are identified that can be handled by preset procedures at the ground level. Anything that happens outside this set is passed up the chain of command to be analyzed and dealt with. The time lost in conveying this information, even with IoT devices in place, waiting for a response and then acting on that response, eats up considerable chunks of otherwise productive time.
Logistics is a critical supply chain component of the manufacturing industry. In India, road transport constitutes the bulk of the logistics chain, and its optimal, efficient utilization governs supply chain profit margins for many companies. On-time delivery also helps with customer satisfaction and retention. To control and improve the utilization of these mobile assets, it’s imperative to monitor and manage their movement and performance.
Reports linked to GPS-based location tracking are useful for post-mortem analysis of delivery performance; however, the data flows in only after the mistake or issue has already taken place. Developing action points after the fact is reactive and detrimental to the business in the long run.
Why does this happen despite the readily available technology and mountains of processed data at your disposal? There are several reasons.
First, the lack of coherent data. Several parameters are monitored separately under separate reports. It’s difficult to relate them to each other to arrive at a concrete conclusion. Normally, individual truck data is available, but cumulative data clustered at a logical hierarchical level is not.
Second, the absence of advanced analytical tools. There is a limit to the manual analysis that can be performed on a spreadsheet.
Third, the presence of functional silos with conflicting objectives in the same organization. For example, the dispatch team focuses on volumes, but the safety team focuses on incident-free delivery. Add to these the challenge of maintaining multiple service providers across different functions and a basic understanding of the problem starts to form.
A control tower addresses all these problems.
Imagine all your data points being captured coherently and analyzed against your entire database. You can determine the best transporter in a plant or the worst-performing plant in a region without having to muddle through piles of reports. Imagine advanced analytics providing you with live, definite and measurable action points. The system identifies actions that can be taken to rectify mistakes on the fly. While doing this, it keeps an eye on every other aspect it’s plugged into and makes sure they run smoothly. The centralized nature of this system means that all relevant stakeholders use a single tool to measure and meet their key performance indicators while keeping the focus on the overall organizational objective.
The control tower relays automatic warnings to the relevant process or task owner before a performance parameter is breached. The suggested action helps stem the issue before it gets out of hand. Measurements of user response time, the effect of the response and the timeframe to implement it are used to further enhance the system. Ultimately, this helps improve performance, set benchmarks and fine-tune your logistics chain into a well-oiled machine.
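At its core, the early-warning behavior described here is a threshold check against live telemetry: warn when a parameter approaches its limit, escalate when it is breached. A minimal sketch, assuming hypothetical parameter names and thresholds rather than any particular control-tower product:

```python
# Sketch of a control-tower early warning: flag a parameter that is
# trending toward its limit before it is actually breached.
# Parameter names and limits below are hypothetical examples.

def check_parameter(name, value, limit, warn_fraction=0.9):
    """Return an alert string, or None if the parameter is healthy."""
    if value >= limit:
        return f"BREACH: {name} = {value} (limit {limit})"
    if value >= warn_fraction * limit:
        return f"WARNING: {name} = {value} approaching limit {limit}"
    return None

# Example telemetry for one truck: 46 of 48 allowed transit hours used.
alerts = [a for a in (
    check_parameter("transit_hours", 46, 48),
    check_parameter("idle_minutes", 20, 60),
) if a]
print(alerts)  # ['WARNING: transit_hours = 46 approaching limit 48']
```

The same loop, run continuously over every monitored parameter, is what lets the system notify the task owner while there is still time to act.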
Not too long ago, IoT meant something completely different than it does today. Machine-to-machine used to be the preferred vernacular for connected devices, which relied on modest cellular connectivity to maintain equipment and ensure proper functionality. Fast-forward to 2019, and IoT is playing an ever-greater role across U.S. industries. With use cases growing in every vertical, we are entering an era of “massive IoT,” which is conceptually similar to the IoT that preceded it, but requires much more wireless capacity.
Consider the popularity of smart cities and the amount of connectivity required just to monitor vehicular traffic. To obtain an accurate understanding of traffic patterns and points of congestion, large volumes of data must be communicated back to a source frequently throughout the day. IoT use cases like this one will require support from both 5G and LTE networks to satisfy the requirements of a massive IoT world.
Achieving the needed capacity is easier said than done, and requires improvements in both in-building and outdoor networks. Many in the telecom industry believe that large-scale deployments of compact small cell radio access points in the regions lacking coverage will support the necessary growth of IoT and drive 5G implementation across the United States. While it’s true that small cells should have a substantial impact on 5G networks, as well as LTE, there will be other wireless hardware involved in building smart city infrastructure.
Smart infrastructure paves the way to smart city future
A decade ago, deploying extensive wireless connectivity systems in transit infrastructure wasn’t common. Now, any new venue built in major cities such as Los Angeles, Seattle or New York is expected to have strong cellular connectivity support. For example, a new double-decker tunnel running through downtown Seattle, the SR-99 tunnel, is being coined “the smartest tunnel ever built” and includes a high-powered distributed antenna system (DAS) and remote radio units to provide connectivity for all passengers and drivers across the two-mile stretch. It’s the development of these kinds of projects that will greatly assist the proliferation of IoT across an entire smart city. While 5G networks are likely to provide the capacity that future IoT deployments will need, 4G/LTE networks should also provide robust blanket connectivity for those use cases.
Massive IoT support will initially comprise a mix of network architectures
Many telecom industry analysts and key players see small cell deployments as a key factor in 5G growth. Small cells are comparatively miniature radio access points with antennas that transmit wireless data. Their size allows them to attach to poles every few blocks, rather than the miles that separate cellular towers, which makes them an excellent asset for boosting connectivity in dead zones that hinder blanket coverage around the United States and, in turn, supports IoT and smart city development. Parts of Asia, notably South Korea, are more advanced in their small cell rollouts, laying the blueprint for what the United States is beginning to implement.
However, there is a notable difference between places like South Korea and the United States that could affect how the U.S. leans on small cells for every application. South Korea has three main mobile carriers — Olleh, SK Telecom and LG U+ — that use considerably fewer bands to power their networks over a smaller area. In contrast, the U.S. has four main telecom providers that must provide capacity over much larger distances. Both scale and small cells’ current limitations create a unique challenge.
Currently, small cells can work with one band and one carrier. As a result, small cell deployments that want multi-carrier, multi-band support can become very expensive. They will predominantly be secured to poles, which can quickly become congested when networks require a system that supports the four major carriers and different bands for LTE and 5G. Tormod Larsen, CTO at ExteNet Systems, echoed this point in a recent article. At least initially, massive IoT will be built on a collection of different network setups, such as hybrid DAS, using a combination of active DAS, repeaters, small cells and even Wi-Fi to cost-effectively build networks capable of meeting diverse needs including indoor networks of all sizes. DAS has the advantage of supporting all highly used frequency bands and carriers in a single unit. While DAS is mostly for in-building connectivity solutions, outdoor DAS can be a preferred multi-carrier option over small cells to improve connectivity in preparation for massive IoT and smart cities.
With all four major carriers vying for the most impactful initial 5G deployments and further densification to support burgeoning IoT use cases, expect plenty of creative uses of wireless connectivity hardware. Upgrading existing networks and venues, coupled with adding substantial connectivity systems to new infrastructure projects, will create a gateway to a smart city future.
We’ve all heard how the internet of things is taking over the world, but what has its impact been on software development? For one thing, today’s product expectations are higher than ever. Users want products that are feature-rich, can be accessed remotely, are easy to upgrade and offer solid security. Take the medical device industry, for instance. There is a proliferation of wearable devices available now that help us monitor and understand patient behavior. Making sure these devices and the data being collected from them are secure can literally be a life-or-death task. This means software projects are becoming increasingly complex and require more expertise than ever. For some companies, that has led to outsourcing some or all of their software development.
So, what are some reasons you might outsource your software development? As programming becomes more complicated, it requires specialized skills. You may not have in-house expertise and it can be time-consuming, difficult or expensive to hire. Most companies tend to specialize in either cloud/mobile applications or embedded software. You might have embedded developers, but are working on a mobile application and they lack the skills necessary to execute the project. Or maybe you do have qualified developers in-house, but they are tied up working on other projects. Outsourcing is a great way to get the technical strength you need when you need it without adding to your headcount.
How do you find ‘the one’?
Once you’ve made the decision to outsource, how do you decide which software development company to choose? It’s important to remember that the quality of your software ultimately depends on the provider you hire. The first step is to determine exactly what you need and ensure any providers you consider have the applicable engineering expertise versus general software development experience. It takes a different skill set to develop a GUI than, say, a database.
Understanding the desired product features will allow you to determine what skill set your software provider needs to successfully complete the project. You should look for developers who have successfully built products at least as complex as yours. Similarly, if you’re looking for a quick prototype versus a production build, find a company with experience delivering that type of project.
Next, it’s important for a provider to understand not only the technical requirements involved in building your product, but also your business processes. A provider should be interested in what you are trying to accomplish, what problem your product is trying to solve and how their role impacts the overall project. If a company is focused on trying to fit your project into a cookie-cutter offering, that should be a red flag. A good developer will approach each new project from ground zero and build a truly custom system that meets the project’s objectives. Think of it as building a partnership versus “buying” software.
But it doesn’t end when the code is written. Always ask potential providers about QA and testing. Testing is a critical component of the software development lifecycle. Even the best programmers introduce bugs into their code. Developers that don’t have a rigorous, defined testing process in place cannot produce quality software.
Once you’ve found your ideal software development partner and you’ve signed the contract, now what? Like any relationship, there are some fundamentals that will determine its success.
Communication is key — and it’s a two-way street. Sharing project background, goals, objectives and a clear plan will help your software provider understand the big picture and determine the best approach to delivering a solution that fits your needs. On the other hand, your provider should give regular updates on progress, inform you of any schedule changes and be willing to discuss their processes with you. Open communication on both ends ensures the product you expect is the product they deliver.
Honesty is the best policy, so no budget hiding! Both parties need to be upfront about cost. Your budget will determine the approach your developer chooses and the final features of a product. Cutting corners at this stage can add technical debt to a project that will take five times longer to fix in subsequent stages. Be prepared to collaborate and compromise on the final deliverables and what you are willing to spend.
Outsourcing software can help a company innovate and grow. But it’s important to take the time to find the right partner for your organization. Look for a company that has the technical aptitude you need, cares about your business objectives and is open to honest communication. Finding a qualified, reliable software partner you can trust can be the start of a valuable relationship.
Edtech initiatives enable students to learn in entirely new ways, helping to shape the skilled workforce of the future. From virtual reality to drones to robotics, K-12 districts are investing in new, cutting-edge technology for education environments. Other than providing students with creative learning opportunities, what do all these technologies have in common? They rely on the network.
Often overshadowed by the latest and greatest gadget or device, the network is the singular engine responsible for powering and driving innovation in education. However, not all networks are created equal — many are not designed to support the new technologies found in the modern-day classroom, nor are they sufficiently automated to make life easier for a lean IT staff.
To keep pace with rising demands in education and to make infrastructure work smarter, IT leaders must optimize Wi-Fi with artificial intelligence and machine learning technology to support dynamic school settings, ensure IoT infrastructure is both secure and flexible, and prepare their network for the tech of tomorrow.
ABCs of AI and machine learning
Across industries, any conversation about innovative technology is bound to include artificial intelligence and machine learning. Leading organizations are turning to AI and machine learning to optimize resources and employee talent, reinforce company security, expedite innovation and improve their bottom line.
Education environments are no exception. From automating tasks like grading to creating personalized lesson plans, one cannot overstate the potential impact of AI and machine learning in schools. In fact, Technavio Research estimates AI use in the U.S. education sector will increase by 48% from 2018-2022. What does this look like in a real-world application? And what does it have to do with the network?
A typical campus has a gym, an auditorium and a cafeteria. In these settings, occupancy can surge from relatively empty to hundreds or even thousands of people during games, meals and other schoolwide activities. Having an AI-powered network that can automatically improve radio frequency efficiency and add capacity to meet bandwidth demand in precise locations will ensure that this connectivity remains consistent. Put simply, an AI-powered network allows school networks to scale up and down as needed, whether an event is happening or it’s a typical school day.
AI and machine learning will also play a critical role in network diagnostics as more schools complete the shift to digital. For example, many institutions are moving from printed textbooks to online curriculum to save money. If the network goes down, the school day is less productive, setting students back in their studies. Network connectivity is also paramount during more time-sensitive activities, such as state testing. If the network were to crash, student answers could be lost, they’d be unable to complete their exams and turmoil would ensue. In both instances, AI and machine learning technology can empower IT teams with greater network visibility through real-time analytics and automated diagnostics. In turn, they could quickly identify, troubleshoot and address the network deviation.
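One common form of the automated diagnostics described above is statistical anomaly detection on network metrics. A toy sketch that flags latency samples far from a baseline (the metric, window size and threshold are illustrative assumptions, not any vendor's algorithm):

```python
# Toy network anomaly detector: flag latency samples more than
# z_threshold standard deviations from the mean of a baseline window.
# Illustrative only; real systems use far richer models.
import statistics

def find_anomalies(samples, baseline_len=10, z_threshold=3.0):
    baseline = samples[:baseline_len]
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [
        (i, s) for i, s in enumerate(samples[baseline_len:], start=baseline_len)
        if abs(s - mean) > z_threshold * stdev
    ]

# Latency in ms: a stable baseline, then a spike mid-school-day.
latency = [20, 22, 21, 19, 20, 23, 21, 20, 22, 21, 20, 250, 21]
print(find_anomalies(latency))  # [(11, 250)]
```

The point is not the statistics but the automation: the deviation is surfaced to the IT team the moment it occurs, rather than after a classroom reports an outage.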
Reinforce IoT security without compromising flexibility
Gartner forecasted that 14.2 billion connected devices will be in use in 2019, and that the total will reach 25 billion by 2021. Yet, Gartner also predicted IoT security spend will top $3.1 billion by 2021. Why? With IoT growth come security challenges and vulnerabilities.
Most IoT devices are not manufactured with enterprise-grade security in mind and lack embedded security features such as antivirus and firewall capabilities. Additionally, if IoT technology isn’t segmented properly within a school’s network, breached devices can serve as a gateway to more sensitive areas of the network, similar to the casino fish-tank hack last year.
There are also compliance considerations. In education, schools must adhere to numerous standards, such as FERPA and CIPA. The fallout from a cyberattack could not only cripple a school’s functionality, but it could also jeopardize the private data stored on and off campus, and potentially expose the school to legal and financial repercussions.
In short, not only does a school network need to be able to support BYOD initiatives, in-classroom technology and the school’s administrative infrastructure, but it must also be highly secure. Unfortunately, many IT leaders lack the transparency and control required to protect their networks effectively. IBM reported it takes organizations an average of 191 days to even identify a data breach, let alone mitigate it. Now, more than ever, it’s important for IT leaders to have a 360-degree view of the network, including uses, devices and applications.
When incorporating new IoT technology, IT leaders should closely evaluate how the technology factors into the broader organization’s security strategy. Centralized, end-to-end network monitoring is critical for timely issue containment and remediation. Establishing network segmentation, applying individualized security profiles for each IoT device and aligning on a real-time analytics and security response protocol are just a few ways institutions can safeguard their technology and mitigate risk to the students and staff.
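The per-device segmentation policy described above can be pictured as a mapping from device class to a network segment plus an allowlist of reachable services. A deliberately simplified sketch (all segment names, VLAN IDs and services are hypothetical placeholders, not a real product's configuration):

```python
# Hypothetical segmentation policy: each IoT device class gets its own
# network segment and an allowlist of the services it may reach.
SEGMENT_POLICY = {
    "ip_camera":      {"vlan": 110, "allowed": ["video_storage"]},
    "hvac_sensor":    {"vlan": 120, "allowed": ["building_mgmt"]},
    "student_laptop": {"vlan": 200, "allowed": ["internet", "lms"]},
}

def may_connect(device_class, service):
    """Deny by default; permit only services on the device's allowlist."""
    policy = SEGMENT_POLICY.get(device_class)
    return policy is not None and service in policy["allowed"]

print(may_connect("ip_camera", "video_storage"))    # True
print(may_connect("ip_camera", "student_records"))  # False
```

The deny-by-default structure is the point: even if a fish-tank-style device is compromised, the policy gives it no path to student records or other sensitive segments.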
A network for the future
As technology continues to infiltrate the classroom, schools must also look at the impact and benefits for students and adjust their educational strategies accordingly. Laptops, tablets, drones and other IoT-based technologies can lead to a robust, engaging curriculum for students — as long as the network hosting these devices is up for the challenge.
K-12 school districts must begin future-proofing both their infrastructure and curricula by investing in smart networking technologies. Networks that are built with automation, real-time visibility and a standards-based architecture in mind will significantly reduce the strain on IT staff, and thereby reduce overhead. These resources can then be reallocated where they belong: to improve students’ futures.
Personalized health data is becoming more accessible to, and actionable for, patients and providers than ever before. The expansion of big tech’s presence into the healthcare space is an indicator of the viability of the data gathered from wearables and connected devices, as today’s consumers can look down at their phones and watches to view more than steps and calories burned in real time. Devices that once provided limited information now offer consumers richer feedback and data on cardiac care biometrics. Not only are cardiac events being measured more accurately with advanced sensors, but it is happening at home, at work and on the go, without visiting the doctor’s office.
These advancements lead to truly proactive care, enabling consumers to take charge of their health alongside their care provider before a larger issue manifests. When every second counts in cardiac care, this area stands to benefit immensely from such advancements. More specifically, monitoring potential or diagnosed atrial fibrillation (afib) is of key interest to technologists in the space. This potentially serious cardiac complication, which can increase the risk of stroke, is being tracked and managed for the first time with consumer wearables. Afib is an irregular and often rapid heart rate that occurs when the two upper chambers of the heart (atria) experience chaotic electrical signals, and it currently affects between 2.7 and 6.1 million Americans.
Having a wearable that can accurately track heart rate and alert users to the occurrence of afib is instrumental in preventive care and can save hundreds of thousands of dollars per patient in healthcare costs by addressing the medical concern before it results in stroke or cardiac arrest. Although these devices are still in the early stages of provider acceptance, identifying afib at home, remotely or without a prescription wasn’t feasible until they entered the market.
There is more to why these connected devices are still in the early adoption stage. It is a big jump for the medical community to go from highly regulated and prescribed devices to FDA-cleared, consumer-ready devices. The healthcare industry has already made this transition in blood pressure monitoring, temperature monitoring and other areas; however, cardiology is the new horizon that holds much promise. Part of the transition lies in clinical proof of accuracy. Afib detection requires extremely accurate data ideally over a period of time to properly track and identify the chronic heart condition.
The other challenges are data access and actionability. Connecting the patient and doctor is the first step. Yet, the data presented also has to be actionable and easily understood for both parties. Another component of these challenges is incorporating this feedback into the electronic medical record, the primary source of patient data for physicians. However, interoperability with externally acquired data continues to be a challenge in health IT.
At a time when some people with atrial fibrillation have no symptoms and are unaware of their condition until it’s discovered, often at a later stage, advanced tracking and detection is crucial for catching it early. With today’s new technologies, this is now possible for the first time. Technologists are exploring many form factors, platforms, gamification techniques and more to increase adoption of these new offerings, but all can agree that cardiac care, in particular afib, is a priority for development.
This article is the first in a three-part series outlining the medical conditions that stand to be most impacted by advancements in IoT sensors and wearables.
In semiconductor lingo, the term die attach simply means a die, or bare chip, is placed and attached with some form of adhesive in a package, like a ball-grid array. However, given the assembly and manufacturing demands of small IoT devices, die attach technology has emerged onto the printed circuit board (PCB) manufacturing floor. This means dies or bare chips are now being directly placed on substrates or small rigid-flex circuits used in small IoT devices. The reason for this is that the circuitry area of those miniature PCBs doesn’t have the luxury to allow bulky or traditional space-consuming device packaging using plastics, ceramics or glass.
What’s also important to know is that all die attachments for IoT devices aren’t the same. There are three different die attach methodologies, each one involving various key differences: epoxy, eutectic and solder attach.
The epoxy method can use a silver epoxy, silver glass or a polyimide-based material, dispensed through a very fine dispenser that distributes the epoxy precisely. The substrate sometimes needs to be heated, anywhere from room temperature up to 200 degrees Celsius, depending on the type of epoxy used. This temperature cures the epoxy so that it adheres to the substrate and precisely forms the joint between the substrate and the die.
Eutectic die attach uses a metal layer made of aluminum or gold. Gold-based eutectic bonding occurs at high temperatures, ranging from 230 to 400 degrees Celsius. A eutectic system is a mixture of chemical compounds or elements that forms a single composition and solidifies at a lower temperature than its constituent elements, which individually have considerably higher melting points. The fact that the eutectic temperature is lower and more manageable than those of the pure elements is what makes eutectic bonding practical.
The third die attach method is solder attach. It is similar to surface-mount technology (SMT) joint creation and is a common type of die bonding because of the superior thermal conductivity of the solder material itself. Depending on the methodology used, the die can be subjected to extreme temperature variation during the die attach process and in operation.
Solder attach is important for dissipating the heat generated by a power device: the solder bond itself efficiently conducts heat away from the joint. This method is also referred to as soft solder attach. Depending on the alloy used, the melting temperature is lower than in the other methodologies; the alloy can be tin lead, which melts at 150 degrees Celsius, or indium, which melts at 220 degrees Celsius.
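Since the main practical difference between the three methods is process temperature, a quick way to reason about them is a lookup table keyed by method. A sketch using the temperature figures given in this article (real values vary with the specific adhesive or alloy chosen):

```python
# Approximate process-temperature ranges for the three die attach
# methods, per the figures in this article (degrees Celsius).
# Actual values depend on the specific adhesive or alloy.
DIE_ATTACH_TEMPS_C = {
    "epoxy":    (25, 200),   # cure: room temperature up to 200 C
    "eutectic": (230, 400),  # gold-based eutectic bonding
    "solder":   (150, 220),  # tin-lead or indium soft solder
}

def methods_within(max_die_temp_c):
    """List methods whose peak process temperature the die can tolerate."""
    return [m for m, (_, peak) in DIE_ATTACH_TEMPS_C.items()
            if peak <= max_die_temp_c]

print(methods_within(250))  # ['epoxy', 'solder']
```

For a die that cannot survive 250 degrees Celsius, for instance, this rules out eutectic bonding and leaves epoxy or solder attach as candidates.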
The increasing adoption of IoT has created the next generation of smart devices that are transforming business models and human lives. The proliferation of smart devices has resulted in a complex ecosystem of sensors, software and networks. Customers now need a comprehensive assurance proposition for their IoT implementations.
A large-scale IoT implementation is comparable to a symphony orchestra, where a group of instrumentalists, coordinated by a conductor, produces unified, pleasing music. Though the individual instrumentalists are extremely talented, imagine how jarring it would be if one or more of the instruments were out of rhythm. Rehearsal is the key to a successfully coordinated orchestra.
In the same way, though the individual elements of an IoT ecosystem, such as devices, network connectivity, IoT platforms, cloud infrastructure, data analytics and end-user apps, are developed and tested separately, the seamless integration and coordinated operation of all these elements is necessary to provide a reliable end-user experience.
In a world of connected everything, IoT device and platform developers must adopt proven strategies and systems that enable them to bring reliable IoT technologies to market faster. This technology not only makes a connected physical world a reality, but also brings a new set of challenges to the community.
IoT-assured best practices
1. Define a scalable IoT strategy. While drafting an IoT implementation plan, enterprises must define both short-term and long-term objectives with focused outcomes. Most often, enterprises begin with a small IoT prototype or proof of concept covering a few critical business cases, and all assumptions and decisions are made around those use cases and targeted users. Once the system moves into production and they want to scale, enterprises may hit a roadblock, and by then it might be too late to realize that the system is not scalable enough to fulfill the internal product roadmap or network architecture. Companies then end up going back to the drawing board to modify their architecture. Some decisions, such as whether to adopt IoT edge computing, should be made during the IoT architecture discussion, with everything else built on top of them. So, it is critical to define what is expected from an IoT implementation from both a short-term and a long-term perspective, including the transformation plan.
2. Select the right IoT platform. Needless to say, with a diversified IoT platform market and an array of services ranging from SaaS- to PaaS-based IoT and from open source to sophisticated commercial offerings, it is tricky to choose a vendor with a robust, reliable, feature-rich and cost-efficient platform. For this reason, it is critical to choose a partner with proven capabilities and trustworthiness. Pay attention to the product roadmap and the vendor's ability to provide hotfixes, as well as its success rate in handling production issues, its post-production support offerings and maintenance record, and client references from prior implementations in a similar industry. Operating cost estimation has to be drafted jointly, based on your implementation plan and the vendor's product upgrade strategy, to avoid surprises later. Choosing the right platform partner is one of the most important tasks, and enterprise IoT architecture, product management and ops teams should do it cohesively.
3. Draft an inclusive fallback plan. The customer's IoT experience is like a magic show: performed perfectly, it is appreciated, and there is no second chance. While building your IoT architecture and cloud components, it is vital to map out all possible failure situations and draft a bulletproof fallback plan. More importantly, this plan has to be developed, tested and approved before the entire IoT system is pushed to production. Integrated fallback plans include things like cloud server backup, alternative network switches, and IoT edge and local computing facilities for mission-critical applications such as healthcare or nuclear power plants. In a nutshell, have a fallback plan that is robust and reliable in case of an emergency.
4. Choose quality over features. When it comes to keeping pace with platform vendors, it is good to stick to the basics and choose quality over features. Look for stable features in the IoT platform and build your system on top of them. Take caution when accepting new build or patch releases, and do not rush for rich features while IoT platforms are still maturing; it is important to allow some time before choosing the right version of the platform for your needs.
5. Embed quality across the software development lifecycle. Implementing quality gates at each layer and different phases of IoT implementation is a critical success factor. Three core elements for IoT quality are:
- Test infrastructure — A production-like test infrastructure is important: one that supports not only preproduction testing of devices and platforms, but also post-production situations. This continuous pipeline should support periodic interoperability and integration testing from device and sensor OEMs, as well as patch upgrade requests from IoT platform vendors, without disturbing ongoing product enhancement. This is a living environment that needs to scale with your production.
- Custom tools and techniques — Every IoT implementation has unique devices, networks and platform architectures, so custom tools that can mimic IoT device and platform behaviors, such as digital twins, are needed. These must be tested against extreme real-life situations before they are ready to deploy into production.
- Integration test strategy — After the successful testing of individual components, executing end-to-end integration testing, without any compromises, is a must. Automated periodic integration testing is a best practice that can unearth defects earlier and help accelerate production deployment.
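The automated end-to-end integration testing described above can be sketched in miniature. This is an illustrative example only: the pipeline stages (`SimulatedDevice`, `ingest`, `evaluate_alerts`) and the over-temperature rule are hypothetical names invented for the sketch, not part of any real IoT platform.

```python
# Minimal sketch of an automated end-to-end integration test for a
# hypothetical pipeline: simulated device -> ingestion -> alert rule.
from dataclasses import dataclass


@dataclass
class Reading:
    device_id: str
    temperature_c: float


class SimulatedDevice:
    """Stands in for a real sensor during pre-production testing."""
    def __init__(self, device_id, temperatures):
        self.device_id = device_id
        self.temperatures = temperatures

    def emit(self):
        for t in self.temperatures:
            yield Reading(self.device_id, t)


def ingest(readings):
    """Platform-side ingestion: drop malformed values, keep the rest."""
    return [r for r in readings if -40.0 <= r.temperature_c <= 125.0]


def evaluate_alerts(readings, threshold_c=80.0):
    """Business rule under test: alert on over-temperature readings."""
    return [r.device_id for r in readings if r.temperature_c > threshold_c]


def test_end_to_end_over_temperature_alert():
    device = SimulatedDevice("sensor-01", [21.5, 85.2, 999.0])  # 999 is noise
    accepted = ingest(device.emit())
    assert len(accepted) == 2            # malformed reading filtered out
    assert evaluate_alerts(accepted) == ["sensor-01"]


test_end_to_end_over_temperature_alert()
```

Run periodically (for example, in a CI pipeline), a suite of such tests catches regressions introduced by device firmware or platform patch upgrades before they reach production.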
As IoT platforms and device ecosystems have matured in recent years, the acceptance of more and more IoT use cases and products has gained momentum across industries. As the market evolves, new technologies, protocols, platforms, ecosystems and processes for building IoT use cases have also evolved to a great extent. Nevertheless, success in IoT is a balancing act between the stability of the system and the cost of operation, and keeping IoT dreams alive still remains a key challenge for various reasons. While most of the technical challenges are being addressed every day, successful IoT implementations require a thorough project management plan blended with industry best practices.
Sensor usage has exploded in all sorts of devices, from smartphones and embedded devices to moving objects such as robots. Sensors form the core of IoT.
A typical smartphone contains 20-plus sensors. Rapid smartphone growth over the years has also helped the overall sensor industry in terms of advancement of sensor technology, miniaturization and dramatic reduction of their cost. This in turn has helped the robotics industry too.
Sensors come in all different sizes and shapes, measuring various quantifiable parameters of the robot or of the external environment the robot is in. The variety of sensors used in robotics is large and differs across applications and types of robots.
In this article, I am going to focus on sensors that help the mobility of autonomous mobile robots (AMRs) — i.e., localization and navigation in the environment.
Sensors for an AMR are like its eyes. When combined with sensor software algorithms, sensors allow an AMR to understand and navigate the environment, detect and avoid collision with objects, and provide location information about the robot.
Types of sensors
Exteroceptive sensors discern the external world; these include cameras, laser and lidar, radar, sonar, infrared, touch sensors such as whiskers or bump sensors, GPS and proximity sensors.
Proprioceptive sensors deal with the robot itself; these include accelerometers, gyroscopes, magnetometers and compasses, wheel encoders and temperature sensors.
There are many other categories into which sensors can be grouped, such as active or passive sensors. It is also important to note that the boundaries between exteroceptive and proprioceptive sometimes overlap.
With voice assistant interfaces becoming a popular way to interact with robots, sound sensors such as microphones are also becoming more prevalent.
Likewise, as robots must often connect to the internet, communication modules such as Wi-Fi and LTE are critical. While not strictly sensors or actuators, they allow the robot to interact with the external world.
According to industry analyst reports, the sensor market for robotics is expected to grow steadily, and the vision sensor market for industrial robotics alone is predicted to be substantial.
Sensors used for mobility
Typical sensors used in ground mobile robots and drones include:
Inertial measurement units (IMUs) typically combine multiple accelerometers and gyroscopes. They can also include magnetometers and barometers. Instantaneous pose (position and orientation) of the robot, velocity (linear, angular), acceleration (linear, angular) and other parameters are obtained through the IMU in 3D space. MEMS sensor technology advances have benefitted IMUs significantly. IMUs suffer from drift, biases and other errors.
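The drift mentioned above is easy to demonstrate with a few lines of code. This is a deliberately simplified sketch, with invented numbers: integrating a gyroscope's angular-rate readings shows how even a small constant bias accumulates into unbounded heading error.

```python
# Illustrative sketch: a small constant gyro bias, integrated over time,
# produces heading drift even when the robot is perfectly stationary.
def integrate_heading(rates_dps, dt_s, bias_dps=0.0):
    """Dead-reckon heading (degrees) from angular-rate samples."""
    heading = 0.0
    for rate in rates_dps:
        heading += (rate + bias_dps) * dt_s
    return heading

# Robot is actually stationary: true rate is 0 deg/s for 60 s at 100 Hz.
samples = [0.0] * 6000
ideal = integrate_heading(samples, dt_s=0.01)                   # stays at 0
drifted = integrate_heading(samples, dt_s=0.01, bias_dps=0.05)  # ~3 deg off
```

A bias of just 0.05 degrees per second yields about 3 degrees of heading error after one minute, which is why IMU data is normally corrected by other sensors.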
GPS provides latitude, longitude and altitude information. Over the years, GPS accuracy has increased significantly, and highly accurate modes, such as RTK, also exist. GPS-denied areas, such as indoor spaces and tunnels, and slow update rates remain GPS's top limitations. But GPS units are important sensors for outdoor mobile robots and provide an accurate periodic reference.
Depending on whether they are used on indoor or outdoor robots and on the speed at which the robot moves, laser sensors can vary significantly in price, performance, robustness, range and weight. Most are based on time-of-flight principles. Signal processing is performed to output points with range and angle increments. Both 2D and 3D lasers are useful. Laser sensors produce a large amount of data, one point per laser return, so taking full advantage of lasers requires substantial compute power. Lidars are also very popular in mapping.
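The range-and-angle output described above is usually the raw input to downstream algorithms, which convert it to Cartesian points. A minimal sketch of that conversion for a 2D scan (beam angles and ranges invented for illustration):

```python
# Convert a 2D laser scan, given as ranges plus a start angle and an
# angle increment, into (x, y) points in the sensor frame.
import math

def scan_to_points(ranges_m, angle_min_rad, angle_increment_rad):
    points = []
    for i, r in enumerate(ranges_m):
        theta = angle_min_rad + i * angle_increment_rad
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# Three beams at -90, 0 and +90 degrees, each seeing a surface 2 m away.
pts = scan_to_points([2.0, 2.0, 2.0], -math.pi / 2, math.pi / 2)
```

Real scans contain hundreds or thousands of such points per revolution, which is where the compute demands mentioned above come from.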
Encoders count the precise number of rotations of the robot's wheels, thereby estimating how far the robot has travelled. The terms odometry and dead reckoning are used for distance calculation with wheel encoders. Encoders suffer from long-term drift and hence need to be combined with other sensors.
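The dead-reckoning calculation above can be sketched for a differential-drive robot. The geometry (wheel radius, wheel base, ticks per revolution) is made up for illustration; real robots use their measured values.

```python
# Hedged sketch of dead reckoning from wheel-encoder ticks for a
# differential-drive robot. All geometry constants are illustrative.
import math

def odometry_step(x, y, theta, left_ticks, right_ticks,
                  ticks_per_rev=1000, wheel_radius_m=0.05, wheel_base_m=0.3):
    """Advance the pose estimate (x, y, theta) by one encoder interval."""
    circumference = 2 * math.pi * wheel_radius_m
    d_left = left_ticks / ticks_per_rev * circumference
    d_right = right_ticks / ticks_per_rev * circumference
    d_center = (d_left + d_right) / 2.0          # distance of robot center
    d_theta = (d_right - d_left) / wheel_base_m  # change in heading
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta

# Straight-line motion: both wheels advance exactly one full revolution,
# so the robot moves forward by one wheel circumference.
x, y, theta = odometry_step(0.0, 0.0, 0.0, 1000, 1000)
```

Because each step's error feeds into the next pose estimate, small tick-count or slip errors accumulate, which is the long-term drift the text refers to.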
Vision sensors, such as cameras, both 2D and 3D, as well as depth cameras, play a very critical role in AMRs. Computer vision and deep learning on the sensor data can aid object detection and avoidance, obstacle recognition and obstacle tracking. Visual odometry and visual-SLAM (simultaneous localization and mapping) are becoming more relevant for autonomous robots operating in both indoor and outdoor environments where lighting conditions are reasonable and can be maintained. 3D cameras, depth and stereo vision cameras provide pose, i.e., position and orientation, of an object in 3D space. In industrial environments, well-established machine vision techniques combined with pose can help solve a number of problems from grasping to placement to visual servoing. Thermal and infrared cameras are used when working in difficult lighting conditions, such as the dark or fog.
If there is an object in the range of an ultrasonic sensor pulse, part or all of the pulse will be reflected back to the transmitter as an echo and can be detected through the receiver. By measuring the time difference between the transmitted pulse and the received echo, it is possible to determine the object's range. Sonars are affected by multipath reflections.
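The time-of-flight calculation just described reduces to a one-liner. The speed of sound is taken as roughly 343 m/s in air at 20 degrees Celsius; the echo delay is invented for illustration.

```python
# Range from an ultrasonic echo: the pulse travels to the object and
# back, so the one-way distance is half of speed * round-trip time.
def ultrasonic_range_m(echo_delay_s, speed_of_sound_mps=343.0):
    return speed_of_sound_mps * echo_delay_s / 2.0

# A 10 ms round trip corresponds to an object roughly 1.7 m away.
r = ultrasonic_range_m(0.010)
```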
Pulsed and millimeter-wave radars detect objects at long range and provide velocity, angle and bearing, typically measured to the centroid of the object. They work in all weather conditions, whereas most other sensors fail in complex environments such as rain, fog and lighting variations. But their resolution is limited compared to lidar or laser.
Robust and accurate localization schemes combine data from IMUs, wheel encoders, GPS, laser, radar, ultrasonic and vision sensors in software algorithms that implement SLAM techniques. Depending on the application and the navigation and object-avoidance specification, the fusion can be limited to a few sensors or use all of them.
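A full SLAM fusion pipeline is well beyond a few lines, but the core idea of fusion, letting one sensor correct another's weakness, can be sketched with a complementary filter. This is a simplified stand-in, not the algorithm any particular AMR uses: short-term gyro integration is corrected by a slower absolute heading source such as a magnetometer.

```python
# Minimal complementary-filter sketch: trust the gyro over short
# intervals, and pull the estimate toward an absolute heading reference
# over long intervals so the gyro's bias cannot drift unbounded.
def fuse_heading(prev_heading, gyro_rate_dps, dt_s, absolute_heading,
                 alpha=0.98):
    predicted = prev_heading + gyro_rate_dps * dt_s  # gyro integration
    return alpha * predicted + (1.0 - alpha) * absolute_heading

heading = 0.0
# Stationary robot with a biased gyro reading 0.5 deg/s; true heading 0.
for _ in range(200):
    heading = fuse_heading(heading, gyro_rate_dps=0.5, dt_s=0.01,
                           absolute_heading=0.0)
```

Pure integration of this biased gyro would drift by a full degree over the same two seconds; the filtered estimate instead settles at a small bounded offset. The `alpha` parameter trades responsiveness to the gyro against trust in the absolute reference.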
Selection and placement of sensors
With so many sensors at their disposal, selecting the right ones is a complex exercise. Factors that dictate the choice include the type of application; the specification of navigation and localization features; the environment in which the AMR will operate; the compute power available to run sensor algorithms; the choice of software algorithms, such as sensor fusion; power consumption; and cost. Invariably, a tradeoff is made when balancing all selection parameters.
Dynamic range, accuracy, resolution, linearity, field of view and many other parameters determine the quality of sensors. Sensors used in defense applications are very expensive and can meet superior specs. But in the majority of non-defense applications, where sensor specifications, errors and biases are suboptimal, the choice of software algorithms is a key decision point.
The placement of sensors on the robot also requires a very careful design exercise. Many sensors are extremely sensitive to external influences, such as vibrations, stray magnetic fields and lighting conditions. Accurate static translation and rotation offsets need to be calculated using physical measurements and calibration techniques. Some sensors require antennas, and once again, selection of the right antenna and its placement are critical.
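Once those static translation and rotation offsets are known, applying them is a simple frame transform. A 2D sketch with invented mounting numbers (a real calibration would be 3D and measured, not assumed):

```python
# Apply a static extrinsic calibration: rotate a sensor-frame point by
# the mounting yaw, then translate by the mounting offset, to express
# the point in the robot's body frame.
import math

def sensor_to_body(point_xy, mount_xy, mount_yaw_rad):
    px, py = point_xy
    c, s = math.cos(mount_yaw_rad), math.sin(mount_yaw_rad)
    return (mount_xy[0] + c * px - s * py,
            mount_xy[1] + s * px + c * py)

# Hypothetical lidar mounted 0.2 m ahead of the body origin and rotated
# 90 degrees left: a point 1 m "ahead" of the sensor lands at (0.2, 1.0).
p = sensor_to_body((1.0, 0.0), (0.2, 0.0), math.pi / 2)
```

Errors in these offsets show up as systematic distortions in every downstream map and localization result, which is why their calibration gets so much attention.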
Parameters to consider during sensor integration
Several key challenges remain in sensor integration: sensor calibration; time synchronization, especially in sensor fusion; the different rates and frequencies at which data arrives from different sensors; the various errors and biases in measured values, both intrinsic and driven by the external environment; and environmental conditions impacting sensor measurements. The majority of research and development happens in these areas.
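One common mitigation for the mismatched-rate problem mentioned above is to interpolate a slow sensor's samples onto the timestamps of a faster one before fusing them. A hedged sketch with invented timestamps and values:

```python
# Linearly interpolate a slow sensor's (timestamp, value) samples at an
# arbitrary query time, e.g., the timestamp of a faster sensor's sample.
def interpolate_at(timestamps, values, query_t):
    """timestamps must be sorted ascending and bracket query_t."""
    for i in range(len(timestamps) - 1):
        t0, t1 = timestamps[i], timestamps[i + 1]
        if t0 <= query_t <= t1:
            w = (query_t - t0) / (t1 - t0)
            return values[i] + w * (values[i + 1] - values[i])
    raise ValueError("query_t outside the sampled interval")

# GPS position at 1 Hz, IMU at 100 Hz: estimate the GPS x-coordinate at
# an IMU tick that falls between two GPS fixes.
gps_t = [0.0, 1.0, 2.0]
gps_x = [0.0, 2.0, 4.0]
x_at_imu_tick = interpolate_at(gps_t, gps_x, 0.37)
```

Linear interpolation assumes roughly constant velocity between samples; fast maneuvers between fixes need more sophisticated models, which is part of why time synchronization remains an active research area.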
Various sensors are available in the market today with a wide range of performance parameters. Choosing the right sensors is a complex, technical and product management task. It’s a balance of sensor spec, application requirements, cost, power, form factor, environment conditions, software algorithm sophistication, time to market, longevity and more.
Through sensor fusion algorithms, different sensors complement each other to achieve tough goals. Sensors and sensor software algorithms are critical to the success of the AMR industry.
Sensor manufacturers are also putting significant effort into improving sensor performance, accuracy and range. Some are even performing sensor fusion inside their units, which helps with accurate time synchronization. When selecting sensors for a specific AMR application, teams need to keep a continuous watch on developments in the sensor world.
Is edge computing analytics a real IoT trend, or is it more smoke from analysts and large technology vendors? As manufacturers like Cisco, Hewlett Packard Enterprise and Dell build specific infrastructure for the edge designed to be more physically rugged and secure, we should all believe there will be a lot of IoT money at the edge.
While working over the last few months with Aingura IIoT, I have become aware of the difficulties of developing and implementing edge computing machine learning in the manufacturing industry. The company combines years of industry experience and knowledge in automation (PLCs, SCADAs, HMIs) and electrical and mechanical engineering with a unique edge computing distributed system used by data scientists to develop machine learning algorithms and build IIoT applications for manufacturing and automotive companies.
However, it is worth asking if clients are ready or even interested in implementing these technologies — or will they continue one more year with pilots that do not go anywhere?
Below are some observations I believe will help accelerate the implementation of edge computing and machine learning in manufacturing.
Get help finding the needle in the haystack
With so many companies talking about Industry 4.0, a fragmented ecosystem of IIoT vendors, and the challenges that always surface during discussions with customers, it is normal that IIoT vendors are asked for free pilots.
But it is not just finding the needle (the best or cheapest IIoT offering) in the haystack (the IIoT ecosystem), it is how well this needle matches your business and technology strategies.
I know I am selling myself, but my recommendation is to get advice from independent IIoT experts.
Avoid OT vendor lock-in. We need machine data availability.
Powerful edge analytics and machine learning applications need to exchange data with manufacturers' PLCs. Reading the specifications, one might think this would be easy. In fact, there are many ways to extract data from PLCs if manufacturers provide information on how to do it. However, most top PLC manufacturers do not allow third parties, or even their own customers, to easily extract data.
It is not a question of protocols; it is a question of vendor lock-in and data availability. Customers must request openness and avoid lock-in if they want innovation in their plants.
Edge computing and machine learning: The last frontier to break between IT/OT
I was optimistic about the quick convergence of IT and operations technology (OT) before. I was wrong. If you visit manufacturing companies’ plant floors, you will see how much work still needs to be done.
Edge analytics is a key component in the integration of IT and OT and requires the combined knowledge of both teams. But the lack of skills in both areas, and the impact on operations and business, makes it difficult to know which department should lead edge analytics projects.
Manufacturing companies need a role with authority, such as a chief IIoT officer, and resources to lead the IT/OT convergence strategy.
To cloud or not to cloud: Don’t let this stop you
When I wrote about fog, cloud and IoT a few years back, the hype around edge computing and machine learning had just started. There was a lot of confusion around fog computing and edge computing and how they would impact the IoT architecture, especially when it came to cloud workloads.
Today, top cloud vendors are offering IoT platforms and tools that combine cloud and edge application development, machine learning and analytics at the edge, governance and end-to-end security. On the OT side, companies like Siemens have launched MindSphere, an open, cloud-based IoT operating system based on the SAP HANA cloud platform.
Manufacturing companies should not stop developing or deploying edge computing and machine learning applications that monitor the health of their machines or improve asset maintenance and quality control because they are afraid of integration with public or hybrid clouds.
IIoT edge computing helps manufacturers improve their competitiveness without the cloud. And when ready for the cloud, it will provide additional benefits, so make sure your IIoT edge system is ready for easy integration.
Connected machines are the only way to new business models
Security is one of the main challenges of IIoT adoption in the manufacturing industry. Manufacturers have been reluctant to open their manufacturing facilities to the internet because of the danger of cyberattacks.
But as we head toward an economy of platforms and services that needs products and machines to be connected, every factory should be able to tap into machine data remotely and make it available to machine vendors. This requires every edge computing machine learning system to be built with the capability to share data remotely via open and secure protocols and standards, such as MTConnect and OPC-UA.
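To make the "open, self-describing data" idea concrete, here is a toy stand-in for what standards like MTConnect and OPC-UA provide in full. This is not those protocols; the machine name and fields are invented, and real deployments should use the actual standards with proper security.

```python
# Toy illustration of exposing machine state in a vendor-neutral,
# self-describing format that any remote consumer can parse.
import json

def machine_snapshot(machine_id, spindle_rpm, temperature_c):
    """Serialize a machine's current state as an open JSON payload."""
    return json.dumps({
        "machine_id": machine_id,
        "spindle_rpm": spindle_rpm,
        "temperature_c": temperature_c,
    })

payload = machine_snapshot("lathe-07", 1450, 61.2)
data = json.loads(payload)  # a remote consumer decodes it with no
                            # vendor-specific tooling required
```

The point of the real standards is exactly this decoupling: the machine vendor, the factory and the analytics provider can all read the same data without proprietary drivers.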
Having machines connected is the first step to making machines smarter, building smarter factories and flourishing with new business models.
The benefits of edge computing machine learning systems are very attractive to manufacturers: they minimize latency, conserve network bandwidth, operate reliably with quick decisions, collect and secure a wide range of data, and move data to the best place for processing, yielding better analysis and insights from local data. The ROI of such IIoT systems is compelling.
But manufacturers will never achieve these benefits if they do not step up and change their outdated attitude and quickly start their IIoT journeys.