Even just a decade ago, it was hard to believe that computers would be able to drive cars or easily recognize pictures of a cat. The programming and AI tools available at the time struggled to help computers see the world around them and process that information accurately.
In the last 10 years, machine learning has steadily grown from a field of research into a mature technology with real-world applications used by top organizations across multiple industries. At its most basic level, machine learning creates algorithms that solve some of the most complex and interesting problems technology organizations face.
Many of these problems have moved from science fiction to established fact. Some have become positively easy, such as handwriting recognition, which is now the “Hello World” of machine learning. Through Exadel’s own experience developing machine learning programs for clients, we wanted to share some of the do’s and don’ts of using machine learning technology.
On the surface, the result of the machine learning process looks much like traditional programming because the end product is a programmatic algorithm that processes information. But when it comes to how a machine learning program is created, the process looks quite different.
How to create a machine learning model
First, a few don’ts of creating a model: don’t define requirements, design a system or algorithm, write code, test or iterate as you would with traditional software development.
With machine learning, you must first characterize the problem in a way that makes it amenable to machine learning, and then determine whether you have data that can help solve that problem. Next, you define a model, train it with training data and test it, hoping that training produced a high probability of success. If it didn't, you tweak your model and retrain.
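This tweak-and-retrain cycle can be sketched in a few lines of Python. The toy example below stands in for real training: the "model" is a single threshold parameter and "training" just scores candidate thresholds, but the loop has the same shape as the process described above. All names and data are illustrative.

```python
# A toy sketch of the machine learning workflow: define a model,
# evaluate it, tweak it and retrain until accuracy is high enough.
# The one-parameter threshold "model" stands in for a real network.

def evaluate(data, labels, threshold):
    """Score how well a candidate threshold separates the two classes."""
    predictions = [1 if x >= threshold else 0 for x in data]
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)  # accuracy

# Toy training data: feature values and their true classes.
train_x = [0.1, 0.4, 0.35, 0.8, 0.9, 0.7]
train_y = [0, 0, 0, 1, 1, 1]

# The tweak-and-retrain loop: try a model, check it, adjust, repeat.
best_threshold, best_acc = None, 0.0
for candidate in [0.2, 0.5, 0.6]:
    acc = evaluate(train_x, train_y, candidate)
    if acc > best_acc:
        best_threshold, best_acc = candidate, acc

print(best_threshold, best_acc)  # prints: 0.5 1.0
```

In a real project, the "tweak" step changes hyperparameters, features or the network architecture rather than a single threshold, but the iterate-until-good-enough structure is the same.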
Facial recognition app development
One of our clients came to us for help developing an app to simplify secure check-in to an office space. The client wanted to streamline the visitor check-in process and avoid duplicate data entry. When someone checks into the office, they must enter a few pieces of information, including name and phone number, into a tablet at the front desk. For privacy reasons, this information had to be re-entered every time, because we can't simply display a list of everyone who has previously checked in. Re-entering it was repetitious for visitors, but important for the client, who needed to know who was in the office and how to reach them. To automate this process while keeping the information secure, we decided to use facial recognition to identify visitors and determine whether they had been in the office before. If they had, we would have a picture on file and could identify them when they took a picture again. Not surprisingly, we sought out existing machine learning tools and open-source projects as a baseline for the application.
In the existing app, when a visitor first comes to the office, they fill in the information on a tablet and the tablet takes their picture. The check-in tool now has a profile and an image that can be used to recognize each individual.
To create this facial recognition system, we used some off-the-shelf machine learning and computer vision (CV) components:
- Python: generally the language of choice for machine learning today.
- TensorFlow: an open-source machine learning and neural network toolkit, and the go-to library for numerical computation and large-scale machine learning.
- scikit-learn: simple and efficient tools for data mining and data analysis.
- SciPy: a free and open-source library for scientific and technical computing.
- NumPy: a Python library supporting large, multi-dimensional arrays, with an extensive collection of functions for operating on them.
- OpenCV: an open-source library of functions aimed at real-time computer vision.
These are all common tools for machine learning projects. We adapted open-source code to tie these components together, including the Face Recognition using Tensorflow project on GitHub.
Developing the code and tools to do facial recognition is important, but, as mentioned above, the core of machine learning is training the model until its results on test data (data never evaluated during training) reach a high enough success rate to say that the developed neural network can recognize people in the setting, in this case, checking in at the front desk.
Data is just as important here. Best practice is to maintain three datasets: training data, validation data and test data. Training data is what the model learns from. Machine learning specialists use validation data to review the trained model and then change or tweak its inputs based on the results; this is part of the iterative process of developing the model. The model never sees test data except in the final testing steps. It is the gold standard, used only once the model is fully trained, and it can also be used to compare the success of two different trained models.
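As a rough sketch, the three-way split might look like the following. The 70/15/15 proportions and the stand-in data are illustrative, not the actual project configuration.

```python
import random

# Hedged sketch: one common way to divide a dataset into training,
# validation and test sets (70/15/15 here). Shuffling first avoids
# accidentally ordering the splits by collection time or class.
random.seed(42)
samples = list(range(100))  # stand-in for 100 labeled face images
random.shuffle(samples)

n = len(samples)
train_set = samples[:int(n * 0.70)]                    # the model learns from this
validation_set = samples[int(n * 0.70):int(n * 0.85)]  # guides tweaking between runs
test_set = samples[int(n * 0.85):]                     # touched only at the very end

print(len(train_set), len(validation_set), len(test_set))  # prints: 70 15 15
```

The essential rule is that the three sets never overlap: any sample the model saw during training or tuning can no longer give an honest estimate of real-world performance.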
The pre-processing and training processes look like this:
- Find the face: Locate the face within the image. Real-world images contain more than the face, so you first must isolate the pieces that comprise the face.
- Posing and projecting faces: Even the best computer algorithms work better if every image has the same proportions. We needed to align the face within the image frame to improve its use with the machine learning model.
- Calculate embeddings from faces: A human describes the difference between faces using visual, human-readable characteristics, such as nose size, face width or eye color. We instead use neural networks that automatically determine machine-readable features, called embeddings.
- Use embeddings to train the model: The final step, in which the embeddings serve as input for training the model or, later, for running the trained one.
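The recognition step at the end of this pipeline can be illustrated with a toy sketch: once a trained network has mapped each face to a fixed-length embedding vector, matching reduces to comparing vector distances. The names, vectors and threshold below are made up; real embeddings are typically 128 dimensions or more.

```python
import math

def distance(a, b):
    """Euclidean distance between two face embeddings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Embeddings previously stored for known visitors (illustrative 3-D
# vectors; a real system would store much longer vectors).
known = {
    "visitor_17": [0.1, 0.9, 0.3],
    "visitor_42": [0.8, 0.2, 0.7],
}

new_face = [0.12, 0.88, 0.31]  # embedding computed from the new photo
THRESHOLD = 0.5                # match cutoff, tuned on validation data

# Find the closest known embedding; below the threshold counts as a match.
name, dist = min(((n, distance(e, new_face)) for n, e in known.items()),
                 key=lambda pair: pair[1])
match = name if dist < THRESHOLD else None
print(match)  # prints: visitor_17
```

The key property the training step must deliver is that two photos of the same person land close together in embedding space, while photos of different people land far apart.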
Once we have trained the model and tested it, we can deploy it so that it can be used by the tablet program to check newly created images to see if they match anyone who has visited the office before.
We created a web API that the tablet application uses to send in a photo to potentially match the new image against the image database.
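A minimal sketch of such an endpoint, using only Python's standard library, might look like the following. The route name, port and response format are assumptions for illustration, and the placeholder function stands in for the real detect-embed-compare pipeline behind the actual service.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def match_photo(photo_bytes):
    """Placeholder for the real pipeline: detect, embed, compare."""
    return {"match": None, "new_visitor": True}

class CheckInHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/api/checkin":        # illustrative route name
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        photo = self.rfile.read(length)        # raw photo bytes from the tablet
        body = json.dumps(match_photo(photo)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):              # keep the sketch quiet
        pass

# To run the service (blocking):
# HTTPServer(("", 8080), CheckInHandler).serve_forever()
```

The tablet app simply POSTs the captured photo and reads the JSON reply to decide whether to pre-fill the visitor's details or ask for them fresh.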
Machine learning is still a relatively nascent technology, but its applications are starting to become more pervasive. As we start to better understand the best practices and uses for machine learning, organizations must have the skills ready to keep up with the competition.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
I’ve been promoting the progress of the Fourth Industrial Revolution (4IR) for a few years and I’m excited to see the momentum of the 4IR conversation in the market. However, it’s still an enigma for many organizations that need help figuring out where to start.
The 4IR isn’t just about technology. Although the 4IR does encompass different types of technology, including artificial intelligence, IoT and robotics, the non-tech arena is equally important. We must remember to keep the workforce, cybersecurity, resiliency, trust or distrust of technology and business model innovation top of mind. Marrying the tech and non-tech areas is key to success in the 4IR.
Success can take many forms — such as more efficient operations, better customer engagement, more productive employees and the creation of new businesses — but the primary goals are to solve business problems and generate new opportunities.
What differentiates this revolution from the rest
Like the three previous industrial revolutions — steam, electricity and digital — the 4IR represents a giant leap forward for business. Unlike the first three revolutions, during which people connected to machines, in the 4IR, machines connect to other machines and people. That’s a profound difference.
Why? Because all those machines that are connecting to other machines create vast amounts of data, far more than people can produce on their own, leading to a crucial 4IR challenge: trust.
With all the breaches that have taken place, distrust seems more prevalent than ever. Banks, retailers, social media platforms and organizations that routinely ask people to trust them to protect their data face a lot of skepticism these days. The 4IR goes way beyond that; it basically asks humans to trust millions of devices and machines and all the information they produce and distribute. This is a critical issue for organizations to address if they don’t want to lose out on the opportunities 4IR offers.
Consider the economic challenges and prepare a strategy
Despite the need to overcome the trust gap, several economic challenges reinforce that now is an ideal time to move forward with a 4IR initiative. Nearly 30% of the CEOs in PwC’s 22nd Annual Global CEO Survey forecast a decline in economic growth in 2019, up from just 5% who felt that way in the prior year. During the same period, optimism among North American CEOs dropped from 63% to 37%. That’s unsettling.
How should your organization deal with these tough business challenges? Enterprises that build a technological and cultural framework that supports the 4IR will be far better prepared to ride out any seismic changes that may unfold.
Start to scope out the 4IR world, but move at your own pace
As you begin considering your 4IR strategy, be sure that you develop an understanding of the Essential Eight technologies. In particular, pay attention to IoT, the glue that binds all connected tools, technologies and systems in a 4IR organization. These connected devices go beyond merely boosting efficiency and cutting costs; they can also disrupt an organization or an entire industry.
But there’s no need to start by tackling every 4IR technology at once. Instead, begin with a manageable use case and build up and out from there. For example, asset trackers can instantly identify the location of lost hospital equipment and can reduce the number of lost, stolen and misplaced devices and machines. Other types of sensors can identify and count valued customers, so they can be given an exceptional experience and reduce wait times. Hotel occupancy sensors can determine when people are physically present in a room, so the lighting and temperature can adjust automatically.
These modest implementations do more than increase productivity, efficiency and customer satisfaction. Organizations can join the 4IR at their own pace, by automating tasks and then scaling up from there.
Don’t underestimate the importance of culture
Since IoT and the 4IR are about more than just technology, you must think about the cultural considerations that are part of a connected business model. A vital factor here is upskilling the workforce, yet approximately 60% of organizations in PwC’s Digital IQ survey reported that their employees lacked the digital skills needed to move their organization into the future.
To be successful with the 4IR, organizations must establish partnerships that bring expertise and new ideas to the table, but they must also bring employees along on the journey. It’s about more than tech training and bringing in tech expertise; it’s about reshaping digital mindsets and creating a collaborative culture.
Consider the potential
All of this may seem like a big investment in time and resources, but the potential ROI makes it worthwhile. Wouldn’t you applaud an investment that resulted in better service to customers, enhanced employee productivity, the development of new products and services and the creation of new, innovative business models? I would and do. Already, the effects of the 4IR on business and industry are clear and we’re just getting started.
It’s been widely reported that the IoT market is expected to generate significant market opportunity — upwards of $267 billion — by 2020. Numbers aside, one of the main drivers is the rapid growth of IoT and connected devices at both the consumer and industrial level, leading to an extreme side effect: a massive explosion of data. The amount of data created by all these devices is expected to reach 847 zettabytes per year by 2021.
The IoT potential can be limitless if businesses can identify and understand how these devices and the data they create affect people’s daily lives. Let’s look at a few examples of the opportunities ahead with IoT data.
Consider an energy exploration organization and its ability to uncover viable energy resources. What if it could visualize and analyze data from an unlimited array of sensor data points? It would make the process of finding natural resources much more efficient and equip scientists with highly accurate information. This would change the way exploration is done, reducing risks to the environment and potentially cutting the time it takes to complete the process.
We’re witnessing more and more examples of IoT at work. Another example is a large delivery player that uses thousands of delivery vehicles and staff to deliver packages to more than 150 million addresses. The organization can use near real-time location data from connected devices to streamline delivery routes and reduce inefficiencies in distribution methods.
These are just two examples of current IoT use cases, and there are a lot of smart things out there; even a light bulb has an IP address behind it these days. Now is the time to monetize IoT applications. Collecting and storing IoT data is a good start, but it’s more meaningful to understand it, analyze it and use the insights to improve efficiency and effect meaningful change.
The unlimited potential of IoT carries across multiple industries that all stand to benefit from the technology and the data that comes with it, including saving energy, package route delivery optimization and predictive maintenance. A focus on location intelligence, advanced predictive analytics and the rise of streaming data analysis will undoubtedly drive a return on IoT investments.
This is the first piece in a three-part series.
There is a lot of controversy in business circles as to whether organizations use AI technology for unethical purposes. This blog isn’t about that at all.
From a data scientist’s point of view, ethical AI is achieved by taking precautions to expose what the underlying machine learning model might learn that could impute bias. At first glance, latent features of the model or relationships between data may not appear to be biased, but deeper inspection can show that the analytic results the model produces are biased toward a particular data class.
Bias can be imputed through confounding variables
One of the most common misperceptions I hear about bias is “If I don’t use age, gender or race, or similar factors in my model, it’s not biased.” Well, that’s not true. Even though the same people holding this opinion know that machine learning can learn relationships between data, they don’t understand that there are proxies to biased data types in other features that are captured. These proxies are called confounding variables and, as the term indicates, unintended variables can confuse the model into producing biased results.
For example, if a model includes the brand and version of an individual’s mobile phone, that data can be related to the ability to afford an expensive cell phone, a characteristic that can impute income. If income is not a factor you want to use directly in the decision, imputing it from data such as the type of phone or the size of the purchases an individual makes introduces bias into the model. A high dollar amount on purchases can indicate that an individual is more apt to make these types of transactions over time, again imputing income bias.
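One practical check for such proxies is to measure how strongly a seemingly neutral feature correlates with the sensitive attribute. The sketch below uses fabricated data and an illustrative 0.7 cutoff; real audits would use richer statistical tests across every feature.

```python
# Hedged sketch: flagging a "neutral" feature that acts as a proxy
# (confounding variable) for a sensitive attribute. All data is
# fabricated for illustration.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Phone price tier (1=budget .. 3=premium) vs. annual income (k$).
phone_tier = [1, 1, 2, 2, 3, 3, 3, 1]
income     = [28, 35, 52, 61, 95, 88, 102, 31]

r = pearson(phone_tier, income)
print(round(r, 2))
if abs(r) > 0.7:  # illustrative cutoff for "suspiciously correlated"
    print("phone_tier is a likely confounding variable for income")
```

If the correlation is high, dropping income from the feature list accomplishes little: the model can still reconstruct it from the phone tier, which is exactly the imputed bias described above.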
Research into the effects of smoking provides another example of confounding variables. In decades past, research essentially drew the correlation: if you smoke, your probability of dying in the next four years is fairly low, so smoking must be OK. The confounding variable in this assumption was the age distribution of smokers. At the time, the smoking population contained many younger smokers whose cancer would develop later in life, while the older smokers were already deceased. Thus, the analytic model contained overwhelming bias and created a biased perception of the safety of smoking.
In the 21st century, similar bias could be produced by a model concluding that, since far fewer young people smoke cigarettes than 50 years ago, nicotine addiction levels are down, too. However, youth use of e-cigarettes jumped 78% between 2017 and 2018, to one out of every five high-school students. E-cigarettes are potent nicotine delivery devices, fostering rapid nicotine addiction.
The challenge of delivering truly ethical AI requires closely examining each data class separately. As data scientists, we must demonstrate to ourselves and the world that AI and machine learning technologies are not subjecting specific populations to bias.
Electronics manufacturing services providers who use conventional printed circuit board assembly and manufacturing models are scrambling to augment their production facilities to handle increasingly smaller boards like those used for IoT devices.
At today’s advanced printed circuit board (PCB) houses, savvy leaders are homing in on newer technologies and merging conventional surface-mount technology manufacturing with the newer microelectronics manufacturing. Why? Because sophisticated IoT PCBs demand a different breed of inspection and calibration to comply with extremely miniature dimensions and that’s where PCB microelectronics lives.
Take for example, the newer higher-powered laser microscopes introduced on the microelectronics assembly and manufacturing floor. These tools are tailored to perform inspection and calibration tasks that legacy PCB manufacturing systems cannot handle because they cannot deal with the minutest details imaginable. Those minute details are the cornerstone of microelectronics manufacturing.
Capabilities of advanced laser microscopes
Tools such as these laser microscopes perform inspections of the die (or chip), epoxy resin bleeding, solder mask bleeding and air bridges. They also calculate Z-axis dimensions and create 3D renderings. Why are these manufacturing tasks so important? All fall under the umbrella of assuring IoT device reliability and operational integrity. These highly advanced laser microscopes check for die surface defects, such as extremely fine cracks or minuscule chipping at the corners of a die. The scopes also quickly spot corrosion, contamination or oxidation.
It’s also important to prevent a floating die, which occurs when miscalculating the amount of epoxy under the die during the die attach process results in epoxy resin bleeding; in other words, the die isn’t completely attached to the substrate. The microscopes verify that a poorly produced die attach doesn’t go any further in the IoT PCB manufacturing process.
The microscopes also inspect for solder mask bleeding, in which the mask bleeds onto the pad where wire bonding is installed, leaving a pad that may not be large enough to perform the bonding. Again, these high-powered scopes inspect and verify that this issue doesn’t exist.
The microscopes also inspect air bridges. An air bridge is the air distance created to bypass a component located between two other components. It connects a wire bond from one point to another and passes over the middle component.
Aside from inspections, microscopes calculate length, width and height in the Z-axis for dealing with height restrictions. Sometimes dies are attached in gold surface finish cavities, and they need to be precisely height controlled. This permits a perfect fit in those cavities before attaching wires via wire bonding after die attach. The high-powered scopes are perfect tools to view cavity length and depth and to perform height measurements in the Z-axis.
Finally, 3D renderings of the wire bond pads, substrate height or paste height provide microelectronics manufacturing technicians with a clear visual. Technicians can calculate the length of the wire bonds, their loop curvatures and the underfill thickness for proper die attach and accurate wire bonding.
When it comes to IoT PCB microelectronics manufacturing, there should not be any question about the tools needed to achieve accurate inspections and calibrations.
IoT is truly a ground-breaking innovation that affects everything in our lives — from kitchen appliances to major business operations. Things like GPS trackers, sensors and even everyday objects can all be embedded with electronics and internet connectivity to make them smart. These smart objects can connect to the cloud, be remotely controlled and communicate with each other. IoT paved the way for tech companies to create tailor-made solutions for business operations and daily challenges. The radical changes in sophisticated mobile devices, computers and advanced machinery were somewhat expected because of the speed of their development in recent years, but controlling the temperature of your coffee cup via your smartphone was a pleasant surprise for everyone. The ability to create vast networks of connected devices has already deeply affected every industry in the world, leading to IoT being widely considered a new milestone in technological history.
What IoT means for the future of GPS
Although GPS technology has been around for decades, the emergence of IoT has reshaped the way we use GPS-based applications and devices. IoT technology enhances GPS devices to transmit data remotely and connect to other systems and sensors. Modern-day tracking devices can collect and transmit comprehensive vehicle data, including fuel monitoring, remote temperature monitoring and driver identification.
Take a look at some examples of how IoT connectivity can enhance the capabilities of GPS tracking technology:
Improving logistics and transport operations: The biggest challenge in the transport and logistics industry is tracking vehicles and where they are headed. Managing a fleet operation can be a logistical nightmare. With hundreds of moving assets and employees scattered across cities or even countries, it becomes almost impossible to keep track of their movements, plan efficient routes and provide customers with accurate ETAs. The introduction of IoT-enabled tracking devices completely changed the way fleet companies handle their business operations. Field managers now have access to actionable real-time information about their vehicles, giving them total control over their assets to optimize the performance of their workforce. Organizations can use real-time location tracking to make immediate changes to routes in the event of road closures or traffic, and to ensure they can respond promptly when customers require urgent services.
Providing solutions for people with disabilities: IoT has promising implications for the lives of people with disabilities. It can be frustrating to depend on other people to travel or take care of your daily needs. New innovations are developing rapidly to empower people with disabilities with the help of advanced sensors and IoT technology. Once a seemingly far-fetched idea, self-driving vehicles will soon be available to those unable to drive themselves. Personal GPS tracking devices offer a myriad of tools to assist people and ensure their safety, such as a panic button, event alerts and live tracking. GPS trackers with IoT-enabled sensors provide the necessary navigation tools when moving around and can scan the surroundings of people with visual impairments to direct their movements and keep them safe. Tracking devices also provide additional safeguards in the form of personal locators.
Tracking devices for parents: Losing a child is every parent’s worst nightmare, and parents have every right to be worried. The statistics on missing and abducted children show that our kids are very vulnerable to the outside world. IoT-enabled tracking devices are the best way to give parents peace of mind without restricting a child’s freedom. These devices create a protective barrier around children to keep an eye on them at all times. By drawing virtual fences around specific locations, such as home and school, parents can know their children are where they should be without having to actively watch the tracker. Tools, such as instant event alerts and an SOS button, can be lifesavers in emergency situations, giving parents the chance to rush to their child’s aid.
Safer driving with IoT sensors: Vehicle tracking devices reveal a surprising amount of information about our driving habits. IoT technology greatly improves driver monitoring capabilities, with instant updates about behavior behind the wheel. With sensors onboard, tracking devices detect speeding, idling, harsh braking and other risky driving practices, and help prevent them. For businesses with a fleet of vehicles, fleet management systems integrated with IoT and GPS technologies offer unprecedented benefits when it comes to safety and security. With the driver performance reports provided by the management system, unruly drivers can be identified and encouraged to adopt safer driving habits through training schemes, warnings or rewards for safe driving. In emergency situations, managers can locate vehicles and drivers on demand to dispatch emergency services to the scene, potentially preventing serious injuries and drastically reducing vehicle downtime.
Superior asset monitoring: Transporting sensitive equipment and cargo is an extremely demanding undertaking. Perishable goods such as vegetables, baked goods and meat products must be kept in a carefully controlled environment when transported over long distances, which means they must be monitored around the clock. Traditionally, drivers had to complete this laborious process manually by inspecting the cargo at regular intervals. These days, field managers use GPS tracking devices containing IoT-enabled sensors installed into each vehicle at relatively low cost to monitor every vehicle and cargo hold simultaneously. Advanced sensors can detect any changes in the condition of the cargo hold and effortlessly transfer the data to the cloud. Field managers can check the state of their goods remotely — from anywhere in the world — and make necessary changes, such as adjusting the temperature and humidity settings or informing the driver about a malfunction.
IoT technology is spreading like wildfire throughout the world. It’s already had a huge effect on many aspects of our lives, and it’s still evolving. GPS-based devices and tracking systems have been considerably enhanced by the emergence of IoT-enabled systems and connected sensors. With IoT’s superior connectivity and the vast network of connected devices, technological marvels like self-driving vehicles, smart homes and even smart cities have become possible.
More organizations in all industries are starting to use augmented reality glasses and software, but the bulky headset and premium price tag are major barriers to entry for most industries and job roles. Mixed reality glasses are on the rise, capitalizing on the plethora of AR uses and advancing business applications with sleeker and lighter form factors, advanced computing power and more capabilities. In fact, 68% of workers believe that MR will play an important role in helping to achieve their companies’ strategic goals over the next 18 months, according to a report that Microsoft commissioned from Harvard Business Review.
Today, more and more AR and MR technology is making its way into organizations across industries, including airlines, automotive, engineering, architecture, field services and healthcare. Many companies use smart glasses alone or AR software alone, but MR technology has continued to evolve, and the industry now expects full end-to-end packages that include both MR glasses and AR software for customers and employees.
MR is bringing forth a new era of hands-free human interaction. Users can directly interact with the surrounding objects or people while smart glasses display digital information in their field of view and connect users to remote experts. For example, workers could use MR glasses and advanced AR software for remote expert aid and 3D capture capabilities across industries.
Remote Expert Aid: A worker can share their live point of view through MR smart glasses to connect with a remote expert, while receiving live visual or audio instructions. This not only improves troubleshooting by overlaying precise digital information, but also allows the remote expert to see and guide workers through each step of the task. In the “Enterprise Training with Augmented Reality” white paper, Re’flekt researchers found that AR instructions overlaid in 3D resulted in an 82% reduction in the error rate for assembly tasks, 50% faster task performance and a 60% increase in learning. Live instructions could help in urgent cases: if a nurse aiding a patient needs to call a doctor for assistance, they can easily stream their live point of view to a remote healthcare expert for additional help. Airline workers can save data or audio instructions on any work process, such as fixing a wing electrical system, for future reference.
3D Capture Capabilities: While on site, workers can generate an accurate 3D computer-aided design model of anything they are working on or looking at, whether that is an onsite location or piece of machinery. Along with capturing accurate models, workers can overlay existing designs or instructions on top of machinery to have everything they need in their immediate line of view. They can then easily save all 3D-generated content for future reference in a secure database. Railroad workers can apply the technology to wayside track device locations, such as finding track signs or markers at their expected, predefined geolocation. Logistics workers can use the tech to generate an accurate 3D model of a package or scene during the logistic process.
MR technology streamlines communication to save money and give workers within the organization the resources they need to maximize productivity, improve key performance indicators and improve worker safety.
Gartner has said there will be 20.4 billion IoT devices deployed by 2020. The truth is, no one knows the exact number. It’s possible the total will be even higher.
This proliferation could lead to unprecedented challenges in sending data to devices and back to the cloud for analysis. The sheer volume of data can escalate costs quickly and even create high latency that prevents devices from working properly, especially those that require rapid data processing.
Enter edge computing, which brings cloud resources closer to devices. Edge computing provides local data collection and processing, without traversing the internet to reach a distant central cloud or data center.
Organizations that adopt edge computing must consider an optimal mechanism for transferring data in the most resourceful and cost-effective way possible, tailored to their specific requirements.
For IoT deployments to succeed, organizations need an efficient process for delivering new applications, features, updates and security enhancements to potentially huge numbers of geographically dispersed embedded devices.
Many existing in-house update mechanisms push a copy of the device’s entire disk image — often referred to as a firmware blob — each time the device needs an update. This practice can result in huge costs, both in terms of time and bandwidth, especially for devices using cellular connectivity.
Thankfully, an alternative exists to reduce the amount of data transferred. Snaps, the universal Linux application packaging format, can use delta updates, an important capability that transmits only the differences between the new snap and the previous one. In many scenarios, such as a minor update to a single software library, organizations can save substantial bandwidth compared to the traditional firmware blob method, especially when deployed at scale.
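A back-of-the-envelope sketch in Python illustrates the savings. The chunk-hashing scheme below is a deliberately simplified, rsync-style stand-in for a real binary-delta tool (snapd uses xdelta3); the image sizes and change region are hypothetical numbers chosen to mimic a minor library update:

```python
import hashlib

CHUNK = 4096  # bytes per chunk (hypothetical granularity)

def chunk_hashes(image: bytes):
    """Hash each fixed-size chunk of an image."""
    return [hashlib.sha256(image[i:i + CHUNK]).digest()
            for i in range(0, len(image), CHUNK)]

def delta_size(old: bytes, new: bytes) -> int:
    """Bytes that must travel: only chunks whose hash changed.
    A crude stand-in for a real binary-delta format like xdelta3."""
    old_h, new_h = chunk_hashes(old), chunk_hashes(new)
    sent = 0
    for i in range(0, len(new), CHUNK):
        idx = i // CHUNK
        if idx >= len(old_h) or new_h[idx] != old_h[idx]:
            sent += len(new[i:i + CHUNK])
    return sent

# A 1 MiB "firmware image" where a minor update touches ~2% of it.
old_image = bytes(1024 * 1024)
new_image = bytearray(old_image)
new_image[100_000:120_480] = b"\x01" * 20_480  # the changed region

full = len(new_image)                      # traditional full-blob update
delta = delta_size(old_image, bytes(new_image))
print(f"full update:  {full} bytes")
print(f"delta update: {delta} bytes ({100 * delta / full:.1f}% of full)")
```

Multiply that per-device difference across a fleet of cellular-connected devices and the bandwidth (and cost) gap between full-image and delta updates becomes clear.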
Two examples of organizations that decrease their bandwidth consumption this way are Rigado and Dell. Rigado, a global provider of commercial IoT edge-as-a-service, offers a containerized application platform and a variety of wireless connectivity options. Dell’s Edge Gateways analyze data at the edge of IoT networks.
As the world heads toward the billions of IoT devices predicted by Gartner, these savings will compound as more IoT developers embrace the continuous delivery best practices already popular in cloud applications.
Smart home devices, such as connected cameras, speakers and thermostats, are proliferating. A second-quarter 2019 survey from Parks Associates reveals that 28% of U.S. households own at least one smart home device. These owners have an average of six smart devices, double the average from two years prior.
About two-thirds of device owners install the devices themselves, according to the survey. These installations are not without challenges, including physical installation, internet connectivity and configuration. Homeowners increasingly expect a streamlined installation. When problems persist, about 20% of users return the device — a lose-lose situation for consumers, retailers and device vendors. The cards found inside product packages, listing a phone number to call before you return the product, are proof of how costly these returns are.
After a successful install, devices require periodic maintenance. For example, alarm companies offer device warranty contracts on top of the monitoring contract. Discussions with service providers reveal the primary reason homeowners call for service is a low battery. Who hasn’t had a smoke alarm start beeping at some point, often at an inconvenient hour? The second most common call for service is a device that is offline, often because of a dead battery. The service contract obligates the provider to send a technician, even if it is just to replace a battery. These service calls come at great expense to the providers, which increases the cost of the service contract.
Time spent on smart device installation and maintenance makes no one happy. At home, consumers want to spend time with family and pursue passions, not replace batteries. Installation difficulty is one barrier to greater adoption of smart devices in both homes and businesses. Of course, not all devices are created equal. A smart speaker might be easy to install. A smart thermostat might be more difficult. Installing a wired security camera might even require routing wire behind sheetrock or drilling through existing infrastructure to ensure optimal coverage, which is not your typical weekend DIY project.
Smart device configuration and internet connectivity problems are receiving a lot of attention from users and vendors. One option is to send consumers and businesses pre-configured devices. For instance, some users already own an Amazon device. When they buy another, Amazon might preconfigure it with the stored Wi-Fi password. Another option is to offer apps that walk users through the configuration. Other vendors offer video chat or video tutorials. Organizations such as Handy or HelloTech offer on-demand, on-site configuration.
Long-range wireless power technology emerges
The physical installation problem is as important as the device configuration issues. With long-range wireless power technology, we now have an opportunity to rethink both the installation and the service of smart home devices. The technology can entice new end-users and simplify the life of existing users.
Long-range wireless power is designed to charge smart devices without wires or direct contact with a charger. An energy transmitter connects to a power outlet and emits energy, such as RF energy or infrared light. A wireless receiver captures this energy and converts it back into electricity. Owners of newer phone models might be familiar with the term wireless charging in the form of Qi wireless charging pads. Long-range wireless is more optimized for devices, such as smart locks, sensors and alarms. It does not need the device to be placed on a charging pad.
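As a rough illustration of why distance matters so much for RF-based charging, received power in free space falls off with the square of the distance, per the Friis transmission equation. The sketch below uses hypothetical numbers (transmit power, antenna gains and frequency are illustrative, not drawn from any specific vendor):

```python
import math

def friis_received_power_w(p_tx_w, gain_tx, gain_rx, freq_hz, dist_m):
    """Friis free-space link budget:
    P_rx = P_tx * G_tx * G_rx * (lambda / (4 * pi * d))**2"""
    wavelength = 3e8 / freq_hz  # speed of light / frequency
    return p_tx_w * gain_tx * gain_rx * (wavelength / (4 * math.pi * dist_m)) ** 2

# Hypothetical 915 MHz transmitter: 1 W output, 6 dBi TX / 2 dBi RX antennas.
G_TX, G_RX = 10 ** 0.6, 10 ** 0.2  # dBi converted to linear gain
p_rx_1m = friis_received_power_w(1.0, G_TX, G_RX, 915e6, 1.0)
p_rx_5m = friis_received_power_w(1.0, G_TX, G_RX, 915e6, 5.0)
print(f"received at 1 m: {p_rx_1m * 1e3:.2f} mW")
print(f"received at 5 m: {p_rx_5m * 1e3:.3f} mW")
```

Moving the receiver from 1 m to 5 m cuts the captured power by a factor of 25, which is why long-range wireless power suits low-draw devices such as locks, sensors and alarms rather than power-hungry electronics.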
The effect of long-range wireless power on device installation cannot be overstated. With the right wireless technology, a power cord is no longer required for smart devices. Installing power cords can be a big issue for some smart devices. For instance, indoor security cameras are often installed high in the room. Routing a power cord from an outlet to the camera can be cumbersome. Hiding the cable in the wall might be an expensive process. In some situations, such as apartment rentals, co-working spaces or rented business offices, renovation work could violate the terms of the lease.
Without cables, the end-user can place a wireless indoor camera anywhere. The power source can be on the other side of the room, with no cables, no mess and little effort. Battery-operated cameras often offer reduced functionality to preserve battery life. With wireless power, vendors don’t have to trade convenience for functionality and can continue to provide the same feature set as wired cameras.
Long-range wireless power also offers significant improvements to the maintenance of smart devices. Batteries never need replacement, and devices are always online. Wireless power creates convenience and eliminates the need for most technician visits. Manufacturers of battery-operated devices work hard to reduce the frequency of battery replacements, which often comes at the expense of power-hungry features. With wireless power supplying the energy needed, manufacturers can add those energy-demanding features back.
This is the first in a two-part blog series.
Many of the devices we interact with on a daily basis are advertised as smart — even hairbrushes, forks, and water bottles. However, most of these smart products do not currently have the capabilities to provide significant advantages aside from collecting data about how often you drink water or brush your hair. On the other hand, smart products for commercial and industrial organizations can improve operations and bottom lines.
The case for smart devices
There are already more IoT devices than there are people on Earth, and that number will increase in the years to come.
Early generations of smart computing systems were generally bulky and required centralization and protection in designated rooms. Recent innovations have resulted in many powerful devices with smaller footprints, minimal power consumption and easier installations that improve features, including environmental monitoring and predictive analytics.
For the Industrial Internet of Things (IIoT), vendors have developed smart sensors that can perform a multitude of functions, such as monitoring temperature and pressure or flagging early warnings of trouble in unstaffed locations.
In addition to these capabilities, a major plus of modern IIoT technology is the ubiquitous adoption of wireless devices and battery operation. These advancements let users rapidly and economically deploy sensors with easy installations and minimal downtime required for launch.
Once smart devices have been installed, IT professionals can find true value by transferring data to cloud-based supervisory and analytical systems. These systems provide actionable insights for users to establish a preventative or predictive maintenance program. With these types of programs, technicians can respond as quickly as possible when a problem such as an air conditioning failure or a water leak arises.
There is a catch with smart devices: they provide vast amounts of raw data without context, making it necessary to pre-process this data and boil it down to the essential information. In some cases, smart devices must perform analytics at the edge. In the field, these smart sensors often live at the extreme edge of the network, far beyond the reach of traditional networking. Organizations have found that the best practice is to process and analyze device data at the edge, because sending everything to and from the cloud introduces too much lag.
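A minimal sketch of this pattern in Python (the field names, window size and alert threshold are hypothetical): the edge node collapses a window of raw sensor samples into one compact summary record with an early-warning flag, instead of streaming every sample to the cloud.

```python
from statistics import mean

TEMP_LIMIT_C = 30.0  # hypothetical early-warning threshold

def summarize_window(readings):
    """Boil a window of raw sensor samples down to the essentials:
    one summary record plus an anomaly flag, instead of every sample."""
    temps = [r["temp_c"] for r in readings]
    return {
        "count": len(temps),
        "min": min(temps),
        "max": max(temps),
        "avg": round(mean(temps), 2),
        "alert": max(temps) > TEMP_LIMIT_C,  # flag trouble for the cloud tier
    }

# 1,000 raw samples collapse into a single record sent upstream.
window = [{"temp_c": 21.0 + (i % 10) * 0.1} for i in range(1000)]
summary = summarize_window(window)
print(summary)  # one small payload instead of 1,000 readings
```

The cloud-based supervisory system then works from these summaries and alerts, reserving full raw-data uploads for the cases that actually need deeper analysis.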
In my next article, I’ll discuss some logical steps for establishing an edge computing architecture, ideally suited for processing raw IIoT data to produce useful results from your smart devices.