Graduation season is here and that means another bright-eyed graduating class will be diving head-first into the workforce. Even before the valedictorians have dotted the I’s and crossed the T’s on their speeches, the class of 2017 is researching future workplaces. Being the first generation to grow up never knowing a time without the internet or cell phones, their future workplaces will not restrict their connectivity.
Most recent grads are looking for a mobile workplace, so organizations that want to recruit the best and brightest must deliver a modern digital experience. With nearly all work now conducted digitally and more devices than ever connecting to the internet of things, employees rely on a multitude of devices, including laptops, mobile devices and wearables, to work anywhere at any time. But with these devices come chargers. Employees must carry multiple cords with them, whether they are moving from meeting room to meeting room or working outside the office, and being tethered to walls or charging ports limits their mobility and often causes "low-battery anxiety," a nervousness that devices will run out of power when they are most needed.
According to a recent survey conducted by the AirFuel Alliance, 63% of 18- to 24-year-old respondents worry about their batteries dying at least once a day, with nearly 85% of them turning to alternative charging methods like charging mats, portable chargers and car chargers to power up on the go. So it is no surprise that the incoming workforce would like to see employers implement wireless charging capabilities in their offices. This goes beyond the class of 2017 — the survey results showed that a wireless power solution is in high demand among respondents of all ages. In fact, 70% of 25- to 34-year-old survey respondents expressed the same battery worries as the 18- to 24-year-olds and also leverage some type of alternative charging method.
Nearly all respondents want wireless charging in their next device and anticipate this capability to be adopted in work and public places within the next three years, so employers looking to hire top new talent or retain current employees will need to make this a reality. The widespread adoption of wireless charging technologies would improve the workplace by delivering true mobility, allowing employees to be able to work from home, from their desk, in a conference room at the office or on the road to a business conference without worrying if they will have a plug available to them. Not only does this streamline office aesthetics, but it can also lead to a more productive workforce. For employers, though, is it possible to make offices completely wireless in the near future?
The answer is yes. Wireless charging technology is already available and is evolving to work on a grander scale — charging anything from laptops dropped on a conference table to electric and autonomous vehicles parked on city streets, in driveways and more. Companies like Dell have already started ensuring their devices are compatible with wireless charging technology — in fact, Dell's first wirelessly powered device will be available for sale this spring, paving the way for companies to purchase and provide this technology to employees and deliver a fully wireless, more efficient workplace.
As workplaces continue to become more connected, employees will expect increased mobility both inside and outside office doors. This workplace of the future will not be possible, however, if tethered by wires. Teamwork and mobility will become simpler and allow for greater productivity when wires and plugs are no longer a factor and employees will be able to work from anywhere, transitioning seamlessly from working on their laptops at their desks, to their mobile phones from the train, to their tablets from their couches — never worried they will run out of battery.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
There’s no doubt about it — the internet of things era has begun. But businesses that want to take advantage of the much-hyped IoT trend must do much more than simply build in connectivity.
As recently as February, analyst firm Gartner projected that there would be more than 20 billion IoT-equipped products on the market by 2020, compared to a little over 6 billion in 2016. At the same time, there are indications that key roadblocks are slowing consumer demand for “smart” devices such as phones, tablets and laptops.
Businesses and manufacturers that want to buck this trend and take advantage of IoT over the long run need to prepare now.
The key is to understand that the “things” in IoT are incidental; the end of the rainbow is the consumer’s engagement. After all, people don’t buy IoT; they buy more robust, meaningful, higher quality, amazing experiences — both at work and at play.
Preparing to deliver on this promise means understanding how to approach the creation of products in a way that will improve customer service, impede competitive encroachment, provide insights on consumer behaviors, drive efficiencies and ultimately create new revenue streams.
From my experience reinventing established product categories, there are three key things manufacturers must consider if they want to design an innovative and useful IoT-enabled product:
1. Think beyond the product
While you may still believe that you are in the business of building products, the truth is that the connectivity and intelligence of IoT technologies will transform products into services.
Think about this: when a printer produces your documents, it’s a product. But when a printer detects its ink is low, alerts its operator and automatically places an order for a replacement cartridge — then it becomes a service.
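That low-ink scenario boils down to a few lines of event-driven logic. The sketch below is purely illustrative; the threshold, class names and supplier call are hypothetical stand-ins, not any vendor's actual API:

```python
# Illustrative sketch of the product-to-service shift: a connected
# printer that monitors its own ink level and reorders when it runs low.
# The 15% threshold and the Supplier object are made up for this example.

class ConnectedPrinter:
    REORDER_THRESHOLD = 0.15  # reorder when less than 15% ink remains

    def __init__(self, supplier):
        self.supplier = supplier      # object that places cartridge orders
        self.ink_level = 1.0          # 1.0 = full cartridge
        self.order_pending = False

    def report_ink(self, level):
        """Called with each telemetry reading from the device."""
        self.ink_level = level
        if level < self.REORDER_THRESHOLD and not self.order_pending:
            self.supplier.order_cartridge()
            self.order_pending = True  # avoid duplicate orders


class Supplier:
    def __init__(self):
        self.orders = 0

    def order_cartridge(self):
        self.orders += 1


supplier = Supplier()
printer = ConnectedPrinter(supplier)
printer.report_ink(0.40)   # plenty of ink: no order
printer.report_ink(0.10)   # below threshold: one order placed
printer.report_ink(0.08)   # still low, but an order is already pending
print(supplier.orders)     # 1
```

The key design point is that the action (the reorder) is triggered by the device's own telemetry, which is exactly what turns the product into a service.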
Just consider a few examples of IoT in action:
- A solution provides farmers with a consolidated web dashboard that they can use to spot crop issues and remotely monitor farm assets and resource usage levels.
- Smart thermostats that use sensors, weather forecasts and activity in the home to reduce monthly energy usage while keeping homeowners more comfortable.
- An office printer that communicates proactively with IT to report which parts are wearing out and how to optimize based on employee usage, significantly improving the printer user experience.
You should start by reimagining and reinventing what your products can be and do. Then, embed the intelligence and operability needed to make your vision a reality. By doing so, you will greatly and authentically enhance your consumer experience.
Not only will customer experience be enhanced, but you will gain an insider’s view of how consumers are using your devices. These insights let organizations anticipate what customers may need, perhaps even before they are aware of it themselves — a true sales crystal ball.
2. Safeguard everything, inconspicuously
Security is a key obstacle to IoT adoption. That’s with good reason: 96% of security professionals expect an increase in IoT breaches this year. TVs, DVRs and even washing machines all present risk to consumers.
IDC reported that 35% of recent security breaches are related to print security deficiencies, which could be avoided with proper security solutions. Devices of all types are being used to launch distributed denial-of-service and other web attacks, making it imperative that smart products are protected against vulnerabilities by the right security solutions.
Security for IoT devices needs to go beyond the actual product — it must be layered into software applications, and peripheral elements such as network security, authentication, encryption, PKI, security analytics and API security must also be addressed.
Equally, it’s important that security measures don’t burden, confuse or alarm the user. Security should be as discreet as it is impenetrable.
3. Make smart, simple
An Accenture report revealed that two-thirds of consumers experienced a challenge when using a new IoT device. Challenges can lead to limited use of functionality and, more seriously, device abandonment. That’s particularly true when it comes to new and emerging product categories.
When equipping a device with IoT capabilities, you simply can’t afford to make it more complicated. Frustration is not the experience you are aiming for. According to some industry observers, it’s not technology innovation that will keep companies from being successful in producing IoT devices, but that “they’ll fail to recognize the value of design in connected product development.”
So, when developing products for IoT, it's critical that design is treated as every bit as important as the technology, and that the two are carefully, deliberately and seamlessly integrated. In fact, design will prove to be the silver bullet in letting the functionality of your products — or should I say services — shine.
Technology has often been dubbed the great equalizer for SMBs, empowering them to compete with industry goliaths more effectively than they could in the past. The internet of things is the latest technological enabler that companies are eyeballing, and it is poised to change the workplace — especially as the number of devices deployed by businesses continues to grow.
The swelling IoT enthusiasm is understandable. Enterprises are using devices for every facet of business operations, and as sensors have become more affordable, companies are better equipped to utilize them to record everything from human interactions to machine activity. This gives businesses access to an unprecedented amount of raw data, which can be fed into their analytics to drive improvements throughout the company.
Whether IoT is enabling manufacturers to optimize factory operations or helping retailers improve the customer experience, the IoT hype has been fueled by the wild success of some of the early adopters. Additionally, the high-level IoT concepts that have graced news headlines recently are further driving excitement — stories about Tesla, Uber and traditional car companies like Ford building self-driving cars that rely on IoT or moves like Amazon extending Alexa to new products and devices.
But most of this hype is being driven by the investments of industry titans — the Googles, Amazons, Fords and eBays of the world. What about smaller businesses? Enthusiasm may be high, but you don’t often read about many small or midsize businesses taking full advantage of the wide range of data available to them to fuel IoT projects. If you could put numbers to the IoT success seen at small businesses, it’s likely that the facts wouldn’t match the fervor.
The IoT conundrum: SMBs face a data disadvantage
The big challenge many businesses face with fully utilizing IoT is that the definition of their data and analytics strategy is ambiguous in the first place. Some enterprises will say they have a reporting and analytics strategy, when all they actually have in place is a data visualization solution like Tableau — or worse, their strategy constitutes some sort of basic spreadsheet reporting. In these cases, their IoT initiatives are doomed from the start.
Even when businesses do have proper strategies, most lack the right personnel — data scientists — to successfully manage the process. Data scientists play a vital role in data and analytics strategies as individuals who can direct, interpret and validate the output. New tools and technology can help businesses process data, but the personnel are the brains behind the scenes and they aren’t easily replaced. That’s why some data scientists are commanding salaries and benefits of more than $250,000 annually.
The issue is only compounded further by the fact that developing and executing on a comprehensive data and analytics strategy is a real challenge. You need to be able to manage data effectively, from collection and access to cleansing and preparation. Then you must be able to determine which analytical model will yield the best predictions, and that requires data scientist expertise to train, test, evaluate and score the models. Once the data and model is set, you need to figure out how to operationalize your analytics and use them in production. Finally, the process itself must always be under review in the face of changing business conditions.
At the end of the day, all of these different elements of successful IoT deployment cost money and resources, and that’s why the real promise of IoT has been limited to industry titans thus far.
A new approach to IoT
When it comes to designing a data and analytics strategy that will maximize the value of IoT, every business faces its own unique challenges. However, the resource problem is persistent across most SMBs, and the answer is not as simple as hiring a large team of data scientists — that's just not feasible. While analytics as a service can provide support for self-contained use cases like text-to-speech, "outsourcing" the entire process around strategic decisions is a no-go for most organizations as well. Even the influx of new tools that make data scientists more effective does little to democratize analytics for smaller enterprises.
One possible answer is automation. Now, you are probably thinking "isn't that what the existing tools do already?" To make genuine advances in this area, the analytics process must be made capable of self-learning — not just of applying learning to the output of the analytics. This next-generation approach will apply meta-learning principles to machine learning, where learnings from one machine or entity are applied automatically to other machines or entities.
Meta learning will minimize the cost of running machine learning experiments by capturing learnings from prior machine learning experiments in the form of metadata, and then applying these learnings for future experiments. This is critical for several reasons:
- You can greatly increase the accuracy of the analytical models, which will directly impact your business outcomes.
- You can achieve faster outcomes, which makes your business more agile. Analytics are only useful if businesses get them while they are relevant, and improved agility helps enterprises realize and act on information even sooner.
- You can gain better control of your infrastructure and rein in the cost required to run the machine learning algorithms. This is especially important, as the volume of data available to businesses is growing exponentially alongside the proliferation of IoT devices.
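The meta-learning loop described above can be sketched minimally: log each past experiment as metadata (cheap dataset summary statistics plus the configuration that worked best), then warm-start a new experiment from the most similar past dataset instead of searching from scratch. The meta-features, distance metric and configurations below are made up for the sketch:

```python
# Minimal illustration of capturing learnings from prior ML experiments
# as metadata and reusing them to warm-start future experiments.
import math

experiment_log = []  # (meta_features, best_config) pairs from past runs

def meta_features(n_rows, n_cols, class_balance):
    """Describe a dataset with a few cheap summary statistics."""
    return (math.log10(n_rows), math.log10(n_cols), class_balance)

def record_experiment(features, best_config):
    experiment_log.append((features, best_config))

def warm_start(features):
    """Suggest the config that worked best on the most similar past dataset."""
    if not experiment_log:
        return None
    def distance(entry):
        past, _ = entry
        return sum((a - b) ** 2 for a, b in zip(past, features))
    _, config = min(experiment_log, key=distance)
    return config

# Two prior experiments, logged as metadata.
record_experiment(meta_features(10_000, 20, 0.5), {"model": "gbm", "depth": 6})
record_experiment(meta_features(500, 300, 0.9), {"model": "linear", "C": 0.1})

# A new small, wide, imbalanced dataset: reuse the closest past config.
print(warm_start(meta_features(800, 250, 0.85)))  # {'model': 'linear', 'C': 0.1}
```

A production system would use richer meta-features and learned similarity, but the principle is the same: the metadata, not the raw data, carries the learning forward.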
Given the shortage of analytics skills — alongside the shortage of data skills — many businesses aren’t taking full advantage of the promise of IoT. However, if the industry can democratize analytics, more businesses may be better poised to capitalize. This next-generation approach essentially automates the data science lifecycle, greatly reducing the need for a high-cost and high-demand resource not available to most organizations.
The internet of things has simplified daily business operations for several industries to an almost immeasurable degree. It is on track to become another business-critical component of organizations both large and small, one that will require support, investment and continuity management.
The ability for businesses to connect and integrate all of their operations with IoT can simplify and resolve current problems, while also opening up new opportunities for growth. Business owners and department leaders will have more data available to them than ever before and, along with continuity managers, will need to understand how they will use the IoT technology, what their growth rate could be, what requirements and investments will be necessary and, perhaps most importantly, how to keep operations running when something out of your control inhibits your ability to get work done.
With all of this interconnectivity and data available at our fingertips, it can cost considerable time and money when business operations are disrupted by a network outage. In fact, a report from Gartner has estimated the cost of an outage for the average business at roughly $300,000 per hour. If the disruption lasts a significant amount of time, it can also cause damage to a company’s reputation, employee morale and customer loyalty. Because businesses of all sizes rely on landline network technology for their operations, an outage interrupting the workday is a matter of when, not if. This is why it is important to have a business continuity solution to ensure that your operations can keep functioning seamlessly even when there is a network outage.
A majority of landline network outages are caused by nearby occurrences, from a routine maintenance blunder to a weather disaster — many times by no fault of the company. These events seldom impact the wireless networks, so cell service is almost always available when landlines are down. Having a business continuity solution that uses a wireless connection creates a network redundancy and helps safeguard against a loss in your network connection.
A wireless business continuity solution can automatically switch over to a wireless network when a landline goes down. High-speed wireless technology offers always-on, cost-effective connectivity and for businesses with limited IT resources, wireless continuity solutions are easily scalable, easily managed and easily set up. Because of this, it is not uncommon for a wireless business continuity system to pay for itself after just one incident.
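The switchover logic such a gateway performs can be sketched in a few lines. This is a simplified illustration, not any vendor's implementation; the probe function stands in for a real health check, such as a ping to an upstream gateway:

```python
# Sketch of automatic failover: probe the primary (landline) link each
# monitoring cycle, carry traffic over wireless during an outage, and
# fail back once the landline recovers.

class FailoverGateway:
    def __init__(self, probe_primary):
        self.probe_primary = probe_primary  # returns True if landline is up
        self.active_link = "landline"

    def health_check(self):
        """Run one monitoring cycle; return the link now carrying traffic."""
        if self.probe_primary():
            self.active_link = "landline"   # fail back when primary recovers
        else:
            self.active_link = "wireless"   # fail over during the outage
        return self.active_link

# Simulate an outage followed by recovery.
link_states = iter([True, False, False, True])
gw = FailoverGateway(lambda: next(link_states))
print([gw.health_check() for _ in range(4)])
# ['landline', 'wireless', 'wireless', 'landline']
```

Real gateways add hysteresis (several consecutive failed probes before switching) to avoid flapping between links, but the core loop is this simple.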
Network outages can impact almost every aspect of business processes, including online ordering, in-store transactions, inventory management, supply chain, human resources and customer service. The costs of network downtime combined with the high risk of one happening underscore the need for a wireless business continuity system.
A business continuity system that uses standalone gateways supported by a cloud management tool provides the most flexibility and functionality because it helps ensure continuity while increasing operational efficiency. There is no need to travel to different locations to inspect, monitor or maintain equipment because it is handled wirelessly and remotely from any site.
While no solution can prevent all network interruptions, minimizing downtime is critical to business operations.
The internet of things has changed the way we view and use technology. As consumers, we have come to expect connectivity and access to information wherever we go. This has spread into many industries. As the industrial internet of things rises, it is driving connectivity for all assets in an enterprise network. This is even true for the access layer — or the outermost edge of the enterprise network that we often see in remote, geographically dispersed locations where mission-critical sensors or devices reside.
Today’s IT decision-makers are tasked with finding technology that connects all of these things together to provide a constant stream of data from the access layer back to the business office. As these decision-makers work to bridge the connection from the OT field network to the IT business office, they are inundated with technologies using a variety of different protocols and spectrums.
Anyone who is new to traditional OT networking technology may not be as familiar with the benefits of frequency-hopping spread spectrum (FHSS) technology. Despite the rapidly changing technology landscape, it remains an optimal choice for long-range communication in nearly any environment or location. Decision makers also may not be aware that FHSS can help alleviate some of the IT/OT convergence pain points and accommodate the growing number of sensors in the field. FHSS is rugged and reliable, and today’s options offer long-range, high-speed, high-throughput options that are equipped to handle modern data needs.
Unfortunately, a number of myths continue to circulate among those who may not be familiar with the technology's benefits. Some of the most common involve security, saturation, range, compatibility, interference and obstruction. It is important to dispel these myths so that modern decision-makers can treat FHSS as a viable option in their communication network arsenal. The sections below examine and debunk each of them in turn.
In IIoT networks, such as those that provide data for smart cities, security is of utmost importance when it comes to selecting a communication link. Contrary to commonly held misperceptions, spread spectrum technology was designed with security in mind. Decision-makers unfamiliar with its history should know that it was originally invented for the U.S. Navy during World War II to prevent the Germans from “jamming” American radio transmissions for radio-guided torpedoes.
The inventor of the frequency-hopping radio was 1940s movie star Hedy Lamarr. The original radios contained a roll of paper, slotted like a player piano, to trigger channel switching. Lamarr's friend, inventor and musician George Antheil, designed the first successful synchronization device to bring her idea to fruition. They filed for a U.S. patent on this first "secret communications system" in 1941; their original system used 88 frequencies.
Not only was the technology designed to stay secure, but modern FHSS technology offers rapid frequency switching and additional layers of security. Modern switching is controlled by embedded software code that enables a radio to change frequencies up to 200 times per second, and use more than 100 channels.
- The technology behind today’s spread spectrum radios is so complex that anyone attempting to intercept a signal would have to match 186,000 possible parameters in order to be on the same channel with the target radio — and then would be in sync for only about 1/100th or 1/200th of a second.
- In addition to matching parameters, intercepting data would also require overcoming the encryption used in spread spectrum radios, which provides significant additional security. Some manufacturers leverage AES encryption, most frequently with 128- or 256-bit keys.
Not only is it highly unlikely for signal jamming or interference with FHSS because of the rapid signal hopping, but many of today’s technology providers have added standards-based AES encryption as an extra layer of security. In networks where data needs to be transferred over long distances from the sensor back to the server, FHSS communications are actually a strong candidate from a security standpoint.
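The synchronization trick at the heart of FHSS can be illustrated in a few lines: both radios derive the same pseudo-random hop sequence from a shared seed, while anyone without the seed falls out of step almost every dwell period. The channel count and hop rate below mirror the figures cited above but are otherwise illustrative (a real radio uses a vendor-specific sequence generator, not Python's PRNG):

```python
# Illustration of FHSS synchronization: a shared seed yields identical
# hop sequences at both ends of the link, and nothing usable without it.
import random

NUM_CHANNELS = 100      # e.g. sub-channels across the 902-928 MHz band
HOPS_PER_SECOND = 200   # i.e. a dwell time of 5 ms per channel

def hop_sequence(shared_seed, hops):
    """Deterministic pseudo-random channel sequence derived from a seed."""
    rng = random.Random(shared_seed)
    return [rng.randrange(NUM_CHANNELS) for _ in range(hops)]

# Both ends of the link derive the same sequence from the network key...
transmitter = hop_sequence(shared_seed=0xC0FFEE, hops=HOPS_PER_SECOND)
receiver = hop_sequence(shared_seed=0xC0FFEE, hops=HOPS_PER_SECOND)
print(transmitter == receiver)   # True: the radios stay in sync

# ...while an outsider with the wrong seed lands on the wrong channel
# nearly every dwell period and cannot follow the conversation.
intruder = hop_sequence(shared_seed=0xBAD5EED, hops=HOPS_PER_SECOND)
print(transmitter == intruder)   # False
```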
The sheer number of sensors being deployed today (some estimate it will reach the billions over the next several years) means a corresponding increase in the number of FHSS radios used to transmit sensor data — especially from large, complex and geographically dispersed networks. When using an unlicensed spectrum for FHSS technologies, a common fear is that a "shared" frequency will become saturated and unusable as more companies and industries adopt it. However, if there is a saturation point, it has yet to be identified. In some areas, thousands of spread spectrum radios are successfully delivering data to multiple end users without conflict or data loss.
Over the years, considerable research and development went into making spread spectrum radio technology work in close proximity and share the same frequency band. To accomplish this goal, the radio networks are programmed to use separate frequencies. Each network is programmed to “hop” to a different frequency than other radio networks in the area. This hopping allows construction of distinct communication networks that don’t conflict with other nearby networks.
This is especially important and beneficial with the high volumes of data being pulled from different sensors today. When the network is properly programmed, the IT team does not have to worry about saturation points in the network that could prevent important data from being accessed.
Another common myth about spread spectrum technology is that it is useful only for short-range communication, rather than for deploying a complete communication network solution. This misconception stems from the legal limit on spread spectrum radio output power. The U.S. Federal Communications Commission — and agencies in many other countries — allow a spread spectrum radio output of only 1 W of radiated power at the radio, and 4 W at the antenna.
When repeaters are factored into spread spectrum networks, long-range communication becomes possible. Unlicensed FHSS solutions are able to use multiple repeaters to extend range or improve coverage around obstacles. Some device manufacturers place no limits on the number of repeaters included in a single network and there are cases of 100 repeaters being used in the same network.
Spread spectrum technologies have the ability to operate as both a slave and a repeater simultaneously. This eliminates the need for multiple dedicated repeaters and reduces installation costs. The radio can multitask as a slave unit at any RTU (remote terminal unit), EFM (electronic flow meter), or PLC (programmable logic controller), sending data back to the host, and as a repeater for other devices further down the network hierarchy.
Modern programmable FHSS radios can also host software applications, enabling data collection that streamlines decision-making, provides predictive analytics for maintenance and support, and allows organizations to automate more at the network’s edge.
If decision-makers are looking for a rugged, long-range solution that is suitable for networks in almost any environment, FHSS is a relevant option. Today there are also programmable, IIoT-ready options that offer long range, high speed and high throughput required for modern connectivity demands.
Many people believe that if they install a base of licensed radios, they must stay with the same manufacturer and model of radio they originally purchased. This is not true — it is possible to mix spread spectrum radios into an existing licensed radio system to take advantage of features such as multiple-repeater functionality. FHSS is specifically designed for rugged, reliable operation in even the most challenging environments, which makes it well suited to incorporation into just about any IIoT network, no matter what technology already exists.
Additionally, today’s IIoT landscape is rapidly and constantly changing. Systems require more interoperability between disparate and oftentimes outdated technologies and systems. When looking at the needs of an IIoT network and the shift towards digital operations, technology needs have evolved. A decision-maker often faces a legacy field communication network that needs to somehow tie into the modern IT system. FHSS technology can and does provide the necessary communication link that supports interoperability, as well as new technology and software solutions designed to bring all the disparate systems under a single toolchain.
As a new age of technology continues to expand and innovate, advancements such as programmability at the network's edge are beginning to surface. This will introduce the ability to leverage third-party applications at the edge. These apps can enable anything from advanced data collection to enhanced cybersecurity and more. Modern FHSS solutions are designed with programmability in mind in order to remain relevant as systems continue to evolve.
Another common myth is that spread spectrum and other radio systems will interfere with each other — especially in modern networks where there is a proliferation of devices in the field. This is simply untrue, because spread spectrum technology is designed to expect interference. One of the most common spread spectrum bands is 902-928 MHz. Many governments have set aside this band specifically for FHSS devices, and the rules are structured to allow the band to be shared by multiple users.
The official designation for this band is ISM (Industrial, Scientific and Medical), because it was first reserved internationally for non-commercial use of RF electromagnetic fields. Some countries allow 902-928 MHz and also 2.4-2.483 GHz, while others allow one of these bands, but not the other.
Not only was the 900 MHz spectrum designed for multiple users, but when combined with the constant frequency-hopping nature of FHSS, it is highly unlikely that the devices will interfere with each other. Another concern is that other devices — such as cellular technology — will interfere with the 900 MHz spectrum. A “band pass filter” is a simple solution that will block any noise or interference that is outside the 902-928 MHz range.
In an IIoT network where real-time data is a top priority, it's important for FHSS technology to avoid obstructions. There is a myth that spread spectrum radios are more restricted by line of sight (LOS) than other communications devices. The reality is that while line of sight is always preferred, spread spectrum radios can successfully pass data through obstacles such as buildings and trees, and even over hills, albeit to a lesser degree. Some FHSS radios are also ruggedized for performance in challenging outdoor networks and extreme weather conditions.
In challenging environments, obstacles introduce more “attenuation” into the radio signal path, or resistance that reduces signal strength. The greater the distance, the greater the attenuation. Attenuation also increases when obstacles are encountered. For example, the more trees and foliage on the signal path, the higher the attenuation.
A radio that can transmit 20 miles with clear line of sight may not be able to do so if that path goes through a dense forest. The signal loss due to distance, in addition to signal loss or attenuation due to the forest, will be too great. While the radio can transmit through one or the other, the combination of the two may be too great to overcome.
Common terms for this degradation of signal are "path loss" and "signal fade." In outdoor applications, path loss can be computed using software that takes into account topography, frequency, antenna type and signal output strength. In modern networks, the number of endpoints is dramatically increasing and the distance between communications devices is shrinking, which can help reduce attenuation. However, it is more important than ever to obtain data from the access layer, which is often remote. As a result, long-haul networks still need to be designed to account for LOS in order to get data where it needs to go most effectively.
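As a first-order illustration of that computation, the standard free-space path loss formula relates signal loss to distance and frequency, and a crude per-obstacle allowance can be added on top. Real path-study software models terrain and foliage in far more detail; the figures below (transmit power, receiver sensitivity, foliage loss) are illustrative assumptions:

```python
# First-order link budget: free-space path loss plus a rough allowance
# for obstacles, compared against the receiver's sensitivity floor.
import math

def free_space_path_loss_db(distance_km, freq_mhz):
    """Standard free-space path loss in dB (distance in km, frequency in MHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def link_budget_ok(tx_power_dbm, rx_sensitivity_dbm, distance_km, freq_mhz,
                   obstacle_loss_db=0.0):
    """True if the received signal clears the radio's sensitivity floor."""
    received_dbm = (tx_power_dbm
                    - free_space_path_loss_db(distance_km, freq_mhz)
                    - obstacle_loss_db)
    return received_dbm >= rx_sensitivity_dbm

# A 1 W (30 dBm) radio at 915 MHz with a -110 dBm receiver, over ~20 miles:
print(link_budget_ok(30, -110, distance_km=32, freq_mhz=915))   # True: clear LOS
print(link_budget_ok(30, -110, distance_km=32, freq_mhz=915,
                     obstacle_loss_db=30))                      # False: dense forest
```

This mirrors the 20-mile example above: the path clears with line of sight, but the added foliage attenuation pushes the received signal below the sensitivity floor.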
A common mistake of novice end users is not performing a "path study" prior to installation. Performing a path study before a project begins produces a network design that takes every obstacle into account and ensures a robust communications system, regardless of LOS limitations. While obstruction poses a challenge for FHSS, there are numerous solutions that can be incorporated after the path study.
The rapidly changing technology landscape has caused major shakeups in the way OT and IT networks operate. However, FHSS remains one of the most reliable and ruggedized options for modern IIoT communications. It has the range, speed and throughput necessary for collecting, monitoring and transporting important data from the access layer to the business office.
As decision-makers work to overcome the challenges of adding and connecting sensors at every network endpoint, they must select technology that will optimize business operations. Modern FHSS technology is an ideal candidate for bridging the OT/IT divide. However, in order to recognize the value of FHSS technology, it is important to separate the myths from reality. From there it is possible to see the value of FHSS for networks now and in the future.
IBM Master Inventor Lisa Seacat DeLuca’s recently published book was inspired by IoT, near field communication (NFC) and two sets of twins under the age of four.
I had the chance to speak with Lisa about IoT and NFC, along with her new children’s book, The Internet of Mysterious Things. With 70 patents already and another 300 pending, plus an active speaking, work and family schedule (the two sets of twins under the age of four are hers!), I’m glad we could find some time to chat. Here’s what she had to say:
What is your new book, The Internet of Mysterious Things, about and why write it for children?
I’m a mom to two sets of twins, my oldest set are four years old and always asking me how things work. More often than not those “things” are IoT devices. So I’d explain what I knew and then go and read up on what I didn’t so I could help teach my children. Then I thought that the perfect children’s book would explain how technologies worked through the use of mysterious creatures and then allow parents to learn how the technologies actually worked through a companion website.
How did you integrate NFC tags into the story?
To make it easier on the parents to pull up more information about the story, each page has an NFC tag hidden within the artwork. By tapping your phone to the NFC tag, a website specific to that page launches on your phone, going into more detail about the technology. At first I was only going to have a few tags, but NFC was such a perfect technology for bringing the book to life that I ended up putting one on each of the technology pages. There are 10 tags in the book. NFC and IoT work very well together.
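The tap-and-launch behavior Lisa describes comes from the NDEF (NFC Data Exchange Format) URI record stored on each tag. As a minimal sketch, here is how a URL can be packed into a single short NDEF URI record in Python, following the NFC Forum layout in which common scheme prefixes are abbreviated to one byte:

```python
# NFC Forum URI prefix abbreviations (partial table for illustration)
URI_PREFIXES = {0x01: "http://www.", 0x02: "https://www.",
                0x03: "http://", 0x04: "https://"}

def encode_uri_record(url: str) -> bytes:
    """Build a single short NDEF URI record for the given URL."""
    for code, prefix in URI_PREFIXES.items():
        if url.startswith(prefix):
            payload = bytes([code]) + url[len(prefix):].encode("utf-8")
            break
    else:
        payload = bytes([0x00]) + url.encode("utf-8")  # 0x00 = no abbreviation
    # Header byte 0xD1: MB=1, ME=1, SR=1 (short record), TNF=0x01 (well-known)
    # followed by type length (1), payload length, and type "U" for URI.
    return bytes([0xD1, 0x01, len(payload)]) + b"U" + payload

record = encode_uri_record("http://internetofmysteriousthings.com")
print(record.hex())
```

When a phone reads a tag carrying a record like this, the OS recognizes the well-known "U" type and opens the URL, with no app or barcode scan required, which is the frictionless quality Lisa highlights below.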
What do you find most fascinating about NFC technology?
NFC is frictionless. You don’t need to open up an app first and then scan a barcode. You just tap and go. That’s a big advantage over other technologies. NFC is easy and it’s just now starting to grow in popularity. I’m crossing my fingers that Apple opens up support for developers to read and write to NFC tags. As soon as they do we’re going to see an explosion of NFC use cases that’ll bring all of our things — IoT devices included — to life.
I’m sure your children and their friends have read your books. Do you think your books will help spark their interest in science and technology?
Absolutely! My kids are still young, but the beauty of the book — and IoT — is it’s interesting for both young children and older children because they can dive in and learn as much about the technologies as they’d like. My children love the artwork and are beginning to soak up some of the interesting facts about IoT.
What book are you reading right now and why?
Unfortunately most of my reading these days is stackoverflow.com for debugging and sites like IBM DeveloperWorks to learn about new technologies. The last book I read for fun was Ready Player One. I can’t wait for the movie.
Lisa’s book ships in May and can be ordered directly from http://internetofmysteriousthings.com.
At 34 years old, Lisa Seacat DeLuca is the most prolific female inventor in IBM history and her accomplishments have been recognized widely throughout the industry. She is an IBM master inventor and holds 70 patents with another 300 pending. Recently, she was inducted into Women in Technology International Hall of Fame and named one of the Most Influential Women in IoT, and before that DeLuca was named one of MIT’s 35 Innovators Under 35 and one of Fast Company’s 100 Most Creative People in Business. She is also the mother of two sets of twins under four years of age. Near field communications, and all the possible wonderful innovations it could enable, is a particular passion of hers. For more information visit: http://lisaseacat.com/
IoT devices going into military/aerospace, medical and/or highly specialized industrial applications demand ultra-high reliability so they operate properly according to their tight specifications. That means the small IoT printed circuit boards (PCBs) must be ultra-clean and virtually free of any contaminants or chemical residues.
But to understand how best to achieve that IoT reliability, we first have to look at PCB surface finishes, or coatings, because tiny residues hidden there can sabotage it. All printed circuit boards, whether large or small like IoT PCBs, have these surface finishes. Their purpose is not only to protect the board’s exposed copper, but also to keep the copper from oxidizing and deteriorating.
There are a number of PCB surface finishes. However, the most prominent are hot air solder leveling or HASL, immersion tin, organic solderability preservative or OSP, electroless nickel immersion gold or ENIG, and immersion silver. Of those, ENIG is the surface finish most popularly used for the rigid-flex or flex circuit PCBs that make up an IoT device. It is favored because it offers a number of key benefits, among them surface flatness and compatibility with fine-pitch devices. Pitch is the distance between a device pin or lead and its neighboring pin or lead. Considering their small size and advanced electronics functionality, IoT devices demand highly advanced circuitry packaged in ultra-small packages like micro ball grid arrays or µBGAs with 0.2 to 0.4 mm pitch between leads, only a few times the diameter of a human hair.
Immersion silver, if used as a surface finish, has a tendency to oxidize and is sensitive to even the tiniest flux or chemical residue remaining after the IoT PCB is assembled and cleaned using regular cleaning techniques. Oxidation corrodes the surface mount or SMT pads that are in contact with the leads of the electronic circuitry, and it leaves a white residue as well.
If an IoT device is designed to operate at high frequencies or high speeds, as many medical, industrial and military/aerospace IoT deployments do, those white residues, as well as flux residues or tiny splashes of solder, can adversely affect signal transmission and keep the IoT device from performing at its intended frequencies and speeds.
This is where ionographic cleaning comes into play. It uses a chemical solution, a combination of isopropyl alcohol and de-ionized water known as IPA solution, to perform the ultra-clean process for IoT rigid-flex and flex circuits. The solution is conductive by nature and collects all the contamination and leftover residues, leaving the IoT PCB extra clean.
This cleaning and testing involves submerging the IoT rigid-flex or flex circuit boards in the IPA solution for about 5 to 15 minutes. The boards are left submerged long enough for the solution to fully circulate around the circuitry.
Then, the ionographic cleaning system automatically tests the IoT rigid-flex or flex circuit boards for ionic contamination. Changes in the solution’s resistivity quantify the board’s relative cleanliness and any remaining residue, so the data out of the Ionograph system leaves no room for doubt.
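To make the resistivity-to-cleanliness step concrete, here is a simplified Python sketch of the kind of calculation an Ionograph-style tester performs. The calibration constant and the sample numbers are placeholders for illustration only; real instruments are calibrated against known NaCl standards and report contamination as micrograms of NaCl equivalent per square centimeter:

```python
def nacl_equivalent_ug_per_cm2(r_start_mohm_cm, r_end_mohm_cm,
                               volume_ml, board_area_cm2,
                               cal_ug_per_us_per_ml=0.064):
    """Estimate extracted ionic contamination as an NaCl equivalent.

    Conductivity in uS/cm is the reciprocal of resistivity in Mohm-cm.
    The rise in conductivity as ions dissolve off the board is scaled by
    a (placeholder) NaCl calibration factor, the solution volume and the
    board's surface area.
    """
    delta_conductivity_us = (1.0 / r_end_mohm_cm) - (1.0 / r_start_mohm_cm)
    extracted_ug = delta_conductivity_us * cal_ug_per_us_per_ml * volume_ml
    return extracted_ug / board_area_cm2

# Hypothetical run: fresh IPA/DI solution at 50 Mohm-cm drops to 20 Mohm-cm
# after extracting residues from a 100 cm^2 flex circuit in 500 mL:
level = nacl_equivalent_ug_per_cm2(50.0, 20.0, 500.0, 100.0)
print(f"~{level:.4f} ug NaCl equivalent per cm^2")
```

The key idea is simply that dissolved ionic residue makes the solution more conductive, so a larger resistivity drop means a dirtier board.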
The recent WannaCry ransomware attack may have been an eye-opener for many, but we shouldn’t be surprised. It certainly wasn’t the first attack of this magnitude, and it won’t be the last either. Given the success of these attacks at exploiting security vulnerabilities, and the financial payoff for limited risk, these stories will only become more common. WannaCry was mitigated by a relatively straightforward domain registration that triggered its kill switch; next time, however, there might not be such an easy fix.
The IoT world is no stranger to headlines about malware that propagates through email and infected file attachments. Hackers have built vast botnet armies with relatively straightforward tools. One of the first examples of these botnets was the Mirai malware, discovered in late 2016.
MalwareTech analyzed the code and found that Mirai relied on finding networked devices running outdated versions of Linux. This included everything from home routers to networked baby monitors, security cameras and more. As long as the devices were functioning properly, most of them had been installed and then forgotten. The software involved would compromise devices by going through a list of 60 common passwords, one at a time. Devices that were still using their default setups would then be infected.
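The weakness Mirai exploited can be illustrated defensively: given an inventory of your own devices, flag any that still accept a factory-default login. The credential list and device records below are illustrative examples, not Mirai's actual dictionary:

```python
# Small illustrative list of factory-default logins (real audit lists are
# far longer and vendor-specific).
DEFAULT_CREDENTIALS = [
    ("admin", "admin"), ("root", "root"), ("admin", "1234"),
    ("root", "default"), ("user", "user"),
]

def flag_default_credentials(devices):
    """Return the hosts still using a known factory-default login."""
    defaults = set(DEFAULT_CREDENTIALS)
    return [d["host"] for d in devices
            if (d["username"], d["password"]) in defaults]

inventory = [
    {"host": "camera-01", "username": "admin", "password": "admin"},
    {"host": "router-02", "username": "admin", "password": "x9!kQ#2f"},
    {"host": "dvr-03",    "username": "root",  "password": "default"},
]
print(flag_default_credentials(inventory))  # camera-01 and dvr-03 are at risk
```

Mirai automated exactly this check against devices it found on the open internet; any hit became a new bot.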
Once infected, the devices could be used for several different purposes, the most well-known being a distributed denial-of-service (DDoS) attack. The scale of the traffic these botnets generate is impressive: Mirai launched an attack that exceeded 600 gigabits per second on the KrebsOnSecurity website, followed by an attack that exceeded 1 terabit per second against OVH, a French internet service provider. Most of the infected device owners had no idea they were compromised until it was too late, and probably still don’t know their devices are part of a botnet.
What’s even more concerning is that the source code for Mirai has been made available online. While that does allow groups like MalwareTech to understand how it’s achieved these attack results, it also paves the way for others to take advantage of the same exploits.
Shortly after Mirai, another huge IoT-driven DDoS event hit the Imperva Incapsula network: a 20-minute attack that peaked at 400 gigabits per second, followed by a 17-minute attack that peaked at 650 gigabits per second. This new malware was named “Leet” after a character string in its payload.
With new devices being rolled out constantly for both home and business automation, the pace of attacks being generated by IoT botnets will also continue to increase. With published source code examples, motive and opportunity, this will become a more prominent feature that needs to be managed.
The wide variety of niche IoT offerings is impressive and can solve for many different industry- or function-specific needs. With so many providers offering so many solutions, your business needs to have a uniform security platform to remain uncompromised. IoT security needs to manage not just devices, but also communication, data storage and lifecycle solutions.
Unfortunately, we’ve seen many organizations take the same “deploy and forget” approach to their IoT strategy. It’s initially configured to support business goals and sent into the field with the hope that initial device security settings are strong enough. So long as they’re operating, these enterprises assume there’s no need to actively manage these devices from a security standpoint.
This laissez-faire attitude, while cheaper and easier to practice in the short term, exposes that organization to hardware security vulnerabilities and data loss and/or corruption. It also creates the potential risk of added endpoints to existing or new botnets.
For organizations, it’s much easier in the long term to catalog and inventory devices (everything from hardware type to firmware version) before deployment. Basic security measures like changing the default password go a long way toward locking down a solution. Once properly catalogued, securing an IoT device becomes a much easier task: security vulnerabilities can be identified quickly, and fixes can be found, distributed and, in many cases, applied even to assets that aren’t stationary.
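A minimal sketch of that catalog-and-audit approach might look like the following; the hardware names, firmware versions and vulnerability list are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Device:
    device_id: str
    hardware: str
    firmware: str
    default_password_changed: bool

# Hypothetical list of (hardware, firmware) pairs with known vulnerabilities.
KNOWN_VULNERABLE = {("acme-cam-2000", "1.0.3"), ("acme-cam-2000", "1.0.4")}

def audit(fleet):
    """Return (device_id, reason) pairs that need attention."""
    findings = []
    for d in fleet:
        if not d.default_password_changed:
            findings.append((d.device_id, "default password still set"))
        if (d.hardware, d.firmware) in KNOWN_VULNERABLE:
            findings.append((d.device_id, f"vulnerable firmware {d.firmware}"))
    return findings

fleet = [
    Device("cam-17", "acme-cam-2000", "1.0.3", default_password_changed=True),
    Device("cam-18", "acme-cam-2000", "1.1.0", default_password_changed=False),
]
for device_id, reason in audit(fleet):
    print(device_id, "->", reason)
```

With this kind of inventory in place, a newly disclosed vulnerability becomes a lookup rather than a scramble to figure out what is deployed where.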
While this may be the more time-consuming and expensive route to take today, it goes a long way toward preventing these types of attacks and keeping networks, data and devices safe.
Enterprises and service providers building IoT solutions evaluate myriad software and hardware options to assemble an IoT stack. Typically, middleware requirements are met by using an IoT application enablement platform (AEP). AEPs are a fundamental building block and interface with almost every component in the system, including enterprise back ends, IoT devices and ancillary services.
Enterprises and service providers selecting an IoT AEP have to choose between selecting a commercial AEP or an open source AEP. Here I will discuss five benefits of selecting an open source AEP to satisfy middleware requirements of an IoT solution.
The typical IoT technology stack is fairly complex and includes hardware, connectivity, platforms, applications and services. One of the most critical components of an IoT solution is an AEP.
MachNation definition: An IoT AEP is a technology-centric offering optimized to deliver a best-of-breed, industry-agnostic, extensible middleware core for building a set of interconnected or independent IoT solutions for customers. An AEP vendor relies on a flexible deployment model, a comprehensive set of device and enterprise back-end connector SDKs and APIs, and a set of well-documented developer resources. AEP vendors assemble a network of application development, system integrator and service provider partners that build custom IoT applications on the platform for customers.
AEPs are one of the fastest-growing technology segments within the IoT ecosystem. According to MachNation forecasts, worldwide IoT application enablement revenue will be $2 billion in 2017, growing to $83.4 billion by 2025 at a compound annual growth rate of 62% over the period.
The rapid growth of AEPs is a testament to the value of a horizontal platform. According to MachNation research, enterprises and service providers often analyze the capabilities of 20 or more AEPs and then conduct extensive AEP technology comparisons and trials before selecting a preferred AEP vendor. Customers realize that selecting the right AEP from the beginning is especially important because a properly chosen horizontal AEP can easily support multiple IoT solutions and use cases.
Open source AEPs have some distinct advantages over commercial AEPs. Next, I discuss five high-value benefits in more detail.
Benefit #1: Richness of the ecosystem
Open source projects tend to have more success gaining market traction and adoption than commercial systems because of the nature of the open source ecosystem.
Developers and integrators form a large network of experts that know how to assemble open source technologies into solutions.
Software communities, composed of large numbers of volunteers, contribute to the ecosystem, often for personal reasons or because they have reaped the benefits of the community in the past. Sometimes these communities volunteer on a quid pro quo basis, trading expertise with other community members.
Large enterprises that benefit from open source initiatives tend to support the overall ecosystem with commercial contracts and code contributions.
Open source projects tend to attract the types of developers and integrators that value a collaborative and open approach to development, whereas the technical expertise for commercial systems is largely confined within a vendor’s organization for all but the most successful products. Finding a partner to develop on top of or to manage an open source stack is easier as the available talent pool is often significantly larger.
Benefit #2: Best-in-class security
Open source components are revered for their security characteristics.
Security auditing in the open source model is very strong. The open source model enables developers, quality assurance teams and independent security researchers to conduct security audits and testing at multiple levels, including source code, system and system of systems. This enables a much deeper and more complete analysis of potential vulnerabilities and provides the greatest level of transparency from a security standpoint.
There are often more, and better, security technologists involved in open source projects. In an open source platform, as in any open source project, security vulnerabilities are identified by the community. Since open source projects are frequently used by very large enterprises, leading security researchers are working to identify and report vulnerabilities before hackers do. For platforms built by niche software vendors, it is unlikely that many independent security professionals are involved; rather, the vendor’s own employees are tasked with ensuring the platform is secure. In such a scenario, there may be fewer reported vulnerabilities simply due to oversights or “security by obscurity.”
Open source platforms allow vulnerabilities to be patched as soon as they are identified. Since each deployment has full control over the source code, it is possible to mitigate risks without waiting for a vendor to issue a fix. This is of particular importance for enterprises and service providers that opt for a private cloud or on-premises deployment that requires a vendor to create and deliver a fix before anything can be done.
Benefit #3: Unrivaled flexibility
Open source platforms give enterprises the flexibility to deploy software that meets their business needs. Unlike commercial products, which have to cater to many interests, open source software has no commercial agenda. In many cases, core changes to the functionality of a commercial product are simply not feasible without engaging the vendor. Thus, commercial solutions require that customers align themselves with the priorities of the vendor. Where a vendor is unwilling to modify a product, the enterprise has no choice but to rely on costly professional services.
With open source software, an enterprise can begin development where the community has left off and make necessary modifications and amendments to refine the software to deliver on the entire set of business requirements. If the software is missing functionality out of the box or doesn’t work exactly as intended, an enterprise can assign resources or use the ecosystem of partners to deliver on the additional requirements.
Benefit #4: Ability to future-proof
By their very nature, open source platforms provide strong mechanisms for future-proofing middleware to adapt to changing requirements and market needs.
First, open source AEP solutions are fairly easy to maintain if there is a disruption in the community. Open source AEPs are community-maintained, and a third party can take over an AEP deployment project or its ongoing management at virtually any time. Among commercial software stacks, it is common for platform vendors to come and go. The consolidation within the IoT space suggests that many platform vendors that exist today are unlikely to be around three to five years from now; they will either cease to exist or be acquired by a larger player that may or may not maintain the same product development priorities.
Second, migration to newer and better application enablement approaches is easier in the open source model. The large user base of open source software increases the likelihood that a migration path will be available for deployed open source AEP solutions. Migration from a commercial AEP system is a complex process. Commercial software vendors — even those building extensible software — still benefit from a certain level of vendor lock-in. This is especially true for middleware solutions that provide core functionality tightly integrated northbound and southbound in the technology stack.
Benefit #5: Favorable financials
While there is much debate on the total cost of ownership (TCO) of open versus commercial software, open source software has no upfront license costs or recurring upgrade costs. Thanks to this advantageous and highly scalable pricing model, some of the world’s largest technology companies base their core business technologies on open source software. It can be more cost-effective and allows businesses to scale their operations while reducing the fixed and variable fees tied to infrastructure decisions.
Enterprises and service providers invest tremendous time and money in selecting the right type of AEP for their IoT solutions. Choosing the right horizontal solution can empower the efficient and secure creation of many IoT solutions across many use cases. Enterprises and service providers that pick open source AEP technology do so for the richness of the ecosystem, best-in-class security, unrivaled flexibility, ability to future-proof the solution and favorable financial characteristics.
It seems like every headline these days is either praising or lamenting the impact artificial intelligence (AI) and next-gen technology will have on society, especially as it relates to the workforce. Will the introduction of AI, bots, drones and autonomous vehicles permanently alter the job landscape as we know it? Probably. AI is poised to replace 6% of U.S. jobs over the next five years, according to Forrester. But before you become too concerned about your own job stability, let’s take a deeper look.
As with every revolution, some change is to be expected. Industries will be affected by new technologies to varying degrees, but most changes will come in waves. Almost immediately, we’ll see retail jobs like floor sales and cashiers, along with delivery drivers and technicians, impacted. Bots embedded in appliances and machinery will make self-diagnosis, and in some cases even self-repair, possible, reducing the need for human technicians. Of course, we will still need these skilled workers, but their time will be freed up for more complicated projects and repairs.
Longer term, we’ll start to see next-gen technologies reduce the overall number of humans in fields such as aviation. Advances in autonomous vehicles could allow airliners to be piloted by AI sooner rather than later, though passenger psychology will impede this for the foreseeable future. As a society, we just aren’t ready for a fully autonomous Boeing 787, although don’t be surprised if cargo carriers make the move sooner. And it doesn’t stop with pilots; we’ll also see call centers staffed by AI bots capable of answering phones and solving problems, and TV news anchors, weather reporters and other information-transfer jobs transitioning to AI.
There is no stopping these technological advances. Sounds like a lot of gloom and doom, right? That’s only one view (sometimes also referred to as “glass half-empty”). But before we condemn these technologies and try to limit them or stop them, we need to consider what we already know from the past as we look toward the future. When Henry Ford first introduced the Model T car in 1908, it’s likely he didn’t foresee the full scope of events he would set in motion. By 1914, Ford’s production line had reduced assembly times from 12 hours to less than 2.5 hours. By putting 15 million Model Ts on the road, Ford forever disrupted the auto industry by slashing the price, redefining the working wage and ushering in a phase of innovation that was previously unseen.
If we think about it, at what point did the introduction of a new technology not raise complaints and issues about taking away jobs? Cars instead of buggies and horsewhips, calculators instead of manual computation, assembly robots such as in auto plants, direct-line phones instead of going through a switchboard… Every major technological introduction has its side effect (and yes, perhaps, even purpose) of potentially completely, or partially, replacing the human worker in the equation. But we have also seen that these same technologies enable new services and even industries to be created — and, with them, new jobs.
Think of Amazon, for example. As an online bookseller that “collapsed” the need for thousands of bookstores, one could argue it eliminated bookstores and bookseller jobs. But the truth is it created jobs elsewhere: deliveries, logistics, warehousing and call centers are just a few direct examples. Moreover, look at how many people can start home-based businesses, or grow their retail businesses, because they can now expand their market reach via Amazon’s marketplace. Doesn’t that create jobs too?
As we look ahead to the future, we should not be asking, “How can we reassure the public there will still be job security?” Rather, we need to be asking, “How do we best position ourselves to take advantage of the future?” These technologies are sure to displace manual labor in certain industries — like call centers or production facilities — but will they also simultaneously open up the door to new and unforeseen opportunities?
There is really no such thing as job security, regardless of bots. Did employees of Kodak, PanAm and Lehman Brothers have job security? What about engineers at Boeing that are currently going through layoffs? The jobs lost in those cases had nothing to do with AI or bots. What people need to do is see and understand what is happening around them, and think about how they can best utilize their skillset (and/or acquire new skillsets) to find a place — or better yet, create a place — where they can provide value others will pay for. Ingenuity, innovation, perseverance, flexibility and grit are what will ensure people a job in one form or another.
All in all, there are two ways to view this revolution. The first is a gloom-and-doom mindset in which bots are taking over and stealing our jobs. The second, and in my opinion more rational, view is that new technology will positively advance our workforce, so we should use it to our benefit. As a society, we’ll be better off embracing change and asking how it can serve us rather than fearing it. We don’t want to go the way of Kodak, denying digital and ending up as a lesson in history.
There’s no doubt there are unknowns when it comes to AI and the future job landscape, but transformation and adaptability have been an inevitable part of our nation’s and in fact our world’s history. As innovation and invention remain core pillars of our societal DNA, the next phase we need to embrace is the digital revolution.