I’ve been doing this software engineering thing for nearly 25 years now. During all but maybe the last five years, I’ve used just a few common design patterns and building blocks to do most of my work. I’m oversimplifying a bit, but I have to admit that the building blocks were pretty similar across a variety of projects. Improvements in hardware capabilities — released about every 18 months or so — were the true driving force behind software advancement. Software design didn’t have to change too much because the hardware enabled it to simply “work better” or “do more.” As a result, I’d say I gained a reasonable sense of mastery building similar solutions over and over. I honed and improved with each iteration while the design and toolchain remained relatively stable.
That time has come to an abrupt end.
The explosion of cloud
The last five years have felt like a different world altogether. Cloud technology has ushered in an era of combined software and hardware innovation at a remarkably fast pace. The explosion of cloud is happening so fast, and reaching so widely, that it's difficult to fully appreciate. Cloud innovation is enabling developers to do things that simply were not practical or even possible just a few short years ago. It's amazingly empowering yet unnerving at the same time. Just when you start to gain a sense of mastery, something supersedes it, leaving you with a shiny new learning curve and a tough choice. Choosing the "latest and greatest" could become your cornerstone or leave you with a pile of technical debt.
The fact is that I’m spending less time, almost none, futzing with the physical stuff. This allows me to spend nearly all of my time innovating at the application level. That aspect is fantastically liberating. Less time setting up means more time focusing on core application value with quicker iterations between releases. This focus on core application value is happening across several layers of the stack. Everyone is wasting less time with setup, leaving more time to think and truly innovate. This has generated an enormous wave of new tools, services and capabilities which are, in fact, fueling each other.
The reality of compounding innovation and microservices
Building innovation on top of innovation has enabled leaps of progress in a short period of time. The net effect is a layering of new applications on top of new services, on top of new hardware, often pulled together with new connectivity. That’s a great deal of “new.” Significant improvements in process have helped minimize risk. However, it’s worth noting that nobody has much experience managing all of these blocks working in concert over time.
The whole point here is that this change is happening quickly. It doesn't leave much time to gain mastery or learn from mistakes before a technology changes or is replaced entirely. That churn can leave outdated components scattered throughout a distributed solution. Updating something in production can feel like playing Jenga with a scary tower of technical debt: engineers pick the least risky component to update so they don't bring down the production system.
Handling scale is only part of building an IoT technology. It's equally important to maintain and improve that technology over time. That's especially true for IoT, where a device's lifespan will likely span several generations of the supporting technology. The pilot phase of an IoT system often deprioritizes lifecycle management. Don't let that tower get too tall before considering how you will manage change.
The churn of cloud IoT development
If you feel like your development skills are falling behind despite the fact that your skill set has broadened, you’re not alone. The current pace of development is unprecedented during my time. I’m hearing similar perspectives from colleagues doing cloud development. There’s this constant buzz that you’ll get left behind if you’re not using the latest “cool tool”. Cloud tools and services used in IoT systems are changing so fast that it’s exceedingly difficult to gain mastery and stay current at the same time.
We should expect this wave of innovation to continue and get even faster. The overwhelming advice is simply to get good at designing for change. The reality is that we have to embrace and build for it. It seems this realization of change is what has fueled the microservices buzz, which, in part, helps isolate the impact of frequent updates or "churn" of underlying components. Simply put, it's easier to manage change for a small service. Designing smaller, functional components with teams proficient in the relevant skill sets can help mitigate the lack of mastery of something new. Limiting the scope of the service or component helps those teams "get deep" fast without having to understand all of the details of a big, complex technology at once.
The age-old guidance to take the time and consideration to build a strong foundation still holds true. Just make sure that foundation is on castors so you can swap it out or move it without tearing down and rebuilding everything on top of it.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
This past summer, the Bluetooth Special Interest Group announced a seismic new development in Bluetooth capability: Bluetooth Mesh, a new layer of software that would do away with the traditional point-to-point communication method in favor of new "mesh" capabilities that would, instead, enable all of these devices and points of contact to talk with each other.
The announcement came on the heels of the growing prominence of mesh technologies in other wireless applications. Thread, a wireless protocol for smart home networking, and Zigbee, a wireless standard for short-range, low-data functions, are just a few of the next steps we’re seeing in mesh development.
But, mesh networking has, up until now, not been a one-size-fits-all solution for connecting devices. And, that’s precisely why Bluetooth Mesh is aiming to completely change the game.
Routed mesh vs. flooding mesh
There are two types of mesh networks: routed and flooding. For routed mesh, individual devices have designated conversation paths. The conversation between specific devices follows the fastest designated route between points A and B.
Flooding mesh does the opposite: Every device on a flood mesh network can send out signals en masse between all Bluetooth-connected devices in an area. Think of it like the difference between talking on the phone versus speaking through a bullhorn. Early prototypes of the Bluetooth mesh standard used this flood protocol, but it ultimately proved too challenging: the deluge of conversations between devices was difficult to manage, and it drained power efficiency.
Now, Bluetooth Mesh combines the best of both worlds by delivering a “managed flood” that allows similar devices to communicate with each other in a blanket manner.
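The "managed flood" idea (relaying with duplicate suppression and a hop limit) can be sketched in a few lines. This is an illustrative toy, not the actual Bluetooth Mesh protocol; the node structure, message IDs and TTL values are invented for the example:

```python
# Illustrative sketch of a "managed flood": every relay rebroadcasts a
# message to its neighbors, but two controls keep the flood in check --
# a TTL that limits hop count, and a cache of already-seen message IDs
# that prevents endless rebroadcast loops.

class Node:
    def __init__(self, name):
        self.name = name
        self.neighbors = []      # nodes within radio range
        self.seen = set()        # message IDs already relayed
        self.delivered = []      # payloads this node received

    def receive(self, msg_id, payload, ttl):
        if msg_id in self.seen:  # duplicate: drop, don't re-flood
            return
        self.seen.add(msg_id)
        self.delivered.append(payload)
        if ttl > 0:              # relay with one fewer hop allowed
            for peer in self.neighbors:
                peer.receive(msg_id, payload, ttl - 1)

# Build a small chain: a - b - c - d (a cannot reach d directly).
a, b, c, d = (Node(n) for n in "abcd")
a.neighbors = [b]; b.neighbors = [a, c]; c.neighbors = [b, d]; d.neighbors = [c]

a.receive("msg-1", "lights off", ttl=3)
print([n.name for n in (a, b, c, d) if "lights off" in n.delivered])
# -> ['a', 'b', 'c', 'd']
```

The real standard adds much more (security, addressing, relay roles), but those two controls are what make a flood "managed" rather than a bullhorn free-for-all.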
Smart homes, smart cars, smart offices
Bluetooth Mesh is casting a wider net than ever, so it’s no surprise that the possibilities for what this means in a user’s day-to-day life are wider than ever, too.
The smart home is one of the most potent use cases. Already the average house is loaded with connected, IoT devices embedded with Bluetooth chips, from smart thermostats to Nest cameras to wireless speakers. But, these are all point-to-point communications, where you only manage one connection with one device at a time. Bluetooth Mesh empowers users to bring all of these devices, and more, together with a single point of control. From their phone, users could remotely control all of the aforementioned, plus TVs, kitchen appliances, lights, garage doors or anything in the home with a Bluetooth chip, all at once.
We can see this same single-point hyper-connectivity in the workplace, too, where managers can remotely control everything from the office door locks and light fixtures to the temperature and the dishwashers. Bluetooth Mesh can bring this web of connectivity to the road, too, where cars with embedded IoT sensors can effectively speak to each other or to surrounding infrastructure to detect everything from passing cars (so drivers can avoid potential collisions when changing lanes) to upcoming traffic lights, so as to let the driver know how much time is left on the light before it turns red or green.
The approach for developers
The new possibilities being raised by Bluetooth Mesh and its myriad of applications will undoubtedly raise the expectations of end users and manufacturers alike, and deservedly so. All of which raises a crucial question: What do engineers need to know about Bluetooth Mesh in order to live up to those expectations?
For one, Bluetooth Mesh means easy replication. If mesh networks are being built off the backs of multiples of the same device, then developers will need to ensure that those devices are built to be low-cost at scale, so they can be easily and affordably deployed in greater numbers. Additionally, engineers should aim to accommodate a small footprint. Mesh networks mean devices will need to fit into a wide array of places, functions and applications, requiring these devices to be easily adaptable and built around a small design footprint.
Finally, low power usage and high reliability must be priorities. Engineers need to take steps to ensure that connected devices are "always on," in order to prevent dark spots from occurring in a mesh network.
Just scratching the surface
Bluetooth Mesh is poised to redefine what’s possible in connectivity and device-to-device communications. Homes, offices and cars are just the tip of the iceberg. Forward-thinking, innovative manufacturers are already looking ahead to how Bluetooth Mesh can scale up for deployment across factories and manufacturing sites, improving efficiency, productivity and device synchronicity.
It’s an exciting time to be in the connectivity business, one where providers and developers now have the tools they need to both dream big and make it a reality.
Edgard Capdevielle, CEO of Nozomi Networks, agreed to have dinner with me in NYC after a long day of meetings and lectures to our partners and customers. When I walked up to the restaurant, Edgard was outside with his phone glued to his ear, and I couldn’t help but overhear his half of the conversation. The gist of it was that IT people were not getting along with the operations people on a specific project. I harrumphed to myself. After many years in the oil and gas industry, I tend to stay away from the subject due to fear for my career.
My lack of fear for my career in other areas had taken me down many paths with some success: servers on rigs to help remediate and rebuild virus-ridden project laptops, voice over IP for large cost savings, real-time video teleport for deep water remote operated vehicles. However, I always steered clear of the SCADA and automation people. We in IT were seen as “soft” operations. If our stuff went down, no one got hurt or died. If they messed up, well, things blew up.
Edgard ended his call and said, “Tough client.” He explained that there had been a clash of zeitgeist between two groups inside his customer’s organization. The security mechanism that his company offers bridges the gap between IT and OT.
“I get it,” I said. “I have dealt with both sides, and I think that the IT guys don’t always have a clue what they are dealing with.” Edgard squinted and I immediately knew I had stepped in it. “Really?” he said. “You’re a network engineer, so I thought you would see it differently.” Edgard opened the door and we walked in. I was about to speak when Edgard held up a finger.
Obviously, we would need some refreshments before the debate.
He sat down, “I’m really surprised that you would take that stance. I want to listen, and then I’ll respond.” I was stunned by such equanimity coming from a CEO, especially when I was attacking his company by inference if not by proxy.
I explained that knowing too much had traditionally gotten me into trouble. I learned at one job that OT people didn’t want me touching their networks. OT technicians were bridging most of their network, and it wasn’t even routed, much less firewalled. I was always a security guy at heart, so this horrified me. But they were more concerned with stuff getting stopped than protecting the network. They had been wrestling with performance issues over wireless and believed that adding security to the mix would simply make it worse.
“You see, in the short term this makes sense,” Edgard deadpanned. “But it is my experience that TCP/IP will change that. Anything that touches TCP/IP automatically changes in order to talk TCP/IP.”
“You mean like a singularity?” I said.
“Singularity? Maybe that makes sense. Yes, it could get sucked into a black hole,” he replied.
"The more positive analogy is the idea of a technological singularity," I posited. "It means that eventually, everything will not only merge, but the merging will produce a huge explosive move forward in society and mankind. It's not really accurate to call it a singularity, because it's actually the sort of critical mass that results in the creation of a star and not its death. But yes, anything that gets within the gravitational pull of TCP/IP is essentially transformed by it and included. So I suppose that the technological singularity is already here but just quietly sucking everything into it. I guess Scott McNealy was always right, but he just had it backwards. The computer is the network."
Edgard replied, "You just won my argument for me. While I understand the security and safety issues — which, by the way, are the reason we both have jobs — this is a short-term problem. Now that the network has reached the OT stuff, the OT stuff will change. It has become IoT, whether it wants to or not."
He had me there.
Edge information systems and OT have started to become networked and thus data-driven. The market sees a gap in the older security systems that have been baked into networks for so long. For example, they are not backed up by automation, nor can they adapt. And they don't track behavior. That is table stakes for securing IoT/OT. Devices with automation and intelligence should protect dumber devices. This includes aspects of proactive defense, adaptation and behavioral tracking.
However, proactive defense is not something that many companies can do right now. The security industry has been stuck on signatures and protecting concrete resources. But there are pieces of the delivery package that can help detect payloads that have never been seen in the wild before. There isn’t a signature-based detection system that can see those. They require pattern-based detection.
That’s not heuristics. Pattern matching and regression are math. It’s the reason why people make ASICs or co-processors to assist machine learning. If you can do math at wire speed, you are #winning. Which means that proactive defense is math-based detection. The adaptation layer must use different detection methods, such as emulation and behavior, to find the “no-see-ums” like fileless attacks: Java, HTML, PowerShell or any other shell-based scripting language. They can then update the bits that are part of the attack. All of this to grab the tiger by its tail before it escapes.
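To make the signature-versus-pattern distinction concrete, here is a toy sketch. The payloads, the hash set and the regular expressions are invented examples; a production system would use far richer pattern and behavioral models:

```python
# Toy contrast between signature matching (exact fingerprints of known
# payloads) and pattern matching (regular expressions that generalize
# to payloads never seen in the wild). All payloads here are invented.
import hashlib
import re

KNOWN_SIGNATURES = {
    # hash of one specific malicious script seen before
    hashlib.sha256(b"powershell -EncodedCommand QUJDRA==").hexdigest(),
}

SUSPICIOUS_PATTERNS = [
    re.compile(r"-EncodedCommand\s+[A-Za-z0-9+/=]{6,}"),  # obfuscated PowerShell
    re.compile(r"eval\s*\(\s*atob\s*\("),                 # decode-and-eval JavaScript
]

def signature_hit(payload: bytes) -> bool:
    """Catches only byte-for-byte copies of known payloads."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_SIGNATURES

def pattern_hit(payload: bytes) -> bool:
    """Catches the shape of an attack, even in a brand-new variant."""
    text = payload.decode("utf-8", errors="ignore")
    return any(p.search(text) for p in SUSPICIOUS_PATTERNS)

# A never-before-seen variant: different bytes, same attack shape.
variant = b"powershell -EncodedCommand WkxNWg=="
print(signature_hit(variant), pattern_hit(variant))  # False True
```

The signature misses the new variant entirely; the pattern catches it because it matches the structure of the attack rather than one concrete instance.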
I don’t know if there is an agreed-upon term for the integration of different types of security systems that work together by design, but it’s what is needed to solve this problem. It is being called a security fabric and an expert system by some. But regardless of what we call it, it is a singularity in security that we must achieve before the whole thing goes nova on us.
Jetsons or Flintstones? Astro or Dino? Rosie the Robot or … wait, the Flintstones didn't have a household helper, did they? My mom, however, would point out that, indeed, the Flintstones had at least two helpers. Just like when we were growing up, my sister and I begged our parents to buy a dishwasher. "Why?" my mom would ask. "We already have two — dishwasher #1 [pointing to me] and dishwasher #2 [pointing to my sister]." Very funny, mom.
While The Flintstones sucked us into their 10,000 B.C. prehistoric lives, The Jetsons catapulted us into the future — 2062 A.D. to be exact. As observers standing between these two worlds, we saw how their fictional lives were unexpectedly similar to our own. It didn’t dawn on me, however, until I became interested in the internet of things that the prehistoric Flintstones had a dishwasher (powered by an elephant-like animal, remember?) and the futuristic Jetsons drove their own vehicles.
How the internet of things is transforming our lives
We are no longer voluntary observers of this paradoxical Hanna-Barbera world. We are now participants, whether we want to be or not, in this global digital transformation movement, of which the internet of things has a leading role.
I see the internet of things changing our lives in three primary ways. Imagine a dial, if you will, for each:
- From analog to digital
A book you borrowed from the library or the one on your Kindle? An analog watch or a smartwatch? An “old-fashioned” doorbell or a Ring? We all have our preferences and our tolerances for new technology. Some of us prefer to stick to the old “analog” way of doing things, while others cannot become digital fast enough. It’s not a race, though. It’s a journey that we can experience at our own pace.
- From dumb to smart
Slap a sensor on that dumb thing, and it will instantly become smart! That seems to be the rallying cry or M.O. of product managers eager to be onboard the IoT train. Don’t get me wrong: I love smart, but I don’t have time for stupid. Smart hairbrushes, water bottles and toasters. Really? Who asked for these things?
- From professional to amateur
While one might agree that going from analog to digital or from dumb to smart is a forward, progressive move, the shift from professional to amateur sounds a little backwards. Let me explain what I mean. Before the onslaught of "smart things," we typically depended on professionals to provide advice, install, repair, maintain or replace our "dumb" stuff. With the internet of things, however, especially in the consumer space, there's an unspoken expectation that we can easily plug and play these smart things into our digital lives with minimal effort.
However, it just doesn’t work that way. Often, we, the amateurs, need to roll up our geeky sleeves to figure it out. Sure, there are “smart” professionals who can help, but they are not as plentiful or competitive as the “dumb” ones.
It may be cool, but it won’t be easy
I love technology. That is one reason I have been in the high-tech industry for, well, let’s say a long time. Moreover, I love the promise of the internet of things. It brings with it a whole lot of cool, but it also brings a fair share of creepy, and a healthy dose of just plain wrong.
I have to admit: The Jetsons made it look easy — and cool. I never thought I would see “Rosie” in my lifetime, but she is here. She doesn’t roll around, and she doesn’t cook my meals. However, she is omnipresent in my home, and she walks me through recipes, complete with onscreen instructions and videos. I call her Alexa.
What the Jetsons never showed us, though, is the creepy or just plain wrong gobbledygook that sometimes accompanies the cool. Whether we like it or not, the internet of things is making privacy freaks and security geeks out of all of us; plus, we need to understand the different ways our smart things can communicate with each other.
That is one tall order, and it is only the beginning.
What can sheep teach us about securing IoT? To understand the dilemma represented by the need to have secure devices, think about the problem in terms of collective ownership, like sheep grazing in a commonly owned pasture.
In 1968, the ecologist Garrett Hardin published a paper in the journal Science with the title "The Tragedy of the Commons." In it, he described a scenario in which the land provided adequate sustenance for the herd, so long as the number of sheep was kept in check. If each person who grazed on that land acted in their own self-interest and increased the number of sheep they sent to pasture, the land would eventually become insufficient to support the population and, in turn, would be overgrazed to the point where it would be unable to support the community that relied upon it. The problem stems from the fact that no single entity in the community is incentivized to take care of the pasture and, as a result, everyone suffers.
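Hardin's dynamic can be sketched as a tiny simulation. The numbers are arbitrary, chosen only to show how individually rational choices overshoot a shared limit:

```python
# A minimal sketch of the tragedy of the commons: each herder acts in
# self-interest and adds a sheep every season, because the gain of one
# more sheep is theirs alone while the cost of overgrazing is shared.
# All numbers are invented illustration, not a calibrated model.

CAPACITY = 100          # sheep the pasture can sustain indefinitely
herders = [10, 10, 10]  # each herder starts with 10 sheep

seasons = 0
while sum(herders) <= CAPACITY:
    # Individually rational: every herder grows their own flock.
    herders = [flock + 1 for flock in herders]
    seasons += 1

print(f"Pasture overgrazed after {seasons} seasons with {sum(herders)} sheep")
# -> Pasture overgrazed after 24 seasons with 102 sheep
```

No single herder ever makes an irrational move, yet the shared resource is exhausted: the same incentive structure the article applies to the unsecured "internet commons."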
Over the past 10 years, the internet has seen an explosion of connected devices that can deliver YouTube to your various screens, unlock doors, adjust temperature from a distance and transmit energy usage to your local utility. And just like the pasture, the internet is a “commons” that has benefits and drawbacks because no one controls it.
While we all benefit from the comfort and convenience afforded by smarter, connected devices, the lack of security of these same devices comes with a downside. One case, though certainly not an isolated one, is the Mirai attack of late 2016, which exploited IP security cameras secured only with a default factory password that could not be modified. These cameras could not be secured by users even if they wanted to do so. The hackers patched the security hole, presumably so no one else could take control of the password, and exploited it to take control of the IP cameras, using their bandwidth to bring down one of the biggest name services on the internet. These name services are the equivalent of the Yellow Pages of the internet; web services rely on them to talk to one another. The attack caused several high-profile services like Twitter, Netflix and Reddit to go offline and infected an estimated 500,000 devices.
The question at hand is this: Who is incentivized to secure IoT? Should the companies producing connected chips be responsible for enabling secure devices? Should responsibility fall to the manufacturer of the devices, like the folks who make thermostats or cars? Or do we need government regulation to set the baseline for what is acceptable?
To have the government look at IoT security would mean someone is taking responsibility for management of the “internet commons,” but there are challenges on both sides of regulation. Too much has consequences, as does too little.
In a scenario where there is overregulation, the government could go the route of specifying that IoT products require certification and include advanced security features. An IP camera might require a sophisticated and hardened remote management system to upgrade the security during the product lifecycle. The camera’s manufacturer would be required to go the extra step of certifying for security, beyond the UL and FCC certifications it is likely to receive today. A certification typically goes beyond product features, and would require an organization and processes to handle the security for the lifecycle of the product.
All this extra security and certification adds cost and lengthens the time it takes for a product to get to market. For large companies and expensive products, this may be manageable, but it does present barriers to entry for small companies or low-cost/high-volume products, such as connected lightbulbs or window contact sensors. And a lot of the innovation comes from small companies with new ideas, so barriers of entry clearly thwart innovation.
On the other hand, if we stay in a mode of no regulation, the Mirai attack would likely be the first of many. In this scenario, the cyber arsenal of countries could increase exponentially as new IoT devices come online, to the point where the threat of a ballistic missile strike from a rogue nation is easier to understand than the hidden danger of embedded devices being controlled by a hostile agent. Hackers are known to gain control and wait for an opportune time to strike. By accessing billions of connected devices at a granular level — lightbulbs, security cameras, hospital equipment — hackers can be more targeted in who, when and where they attack. This capability has a price that can be sold to the highest bidder and could spawn a black market economy in extortion at levels we’ve never seen.
The tools are out there to take control of a variety of connected IoT devices. Recently, a series of documents released by WikiLeaks, called Vault 7, detailed the specifics of tools attributed to the CIA. The toolbox it revealed contains hacks for phones and computers, as well as smart TVs and popular internet browsers. That toolbox is likely to expand as more device vulnerabilities are discovered.
In fact, a year ago the exclusive Austrian hotel Romantik Seehotel Jaegerwirt was subject to a ransom event when hackers took control of the connected door locks and held out for payment. The hotel now plans to retrofit with mechanical locks.
We know today that hackers have access to the U.S. energy grid and there are teams wrestling with how to close security holes, but billions of unsecured connected devices provide bad actors with vectors of attack that are nearly impossible to anticipate and defend against.
The dilemma is clear. Too much regulation could slow innovation and increase cost for the IoT. Too little and the price for IoT connectivity will be too high for widespread adoption.
So what is the right level of regulation? It’s likely to be a balance of security versus acceptable risk. Today the U.S. government is urging semiconductor vendors and manufacturers of IoT devices to take cybersecurity into consideration during the design phase. It is also advocating for post-sale and lifecycle monitoring of connected products to detect and guard against vulnerabilities.
As the government is on the verge of requiring minimum security for connected devices in Federal buildings, it seems to be counting on the purchasing power of the government to be a force for change. As regulations for IoT security are developed, here are three principles we’d like to see applied:
- The government should be proactive about planning for regulation. Politics are intrinsically reactive, but it would be best to ensure regulations are not a knee-jerk reaction to high-profile hacks or newspaper headlines.
- Regulations and requirements should be vetted and widely communicated by laying out a roadmap and creating a cadence of updates to regulations. In this way, product design cycles can anticipate changes and adapt.
- Any regulation should be done with a global perspective and market alignment. Many IoT devices are made for global markets, and if every country invents its own regulations and requirements with subtle differences, it will become very expensive and unmanageable for most companies to comply.
Done well, government regulation can make us all sleep better. Without the need to count sheep.
What would take a building from simply "smart" to fully "sentient"? Smart involves the instrumentation of core systems to provide sensing and actuation that optimize the operations of the property. Sentience takes this a step further, using the building's smart capabilities to minimize human involvement in ongoing operations and free the building to promote, populate and operate on behalf of its owner.

To move beyond smart, a building must acquire the ability to negotiate contracts, manage payments and perform predictive and reactive analytics on its own state. It can access data from the external world and from peer buildings in order to expand its awareness beyond the property and to pull in knowledge and best practices that allow it to adapt, changing its economic or physical environment. It can recruit and manage the service providers who handle everything from marketing vacant units to cleaning bathrooms to repairing a damaged roof. All of these services are then contracted using online advertising, smart contracts and sensors to confirm completion.
Energy usage and acquisition
An instrumented space can calculate historical correlations between measurements of occupancy, outside temperatures, ambient lighting and tenant behavior to forecast the energy needs of a building in the future. Forecasts measured against actual usage can lead to cost savings by allowing the building to actuate down from higher predicted use or actuate up to satisfy tenant expectations. In large organizations, this also allows for commodity hedging, a practice common in the airline industry whereby futures contracts guarantee stable prices for a year or more, reducing price volatility and financial risk. Energy exchanges exist for natural gas and electricity that allow for the payment of a futures price based on a specified volume and period of delivery. With data, energy buying can be timed to maximize savings without sacrificing comfort or requiring massive manual analysis. This will drive more and more buildings to participate in their local energy markets.
Electric generation and brokering
Buildings equipped with electric generation capabilities through solar panels and contingency generators can put these dormant assets to work to generate energy and revenues during building downtimes. By joining together with smart grid deployments, excess energy can be sold back to local microgrids to create revenue and provide resilience and capacity to local electrical grids. Contributing data related to capacity, readiness and demand allows a building to seamlessly participate in smart grid initiatives that help in both the consumption and generation phases of electric distribution.
A fully sentient building will track the lifecycle of all its components, correlating events and sensor data to predict optimum preventative and restorative maintenance schedules. By integrating with a network of maintenance providers, the building can proactively maintain these components in the most cost-effective way — based on the availability of parts and labor and avoiding high-cost peak periods in exchange for vendor discounts. With multiple relationships in place, a marketplace for the services can be created to solicit bids dynamically and award work based on cost, timing and effectiveness. This approach can be applied to routine maintenance of core building systems like HVAC and fire suppression, or episodic upgrades to core assets like roofing and asphalt.
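One way such a marketplace might award work is a weighted score over cost, timing and vendor effectiveness. The bids, weights and normalizing constants below are invented for illustration; a real system would tune them against actual outcomes:

```python
# Toy bid-award logic for a building maintenance marketplace: score each
# bid on cost, timing and historical effectiveness, then award the work
# to the highest score. All vendors, numbers and weights are invented.

bids = [
    # (vendor, cost_usd, days_until_start, effectiveness 0..1 from history)
    ("VendorA", 4200, 10, 0.90),
    ("VendorB", 3600, 21, 0.75),
    ("VendorC", 5100,  3, 0.95),
]

def score(bid):
    vendor, cost, delay, effectiveness = bid
    # Lower cost and delay are better; weight effectiveness most heavily.
    return 0.5 * effectiveness - 0.3 * (cost / 5000) - 0.2 * (delay / 30)

winner = max(bids, key=score)
print(winner[0])  # VendorC: pricier, but fast and historically effective
```

Sensors confirming completion (as the article notes) would then feed back into each vendor's effectiveness score for the next round of bidding.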
Landscaping can be planned based on weather conditions and the needs of the specific project using the data collected from soil and sun sensors. This allows tasks such as lawn care, watering, leaf blowing, fertilization and pruning to be timed to the needs of the landscape architecture and balanced with the availability of landscaping services. By contracting for services only when necessary, a building can balance indoor/outdoor landscape environment for cost and aesthetics.
Data from sensors that measure such information as foot traffic, external weather, supply inventory and historical conditions can focus cleaning efforts when and where they are needed, eliminating redundant or unnecessary deployments of staff and equipment. Integrating with procurement processes can also optimize the availability of cleaning materials to coincide with the performance of cleaning duties and availability of supplies through the supply chain. Minimizing over-ordering can reduce theft and minimize storage needs.
Smart buildings can identify events that correlate with high and low usage patterns for tenants. By providing visibility to areas of overflow and underuse, the sentient building can allow common areas, recreational facilities and other spaces to be reconfigured to suit actual activity (even evolving activity). This, in turn, will increase tenant satisfaction and create incentives to extend leases. Making common areas more accessible and able to more effectively predict overflow can increase their value in the minds of tenants, justifying increases in common area maintenance fees and ensuring that the space is used productively.
Buildings that feature retail space will greatly benefit from metrics that can demonstrate the superior business value associated with a specific space. Foot traffic figures that show traffic patterns can help predict purchase volumes by combining historical traffic data with retail performance metrics from retailers. This enables a net-present-value comparison of one space versus another, and a new tool for marketing and justifying leasing rates. It can also allow retailers to track how well marketing efforts drive traffic, a valuable service that the building could offer its retail tenants.
Buildings where people live, shop and work can impact the wellbeing of their occupants in positive and negative ways. The negative impact of so-called “sick buildings” on human health has been well documented and even litigated. Now it can be quantified and mitigated, too. A sentient building can measure air quality, occupant activity and the performance of core systems like plumbing, waste and HVAC to correlate the activities and conditions that maximize the health of human occupants. It can actively move to improve environmental conditions, encouraging occupants to use health facilities or take the stairs rather than elevators, and creating financial incentives to attract and retain healthy tenants. Connectivity to health and home insurers can allow the building to feed into wellness and occupational health/safety programs to create mutually beneficial relationships. Discounts can reward tenants for positive behaviors and demonstrably reduced healthcare costs or safety risks. Healthier tenants make for better business partners because they choose to stay and participate in the healthy building environment.
When people have choices, the sentient building can balance cost savings and revenue generation against the tenant experience. To do this, it is critical that human sentiment be measured continuously in multiple dimensions. When humans are happy and proud of their environment, they behave differently, and the building can integrate this data to quantify those behaviors. For instance, tenant social media feeds can be analyzed to determine satisfaction or dissatisfaction with particular building attributes. The ratio of those working from home to those working on-premises can indicate which tenants warrant extra attention that can translate to improved productivity. The rate and activity of visitors to the site can be used to gauge whether tenants routinely invite personal and business associates into their environment. Shifts in these metrics can provide insight into the impact of operational programs and special events.
Multiple constituents, symbiotic outcomes
Benefits for a building owner
Once instrumented and activated, a sentient building can operate self-sufficiently without the need for a large staff to manage operations. A small set of expert troubleshooters and auditors can replace the army of managers and low-skilled support staff that typically operate conventional facilities. These experts can ensure that the sentient building operates effectively and efficiently and help to manage extraordinary needs that might arise.
Benefits for a service provider
A sentient building demonstrates the actual need for services based on historical prediction and real-time awareness, eliminating guesswork from service provision. Sentient buildings are empowered to negotiate for the best service at the best price, which opens opportunities for service vendors with efficient operations to displace incumbent providers and expand.
Benefits for a tenant
A sentient building is an active partner in optimizing your daily interactions, whether they involve working, shopping or living within its walls. The sentient building uses real-time knowledge to tune the environment and manage costs, maximizing your experience. It allows you to plan for your needs and optimize your leasing expenses.
Connecting to the IoT data economy
Energy and energy marketplaces
Dashboard overlays can present historical occupancy with weather and calendar events (weekends, holidays and company events) to demonstrate impacts on energy consumption. Predictive tools can use future weather forecasting and scheduled events to predict energy needs. Robots can monitor energy markets for pricing fluctuations that trigger alerts (via email, SMS and push notifications) to prepay for an appropriate block of energy or trigger electricity generation with a single click of the “Approve” button.
- Savings of up to 40% on monthly energy bills are possible with smart actuation and predictive pre-purchase.
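A minimal sketch of such a prepayment trigger, assuming a simple rule of thumb (buy when the spot price dips below a fraction of the forecast average); the threshold and price figures are hypothetical:

```python
def should_prepay(price_forecast, spot_price, threshold=0.9):
    """Signal an alert when the current spot price falls below a fraction
    of the forecast average, suggesting a good moment to prepay for a
    block of energy. A real robot would also factor in storage capacity
    and scheduled building events."""
    avg = sum(price_forecast) / len(price_forecast)
    return spot_price < threshold * avg

# hypothetical hourly forecast ($/MWh) and current spot price
forecast = [42, 45, 50, 48, 44]
print(should_prepay(forecast, spot_price=38))  # average is 45.8, so True
```

On a `True` result, the system would send the email/SMS/push alert and wait for the one-click “Approve” before committing funds.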
Cleaning and smart contracts
Dashboard overlays can present historical foot traffic and air quality measures with weather and calendar events (weekends, holidays and company events) to demonstrate their impact on cleaning needs. Predictive tools can use occupancy and weather forecasting to predict cleaning demands and drive staff scheduling. Robots can generate smart contracts with sensor-measurable service-level agreement criteria to solicit bids from three different cleaning vendors to contract for service and pay based on measurable performance. Alerts can be generated for vendor management when SLA violations are approaching. Month-end performance reports can be fed into smart contracts to trigger appropriate payment based on predetermined SLA performance.
- Savings up to 30% are available by reducing the ambiguity of cleaning schedules and introducing claw-back terms for SLA violations.
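The month-end pay-for-performance step might be sketched as follows; the SLA metrics, base fee and claw-back rate are illustrative assumptions, not a real smart-contract implementation:

```python
def sla_payment(base_fee, measured, targets, clawback_per_miss=0.05):
    """Reduce the monthly fee by a claw-back for each SLA metric missed.
    `measured` and `targets` map metric names to sensor-derived values."""
    misses = sum(1 for name, target in targets.items()
                 if measured.get(name, 0) < target)
    return base_fee * max(0.0, 1 - clawback_per_miss * misses)

targets = {"air_quality_score": 80, "response_rate": 0.95}
measured = {"air_quality_score": 85, "response_rate": 0.90}  # one miss
print(sla_payment(10_000, measured, targets))  # 9500.0
```

The same function evaluated against a month with no misses would release the full fee, which is the “trigger appropriate payment” behavior described above.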
Space value predictor
Dashboards can correlate foot traffic with historical receipts to show the value of a given space versus the predicted performance of competitive spaces. A prospective tenant can use a wizard to model their business and forecast a revenue-per-week view based on historical and projected volumes. Smart lease contracts can be created to manage variable payments based on performance against the projected business volumes.
- Improving tenant satisfaction will create a 15% premium for space based on pay-per-performance metrics.
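The revenue-per-week view rests on simple arithmetic: visitors, times the share who buy, times the average sale. A sketch with hypothetical figures for two competing spaces:

```python
def weekly_revenue(foot_traffic, conversion_rate, avg_ticket):
    """Predicted revenue/week = weekly visitors x conversion rate x average sale."""
    return foot_traffic * conversion_rate * avg_ticket

# hypothetical figures for two competing retail spaces
space_a = weekly_revenue(foot_traffic=4000, conversion_rate=0.05, avg_ticket=30)
space_b = weekly_revenue(foot_traffic=2500, conversion_rate=0.08, avg_ticket=35)
print(space_a, space_b)  # 6000.0 7000.0
```

Note that the lower-traffic space can still forecast higher revenue, which is exactly the kind of non-obvious comparison that justifies differentiated leasing rates.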
Wellness gamification and discounting
Integrations with health insurance providers can allow tenant companies to demonstrate the healthy behaviors of their staff and the overall health of the working environment, generating discounts along the way. Historical trends of a company can be overlaid with the performance of other companies within the building or similar buildings to gamify wellness by creating competitive views of healthy activity. Use of health facilities, internal stairs, bicycle racks and in-house kitchen facilities can demonstrate measurably beneficial behaviors and the thresholds necessary to achieve a monthly discount. Health insurers already awarding discounts and health club incentives can focus financial rewards on companies that actually practice good health. Televisions in the lobby can display the competitive results of the wellness programs by floor and/or company. Dashboards can show managers and employees what behaviors are necessary to achieve discounts.
- Company health care costs can drop by 25% by encouraging healthy behaviors that impact the quality of the group and draw discounts from insurers. Healthy employees are 15% more productive than unhealthy employees.
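Threshold-based discounting like this can be sketched simply; the metric names, thresholds and two-percent-per-threshold discount step are illustrative assumptions:

```python
def monthly_discount(metrics, thresholds, discount_rate=0.02):
    """Grant one discount step for each wellness threshold a tenant
    company meets in the month. Returns the total discount fraction."""
    met = sum(1 for name, target in thresholds.items()
              if metrics.get(name, 0) >= target)
    return discount_rate * met

thresholds = {"stair_trips": 500, "gym_visits": 120, "bike_rack_uses": 60}
metrics = {"stair_trips": 640, "gym_visits": 95, "bike_rack_uses": 70}
print(monthly_discount(metrics, thresholds))  # 0.04 (two of three thresholds met)
```

The same per-threshold breakdown is what the lobby televisions and dashboards would display to show employees which behaviors still need improvement.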
Real-time sentiment analysis
Visitors can be encouraged to use their social media feeds to post about their visit to the sentient building. Dashboards can show the new information measured against historical visits, social media mentions and social media sentiment in real time to show an overall metric like net promoter score. Activities of the moment can be highlighted to recommend start/stop/continue activities to positively impact the direction of sentiment. Sentiment changes can be charted against current building conditions like cleanliness, warmth and occupancy to see how routine operational decisions may be impacting tenant attitudes.
- Pride in the environment is the strongest predictor of lease renewal. Commercial and residential tenants that are measurably happier with their environment drive a drop in costs associated with empty spaces — in some cases as much as 75%. Sentiment is a critical indicator of financial performance over time.
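One way to turn classified social media mentions into an overall metric is a net score (percentage positive minus percentage negative), in the spirit of net promoter score; the polarity labels below are hypothetical output from a sentiment classifier:

```python
def net_sentiment(polarities):
    """Net score in [-100, 100]: share of positive mentions minus
    share of negative mentions, from +1/0/-1 polarity labels."""
    pos = sum(1 for p in polarities if p > 0)
    neg = sum(1 for p in polarities if p < 0)
    return 100 * (pos - neg) / len(polarities)

# hypothetical polarity labels for today's mentions of the building
today = [1, 1, 0, -1, 1, 0, 1, -1, 1, 1]
print(net_sentiment(today))  # 40.0
```

Charting this score against concurrent building conditions (cleanliness, warmth, occupancy) is what lets operators see which routine decisions are moving tenant attitudes.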
“Billions and billions” used to be associated with Carl Sagan and his evocation of the vastness of the cosmos. Today, it’s about the proliferation of connected “things,” and it is massively exciting! However, billions of devices also represent an incomprehensibly large number of vulnerability points.
With such a significant threat surface, security is sure to serve as a crucial enabler of IoT. It is vital that security is prioritized if the full extent of the IoT business opportunity is to be realized.
The industry needs to guarantee it can stop hackers and ensure devices coming into a network cannot compromise it. It needs to protect clients’ IT infrastructure and ensure firmware can be updated in a secure way.
This is very challenging, and the industry is still figuring it out. But there are some ways in which we can ensure the security of the devices and make people feel safer about living connected lives.
Be a master of all trades
There are a variety of different experts out there. However, the scope of the potential vulnerabilities within any IoT device is so vast that your team really needs people that can cover every bit of ground.
The good news is that there is also an incredible number of technologies, each with different benefits and issues for different features and functions.
As a result, your security team really needs to be a jack of all trades, and master of all. If you’re a master of none, there are security holes everywhere. You need a group that has all the relevant specialties, and a product architect that can see the whole.
Build security into your top-level design
It is far too common for IoT manufacturers to think that security is something that can be addressed down the line. It is not. Security needs to be built into your product development process right from the beginning, at the design phase.
Furthermore, for each IoT system, it is vital to look at the particular things that need to be secured. This will vary on a device-by-device or client-by-client basis and entirely depends on what components are involved in each case. There is no one solution to IoT security. It is different for every device.
Security needs to be properly addressed from the very beginning of the development process, which in the long run saves time and aggravation. Some companies don’t take it seriously enough, and the repercussions of this are huge.
Given that many manufacturers are not yet getting serious about security, consumers are at risk. Yet the evidence suggests most are not yet well-informed about the risks associated with their connected devices.
This is damaging to the industry. Because uninformed customers are not demanding better security, there is little financial incentive for such manufacturers to address these gaps. There are plenty of private and government initiatives afoot to tackle the awareness problem, but manufacturers that do take security seriously need to do more.
Those that have focused strongly on security need to ensure their customers are aware of it. They also must provide, alongside the product, detailed information on the importance of securing the devices and how they can play a part in doing so. We haven’t touched on privacy, but providing detailed privacy policies also helps. It can educate users on exactly how their data will be used, which is important as well.
By paying attention to the details, we can improve the products and devices being released today. If we all focus on security and raise awareness of the issues, the result will ultimately be a more secure world where we don’t sacrifice security for functionality.
At the end of the year, analysts, enterprises, experts and opportunists make many internet of things predictions. I have been collecting these predictions for years and can confirm there have been as many successes as failures.
2017 predictions: The main conclusions
One initial conclusion this year is that luckily there is less hype around IoT. The good news is that year after year, the internet of things is growing worldwide and despite the fact that we are far from reaching many expectations, we are on a journey that has just started. Let’s not fool ourselves; IoT is still in its infancy in terms of dollars and deployments, and that can’t last much longer before market frustration sets in. It is true that the IoT industry has not exploded yet; the acceleration many of us expected this year did not happen, but I see more optimism for 2018.
Large companies such as Cisco, Dell, HPE, SAP, Google, AWS, Intel, ARM and Microsoft continue investing in IoT, and many startups are doubling down on the space as well.
The barriers for IoT adoption have been many and well-known. We can be assured that IoT is not yet ready for mass deployment, but please have no doubt that the market will scale up.
Some challenges remain in IoT. For instance, we continue to be worried that there will be a large-scale IoT security breach, and the battle among low-power wide area network (LPWAN) technologies and IoT platforms is far from over.
Additionally, in 2017, other terms like edge computing, fog computing, blockchain and artificial intelligence became mainstream. We also saw adoption grow in the industrial sector rather than the consumer sector. Finally, we saw this year that recruiting is a challenge for organizations with IoT initiatives.
What will happen with IoT in 2018?
According to Ericsson, mobile phones are expected to be surpassed in number by IoT devices in 2018.
Other main trends in the IoT and IIoT space in 2018, as you can imagine, will be LPWAN, edge computing, AI on the edge and blockchain. It seems that 2018 will be the year when AI and IoT converge. But it will also be the year in which CIOs will be busy integrating device management into the overall IT infrastructure in a way that doesn’t overwhelm the organization. This is where the adoption of application robots, natural language processing and AI automation of processes will come into their own, offering intelligent management of IoT deployments cheaply and efficiently.
However, 2018 will not be the year of blockchain and IoT, because although blockchain-based IoT adoption will rise to 5%, blockchain is not yet ready for large-scale deployments requiring reliability, stability and seamless integration with existing technology infrastructure.
To reinforce the ongoing investment across the industry, Gartner’s Strategic Trends for 2018 back up the focus on IoT with intelligent things, digital twins and cloud-to-the-edge all making the list for the coming year.
On the other hand, Forrester affirms that 2018 will be the year in which the internet of things finally moves from experimentation to business scale.
Forrester also predicts that IoT platform offerings will begin to specialize in “design” and “operate” scenarios, and IDC predicts that new IoT applications built by enterprises will use an IoT platform that offers outcome-based functionality based on comprehensive analytics capabilities.
Chris Matthieu, director of IoT engineering at Citrix, pointed out that “vehicles will continue to disrupt markets in human transportation, agriculture, logistics, etc. These vehicles are becoming mobile (edge) data centers on wheels processing tera/petabytes worth of sensor data to react in real time to surrounding conditions.”
Almost everyone predicts more cyberthreats in 2018, which will be a challenging year for the IIoT industry. Hackers know that these companies are now online and more connected than ever, which increases vulnerability. In 2018, we will see the first medical IoT hack leading to stolen data. We will also see continued merging of traditional safety and IT security.
In 2018, marketers are going to experience a massive increase in the number of digitally connected devices, which will certainly change the game across the globe.
To end this article, I will give one prediction of my own. In spite of the fact that I am not on the list of 17 experts telling the most exciting 2018 IoT trends, I hope that 2018 will finally see IoT move from experimentation to business scale.
The problem is worse in outer space, but it’s pretty bad on earth too. Invisible, odorless airborne toxins and chemicals sicken people indoors. The American College of Allergists estimates that 50% of all illnesses are caused by polluted indoor air.
What causes indoor air pollution? What are the symptoms? How can IoT reduce air pollution? How can biomimicry help? Should you read further? Check the air quality index where you live and decide for yourself.
Indoor air pollution kills over four million people every year, according to the World Health Organization. Many household items release harmful toxins: gas stoves, building materials and furnishings made of certain pressed wood products, asbestos insulation, carpets, household cleaning products, and central heating and cooling systems. Air filters can’t help much, as these pollutants are gaseous and flow right through them. The main sources and symptoms of pollutants include:
- Trichloroethylene (paint, lacquers, varnishes, paint remover). Symptoms associated with short-term exposure include excitement, dizziness, headache, nausea and vomiting followed by drowsiness and coma.
- Formaldehyde (waxed paper, particle board, plywood, synthetic fabrics). Symptoms associated with short-term exposure include irritation to nose, mouth and throat, and in severe cases, swelling of the larynx and lungs.
- Benzene (plastics, resins, detergents, paint). Symptoms associated with short-term exposure include irritation to eyes, drowsiness, dizziness, headache, increase in heart rate and confusion, and some cases can result in unconsciousness.
- Xylene (leather, tobacco smoke, rubber). Symptoms associated with short-term exposure include irritation to mouth and throat, dizziness, headache, confusion, heart problems, liver and kidney damage, and coma.
- Ammonia (window cleaners, floor wax). Symptoms associated with short-term exposure include eye irritation, coughing and sore throat.
Indoor air pollution hurts productivity
MK Think designs human-centered technologies and knows how indoor air pollution impacts productivity. “Increasing carbon dioxide (CO2) by 400 ppm (parts per million) lowers productivity by 21% and makes people feel tired and sluggish,” said Signo Uddenberg, the director of innovation at MK Think, citing a Harvard Center for Public Health study. “The impact is worse with higher concentrations and longer exposure to CO2.” MK Think deploys sensors to assess environmental variables indoors; this information is analyzed to reduce internal pollution and improve working conditions.
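To make the cited figure concrete, one can extrapolate linearly from it; this is a back-of-the-envelope assumption for illustration, not MK Think's or Harvard's model:

```python
def productivity_loss(co2_increase_ppm, loss_per_400ppm=0.21):
    """Rough linear extrapolation of the cited figure: 21% lower
    productivity per 400 ppm increase in CO2. Real dose-response
    curves are unlikely to be this simple."""
    return loss_per_400ppm * (co2_increase_ppm / 400)

# a 250 ppm rise (e.g. a crowded, poorly ventilated meeting room)
print(round(productivity_loss(250), 3))  # 0.131, i.e. roughly a 13% loss
```

Even a crude estimate like this shows why ventilation decisions are worth wiring into a building's sensor loop.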
Air quality sensors
Foobot is a small, well-designed air monitor with a number of sensors that continuously monitor indoor air quality. It integrates with other products such as Nest and ecobee and provides an open API for integration with other applications. Some of the parameters it monitors include:
- Volatile organic compounds (VOCs) including formaldehyde, toluene and benzene. A metal oxide semiconductor is used to provide a global picture of levels.
- CO2 levels are estimated with an algorithm from VOC levels rather than measured with a dedicated sensor.
- Particulate matter is detected with an optical sensing system. It detects even the finest particles, which can pass from the lungs into the bloodstream.
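Foobot's actual CO2 algorithm is proprietary; as a purely illustrative stand-in, an equivalent-CO2 estimate might map total VOC readings onto a CO2 baseline with a simple linear model like this (the baseline and scale factor are invented for the example):

```python
def estimated_co2(tvoc_ppb, baseline_co2=400, scale=2.5):
    """Placeholder linear mapping from total VOC (ppb) to an
    equivalent CO2 reading (ppm). Real devices use proprietary
    calibration curves, not a fixed scale factor."""
    return baseline_co2 + scale * tvoc_ppb

print(estimated_co2(200))  # 900.0 ppm for a 200 ppb TVOC reading, under these assumptions
```

The point is only that both gases tend to rise with human occupancy, so a VOC sensor can serve as a proxy when no dedicated CO2 sensor is present.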
PocketLab Air is an all-in-one science lab for investigating climate change and air pollution in your environment. It measures carbon dioxide, ozone, particulate matter, temperature, humidity, barometric pressure and light. You can support its Kickstarter campaign here.
Blueair Aware measures and monitors indoor air quality. It detects airborne particles, volatile organic compounds, carbon dioxide equivalents, temperature and humidity. It communicates via Wi-Fi (802.11b/g/n at 2.4 GHz, with open, WEP, WPA or WPA2 security). The Blueair Friend app is updated continuously, and data averaged over a five-minute interval is also sent to the Blueair cloud for visualization. Air View, its online application, displays real-time air quality data at locations worldwide using local sensors.
NASA researched how to keep air clean for astronauts and found biomimicry to be effective. This approach applies elements of nature to solve complex human problems: “Plant roots and their associated microorganisms then destroy the pathogenic viruses, bacteria and the organic chemicals, eventually converting all of these air pollutants into new plant tissue.”
“Many substances that are toxic for humans can be broken down into non-harmful substances by plants,” explains Foobot. “Other substances are assimilated, becoming part of the plants’ tissues. The bacteria that live in the soil around plant roots are also capable of metabolizing harmful substances.”
A plant-based air purification approach has three benefits over air filters:
- Artificial filters remove harmful compounds in the air, but can’t replenish the oxygen.
- Filters capture pollutants but don’t eliminate them; they must be cleaned frequently. Plants break down noxious substances into harmless compounds and in some cases assimilate them.
- Green walls require much less power to operate than traditional HVAC systems.
Reducing indoor air pollution with bioremediation
Biome offers a stylish, indoor living wall with plants growing in small pods. Air is forced over the roots of these plants, where microbes remove and digest the pollutants. They release oxygen and there aren’t any filters to clean. The unit is self-contained and resembles a large, wall-mounted flat-panel TV. Air quality sensors determine how much air should be processed and adjust the speed of the fans accordingly. Forcing air over the plants’ roots increases the rate of air purification.
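The fan-speed adjustment could work like a simple proportional controller; the air-quality thresholds and duty-cycle range below are assumptions for the sketch, not Biome's specification:

```python
def fan_speed(aqi, min_speed=0.2, max_speed=1.0, clean=50, dirty=150):
    """Map an air-quality index onto a fan duty cycle: idle airflow
    while the air is clean, ramping linearly to full speed once
    readings cross the 'dirty' level."""
    if aqi <= clean:
        return min_speed
    if aqi >= dirty:
        return max_speed
    fraction = (aqi - clean) / (dirty - clean)
    return min_speed + fraction * (max_speed - min_speed)

print(round(fan_speed(100), 2))  # 0.6, halfway between the clean and dirty setpoints
```

Running the fans harder only when sensors demand it is what lets a unit like this purify effectively without the constant power draw of a traditional HVAC system.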
The next time you’re drowsy at work, check the air quality first. Perhaps bioremediation might just make you feel better!
Many organizations are excited about the potential to use the emerging internet of things for economic gain and competitive advantage. Diverse organizational entities such as municipalities, healthcare companies, manufacturers and energy producers all see the benefits of using the power of connected sensors, cloud intelligence and remote control to transform their entire industries.
While much of the early focus of IoT developers has been on building relatively simple device telemetry applications, significant challenges remain for organizations that need to build more complex end-user technologies that integrate data and insight across many products and partner ecosystems.
Building a new ecosystem application — one that involves development efforts across several products from various partners — has always been a daunting proposition. But creating new application platforms, where open partner ecosystems can flourish, is often well beyond the capabilities of even sophisticated development organizations.
Still, these ecosystem-capable application platforms are essential to IoT delivering on its promise. Without easy application access to device telemetry and controls (whether within a closed ecosystem or an open one), IoT will fail to deliver much more than did the siloed machine-to-machine technologies it promises to replace.
Closed application ecosystems
While a new smart healthcare application might involve collecting data from wearables or in-home health monitoring devices, it might also involve using sophisticated, cloud-based analytics services to extract insight from collected data. That insight may then need to be made available to physicians, who can collaborate and provide their services to patients via a telehealth application, with timely alerts and reminders being sent to patients by a third-party alerting service.
Such a healthcare system would typically be created by preselecting and integrating the capabilities of several existing connected devices and partner web services into a “system of systems” that can deliver this powerful new set of capabilities. However, the success of the final outcome will be highly dependent on how efficiently and effectively application developers can manage the inherent complexity of such system-of-systems projects. In this scenario, although IoT should provide acceleration for ingesting data from devices into the cloud, it does nothing to help manage the complexity of delivering the overall technology.
New application platforms as a service
Sometimes, closed application ecosystems are insufficient. Take the creation of a smart city, for example. Today, such a city might include any number of siloed IoT applications (such as for transportation, healthcare, energy, water management and waste management) that each serve the needs of residents. But smart cities will only be truly smart when data from these silos can be easily re-exposed to developers and integrated into innovative new applications.
Smart cities must become infrastructure service platforms for building highly interconnected applications, able to use the insights derived from the sensor data generated city-wide. Such platforms can lead to new monetization opportunities that can greatly enhance the value of IoT data.
But creating such open smart city services requires the creation of a complex developer platform for the smart city. That platform needs to support any type of IoT-connected device or service, sourcing data into the city’s storage and analytics infrastructure, and to expose an increasingly rich set of monetizable data and insight as services to developers.
Today, many organizations are exploring the creation of such IoT services platforms, where open ecosystems of partners can flourish. But developing an open platform as a service (PaaS) can be an order of magnitude more complex than developing an IoT application for a closed ecosystem of partners. As a result, despite their promise, these types of complex projects tend to go off the rails quite early, easily outstripping budgets and failing to produce successful outcomes.
Reducing complexity while enhancing outcomes
One of the common concerns with IoT development projects is that they can be deceiving. It can seem tantalizingly simple to use a public IoT PaaS offering, such as those delivered by cloud providers Amazon Web Services, Microsoft Azure and IBM Bluemix, to quickly stand up an IoT end-user application. But business executives often find themselves perplexed trying to understand why an IoT project that appeared to be 80% complete after only six months is less than 90% complete after two years.
The difference is explained by the complexity variance between a demonstration and a production-quality technology. Standing up a lightweight demo application is often relatively easy to accomplish. But delivering a production-quality service can be far more complex.
Most IoT ecosystem projects will involve multiple contributing application partners. They will also involve complex, evolving functional and non-functional requirements, such as partner service integration, service exposure using APIs, application security, regulatory compliance, service monetization and customer support.
In a world of agile development, these much-needed production capabilities are always, tantalizingly, just a few sprints away. But agile development does nothing to reduce the impact of complexity. In fact, it has a tendency to obscure the complex realities that developers should face early to ensure that their designs can deliver predictable production outcomes.
To address these challenges and reduce complexity, IoT developers are now starting to embrace collaborative lifecycle management (CLM) technologies combined with the latest continuous engineering (CE) technologies. Modern CE/CLM products help IoT developers start at the beginning by developing a big-picture operational understanding of the project requirements before design and coding begins.
These tools allow various interdependent devices, products and services to be behaviorally modeled, allowing multipartner development to proceed successfully on parallel tracks. Lifecycle traceability features allow the complex relationships between all software artifacts to be identified, maintained and analyzed to improve product quality. Regulatory and compliance management capabilities ensure that, at each stage of the project lifecycle, important issues such as data privacy and operational security are not ignored.
CE/CLM tools provide next-generation change and configuration management capabilities that work hand-in-hand with quality management services to ensure that the product moves inexorably forward without taking sudden steps backward. Finally, often neglected, program management capabilities provide much-needed oversight to manage critical paths, inform decision-makers in a timely manner, and allow the planning of necessary adjustments to keep wayward programs on track.
Back to the future
Although IoT is new, CE/CLM capabilities are not. They have long been the foundation of systems engineering, facilitating the management of complexity. Systems engineering discipline is as timeless and needed today, for complex IoT initiatives, as it has been for programs of similar complexity before IoT came into our lexicon.
The good news is that there is a new generation of highly modernized CE/CLM tools designed for easy use with both cloud-based and on-premises development projects. Companies attempting to rein in IoT project complexity can choose to employ these tools themselves or partner with systems integrators who are experienced in their use.
Organizations that want to ensure the success of complex IoT initiatives will need to mitigate the adverse potential of complexity-related development risk. Without doing so, these IoT projects might fail to get fully off the ground. Thankfully, these risks can be managed effectively using the latest CE/CLM tools to ensure that your IoT vision becomes a reality.