IoT Agenda


October 24, 2018  10:43 AM

IoT: Synonymous with poor security?

Saugat Bose
eHealth, Healthcare IoT, Wearables, wireless medical devices, Consumer IoT, Enterprise IoT, Internet of Things, iot, IoT devices, iot security, security in IoT

IoT — a buzzword that encompasses almost every product connected to the internet or to other devices. It’s become a broad term, used to describe practically any device or product that requires a connection to an office network, a home or a car to deliver the entirety of its feature set.

To begin, most things IoT collect and share data with their manufacturers, often without user awareness. In many cases, product functions depend on connectivity to the internet and are, to a great degree, controlled by the manufacturer. Simply put, the internet of things is a concept that interconnects the components of our increasingly complicated lives with external and internal software applications.

Why the IoT security concern?

With the world progressing toward connectedness, companies across the technology ecosystem are rushing electronic and electrical devices to market, adding features that require an internet connection. In this race, however, companies with little or no experience building networked devices are bound to overlook the complications involved in designing and building secure software and hardware.

Why does this happen? It mostly comes down to getting the coolest, newest feature to market first, at the lowest possible cost.

For example, inexpensive old chips with archaic designs are attractive building blocks for devices that require only limited capacity or capability. Software testing is reduced to simply confirming functionality and ease of setup, mostly with default selections and passwords. In other words, cybersecurity, as important as it is in the face of today’s cyberthreats, is an afterthought at best.

It is frightening to think about. The hardware chipsets used in many new products are old, with multiple known vulnerabilities. The software integrated into these devices rarely receives any form of in-depth security testing. The result is potentially tens of thousands of devices today, and perhaps hundreds of millions in the near future, installed in businesses and homes worldwide, ripe for hijacking.

Vulnerabilities, once discovered in a widely distributed service or product line, leave hundreds of thousands of businesses and homes open to view and attack.

But IoT is everywhere: Enterprise and consumer

IoT has encroached on the consumer landscape significantly, leaving footprints:

  • In the home — In 2018, a smart home is almost commonplace, with internet-connected thermostats, door locks, lights, televisions and even refrigerators. People can now control home functions and services without being physically present on site. For instance, smart refrigerators can monitor how much milk you have and automatically reorder from a preferred store based on usage.
  • On person — Smart watches, fitness trackers and other wearables that capture biometric measurements such as perspiration levels and heart rate, as well as more complex measurements such as blood oxygen, are examples of on-person IoT-connected devices. In healthcare, implanted devices frequently report health status to doctors and, in some cases, take action based on instructions from medical staff. Unfortunately, this data typically flows back to a central database that can be hacked.
  • On the go — Consider present-day transportation systems and their utilization of sensors working in combination with GPS. Cars are getting smarter as well, with diagnostic systems and on-board navigation systems.

On the flipside, businesses are also beginning to see the importance of IoT-connected devices in terms of cost-effectiveness, efficiency improvements and new functionality. For example:

  • RFID tags that enable retailers to monitor inventory;
  • Farms with connected sensors to manage crops and cattle, including the optimization of food, pesticide and fertilizer distribution;
  • Driverless trucks that can operate at 24/7 capacity; and
  • Infrastructure systems, such as delivery systems, power generation, transportation systems, water systems and more, with IoT-connectedness serving to improve accuracy of control and data.

By now, you realize that the technology generates and shares enormous amounts of data, making individual devices susceptible to malicious attacks, breaches and misuse. Recognizing this is the starting point for IoT security: logic, code and vulnerability assessments through to dynamic testing, beginning at the development phase itself.
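To make this concrete, here is a minimal sketch of the kind of dynamic check that could run during development, probing a test device for open management ports and factory-default web credentials. The device address, port list and credential pairs are illustrative assumptions, not something prescribed in this article.

```python
# A minimal sketch of a development-phase security check: probe a lab device
# for risky open ports and factory-default HTTP credentials.
# The device address, ports and credential list below are illustrative assumptions.
import socket
import requests

DEVICE = "192.168.1.50"          # hypothetical test device on the lab network
RISKY_PORTS = [23, 2323, 8080]   # Telnet and common alternative admin ports
DEFAULT_CREDS = [("admin", "admin"), ("admin", "1234"), ("root", "root")]

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in RISKY_PORTS:
    if port_open(DEVICE, port):
        print(f"WARNING: port {port} is reachable on {DEVICE}")

# Try factory-default credentials against the device's web interface, if any.
for user, password in DEFAULT_CREDS:
    try:
        resp = requests.get(f"http://{DEVICE}/", auth=(user, password), timeout=2)
        if resp.status_code == 200:
            print(f"WARNING: default credentials accepted: {user}/{password}")
    except requests.RequestException:
        pass  # device has no HTTP interface or did not respond
```

In practice, a check like this would sit alongside firmware analysis, fuzzing and dependency audits rather than replace them.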

Vulnerabilities in IoT

Gartner expects the number of internet-connected devices to rocket to about 25 billion by 2020. While that growth is a step in the right direction toward improving many lives, the security risks that come with the rising number of devices are something to watch out for. Privacy is a cause for concern as well, with most stakeholders unaware of the situation.

In recent times, IoT devices have come under immense scrutiny over several vulnerabilities and poor security controls. Here are some of the common problems:

  • In most cases, and for several reasons, IoT users tend to approve the collection and storage of data without adequate technical knowledge or information. Think about it — this data, lost to or shared with third parties, produces a detailed picture of our personal lives. It’s unlikely that users would casually share this with strangers on the street. At a digital level? Well, it happens quite naturally.
  • There are people deeply plugged into the digital world, and they do prefer sharing data to improve personalization. On the flipside, despite this generosity, these people expect at least a certain level of anonymity. Yet anonymity has been a constant issue in the world of IoT, with barely any importance attached to it (see the sketch after this list).
  • Things can get dangerous because layered security protocols for managing IoT-related risks are still at a nascent stage. Take the example of smart health devices used to monitor patients today: their readings could be altered, which is all the more severe when you consider that the medicines or treatment involved are decided based on analysis of that data.
  • Automobile devices that are now computer-controlled are at risk of being hijacked by those with the capacity to gain access to the on-board network for personal gain, mischief or fun.
  • Internet appliances such as refrigerators, kitchen appliances, television sets and cameras could be used to monitor people within the confines and apparent safety of their own homes. This is valuable personal data, which, when shared with other databases or third-party organizations, is prone to being abused.
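On the anonymity point, here is a small illustration (my own, not a technique from this article) of how a device could pseudonymize telemetry before sharing it: the stable device identifier is replaced with a salted hash so recipients can correlate readings without learning whose they are. The salt, field names and record shape are assumptions.

```python
# A small sketch of pseudonymizing telemetry before it leaves the device:
# swap the stable device identifier for a salted hash.
import hashlib
import json

SALT = b"rotate-me-periodically"   # hypothetical secret salt held by the owner

def pseudonymize(record: dict) -> dict:
    """Return a copy of the record with device_id replaced by an opaque token."""
    token = hashlib.sha256(SALT + record["device_id"].encode()).hexdigest()[:16]
    return dict(record, device_id=token)

reading = {"device_id": "fitness-band-8841", "heart_rate": 72, "spo2": 98}
print(json.dumps(pseudonymize(reading)))
# device_id is now an opaque token; heart_rate and spo2 are unchanged.
```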

IoT is not bad, and it is becoming an integral part of our daily lives. At the very least, these devices must undergo thorough testing, and the industry must establish what could be considered a minimum baseline for IoT security.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

October 23, 2018  1:29 PM

Limit variables like an experiment for a successful IoT strategy

James Kirkland
architecture, Enterprise IoT, experiment, Internet of Things, iot, standards, Testing

IoT implementations can sometimes seem like school science projects. With so much new and unknown, there’s a lot of experimentation still being done. Like a science experiment, even the best IoT plan needs tweaking and testing to be successful.

My son’s project of building a robot provides some insight. He had dreams that it would go so fast it would beat all the others in a race. In reality, his robot went haywire and failed the minimum expectation of traversing the classroom floor. In retrospect, he learned a lot of important lessons about best practices, collaboration and testing.

My son explained, “My project didn’t exactly go as planned. I thought mine was going to be the best. But Jason and David collaborated on their robot and theirs was the winner. They were using different ball bearings, different wheels, different wiring, and their process included making several prototypes. I learned that, even after making all of their changes to my robot, all mine could do was go around in circles.”

Sound familiar? Understanding the variables and how they affect the overall architecture of a project isn’t easy. Factors you didn’t expect to influence other parts can surprise you and the results can be disastrous. That’s why it is so important to do a lot of advance planning, protect the surrounding environment, and tweak and test methodically so you can quickly and easily isolate potential problems.

If you were the science fair judge, which IoT experiment would you choose?


I’ve recently had the opportunity to work with two customers on specific IoT projects. Their approaches are very different. The first, which I characterize as big, bold and adventurous, is trying to change everything at once. It is redesigning all its applications in an attempt to come up with a comprehensive, all-in-one IoT system. I applaud its bravado, but I worry about what the company could encounter in the future with this approach, knowing that if problems arise it may struggle to isolate and address them.

The other customer has a goal of getting off its mainframe over the course of the next two years. Its operations really can’t handle any downtime, so it is approaching its move very cautiously. The company is taking its operations, function by function, and looking at changing only those pieces that will have the least impact to the business right now. It will start there, and then take what it learns from that portion and apply it to the next. I applaud its wisdom and am encouraged about the company’s future.

In my opinion, approaching large IoT projects in small chunks is a great way to test things out, customize settings and optimize the architecture for current needs. It also makes it much easier to adjust the environment in case things need to change. For instance, let’s say a new product or even a new standard comes out and illustrates that it will serve the project even more efficiently than what was originally chosen. The future is filled with innovation that businesses will want to take advantage of. If approached systematically, an IoT project can be upgraded to do so. Similarly, if there’s an issue found during the testing of one segment, issues can be addressed more easily and fixed before they propagate more widely.

Architecting for change

IoT is a manifestation of today’s digital business transformation, a world that is ripe with change. For the future success of IoT, now is an important time to experiment and learn. Start planning for change by building modular, scalable architectures based on standards. I recently came across an article in Forbes that did an excellent job of describing how architecting for change is critical. In the article, “4 Reasons Why Architecting For Change Is Critical for IoT,” the author points out that “change is no longer the exception, it’s the new normal.” Adaptable infrastructures, in which components are abstracted and modular, can evolve to support IoT projects, because you can test small areas and see how they affect the business before switching everything on.

There are those who will run to a new cliff and take that leap. They trust in past experience and have faith that their technique will make them successful. Then there are those who approach the cliff a bit more judiciously. They are concerned that they may not truly know the lay of the land below. IoT is a lot like coming across a new cliff. You can make some assumptions, but there is still a lot to be learned. Knowing your limitations and planning for the unexpected is wise. Making smaller adjustments while moving into the unknown can keep you from slipping too far down a precipitous path and avoid “going in circles” like my son’s science project.



October 23, 2018  1:11 PM

Realizing the Holy Grail of digital

Jason Shepherd
cloud, Edge computing, Enterprise IoT, GATEWAY, Internet of Things, iot, IT, Open source, Operational technology, Scalability, Scale, SDK

Thank you for reading the final post in this five-part series, where I bring it all together — outlining how we need an open, cloud-native edge to ultimately realize the Holy Grail of digital.

If the current trend of vertically focused platforms continues, we’ll have wildly different methodologies for connectivity, security and manageability across myriad use cases. This is not only a nightmare to deal with whether you’re operational technology (OT) or IT, but also a major inhibitor for the true potential of IoT.

For the market to scale, we need to decouple hardware and software infrastructure from applications and domain knowledge. After all, when was the last time your ERP system secured and managed the devices that access it?

Trusted, flexible interoperability is paramount
The clouds all want you to standardize on their IoT platforms, but in most organizations, different internal and third-party providers service different use cases (e.g., production, quality, safety/compliance, logistics, building automation and energy management). As such, it simply isn’t realistic for all use cases in any given environment to use one cloud — or even the cloud at all.

Then there’s the concept of multi-tenancy — in theory, multiple discrete (and even competing) service providers often could share the same sensing infrastructure in a given physical environment, but typically won’t today because they don’t trust there are proper considerations to prevent undesired cross-pollination of data.

Beyond realizing a simple multi-tenant use case at any given site, now try to interconnect a bunch of silos into a broader system of systems spanning a mix of private and public domains. Bottom line, we need more consistent, trusted and flexible infrastructure supporting use case-specific devices and applications.

You need an open, multi-cloud, multi-edge strategy to scale. Period.

A cloud-native edge to the rescue
As outlined in part two, key traits of cloud-native architecture include platform independence and breaking functional components down into microservices, which enables each discrete function to be deployed and updated individually without taking down an entire system. This is especially important in IoT because taking down an entire OT process to push a software update is a big no-no. Imagine the pop-up: “Please save your work. Your production line will automatically restart in 15 minutes.”
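As a rough illustration of what “individually deployable” means in practice, here is a minimal sketch of one edge microservice that exposes its own health and version endpoints, so an orchestrator can roll it out or roll it back without touching the rest of the system. The service name, endpoints and framework choice are assumptions for the sketch, not part of any specific platform.

```python
# A minimal sketch of a single, independently deployable edge microservice.
# The orchestrator can replace just this container and verify it via /health.
from flask import Flask, jsonify

app = Flask(__name__)
SERVICE_VERSION = "1.4.2"   # hypothetical version stamped in at build time

@app.route("/health")
def health():
    # An orchestrator polls this to decide whether the new instance is ready
    # before retiring the old one.
    return jsonify(status="ok"), 200

@app.route("/version")
def version():
    return jsonify(service="temperature-ingest", version=SERVICE_VERSION), 200

@app.route("/readings", methods=["POST"])
def ingest():
    # Real ingestion logic would live here; kept as a stub for the sketch.
    return jsonify(accepted=True), 202

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```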

We’ve only scratched the surface on what’s possible in IoT, so it’s critical to be architecting now for flexibility in the future. On top of decoupling core infrastructure from applications, it’s necessary to extend cloud-native architectural principles to all the various edges to provide this flexibility.

Start small, scale big
Using loosely coupled microservices distributed across edges and clouds provides the elasticity to right-size deployments by use case and enable important functions, such as load balancing, failover and redundancy, everywhere.

Adopting this architectural approach now doesn’t preclude direct edge-to-cloud connections or force you to embrace continuous software delivery before you’re ready. Rather, it simply provides the most options for the future without having to rearchitect, which is extremely important for staying competitive as the world innovates around you.

I also realize this isn’t a panacea — we’ll still also need embedded software for constrained devices and control systems that operate in hard (e.g., deterministic) real time. These just plug into the cloud-native parts.

The fact remains that many people are developing with monolithic models because they’re not thinking for the long (or even medium) term, or they’re in Pi and the sky mode just trying to get started.

The power of open source
Let’s talk open source since it often goes hand in hand with cloud-native.

Many people believe that an open source model reduces their ability to protect their IP or introduces security risks. However, companies large and small are increasingly using open source code to lower overall development costs, accelerate time to market and increase security based on a global network of experts evaluating and patching potential threats.

In short, open source collaboration minimizes undifferentiated heavy lifting so you can focus on accelerating value. In this competitive world, money is made by differentiating through what I call the “ities.” Security, manageability, usability, scalability, connectivity, augmented reality — you get the drill. Not by reinventing the wheel.

The importance of (and practical reality with) standards
In part three, I talked about the inherently heterogeneous nature of the edge. To realize the true potential of IoT, we need to collaborate on standards and best practices for interoperability.

Connectivity standards efforts like OPC-UA (industrial/manufacturing) and OCF (consumer) are making great strides. In fact, the pace at which industrial players are adopting OPC-UA over time-sensitive networking (TSN) as an alternative to traditionally proprietary fieldbuses is testament that the “drivers for dollars” lock-in era is coming to a rapid end.

Still, we need ways to help different standards interoperate because there will never be one standard to rule the world. Plus, you can’t just rip out the majority of capital equipment out there that talks legacy protocols, so you need flexible ways to bridge them to IP networks.

Moreover, in addition to protocols, we also need to bring together an inherently heterogeneous mix of hardware, OS and programming language choices, and most importantly domain expertise. All of these things get increasingly complex the closer you get to the device edge.

You need a fast boat
Due to the maker movement, there simply isn’t enough money in the world for incumbent industry players to buy up and kill off all the innovative startups threatening their stale lock-in model. So, they must either pivot or die. The classic Innovator’s Dilemma.

Going forward, technology providers in any market will win by merit, not lock-in. Do you think the PC market would have scaled if it cost $1,000 to connect your keyboard? What if a custom protocol driver was required for every phone, credit card and website?

The new world is about floating all boats for scale through open collaboration and then making sure your boat is really good and really fast at producing meaningful differentiation.

EdgeX Foundry: Building an open IoT edge computing ecosystem
The network effect resulting from a community collaborating on tangible open source code is one of the most effective ways to accelerate interoperability between heterogeneous elements.

There are a lot of great open source efforts out there, but I want to highlight the EdgeX Foundry project in particular because it was architected from scratch to facilitate a hardware, OS, programming language and protocol-agnostic ecosystem of interoperable commercial value-add at the IoT edge.

In short, despite being more about creating de-facto standard interoperability APIs than anything, EdgeX is slated to do for IoT what Android did for mobile.

You can learn more in the project overview deck, and a blog expanding on key project tenets can be found here. It helps clarify that EdgeX isn’t just about giving your IP to open source.

Linux.com recently did a great write-up on the July “California” code release and the “Delhi” release dropping in early November. A number of announcements were made last week at IoT Solutions World Congress, including emerging EdgeX-based dev kits and more backing members including Intel.

Open collaboration for a smarter edge
The EdgeX community is also gearing up on integration with other open source projects such as Akraino, Hyperledger, Zephyr, Kubernetes and FIWARE.

The resulting potential is huge — imagine infrastructure that’s able to programmatically prioritize bandwidth for a healthcare application over a connected cat toy because each proprietary microservice has a de-facto standard API to advertise its current state and quality of service needs. Here, ledger technology can keep everyone honest (as much as I’d like to prioritize the cats).
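As a toy illustration of that idea, the sketch below imagines services advertising their quality-of-service needs through a common structure, with an edge scheduler allocating limited uplink bandwidth by priority. The fields and numbers are invented for illustration; this is not an EdgeX, Akraino or ledger API.

```python
# A toy bandwidth scheduler: services advertise QoS needs in a shared format,
# and the edge grants bandwidth floors in priority order.
from dataclasses import dataclass

@dataclass
class QosAdvertisement:
    service: str
    priority: int          # lower number = more critical
    min_kbps: int          # bandwidth floor the service says it needs

UPLINK_KBPS = 2000         # hypothetical available uplink capacity

advertisements = [
    QosAdvertisement("cardiac-monitor", priority=0, min_kbps=800),
    QosAdvertisement("building-hvac", priority=2, min_kbps=600),
    QosAdvertisement("connected-cat-toy", priority=9, min_kbps=900),
]

def allocate(ads, capacity_kbps):
    """Grant each service's bandwidth floor in priority order until capacity runs out."""
    grants = {}
    remaining = capacity_kbps
    for ad in sorted(ads, key=lambda a: a.priority):
        granted = min(ad.min_kbps, remaining)
        grants[ad.service] = granted
        remaining -= granted
    return grants

print(allocate(advertisements, UPLINK_KBPS))
# {'cardiac-monitor': 800, 'building-hvac': 600, 'connected-cat-toy': 600}
```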

These open source projects are also bridging to other key IoT/edge efforts, including emerging EdgeX-enabled testbed activity at the Industrial Internet Consortium.

Just say no to IoT gateway drugs!
There’s a reason it’s not a good idea to use the email alias from your internet provider — you’ll hesitate to change ISPs after that initial promotional rate expires. Using an agnostic alias like @gmail keeps your options open for later. [Side observation: If you still use @aol, part two must have been especially nostalgic].

The clouds are making great investments in edge, but they’re also purposely making it all too easy to get hooked with their dev kits because they want to lock you in early and then rake in the dough through API access charges as your data needs grow over time.

I call these lock-in dev kits “IoT gateway drugs.” Don’t get me wrong, the clouds are offering great services and I recommend them, but only when used with truly open edge SDKs that minimize your lock-in potential.

EdgeX dev kits: A better path to get started
In comparison, emerging EdgeX-based dev kits and associated plugin value-add will give developers confidence that they can prototype with their choice of ingredients, taking advantage of plugin components from the growing EdgeX ecosystem to supplement their own innovations.

And, most importantly, developers can readily swap out elements as they optimize their system and ramp into production and day-to-day operation.

Realizing the Holy Grail of digital
But wait, there’s more! I talked in part four about how trust is everything when it comes to realizing the true potential of IoT. And beyond IoT, it’s ultimately about what I deem to be the “Holy Grail of digital” — selling data, resources (e.g., compute, networking, energy) and services (e.g., domain-specific consulting) to people you don’t even know.

Over time, by combining silicon-based root of trust, universally trusted device provisioning (check out last week’s announcement of Intel and Arm collaborating toward this), appropriate connectivity and regulatory standards (e.g., privacy, ethical AI), open, de-facto standard APIs established by projects like EdgeX and Akraino, and ledger technologies, we’ll build the intrinsic, pervasive trust needed for the Holy Grail.

With differentiated commercial value-add backed by this open, trusted plumbing, anyone will be able to create data in the physical world, send it out into the ether and then, based on their terms, sit back and collect checks from complete strangers. Or in the more altruistic sense, simply share trusted data.

In hundreds of conversations with very smart people, nobody has really questioned that this is the Grail or even attempted to claim that we can possibly realize it through a bunch of siloed platforms trying to lock customers in, thinking they can then sell their data if allowed. It simply isn’t possible to build the necessary trust at scale.

It’s midnight, do you know where your data has been?
All too often I hear from data science experts that it’s someone else’s problem to get them clean data. Really? Data is only worth something if you can trust it. So even if you don’t buy all this Grail talk now, you should still care about transparent, open collaboration if your data’s really going to be the new oil.

[Side note: People like me call themselves CTO, so there’s plausible deniability if something doesn’t actually happen … but in this case the Grail will happen in due time].

A lesson from Mr. Mom
When Michael Keaton’s character tries to drop his kids off at school for the first time in the classic ’80s movie Mr. Mom, a lady comes up to him and says, “Hi Jack, I’m Annette. You’re doing it wrong.” This classic scene plays in my head when I think about how most IoT technology providers are doing things today.

A common natural inclination is to swim directly into a riptide current to try to save yourself, but instead you soon get tired and drown. Anyone who has seen Baywatch knows that you’re supposed to swim sideways.

Similarly, many IoT technology providers are swimming into the current today because of the herd mentality to lock customers in, instead of really breaking down the problem.

Meanwhile, for the past three years the team at Dell Technologies has been collaborating with a bunch of other great companies to swim sideways and build an open approach so we can realize the true potential of this market.

We welcome anyone to join the collaboration in the open community. In the immortal words of Jack, “220, 221, whatever it takes!”

Three rules for IoT and edge
The principles outlined in this series are summarized in my “three rules for IoT and edge”.

First, it’s important to decouple infrastructure from applications. EdgeX, combined with other open frameworks and platform-independent commercial infrastructure value-add (like Pulse IoT Center from VMware for managing IoT edge devices in droves), is key here.

Second, it’s critical to decouple the edge from the cloud via open, cloud-native principles, as close as possible to the point of data creation in the physical world. This enables you to control your data destiny through any permutation of on-premises or cloud data integration, compared to pumping your data into a cloud and then having no choice but to pay API egress charges to subsequently fan it out anywhere else.

Finally, it’s important to decouple industry-specific domain knowledge from underlying technology. Many IoT platform providers tout the ability to do predictive maintenance, but their developers don’t have the necessary years of hands-on experience and historical data on the failure patterns of any particular type of machine.

Brass tacks, we need to be able to dynamically marry the right OT experts together with consistent, scalable IT infrastructure and the right technology bells and whistles in a secure, trusted fashion.

Closing thoughts
Think about how you create and capture value in the long term. I guarantee there’s a bigger opportunity that you can capitalize on. There’s only so much cost you can cut, but the sky’s the limit when it comes to making new money.

Act today to be able to deliver customer-valued outcomes through rapid, software-defined innovation, everywhere. This includes embracing open source to minimize undifferentiated heavy lifting and riding the wave of the network effect as part of a broader open ecosystem. Interoperability builds a bigger stage for a better show.

Remember and respect the importance of people, including fostering collaboration across OT, IT and the line of business.

Plan now for an increasing amount of edge computing. The deepest of deep learning will always happen in the cloud, but we absolutely need edge compute to scale.

Follow my three rules for IoT and edge as much as possible.

Just say no to IoT gateway drugs.

Above all, think about how your decisions today will impact your ability to realize the “Holy Grail of digital.” Scale and Grail, this is what it’s all about!

If all of this seems a bit overwhelming, no worries — it’s actually advisable to start small. Just remember that starting small with an open approach is the only path towards the Holy Grail.

I hope that you’ve found this blog series helpful and would love to hear your views and comments. Thanks for reading! Share and stay tuned for more on the OT/IT dynamic, AI, blockchain and beyond! And be sure to follow me on Twitter @defshepherd.



October 22, 2018  3:57 PM

Airbrush-sprayed antennas spurring innovation in IoT

James Warner
ANTENNA, Antennas, Internet of Things, iot, IoT hardware, IOT Network, Wireless

The internet of things is, at its core, the use of networked sensors in physical devices to enable easy remote monitoring and control. IoT is one of those technologies that has gained huge traction over a short period of time, and it only continues to grow. IoT is being used in a variety of fields, including banking, healthcare, retail and consumer goods, among others. Companies across the globe are also creating new ways to integrate IoT. From small startups to large established firms, every company is trying to make the most of IoT. By 2025, the worldwide worth of IoT is anticipated to touch the $6.2 trillion mark.

Use of antennas in IoT

Radio frequency antennas are used widely in IoT. These antennas act as the base of the network, and their performance is critical to the proper working of IoT. They not only have to be developed properly, they also have to be integrated with precision. Even a small problem in the antennas can disrupt the network. Therefore, IoT experts have to take proper measures to maintain the standard of such antennas.

Inception of sprayed antennas

Companies are now trying to make antennas simpler to produce with the help of a one-step spray-on technique developed by researchers at Drexel University. Whether tens of nanometers or a few microns thick, these antennas are made of titanium carbide, which belongs to the family of 2D transition metal carbides and nitrides known as MXenes. In MXenes, M stands for an early transition metal, such as titanium, molybdenum, vanadium or niobium, and X stands for either carbon or nitrogen.

This particular coating is capable of transmitting radio waves smoothly, and it can direct the waves irrespective of the thickness of the coating. This implies that the coating can be easily sprayed onto a wide range of items and surfaces, from solid objects to flexible ones. And it can be done without adding any extra weight or circuitry.

How are sprayed antennas helpful for IoT?

The spray-on method is intended to improve antenna performance. With this technique, it is easy to make antennas optically transparent, and more surfaces become available for setting up networks. At the same time, there will be plenty of new applications and cutting-edge techniques for gathering information, some of which can’t even be anticipated at the moment. The flexibility of the coating is also expected to enable integration with more objects, and to make integration with various metals, like aluminum and copper, a lot more convenient. In other words, spray-on antennas stand to make the whole process more useful and to improve the performance of the entire IoT network. It is therefore worth considering this technique as one of the most useful ways to boost IoT.

Conclusion

The spray-on antenna technique can be used to improve the performance of antennas without major hurdles. As such, sprayed antennas can turn out to be extremely beneficial, and in the near future we may see these types of antennas become far more widespread.



October 22, 2018  1:41 PM

The importance of network standardization in the age of connected cars

Haden Kirkpatrick
Connected car, Connected vehicles, Internet of Things, Interoperability, iot, IoT analytics, IoT data, IOT Network, Standardization, standards

We see it with Wi-Fi, PCs and electrical plug outlets. With smartphones and file formats. Even screwdrivers, nuts and bolts.

All of these are governed by a common set of standards that reduce confusion, make integration easier, save money and assure all parties involved are happy.

Naturally, connected cars will need to follow suit, because the key to their success is getting them all “talking.” And because connected car systems will roll out in phases, today’s automakers need to design cars that are testable and can be updated as needed.

Speaking the same language

One word often thrown around when talking about technology standardization is “interoperability.” That’s the process whereby systems exchange and make use of data. With autonomous technology, that involves cars communicating with each other about speed, braking status and road obstructions to avoid crashes. Interoperability only works if there’s a standard network on which it performs. In this case, it will require cars to be outfitted with compatible software and to speak a common tongue (if you will).
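To illustrate why a shared schema matters, here is a simplified sketch of a common safety-message format that any manufacturer’s vehicle could encode and decode identically. The field set is an assumption for illustration, not the actual SAE J2735 Basic Safety Message or any DoT specification.

```python
# A toy "common tongue" for V2V: if every manufacturer serializes the same
# fields the same way, any car can parse any other car's broadcast.
import json
from dataclasses import dataclass, asdict

@dataclass
class SafetyMessage:
    vehicle_id: str
    latitude: float
    longitude: float
    speed_mps: float
    braking: bool
    obstruction_ahead: bool

def encode(msg: SafetyMessage) -> bytes:
    """Serialize to a format every compliant vehicle agrees on."""
    return json.dumps(asdict(msg), sort_keys=True).encode("utf-8")

def decode(payload: bytes) -> SafetyMessage:
    return SafetyMessage(**json.loads(payload.decode("utf-8")))

# One vendor's car broadcasts; another vendor's car decodes it identically.
broadcast = encode(SafetyMessage("veh-042", 37.77, -122.42, 12.5,
                                 braking=True, obstruction_ahead=False))
received = decode(broadcast)
print(received.braking)   # True
```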

The Department of Transportation (DoT) has long talked about standardizing the language around vehicle-to-vehicle (V2V) technology. Its initiative, which hasn’t been finalized yet, would require new vehicles to be equipped with V2V systems in the coming years. By working with manufacturers and incorporating public feedback, V2V systems could be clearly defined from the get-go. Doing so, the DoT argues, would ensure the safe deployment of autonomous cars as well as drive innovation.

It would also provide a mandate to brace today’s vehicles for our connected future.

Future-proofing today’s cars

In the age of IoT, people expect rapid innovations, and cars have come to figure more prominently in the conversation. But unlike with smartphones, say, which we upgrade every two-and-a-half years on average, most of us keep our cars for 10-plus years. That’s where future-proofing comes into play.

The challenge is for automakers to assure consumers can upgrade their cars as needed instead of being coerced into buying the newest models every time they come out. How? By making current in-car technology upgradeable. With swappable technologies, cars can continue getting smarter, even as they get older.

Connected cars are expected to generate terabytes of data an hour. Currently, networks can only support basic telematics.

Luckily, an Internet Protocol (IP) over Ethernet backbone architecture enables manufacturers to seamlessly update in-car systems with new devices, sensors and IoT technologies without starting from scratch every time. Having a standardized IP over Ethernet in place also allows automakers to test new connected features and quickly make adjustments. That means companies can roll out new features sooner than later.

We’ve seen some significant strides in future-proofing already.

Back in 2013, Audi partnered with Qualcomm to build in-car 4G LTE wireless broadband. It also equipped models with swappable hardware for new features. Similarly, Tesla has proven to be the vanguard with over-the-air software updates, which enable car owners to access new autonomous features without having to go to a dealership. And Dekra, a car-inspection company, launched a research site dedicated to vehicle and infrastructure testing as part of its future-proofing initiative.

But with all this talk about standards, another question remains: Whose standards will we adopt?

The ethics of standardization

Remember Uber and Waymo’s legal brouhaha back in February? It reflected the high stakes involved in the race to see who’ll make it to the top first. But it also exemplified another issue — lack of communication between companies.

The integral force behind connected cars is that they’re connected. They’re in constant communication with each other. But what if Waymo cars can’t communicate with Uber’s? Or if Ford’s fleet speaks a different language than Tesla’s? Some fear that whichever company waves the flag first will set the standard.

After all, consumers will likely opt for the safest connected car. And the safest connected cars might be those with the most corporate sponsors, the most data and the largest fleet on the road. That means consumers will buy from the most popular brand. As a result, that brand might limit connective compatibility with other brands to flex its hold on the market. That’s where monopolies are born: options become limited and prices go up.

This raises some questions around standardization. Should the safest cars have to share their technology and research, or will they be given sole custody of our roads? And if there is government intervention, might inferior technology be weeded out in the name of safety, thereby limiting competition?

The DoT’s aforementioned mission to create pliant, scalable standards far in advance could be the antidote — at least in part. With flexible, clearly defined guidelines in place, there’d be less ambiguity around how companies build their systems. Plus, the DoT would have a direct role in how those guidelines continue to evolve. If standardization is based on the consensus of different parties involved, then a concerted effort might be our best bet.



October 22, 2018  10:58 AM

IoT challenges associated with the agriculture industry

Karina Popova
Agriculture, Artificial intelligence, farm, farms, Internet of Things, iot, IoT analytics, IoT data, IoT devices, iot security

The impact of IoT on the agriculture industry

The agriculture industry has always experienced slower growth compared with other sectors, mainly because of food deficits and hunger problems in some areas of our planet. These are often related to the distribution and imbalance of farming, climate change, urban influences, industrialization and the use of chemicals, as well as the replacement of small farms by industrial plantations. Private farmers still prevail in producing essential crops such as wheat, rice and maize, although they receive lower returns, mainly due to ineffective supply chains and the absence of a proper market connection. Often during transportation, carriers are not prepared to maintain the right temperatures, or they reroute vehicles because of unexpected events. The ability to look into real-time conditions inside fleet transport and to apply weather predictions could have a significant impact on the food industry and on farmers’ returns.

Data sharing in the agriculture industry

Artificial intelligence and new technologies create a great mixture of digital systems across agriculture, primarily through data sharing. Data flows in from a number of sources: field-based sensors, aerial sources and environmental data, as well as remote sensing data coming from various satellites. At the same time, data distribution brings its own unique challenges. Typically, farmers are extremely concerned about access to their data and how it might be used against them. There are two common concerns: Could someone use the data to influence market prices? And could the government use it to sue a farmer for violating regulations?

If we put these matters aside, there are enormous advantages to sharing data in the agriculture industry. Nowadays, many farmers who share data with their bank or insurance company receive lower interest rates or rewards. Therefore, the principal focus for IoT in agriculture should be transparency and control over data usage. But to achieve this, it is essential to assemble the right supervision from industry, government and researchers. For example, to support the precision farming concept, individual agricultural operations will need to store logs with information about their equipment, proving that it is up to date and operating accurately.

Data security in the agriculture industry

As in any other industry, farmers need to think about security to use technology in the right way. The precision farming concept naturally exposes the agriculture sector to the risk of hacking and data theft. Additionally, there is a significant lack of information about data protection in this field. The best-known cases have involved hacktivists who destroy data to protest the use of genetically modified organisms or pesticides.

Farm equipment providers use IoT devices and data analysis practices to make farming more efficient. However, agriculture still runs largely on older technologies that often lack both a data security concept and robust data backups. For example, some field monitoring drones connect to farm equipment. This equipment is often linked to shared channels and the internet, yet it usually does not include basic security features such as monitoring of employee logins or two-factor authentication for remote access sessions.
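As a small illustration of the kind of login monitoring described above as often missing, the sketch below scans an SSH authentication log for accepted remote sessions and flags any user not on an approved list. The log path, simple line parsing and approved-user list are assumptions for the example.

```python
# A minimal login-monitoring sketch: flag SSH logins by unexpected users.
APPROVED_USERS = {"agronomist", "maintenance"}
AUTH_LOG = "/var/log/auth.log"   # typical location on Debian/Ubuntu systems

def accepted_logins(log_path):
    """Yield (user, source_ip) for each accepted SSH login found in the log."""
    with open(log_path) as log:
        for line in log:
            if "sshd" in line and "Accepted" in line:
                parts = line.split()
                # Typical format: "... Accepted password for USER from IP port ..."
                user = parts[parts.index("for") + 1]
                ip = parts[parts.index("from") + 1]
                yield user, ip

for user, ip in accepted_logins(AUTH_LOG):
    if user not in APPROVED_USERS:
        print(f"ALERT: unexpected remote login by '{user}' from {ip}")
```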

IoT as a service in the agriculture industry

Nowadays, many farm equipment companies provide integration of IoT as a service. The precision farming concept plays an enormous role in their business. Gathered real-time data provides better visibility to the farmer, and the knowledge of a specific territory can be easily shared with the community. Because of this, precision crop farming systems should include capabilities for device management, data storage, security and deep analytics on the data ingested from on-field sensors, aerial imagery and remote sensing techniques. It will generate real-time insights out of data for the farmers and support the agro-scientists with better decision-making.



October 18, 2018  1:03 PM

Why insurance is the next big opportunity for smart home adoption

Mitchell Klein
"car insurance", Consumer IoT, Insurance, Internet of Things, iot, IoT devices, Smart Device, smart home

As we wrap up exciting changes in tech in 2018 and look ahead to a new year, one thing is for certain: smart home adoption continues to surge and mainstream awareness is growing at impressive rates. A report from Strategy Analytics found that the “sales of smart home devices are expected to reach $155 billion in 2023 with 1.94 billion units sold, officially surpassing smartphone sales (which were predicted to be at 1.84 billion in 2023).” We all know that smart home will be big, but everyone is looking out for the trends and technologies that will both advance and challenge its growth and success in years to come.

2017 and 2018 saw the rise of smart speakers and their influence on awareness in the connected home, and we’ve seen interest continue to emerge with big players like Amazon and Google announcing partnerships within the last year with smart home brands both big and small to help catapult voice control into the mainstream. And after receiving lots of interest, they are now looking to further capitalize on smart home proliferation by launching their own product lines or acquiring leading brands. At a recent press event, Amazon announced a slew of new hardware that uses voice with Alexa, many of which are designed to expand upon its smart home portfolio.

With AI and voice still top of mind, other trends and new markets with different applications for connected technology start to surface; one especially interesting one is in the world of insurance. Insurance tech, or insurtech, refers to the push for the use of technology to help insurance models evolve, become more efficient and offer new opportunities and customer touch points.

A key part of the insurtech conversation? The opportunity for the smart home.

Insurance and the smart home

Homeowners insurance and smart home. On the surface they seem an unlikely pair, but at a second glance, it’s clear there are plenty of reasons insurers are taking notice of the connected device boom. The typical home or building insurance model as you may know it has reached a critical pivot point in its evolution. As new demographics like millennials purchase homes and home insurance, insurance companies are learning that they must find new ways to create fresh, enticing sales models and revenue streams to grow their business. They also have to consider aspects of insurance-buying not previously a heavy focus, such as customer experience, deeper targeting and modernized tools for navigating the often-frustrating world of claims. At its core, an insurance company still wants to find compelling ways to manage and mitigate risk, and it wants to do so in a way that is still profitable. So how does smart home fit in?

One of the strongest benefits of a smart home is its ability to help protect a home or building from damage — whether that is weather-related, appliance- or home structure failure-related, or human-related (i.e., burglary or theft). A smart home, if set up to do so, can assist in the home-protection process. It’s not a be-all and end-all solution. A notification on your smartphone about a water sensor can only do so much to prevent damage. But give consumers access to a water management device that can actually turn off a water source from the app on their phone? That is a compelling story for an insurer. A smart lock that can be programmed with unique access codes? That’s a good start to keeping a home safe from break-ins. A smart lock that is synced to a monitored whole-home smart security system — a system that will call the police when tampered with? That is a compelling story for an insurer.
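A toy sketch of that water-management scenario is below: when a leak sensor reports water, the automation commands the shutoff valve and notifies the homeowner. The device names and functions are placeholders rather than any specific vendor’s API.

```python
# A toy leak-response automation: sensor event -> close valve -> notify owner.
def shut_off_valve(valve_id: str) -> None:
    print(f"Closing water valve {valve_id}")   # stand-in for the real actuator call

def notify_owner(message: str) -> None:
    print(f"Push notification: {message}")     # stand-in for the app notification

def on_sensor_event(sensor_id: str, wet: bool) -> None:
    if wet:
        shut_off_valve("main-supply")
        notify_owner(f"Leak detected by {sensor_id}; water main closed.")

# Simulated event from a basement leak sensor:
on_sensor_event("basement-leak-01", wet=True)
```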

That’s not to say there isn’t a place for one-off sensors or lights in the smart home; when paired together, these are powerful home management tools. But from a home insurance perspective, it is when these technologies are paired with one another that they offer the most complete protection, and thus the best argument for insurers to offer premium or other plan discounts for their installation.

The smart home industry’s legacy of security and safety makes it a prime opportunity for insurance to get involved, and many insurance companies already are, offering discounts on premiums or making devices available at a low cost. Insurance giant State Farm offers discounts for installing Canary connected devices and ADT security systems, and smaller regional company American Family partnered with Nest to offer smart smoke/carbon monoxide detectors to Minnesota homeowners at no additional cost. Some insurance companies have taken a slower approach but are still taking notice. Liberty Mutual, for instance, has invested in smart home startups like August Smart Lock, and Nationwide has championed smart home adoption in the past and announced a $100 million investment in insurtech in 2017.

State of the industry

Outside of the insurance players themselves, the insurtech space is seeing a lot of interest from companies looking to offer smart home-based technology or customer experience systems to insurers for implementation into their broader programs. Many insurance companies see the opportunity starting at the product level, providing the products for installation as part of a greater partnership with IoT manufacturers.

Startup SmartInsure offers a full turnkey system for smart home and insurance by supplying smart products like smart water management and security, then installing or reviewing an existing smart home setup, and also providing a home insurance package with an affordable monthly premium. All of this is designed to offer a one-stop shop for homeowners who want high-quality, lower-cost insurance that also comes fully baked with the latest technology and backed by customer-centric service.

It’s not just the insurance companies that are enticed by smart home — consumers also see the value in adding smart devices in hopes of protecting their home and saving money. A 2017 report from Parks Associates found that “nearly 40% of broadband households in the United States with an insurance policy would switch providers in order to obtain smart home products as part of a new insurance service.”

The growing interest in insurance for the smart home space is evident beyond individual company initiatives and increasing consumer interest. The number of industry events and discussions around insurtech and smart home’s role in the insurance space is growing fast as insurance and tech leaders alike turn their eyes to the vast opportunities there.

The opportunity — what’s next?

What’s next for the marriage of smart home and insurance? The industry needs to think more deeply about engaging with consumer-facing technology and how home tech can be used not only in mitigation, but also in remediation, claims and beyond. Collectively, insurers and tech brands need to contemplate what type of standards or regulations may need to be in place to help insurance underwriters see a true benefit and understand the types of risk/reward scenarios they are evaluating, depending on the number of devices, what each device does, whether they were installed professionally or DIY, and more.

The bottom line is this: Smart home makes sense for both the insurers and the insured. Its impact on overall smart home adoption rates, however, and what it will mean for both the insurance and smart home/IoT industries, is still being mapped. While smart home is still relatively new in the eyes of older insurance providers, the two groups must find grounds to work together and continue to educate each other and the market to fully realize the opportunity that lies ahead.



October 16, 2018  2:32 PM

Why not homegrown over-the-air software capabilities?

Thomas Ryd
backdoor, Internet of Things, iot, iot security, patching, Software patches, software update, software updates

According to Gartner, there will be almost 3 billion new internet-connected devices in 2018. Unfortunately, most of these devices lack basic security features, making them susceptible to hacking and being compromised. A few days ago, California took the first steps in making connected devices more secure with its “SB-327 Information privacy: connected devices” bill. This is a step in the right direction.

One basic security feature for connected devices is the ability to perform firmware and software updates remotely and over the air (OTA) to patch security vulnerabilities. According to McKinsey, the medical sector alone will see product recalls worth more than $1.6 billion this year due to software vulnerabilities and the inability to remotely update and fix them. The same dire situation exists in most other industries.

In addition to the widespread lack of OTA capabilities in connected devices, another worrying fact is that among devices that do have OTA capabilities, most of the update mechanisms are built in-house by the device vendors themselves. This article focuses on why, if you are about to develop a homegrown OTA system, you should stop, and why, if you already have one, you should consider moving away from it for your next-generation products.

Not designed from scratch with security in mind

Most homegrown systems emerge from a last-minute realization that a new, soon-to-be-launched product needs a way to be updated and reached. What results is an insecure and fragile OTA system made in a hurry by in-house developers. This initial version runs the risk of becoming the core of an ever more complex and ad hoc OTA system, growing like Frankenstein’s monster.

At Mender.io, we have talked to many developers of homegrown systems, and it is scary to discover how many have created a backdoor, just in case. Homegrown OTAs are seldom created with security in mind. The security ignorance remains an Achilles heel throughout the lifetime of the devices for which it was made.

Not made by system management developers, but product developers

Homegrown systems emerge from product developers who typically lack basic system management insight.

The result is OTA systems with operational and management capabilities that are missing or very hard to use. Secure bootstrapping, encrypted communication, logging, monitoring and device grouping are just a few examples of basic system management features alien to most homegrown technologies.

Not open source

The code behind homegrown solutions is often closed source because the system is specific to the company and considered unique. Consequently, the code is inspected and reviewed by only a few people, sometimes even a single person. This poor scrutiny and review makes for a less secure and less solid system than if more developers had contributed.

Undesirable bus factor

Because the homegrown OTA is not considered core to the company, often only a few people, or even a single person, stand behind it. In such cases, the company runs a great risk if those developer resources suddenly disappear. With few or no others who know or understand the code, the company might even lose its ability to do OTA updates at all.

Not optimized for CI/CD pipelines

Homegrown technologies, being an afterthought, are seldom API-driven. According to our findings, homegrown OTA systems normally take form as standalone applications serving a very specific need for a specific product.

Modern DevOps software development processes require API access to the OTA updating process. Reengineering a standalone application to support a suitable API schema will be both costly and time-consuming.
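For illustration, here is a hedged sketch of what an API-driven OTA step in a CI/CD pipeline could look like: upload the artifact built earlier in the pipeline, then create a deployment targeting a canary device group. The server URL, endpoints and payload fields are hypothetical, not the API of Mender or any other specific product.

```python
# A hypothetical API-driven OTA step, run from a CI/CD pipeline job.
import os
import requests

OTA_SERVER = os.environ.get("OTA_SERVER", "https://ota.example.com")
API_TOKEN = os.environ["OTA_API_TOKEN"]       # injected by the CI system
HEADERS = {"Authorization": f"Bearer {API_TOKEN}"}

# 1. Upload the firmware artifact produced earlier in the pipeline.
with open("build/firmware-v2.3.1.bin", "rb") as artifact:
    upload = requests.post(
        f"{OTA_SERVER}/api/v1/artifacts",
        headers=HEADERS,
        files={"artifact": artifact},
        timeout=60,
    )
upload.raise_for_status()
artifact_id = upload.json()["id"]

# 2. Create a deployment targeting a small canary group before the full fleet.
deployment = requests.post(
    f"{OTA_SERVER}/api/v1/deployments",
    headers=HEADERS,
    json={"artifact_id": artifact_id, "device_group": "canary", "name": "v2.3.1"},
    timeout=30,
)
deployment.raise_for_status()
print("Deployment created:", deployment.json().get("id"))
```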

Not suitable for mergers and acquisitions

As homegrown systems typically grow organically and without a longer-term product roadmap, they risk ending up being extremely product-centric. Serious reengineering will be required in order to adapt the OTA for another product.

As companies with homegrown OTAs acquire new companies and technologies, they cannot rely on their existing OTA, and over time end up with a myriad of various OTA and device lifecycle management implementations.

Not brick safe

Device bricking is costly. Creating an OTA system that can recover from a failed update, for instance due to a sudden loss of power, is not trivial. In the interest of cost and time, a solid, atomic update mechanism seldom makes it into the world of homegrown OTAs.
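Below is a simplified sketch of the dual-partition (“A/B”) approach that makes updates survivable: write the new image to the inactive slot, mark it for a trial boot, and commit it only after the new software proves healthy. The paths and bootloader interface are illustrative assumptions, not a specific product’s design.

```python
# A simplified A/B update sketch: a power cut mid-update never touches the
# currently running, known-good slot, so the device cannot brick itself.
import json
import shutil

STATE_FILE = "/var/lib/ota/state.json"   # hypothetical bootloader-visible state

def read_state():
    with open(STATE_FILE) as f:
        return json.load(f)              # e.g. {"active": "A", "trial": None}

def write_state(state):
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)

def install_update(image_path):
    state = read_state()
    inactive = "B" if state["active"] == "A" else "A"
    # Write the full image to the slot we are NOT running from.
    shutil.copyfile(image_path, f"/dev/rootfs_{inactive.lower()}")
    state["trial"] = inactive            # bootloader tries this slot once
    write_state(state)
    # Reboot happens here; if the new slot fails its health check or the device
    # loses power, the bootloader falls back to the old slot on the next boot.

def commit_update():
    """Called by the new software only after it has verified it is healthy."""
    state = read_state()
    if state.get("trial"):
        state["active"], state["trial"] = state["trial"], None
        write_state(state)
```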

Not satisfactory documentation

Documentation normally suffers because homegrown OTAs stem from one or a very small number of developers. Since the system is also operated by the same people, the need for documentation among these super users remains non-existent. If any documentation exists, it is often in the form of out-of-date wikis or other unmaintained documents.

Not high quality

A homegrown OTA, being an afterthought and monolithic application, seldom has high code test coverage. Born out of a last-minute urgent need, the system typically was developed in a hurry to meet the minimum acceptable criteria. Testing and test coverage never made it to the top of the list. Unfortunately, as the system matures, test coverage continues to suffer.

Being closed source, with few eyes contributing to and reviewing the code, also makes it easier for the homegrown OTA developer to let test coverage stay low.

Conclusions

In this article, a series of arguments has been made for why owners of homegrown OTA systems should migrate away from them, and why your organization should consider using a vendor-provided OTA for new products.

In addition to all the arguments listed, the most obvious one remains: You don’t differentiate your product at the system level. Product companies should spend their resources creating new revenue-generating features, not undifferentiated system management services. Customers take these services for granted anyway; they are not the reason they buy your products in the first place. Products succeed because of their features and promise, not their ability to be remotely updated and stay secure.



October 16, 2018  12:24 PM

How can IoT unveil opportunities in retail stores?

Dan Natale Profile: Dan Natale
Customer satisfaction, Internet of Things, iot, IoT analytics, IoT data, IoT devices, IoT sensors, POS, retail, Retail/point-of-sale applications, retailers, Smart sensors

Retailers collect a wealth of information from their stores every day. Point-of-sale (POS) data can show merchandise conversions and identify peak sales periods, while an employee scanning or stocking shelves can tell when inventory is low. Traffic counters can indicate movement patterns and popular areas of the store. The list goes on.

The challenge is this: Most of these data points are being gathered independently — only showing retailers a portion of what really goes on in the store.

For example, POS data alone can’t track how many times an item was tried on in comparison to how often it was purchased. Retailers might be able to use POS data to manage workforce staffing based on peak periods for sales, but what about peak periods for store traffic in general?

Additionally, data collected to determine bestselling items is vague and shopper-behavior insights are virtually non-existent. Retailers can’t afford to make assumptions in an increasingly competitive marketplace. Without being able to view the whole picture, retailers run the risk of missing opportunities to improve their business and boost conversions.

Digital natives are ahead of the game

Native online stores have been capable of collecting valuable data throughout a customer’s shopping journey for as long as they have existed. Because shoppers leave a digital trail, retailers can see items viewed, added to a cart and purchased. They can identify popular items, colors and sizes in real time. The data enables retailers to quickly adjust their merchandise mix and recognize opportunities to improve profits.

Because of this ability to modify strategy on the fly in the digital world, online stores that are moving into brick-and-mortar have high expectations for physical store insights. And, in many cases, they're leading the move toward adoption of integrated, emerging technologies that help mirror the customer experience in the physical store, as well as collect, analyze and report on rich data sets.

IoT goes mainstream

One example is the use of IoT technology to create connectivity and gather store data. Until recently, deploying IoT systems in stores was viewed as complex, cumbersome and expensive because it required the right mix of hardware, software and networks (while ensuring that each piece was highly scalable, rapidly deployable and easily adaptable).

But now, technology innovations have simplified the process, and often existing in-store systems can be used and connected to create a basic foundation for IoT. By using new or existing sensor networks, retailers can connect multiple disparate systems and get a single view across a wide segment of data sources. Now, they can capture and organize data, access rich and actionable insights based on the data, modify processes on the fly and help employees adjust their behaviors based on real-time input.

Here's an example of an actual deployment. RFID sensors placed on apparel collect data on merchandise movement from showroom to fitting room and back. Using a combination of data captured through the RFID sensors, overhead traffic counters and POS data, sales managers can determine which items, sizes and styles are being tried on compared to which are purchased. An apparel retailer uses this data to optimize inventory by identifying bestsellers and items that can be reduced due to low demand. In addition, by understanding what merchandise is tried on but not purchased, the retailer is training staff to better identify what guests want, and their sizes, when they enter the store. This reduces try-ons, improves the customer experience and boosts the conversion rate.
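
As a rough illustration of the kind of data join involved, the sketch below combines fitting-room RFID events with POS lines to estimate try-on-to-purchase conversion per SKU. The field names and event shapes are assumptions, not the retailer's actual schema.

```python
# Illustrative sketch: estimate try-on-to-purchase conversion per SKU by
# combining RFID fitting-room events with POS line items (assumed data shapes).
from collections import Counter


def tryon_conversion(rfid_events, pos_lines):
    """rfid_events: dicts like {"sku": "JKT-42", "zone": "fitting_room"}
    pos_lines:   dicts like {"sku": "JKT-42", "qty": 1}"""
    try_ons = Counter(e["sku"] for e in rfid_events if e["zone"] == "fitting_room")
    purchases = Counter()
    for line in pos_lines:
        purchases[line["sku"]] += line["qty"]

    report = {}
    for sku, tried in try_ons.items():
        bought = purchases.get(sku, 0)
        report[sku] = {
            "tried_on": tried,
            "purchased": bought,
            "conversion": round(bought / tried, 2) if tried else None,
        }
    return report

# Example: a SKU tried on 8 times and bought twice reports a conversion of 0.25.
```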

Traffic data combined with POS transaction and labor allocation data can also help retailers determine if a store is staffed properly. If retailers know when fitting rooms are busiest, they can schedule more sales associates to help shoppers during those times. The insight can also be used to train employees on how to interact with shoppers and guide their path to purchase. For example, while POS data can tell you the time of day when you have the highest conversions, it can't tell you if customers tried on and abandoned merchandise due to a lack of sales assistance. IoT can give you the data to draw those kinds of conclusions.
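
Along the same lines, a simple cross-check of hourly traffic, transactions and scheduled staff can flag hours that look understaffed. The thresholds below are arbitrary placeholders chosen to show the shape of the logic, not recommended values.

```python
# Sketch (assumed data shapes and placeholder thresholds): flag hours where
# traffic is high but conversion is low relative to scheduled staff.
def understaffed_hours(traffic_by_hour, sales_by_hour, staff_by_hour,
                       min_traffic=50, conversion_floor=0.15,
                       shoppers_per_associate=25):
    flags = []
    for hour, visitors in traffic_by_hour.items():
        sales = sales_by_hour.get(hour, 0)
        staff = staff_by_hour.get(hour, 0)
        conversion = sales / visitors if visitors else 0.0
        if (visitors >= min_traffic
                and conversion < conversion_floor
                and staff * shoppers_per_associate < visitors):
            flags.append({"hour": hour, "visitors": visitors,
                          "conversion": round(conversion, 2), "staff": staff})
    return flags

# understaffed_hours({"18:00": 120}, {"18:00": 10}, {"18:00": 2})
# -> [{"hour": "18:00", "visitors": 120, "conversion": 0.08, "staff": 2}]
```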

By tapping into sources of data that have existed outside the reach of physical stores, retailers can finally uncover the root cause of common challenges that erode brand loyalty, customer satisfaction and revenue streams. IoT brings new sources of information together, allowing retailers to uncover missed opportunities and providing the insights needed to turn those opportunities into outcomes.



October 16, 2018  10:36 AM

Reliable cellular IoT connectivity demands ‘owning’ connectivity

Svein-Egil Nielsen Profile: Svein-Egil Nielsen
4G, Bluetooth, Cellular, Internet of Things, iot, IoT applications, IOT Network, IP, Wi-Fi, Wireless

What's the most reliable wireless technology you use? Most people I speak to say it's cellular. That's not to say cellular is 100% reliable. But it's as close to a "gold standard" in wireless engineering as exists today (with timely patches and updates, of course).

In fact, blanketed with decent 4G coverage and an unlimited smartphone data plan, you could switch Wi-Fi off altogether. You just wouldn’t need it. Cellular would be more reliable, more secure and often just as fast.

Complicating the issue

Now, may I ask you to consider what’s the most technically complex wireless technology you’ve ever used?

You may find that one harder to answer as an end user. But the answer is again cellular. No wireless technology comes close to its complexity at the physical and application layers.

So, how is it possible that a wireless technology that is so complex can also be made so reliable? The answer is something anyone in the market for a cellular IoT system needs to consider very carefully.

What makes cellular so reliable?

If I had to summarize it in one word, I would say "ownership." Most cellular base stations and network infrastructure are built and maintained by just one of a handful of vendors worldwide (the biggest three being Ericsson, Nokia and Huawei). These vendors make it their business to "own" every component that impacts performance, including, above all else, the wireless connectivity.

If they don’t design a component, they make it their business to know everything about that component. So much so that they might as well have designed it themselves.

And on the smartphone side of the connectivity link, the same is also true. Again, a handful of vendors make it their business to understand every component and system that impacts their device’s performance as if they had designed it themselves. And when they have built their systems, they test them to destruction, and make continuous finely tuned performance improvements throughout the product lifecycle.

Where theory collides with reality

This ownership control over every aspect of a device's performance is the key to reliability, not just in cellular but in any wireless application. It's the difference between theory and reality. In theory, if a piece of off-the-shelf software or hardware intellectual property (IP) conforms to a standard, it can be combined in modular chunks and will work perfectly. In reality, the unforeseen factors that can impact wireless performance at the implementation stage are numerous, so conforming to a standard only encourages seamless connectivity; it certainly doesn't guarantee it.

As a former Chair of the Bluetooth SIG and now CTO of Nordic Semiconductor, which provides wireless semiconductors, I know only too well that even with a solid, universally agreed upon and vetted open standard underpinning a wireless technology, there are numerous subtle differences in the way device vendors can legitimately implement that standard ("flavor," if you like), and these differences can throw up previously unforeseen, and indeed unknowable, interoperability problems.

The problem is that all wireless connectivity relies on successfully connecting to another party's device. In Bluetooth, that's smartphones and PCs. In cellular, and that now includes cellular IoT, it's base stations. If products using wireless technologies can't connect reliably and transfer your data, for whatever reason, it's game over. Your product is doomed to poor reviews and subsequent lackluster sales.

It might not be a simple fix either

The truth is wireless technology can sometimes run into problems that can be very difficult to find and fix. The best chance you’ve got — both now and in the future — is to be able to test and adjust every single technical parameter that could affect wireless performance at any level of the application, in both the software and hardware.

The problem is that this kind of control doesn't come cheap. It requires designing and building the required connectivity technologies in-house, from the radio to the protocol stack to the hardware, instead of just buying in bits and pieces of modular, low-cost, off-the-shelf IP.

The moral of the story

Put simply, when embarking on a cellular IoT project, the more multivendor IP there is between your application and the base station, the greater the likelihood of problems down the line, and the lower your ability to fix them.

To guarantee reliable, mass-market cellular IoT connectivity, you have to make sure you "own" (have control over) everything between your application and the base station. Otherwise you risk, sooner or later, some part of your cellular IoT product's wireless link not working right. Not only is that going to be a real pain in the ass for your end customers, it could also be almost impossible for you to fix when they come to you demanding a solution.


