Open source software has never been more valuable than it is today. But don’t take my word for it — just look at Microsoft’s most recent move. Why else would the tech titan spend $7.5 billion to acquire GitHub, the world’s most popular software development platform?
MOBI, the company I work for, has also hopped on the open source bandwagon to create technology-driven tools. Recently, our development team launched Numberjack, a global phone number validation and normalization library that helps users validate, standardize and display phone numbers in a variety of international formats.
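The core of such a tool is parsing arbitrary input into a canonical form such as E.164 (+&lt;country code&gt;&lt;national number&gt;). Numberjack’s actual API isn’t shown here; the sketch below illustrates the idea in Python with a deliberately tiny, hypothetical rule table — real libraries ship full per-region metadata and handle trunk prefixes, extensions and formatting variants.

```python
import re
from typing import Optional

# Hypothetical, minimal per-region rules; a real validator ships full
# metadata for every calling region (trunk prefixes, variable lengths, etc.).
COUNTRY_RULES = {
    "US": {"code": "1", "length": 10},
}

def normalize(raw: str, region: str) -> Optional[str]:
    """Return the number in E.164 format, or None if it fails validation."""
    rules = COUNTRY_RULES.get(region)
    if rules is None:
        return None
    digits = re.sub(r"\D", "", raw)              # strip punctuation and spaces
    if digits.startswith(rules["code"]) and len(digits) > rules["length"]:
        digits = digits[len(rules["code"]):]     # drop an explicit country code
    if len(digits) != rules["length"]:
        return None
    return "+{}{}".format(rules["code"], digits)

print(normalize("(317) 555-0199", "US"))   # +13175550199
print(normalize("555-0199", "US"))         # None: too short to validate
```

The same canonical output for every input variant is what makes downstream display and deduplication tractable.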
Modern open source technologies are even expanding to include IoT endpoints. Since few organizations can currently ensure integration and interoperability with other mobile technologies, global IoT initiatives are increasingly using open source software to create more efficient connectivity and data communication paths.
If implemented safely and strategically, the competitive advantages open source IoT creates can be tremendous. In fact, forward-thinking organizations are already seeing these advanced initiative investments pay off in three disruptive ways: more meaningful data, advanced artificial intelligence capabilities and unlimited IoT upgrades.
Driving data insights
Organizing IoT’s constant communication is a job that’s much easier said than done — especially when managing thousands of global endpoint devices. For enterprise mobility managers, however, that’s just step one. Only after feedback is consistently collected and stored can a company start to make sense of its data insights, much less use them to make impactful business decisions.
By introducing an open source IoT software product, organizations give themselves an always-updated tool that helps accelerate internal data analysis efforts and more accurately map a mobile technology program’s most serious information needs. This not only eliminates the time-consuming, non-essential tasks IoT management often creates, but also refocuses enterprise IT resources on capturing only the most relevant data insights.
Some companies are even using open source IoT to streamline data processing tasks, whether it works with one or 100 different cloud storage structures. After all, it’s much easier to migrate analytics and data sets if the exact same source code can separately reside on each enterprise domain.
Accelerating AI’s impact
Another advantage open source IoT creates is the ability to take advantage of publicly available AI tools like TensorFlow, Google’s deep learning AI framework. Since 2015 (when Google handed over TensorFlow to the open source software community), global IoT initiatives have relied on tools like these to create training models that prepare enterprise technologies for a virtually unlimited number of business applications and use cases.
Without these publicly available AI innovations, connected IoT systems simply can’t include the features many global initiatives expect today, like native image recognition or automated predictive maintenance. Moving forward, the ability to use these insights will only become more valuable — if not essential — to ensuring seamless operability between all enterprise endpoints.
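Features like predictive maintenance ultimately come down to learning what “normal” looks like from sensor data and flagging deviations. The sketch below is a deliberately minimal, framework-free illustration of that idea with invented values; a production system would train a real model in a framework such as TensorFlow.

```python
from statistics import mean, stdev

def fit_baseline(readings):
    """Learn a simple baseline (mean and spread) from healthy sensor data."""
    return mean(readings), stdev(readings)

def needs_maintenance(reading, baseline, threshold=3.0):
    """Flag a reading whose z-score against the baseline exceeds the threshold."""
    mu, sigma = baseline
    return abs(reading - mu) / sigma > threshold

# Hypothetical bearing temperatures (deg C) from a healthy machine
healthy = [70.1, 69.8, 70.4, 70.0, 69.9, 70.2]
baseline = fit_baseline(healthy)
print(needs_maintenance(70.3, baseline))   # False: within normal variation
print(needs_maintenance(78.5, baseline))   # True: drifting hot, schedule service
```

Deep learning models replace the hand-built baseline with learned representations, but the flag-the-outlier pattern is the same.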
Endless IoT improvements
While a lack of proprietary IoT software does pose risks that can be concerning to IoT asset management efforts, organizations are often willing to overlook those dangers for the promise of a continuously updated code base only open source software can deliver. Unlike licensed options that are only upgraded a few times every year, open source IoT technologies feature up-to-the-minute input from thousands of subject matter experts all over the world. This means that base IoT platforms are not only always compatible with the latest enterprise technology available, but that specialized software versions and instances can be created faster and more easily than ever before. Now, even the most niche industry or application can have its own custom IoT configuration to satisfy business demands.
Open source IoT development can even help organizations overcome one of the industry’s most dangerous potential downfalls: a lack of IoT manufacturer security standards. These publicly available IoT tools set a minimum standard for global endpoint governance and processes, allowing for more collaborative software security rules that can be structured to improve security in proven, measurable ways.
While enterprise IoT software still has a long way to go before it can guarantee data security and privacy, open source development projects like Numberjack could very well be the path to a successfully managed future. It will be interesting to see whether organizations ultimately trust these new-age innovations, or play it safe and choose a more traditional approach to technology instead.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
Data from IIoT machines and operational applications allows industrial organizations to collaborate and innovate in new ways. Think of data as a different type of lubricant for industrial processes. But that is not its only use.
The data exhaust from connected machines and processes in industrial firms has several alternative uses. For evidence, take a look at similarities with the e-commerce arena, where data holds economic value. As a result, user profile and transactional data have gained currency as a tradable asset in a completely new advertising and data-broker ecosystem.
The IoT market is reaching a stage of maturity where industrial organizations should anticipate that a similar pattern will affect IIoT data. This calls for a conscious strategy to capitalize on new opportunities. Industrial firms have to innovate in different areas, treating data as an asset class and identifying new value propositions. This leads to a set of organizational implications and choices for engaging with other market participants.
Organizational planning for industrial IoT data
A five-point checklist that industrial organizations and their supply chain partners should run through is as follows:
- Develop IIoT data capabilities inside the organization. Begin by cataloging IIoT data assets. Then surround this resource with three capabilities. One of these takes the form of a data science team. Another is to install technology foundations to manage data securely and reliably. The third is a set of innovation capabilities spanning business development, intellectual property and regulatory functions.
- Formalize upstream supply chain opportunities. Ensure the organization controls data from suppliers as in the case of outsourced facilities management. This action ensures that changes in reporting frequency or data quality (e.g., granularity) do not incur time or cost penalties.
- Explore downstream supply chain opportunities. Investigate downstream constraints and openings with channel partners. In many cases, industrial firms will learn about data reporting conditions that channel partners may have preemptively imposed.
- Join other IIoT data ecosystems. Some industries will spawn value through external IIoT data ecosystems, as in the health and wellness sectors, be they human- or machine-related. The value stems from aggregating data for a given class of machines, such as providing failure mode insights for traditional assets like standby power generators or hydraulic pumps.
- Orchestrate a new IIoT data ecosystem. Ambitious firms can create and orchestrate new IIoT data ecosystems. Open data portals, as in the case of the London Datastore, and data exchanges are examples of pioneering efforts to explore new market opportunities by sharing data beyond an organization’s boundary. Doing so raises issues of data certification and licensing so that downstream users can build services on reliable sources. There is also a need for oversight functions to deal with substandard data providers and to prevent rogue behavior.
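The first checklist item, cataloging IIoT data assets, can start very simply. The record below is a hypothetical sketch in Python; the field names and values are illustrative, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class DataAsset:
    """One entry in a hypothetical IIoT data catalog."""
    name: str                      # what the asset is
    source: str                    # machine, line or supplier producing it
    owner: str                     # team accountable for quality and access
    update_frequency: str          # e.g., "1 Hz stream" or "daily batch"
    sensitivity: str = "internal"  # drives licensing and sharing decisions
    tags: list = field(default_factory=list)

catalog = [
    DataAsset("pump-7 vibration feed", "hydraulic pump #7",
              "reliability team", "1 Hz stream", tags=["failure-modes", "shareable"]),
    DataAsset("line-2 energy meter", "packaging line 2",
              "operations", "daily batch"),
]

# A catalog makes questions like "what could we share externally?" answerable
shareable = [a.name for a in catalog if "shareable" in a.tags]
print(shareable)
```

Even a flat list like this gives the data science, security and business development capabilities something concrete to organize around.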
Anticipate market evolution to succeed
The latent value in IoT data will expose new commercial opportunities beyond the application benefits from improved industrial operations. This change in perception surrounding IIoT data development will force industrial organizations into new business models. Firms will discover that data from individual assets creates value for several different customer segments.
To capitalize on the new opportunities, industrial firms will have to master new skills. These include the ability to package and distribute data through horizontal trading relationships. This is a very different picture from today’s vertically aligned structures.
Without doubt, there will be a certain amount of industry disruption. Organizations that supply data-intensive industrial products and services look well-positioned to take advantage of data innovation. Examples include sensor manufacturers, instrumentation and data-logging specialists, remote-connectivity providers and alliances that pool data for cross-population analyses. For others, the challenge is to avoid being left behind and becoming detached from IIoT-enabled value chains.
The grocery store is the last stop between the farm and our forks. The business of feeding the world is not easy. Although the grocery store commands the bulk of our retail spending — representing over a third of annual spend in some cases — the margins for grocers are tight. Grocers have the challenge of providing a necessary product which is often a commodity. So, what is the local grocer to do? How does it ensure it can drive traffic and continue to capture revenue, while managing its razor-thin margins? More connectivity. As grocers become more connected, and not simply at the store level, new avenues for revenue, cost control and customer experience opportunities will present themselves. Here is how:
- Connecting to the network allows for greater clarity throughout the supply chain. Grocers of all kinds are looking to gain more detailed insights into their supply chains. Where is product coming from? What steps does each product take to reach its final destination? Who are the suppliers in the network? Locally sourced, organic and sustainably made food is no longer just a trend — it is now the norm. Consumers want to know their food’s journey from farm to fork, and they are willing to pay a premium for it. By using technologies like IoT and radio frequency ID, grocers will be able to better tag and track products at every point across the supply chain.
- Greater in-store connectivity means better inventory management. Store space is precious, and every square foot needs to be optimized to ensure profits. Store shelf allocation is an enigma that has haunted grocers since the dawn of time. What is the right mix? How can grocers ensure store shelves are always stocked? What product is misplaced? What inventory is in the store room? Once physical stores add connectivity at the store shelf level, in the back storage room and even at the logistics points, grocers will start having a much more complete view of what is happening within their four walls. Connected store shelves with connected inventory promise no more empty shelves. Companies like Panasonic are looking to tie together store shelves, backroom storage and even loading docks to paint a real-time view of inventory within the four walls of their stores.
- Not simply inventory management, but better customer experiences. Once grocers have a better handle on their inventory and their store, they can be savvier about how to use that store and the space within. Can you make the bakery larger? What about putting in a bar for local brews? What about a cooking station? With greater store connectivity, coupled with connected consumers, grocers can analyze and react to what is happening within their stores and make near-real-time changes to better meet customer expectations. Grocers are already using greater connectivity to employ such simple cost-savings as turning off lights in aisles when they are customer-free. But when Publix puts its mini cooking station in a certain location, is that the optimal location? Or should it look to move it around the store depending on traffic data it can ingest from connected cameras showing the heat chart of traffic?
- Smarter products, smarter usage and the long tail of insights. Connectivity does not have to end at the cash register. What about pulling more information from connected products as they are being used and consumed in the home? Grocers, as well as their suppliers, yearn for greater insights into exactly how their products are actually being used. Yes, it is important to gather point-of-sale data of what is being sold, but how are those products then actually being used? That is the key to providing the complete picture. As more home consumer products are connected — LG and Samsung refrigerators, Meater and its smart thermometer, GeniCan and the smart trash can, or even the Hapilabs smart fork — our kitchens are getting on the grid. Grocers can now aspire to get greater insights into how their products are being used, not simply that they were purchased.
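As a concrete illustration of the connected-shelf idea above, the sketch below turns hypothetical shelf-level and backroom readings into restock actions. The product names, stock figures and the 25% threshold are all invented for illustration.

```python
# Hypothetical readings reported by connected shelves, expressed as the
# fraction of a fully stocked facing that remains.
shelf_levels = {"whole milk": 0.15, "sourdough": 0.80, "organic eggs": 0.05}
backroom_stock = {"whole milk": 24, "sourdough": 10, "organic eggs": 0}

RESTOCK_BELOW = 0.25   # alert when a facing drops under 25% stocked

def restock_actions(levels, backroom):
    """Return (product, action) pairs for shelves that need attention."""
    actions = []
    for product, level in levels.items():
        if level < RESTOCK_BELOW:
            if backroom.get(product, 0) > 0:
                actions.append((product, "restock from backroom"))
            else:
                actions.append((product, "reorder from supplier"))
    return actions

print(restock_actions(shelf_levels, backroom_stock))
```

The useful part is the join between shelf sensors and backroom inventory: the same low-shelf signal produces a different action depending on what is sitting in the storeroom.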
Consumers will always look to grocers to provide the essentials for our daily lives. As more grocers look to enhance the way they serve us, as well as better manage their businesses, connectivity is one of those aspects that will play an important role. Think outside the grocery bag: How can greater connectivity drive deeper into the consumer supply chain and further back into the supplier network? With so much competitive pressure coming from other grocers and startup food delivery services, now is the time to look to the supply chain for greater insights and better customer connections.
The auto industry is no stranger to change. Through its entire history, carmakers have been quick to innovate and add the latest and greatest technology to their vehicles — usually to improve safety or drive consumers to buy more cars.
These days are no different. We aren’t all zooming around in flying cars like some predicted we would by now — though that might not be as far out as you think — but there are a lot of really exciting things happening in the industry. Self-driving vehicles tend to hog the headlines, but that’s just one high-profile innovation. There is so much more happening — from facial recognition software that can detect driver fatigue to connected cars that communicate with the surrounding infrastructure to personalized features that make the inside of a car feel more like your living room.
At the heart of all the current change happening in cars are two key elements: connectivity and security. Connectivity is enabling the transfer of data between vehicles and the outside world and making it possible for all these different systems to communicate with each other. That data is extremely valuable. In today’s technology-driven world, data is the new oil, and companies want to protect that asset. Meanwhile, consumers want to know their data is protected and won’t fall into the wrong hands. They also want to be assured that some hacker can’t take control of their self-driving car as it navigates rush-hour traffic.
Today, embedded mobile connectivity is enabling some pretty cool in-car infotainment options in addition to the telematics that keep vehicles up to date and provide feedback data for the carmaker. Many models come with an on-board 4G Wi-Fi hotspot, touchscreens that can sync with Android Auto or Apple CarPlay to integrate a smartphone’s capabilities seamlessly, and navigation systems complete with real-time traffic information and voice controls.
We’ve really just begun to see the possibilities for in-car infotainment. Things like embedded voice and data services that allow one passenger to search for the best nearby restaurant while another streams music or podcasts are on the verge of another leap forward when 5G connectivity becomes more widespread in the next several months. With its lower latency and stronger signals capable of penetrating walls, parking garages and tunnels, 5G will be the backbone of autonomous vehicles as well as the services we’ll use in our cars of the future. Faster and stronger networking will help create more personalized services and seamless interaction between cars, smart cities and service providers.
We’ll start to see more data-sharing between vehicles and external services, crowdsourcing things like traffic information and road conditions. Cars will be able to communicate with intersections to optimize speeds to make it through green lights. Features like secure cloud-based service enablement, secure ID-based access and ignition, and integrated mobile payment will create new opportunities and conveniences for drivers and passengers. Flying cars might not be coming to a dealership near you, but using your car to pay at the pump or the drive-through will be here before you know it.
Of course, for this vision of enhanced mobility to become reality, drivers and passengers must feel confident that their personal data and preferences are secure. The same is true for carmakers, municipalities and other stakeholders. Trust will be earned only when many layers of built-in security become inherent to the process to protect interconnected systems from becoming open doors for hackers and others who might benefit from accessing the systems or siphoning the data. Mobility security providers will take cues from how security is managed for networking systems, digital payments and government, and they will need to learn from lapses and shortcomings in industries where security is supposed to be a top priority, such as financial services.
With the connectivity and security pieces in place, the evolution of cars from simple modes of transportation to the workplaces of the future and further opening doors for new types of services will only be held back by our imaginations. Henry Ford would be amazed at how far we’ve come.
Though the U.S. Federal Communications Commission voted to overturn net neutrality, individual states are taking matters into their own hands. More than two dozen states, including California, New York, Connecticut and Maryland, are considering legislation to reinstate net neutrality rules within their borders. Earlier this year, Washington became the first state to sign such legislation into law. While the reinstatement of net neutrality could take some time, governors in several states, including New Jersey and Montana, have also signed executive orders requiring internet service providers (ISPs) that do business with the state to adhere to net neutrality principles.
Without net neutrality, ISPs and telecom companies have the right to prioritize services according to customers’ needs. One industry likely to feel the effect of this is the automotive sector; connected cars depend on consistent data transmission.
It’s estimated that by 2021, over 380 million connected cars will be on the road. Today, the motor manufacturing industry is still in the very early stages of figuring out how to make money from connected car data. The relaxing of net neutrality rules makes it possible that auto manufacturers will team up with telcos and ISPs to offer customers a range of data-based services. Ultimately, this could be even more profitable than actual vehicle sales.
One of the biggest challenges for motor manufacturers will be to convince customers that their privacy is assured and that it will not be open season for their data.
Big data, big profits
By the year 2020, it is estimated that 20 billion in-car devices will be constantly sending back vehicle data via the internet. That means each vehicle could be transmitting around 1.5 TB of data every day, or more than 60 GB per hour per car. For this reason, working out how to utilize, analyze and manage all this big data will become a key part of every motor manufacturer’s business model.
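The arithmetic behind those figures is straightforward:

```python
tb_per_day = 1.5
gb_per_day = tb_per_day * 1000     # using decimal terabytes (1 TB = 1,000 GB)
gb_per_hour = gb_per_day / 24
print(gb_per_hour)                 # 62.5 GB per hour per car
```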
The search for which software platforms to partner with is already underway. Ford’s FordPass platform and GM’s OnStar AtYourService feature are early examples of how auto data is sold to third parties, such as insurance companies or roadside retailers, to help them tempt drivers with loyalty discounts and special offers.
The repeal of net neutrality allows internet providers in the U.S. to get in on the action. In the near future, deals between ISPs and auto manufacturers could give rise to a range of software-based service packages. The exact content and the speed at which each service is delivered could vary according to the car owners’ needs.
While ISPs and car manufacturers would be able to use the relaxing of net neutrality rules to create more monetized services, the new rules also offer an advantage for public safety. Since data can be prioritized, life-critical traffic — emergency services, for example — can take precedence over someone streaming music or binge-watching Netflix.
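The prioritization described above is, at its core, priority queuing. A minimal sketch using Python’s heapq module follows; the traffic classes and priority values are illustrative, not any carrier’s actual scheme.

```python
import heapq

# Lower number = higher priority (values are illustrative)
PRIORITY = {"public-safety": 0, "vehicle-telemetry": 1, "music-stream": 2}

queue = []
arrivals = [
    ("music-stream", "chunk-1"),
    ("public-safety", "collision alert"),
    ("vehicle-telemetry", "tire pressure"),
]
for seq, (kind, payload) in enumerate(arrivals):
    # seq breaks ties so equal-priority packets keep their arrival order
    heapq.heappush(queue, (PRIORITY[kind], seq, kind, payload))

order = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(order)   # public-safety transmitted first, streaming last
```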
One of the biggest challenges around privacy is how to develop services that reconcile the conflicting interests of product designers and marketing executives with those of the legal and compliance teams. One such conflict came to light in 2015, when the German motorist organization Allgemeiner Deutscher Automobil-Club discovered that large amounts of data were being captured by the on-board diagnostics (OBD) system of a BMW model, including driving destinations and phone contacts, without the permission of the user. Originally, the data could only be accessed by directly connecting to the OBD. However, as soon as the data started being transmitted wirelessly, the risk of it being captured, processed and shared without the customer’s permission became unacceptably high.
In Europe, especially under the General Data Protection Regulation (GDPR), automakers must ensure they have the express permission of the vehicle owner before they can share car data. As the U.S. has not enforced similar regulations yet, there are concerns that connected car data may be harvested and sold at the expense of owner privacy. For example, someone could make money from stealing customer details stored in the entertainment system of a rental vehicle. By connecting your phone to a rental car, you could unwittingly leave behind details of your smartphone, recently visited locations and even your home address for others to mine for malicious purposes.
Consumer confidence at a crossroads
The effect of all this on consumers is hard to gauge, partly because industry studies conducted to date are colored by the interests of the sponsoring parties. Insurer Willis Towers Watson portrays a broadly positive picture, saying 55% of consumers are likely to purchase a new or preowned vehicle with new technology features in the next 24 months. Four out of five drivers, it says, are open to sharing their driving data.
In contrast, research published by digital platform security company Irdeto is decidedly more downbeat. The study of over 8,000 consumers in six countries revealed 93% of respondents either don’t have a connected vehicle or don’t know if they do. Nearly half (49%) do not own and do not plan on buying a connected car. A high proportion (85%) cited cybersecurity as the reason for their caution.
Car manufacturers need to provide the car owner, or future owner, with an easy-to-understand overview of what data is being collected and how it is used. Current and future regulations will also cover these points to protect the consumer.
Securing the connected car
One effective way to protect driver data is to use virtual private networks (VPNs). VPNs let motor manufacturers ensure that any driver data transmitted wirelessly remains private, which is especially important for GDPR compliance. VPNs in vehicles can provide secure connectivity, assured authentication and centralized remote management for software patches and over-the-air updates.
Additional protection can be achieved by combining secure connections, advanced authentication and centralized management. First, decide whether the application requires on-demand or always-on access, as well as command-line or API control. Next, add a second factor of authentication. Lastly, centralized management provides the ability to remotely configure any internet-connected device to patch or update software, scale VPN connectivity and manage authentication.
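The second-factor step can be illustrated with a one-time password generator. The sketch below implements HOTP (RFC 4226) and its time-based variant TOTP (RFC 6238) using only the standard library; it shows the mechanism and is not a hardened security implementation.

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Counter-based one-time password (RFC 4226)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    """Time-based variant (RFC 6238): the counter is the current time step."""
    return hotp(secret, int(time.time()) // period, digits)

# RFC 4226 test secret; counters 0 and 1 yield the published test vectors
print(hotp(b"12345678901234567890", 0))   # 755224
print(hotp(b"12345678901234567890", 1))   # 287082
```

Because both ends derive the same short-lived code from a shared secret, a stolen password alone is not enough to open a management session to the vehicle.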
Overall, while doing away with net neutrality gives internet providers much more freedom to shape how we connect to the online world, implementing VPN technology can protect our privacy and provide secure connectivity when operating any connected device or machine, especially a connected car.
We’ve spent too much time waiting for the “year of IoT.”
After years of waiting for my wishes to finally come true, I’ve given up — there will be no IoT year. Other technologies are usurping the dubious privilege of leading the technology bubble. Blockchain and artificial intelligence are now much cooler.
As has happened on many previous occasions, IoT will be replaced by other acronyms that make us forget bad experiences and failed expectations. And with each new acronym, the hopes of those of us who still trust in the benefits the “new IoT” will bring to society will rise again.
Event organizers were the first to notice the decline of IoT
The first to realize this were the organizers of IoT events. A couple of years ago, the weight of the new technologies on stage (i.e., blockchain and AI) did not detract from the main actor, IoT. Now they are the stars, and IoT stands marginalized, surrounded by other technologies, badly hurt and melancholy.
Quo vadis, IoT events?
Will IoT events disappear? Sure. It doesn’t matter whether it takes two or three years; IoT-only events will not make sense. In the last three months, I attended several IoT events in London, Amsterdam, Madrid and Bilbao. I am seeing a slow decline and transformation of IoT events, and most of them do not satisfy my expectations. I am tired of seeing the same case studies parroted over and over again.
Like my friend Rick Bullotta, I’d be interested to see some more innovative stories, some failure stories and lessons learned, some hard facts about how long it takes to build, what it cost to build, return on investment…
Of course, we will continue to see IoT companies, products and services at the big events like CES in Las Vegas, MWC in Barcelona, or CeBIT and Hannover Messe in Germany, or at industry-specific and company-specific events like PTC LiveWorx or Bosch ConnectedWorld. But just as we do not see internet events today, we will not see internet of things events beyond 2020. That will be a good sign: the hype will have disappeared, and reality and the market will have taken over.
Thanks for your likes, comments and shares!
The transportation sector has been ripe for a technological overhaul for quite some time now. As with other traditional industries, management is reluctant to let go of legacy systems in favor of AI- and machine learning-based innovations. Connected cars and connected fleet maintenance garner much attention when discussing new technologies in the transportation sector, but another industry will see massive changes as well: air travel.
In 2017, 36.8 million flights were operated worldwide. Commercial airlines alone scheduled over 4 billion passengers last year, and both of these numbers are on the rise. Needless to say, a technological revolution in the air transportation industry will affect billions.
Airplanes are already being fitted with upgraded technology that collects a massive amount of data. From fuel use to weather systems, the amount of data from each aircraft is astounding. Furthermore, reports indicate that airlines will operate more planes equipped to collect data; in fact, by 2027, 58% of the fleet will be new-generation aircraft. Naturally, carriers flying these new-generation aircraft will need a team to analyze the data and help create a better customer experience. With the help of data scientists, here is how the air transportation industry will evolve in the coming years.
Data improves baggage tracking
According to the SITA 2017 baggage report, industry baggage systems were expected to handle over 4.9 billion pieces of luggage in 2017. The report notes that for every 1,000 passengers, 5.73 bags were mishandled. While this was a 12.3% decrease from 2016, it still means frustrated passengers and misplaced belongings.
Luckily, on June 1 of this year, International Air Transport Association (IATA) Resolution 753 came into effect. According to IATA, airlines must track baggage at four key points: passenger handover to the airline, loading onto the aircraft, delivery to the transfer area and return to the passenger. Resolution 753 will be an improvement, but there is still more that airlines can do with the help of data science.
Radio frequency identification (RFID) tags can monitor bag movement and give customers updated tracking information on their luggage. Using data analysis, airlines can treat bags with short connection times more urgently, as tight connections are the number one cause of baggage mishandling.
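A sketch of how that urgency-based handling might look, using hypothetical bag records; the tag numbers, times and 60-minute urgency window are invented for illustration.

```python
from datetime import datetime, timedelta

URGENT_WINDOW = timedelta(minutes=60)   # assumed cutoff for a "short" connection

def transfer_order(bags):
    """Sort transfer bags so the shortest connections are handled first,
    flagging those inside the urgency window."""
    ranked = sorted(bags, key=lambda b: b[2] - b[1])
    return [(tag, (dep - arr) <= URGENT_WINDOW) for tag, arr, dep in ranked]

now = datetime(2018, 7, 1, 12, 0)
bags = [
    ("BAG123", now, now + timedelta(minutes=35)),   # (tag, arrival, departure)
    ("BAG456", now, now + timedelta(hours=3)),
    ("BAG789", now, now + timedelta(minutes=50)),
]
print(transfer_order(bags))
```

In practice the arrival and departure times would come from RFID scan events and the flight schedule rather than hand-entered values.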
Better airline food from data science
Most travelers can agree that airline food is not the best. Carriers want to fill up passengers without much concern for nutritional value or a wide variety of onboard options. The meals are more caloric than what passengers typically eat, and the taste is often lacking. Furthermore, carriers load a fixed mix of food options on each flight (for example, 40% chicken, 40% beef and 20% vegetarian), never mind the actual preferences of the diners. Big data can change that.
Imagine enjoying a glass of red wine on your outbound flight. That information is matched to your ticket so that red wine is available on your return flight as well. That treatment is only available through data science. Carriers can better predict what food passengers will want on long flights that serve meals and better cater to specialized needs.
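A sketch of how a catering mix could be derived from recorded preferences rather than a fixed percentage split; the preference list and the 10% over-catering buffer are invented for illustration.

```python
import math
from collections import Counter

def meal_plan(preferences, buffer=1.1):
    """Count per-passenger preferences and over-cater slightly
    to absorb no-shows and last-minute swaps."""
    counts = Counter(preferences)
    return {meal: math.ceil(n * buffer) for meal, n in counts.items()}

# Hypothetical preferences pulled from loyalty profiles and past flights
preferences = ["chicken", "beef", "vegetarian", "chicken", "chicken",
               "beef", "vegetarian", "chicken", "beef", "chicken"]
print(meal_plan(preferences))
```

The fixed 40/40/20 split becomes a per-flight forecast that tracks who is actually on board.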
Data science advances airline operations
Data analytics can forecast delays and weather patterns so airports can manage appropriately. This technology can reroute passengers during a storm and better prepare carriers if severe weather is approaching. Furthermore, connected technology means that travelers will be notified of delays or changes quickly or even before they arrive at the airport.
Scheduling can improve as well. Data can track flight attendant and pilot scheduling, for example, to prevent crew fatigue as well as crew shortages. Imagine never having to wait in those 15- to 20-plane takeoff queues that happen all too often while traveling. Data can keep air congestion down as well.
Predictive analytics can make flying safer by alerting operators to parts that may be damaged. Operators can replace these parts before an accident occurs, instead of after a problem happens. Data means smoother operations for airline carriers and a much better experience for travelers.
The new air transportation industry
Honeywell’s 757 connected aircraft took off last year, ushering in a new era for the airline industry. Erica Brinker, senior director of the program, said the idea behind the connected aircraft was to have it self-monitor. “For example, with an airborne maintenance issue,” she said. “The airplane will tell the company what needs to be done or fixed before the next flight so that maintenance can be waiting at the gate at the end of the current flight and mitigate or eliminate a future flight delay, which costs the industry $25 billion annually.” The company estimates this technology will cut the cost of diagnosing an issue by 25%.
Big data has the power to revolutionize airline travel, and as more aircraft are fitted with upgraded technology, more data will come with it. Carriers with robust data science teams will succeed in optimizing operations, resulting in fewer delays, better service and happier, more loyal customers.
IoT is helping to unleash a new age of digitalization across industries. From manufacturing to the home, the potential to increase connectivity and productivity across a range of sectors is massive and set to bring about new ways in which businesses operate and engage with their products and customers. All this is driving the fourth industrial revolution, Industry 4.0.
The word “smart,” from smart cities to smart factories, derives from this very revolution. These smart environments encompass a variety of devices and technologies, and we’re already seeing them today. Even in the manufacturing and industrial sectors, where automation and computerized control systems have been commonplace for many years, digitalization is driving massive levels of change. The challenge of integrating these legacy, mostly proprietary, systems that were not designed to communicate across production lines and functional areas means the journey toward Industry 4.0 should not be underestimated.
While digitalization is already well advanced in industrial environments, it is constrained by a lack of standards.
To make the most of this digital revolution, harness the full power of Industry 4.0 and capitalize on potential savings that automation will bring, companies need to federate IoT platforms and operating systems across multiple production lines and subsystems.
Interworking and interoperability
Many companies are still trying to gain a better understanding of how blending robotics, interconnected devices/systems and convergent hybrid infrastructure, together with edge and cloud/data center compute, can improve productivity and reduce costs in the long run. Add different vendors’ technology to the mix and a very complicated picture is painted.
Working to reduce this complexity is oneM2M, a global standard which hides technology complexities for IoT application developers through an abstraction layer and wide area network perspective. Its areas of expertise extend to the industrial sector, with a number of completed Technical Reports (TRs) and works in progress dedicated to this area. TR-0018 Industrial Domain Enablement, for example, maps out a number of use cases relating to Industry 4.0 development and the potential requirements which need to be addressed to ensure M2M communications truly enhance operations. Based on industrial domain research carried out, the document highlights the need to develop a common strategy for the implementation of Industry 4.0 as a means of accelerating the update of manufacturing systems that many global organizations have started to invest in.
Other completed projects dedicated to tackling the challenges of Industry 4.0 include TR-0027 Data Distribution Services usage in oneM2M and TR-0043 Modbus Interworking. Meanwhile, work is continuing on TR-0049 Industrial Domain Information Mapping & Semantics Support around proximal-distal interworking. Further support for industrial IoT applications is also expected to come when oneM2M publishes Release 3 later this year.
OneM2M also collaborates with other industry bodies on a wide range of projects. In its TR-0018 Industrial Domain Enablement report, oneM2M referenced several organizations with relevant activities in the area, including the Industrial Internet Consortium (IIC) and Plattform Industrie 4.0, along with their respective reference architectures, the IIRA and RAMI 4.0. Cooperation with the IIC is already advancing through joint workshops and a oneM2M testbed.

Cooperation with Plattform Industrie 4.0, the central alliance coordinating the digital structural transition in Germany, which includes stakeholders from businesses, associations, trade unions and academia, is another important alignment of technologies and concepts in the industrial domain, especially as the PI4.0 concept of an Asset Administration Shell and the oneM2M CSE are so complementary. OneM2M has incorporated results from Plattform Industrie 4.0’s Reference Architecture Model Industrie 4.0 (RAMI 4.0) into the TR-0018 report. RAMI 4.0 provides a conceptual superstructure for the organizational aspects of Industry 4.0, emphasizing collaboration infrastructures and communication structures. It also introduces the Asset Administration Shell concept, which raises detailed questions on key topics such as semantic standards, technical integration and security challenges.
Creating consistency and certainty
OneM2M’s joint work with key transformative actors to develop and deliver workshops, testbeds and reports, along with its own unique asset — the concept of a service layer on top of a connectivity layer — contributes significantly to the overall Industry 4.0 framework and industrial internet as a whole, ushering in a new wave of digitalization which will mark the beginning of Industry 4.0.
I think everyone is now familiar with cloud computing, but have you ever heard of edge or fog computing? I hadn’t until recently, but it turns out to be a new way to use computing resources, especially to enable your digital transformation process for IoT devices.
With the advent of IoT, we see more and more data being generated by sensors, somewhere in the field (often quite literally). In some use cases, we want to do a lot of local processing. Take, for instance, the sensors on a modern-day car. Processing of that data needs to be done locally; with the increasing amount of data being generated, it would be impractical to send all of it to a centralized location. More importantly, the action that needs to be taken is local. Consider a distance sensor on the front of the car that starts registering a diminishing distance between the sensor and the car in front. The action, of course, is to brake.
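The braking example shows why this decision loop has to live at the edge: the sensor-to-actuator path must run locally, with only summaries sent upstream. A minimal sketch of such a local rule, using a simple stopping-distance model (the reaction time and deceleration figures are illustrative):

```python
def should_brake(distance_m, speed_mps, reaction_time_s=1.0, deceleration=6.0):
    """Decide locally whether to brake: distance covered during the driver's
    reaction time plus the braking distance v^2 / (2a)."""
    stopping_distance = speed_mps * reaction_time_s + speed_mps ** 2 / (2 * deceleration)
    return distance_m <= stopping_distance

# At 25 m/s (90 km/h), stopping distance is 25 + 625/12, roughly 77 m.
print(should_brake(distance_m=60, speed_mps=25))   # True: too close, brake now
print(should_brake(distance_m=120, speed_mps=25))  # False: safe gap
```

Round-tripping that decision through a distant data center would add latency the situation simply cannot afford, which is exactly the gap fog and edge computing aim to fill.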
Fog computing, fog networking or fogging is an architecture that uses edge devices to carry out a substantial amount of computation. The National Institute of Standards and Technology (NIST) defines fog computing as follows:
Fog computing is a layered model for enabling ubiquitous access to a shared continuum of scalable computing resources. The model facilitates the deployment of distributed, latency-aware applications and services, and consists of fog nodes (physical or virtual), residing between smart end devices and centralized (cloud) services.
The NIST document is a pleasant read and will give you more insight into the concepts of fog computing. For me, it opened up a whole new set of concepts, including mist computing (I am not kidding). But rather than going into all the technical details, I’d like to discuss a use case for these new paradigms and what they mean for you!
Everyone can be a provider
You can see it as an alternative to cloud providers like Amazon, Microsoft or Google. Rather than relying on a centralized, massive cloud computing platform, you create computing resources at what is called the “edge” of the network. You can add spare computing resources you might have to the network and start making some money with them. Together, these devices at the edge of the network make up a massive computing platform, very much like the grid computing efforts at the beginning of the 21st century. One company offering such a service is SONM.
SONM offers general-purpose, cloud-like computing services (IaaS, PaaS) with fog computing as the back end. Computing power suppliers (hosts) all over the world can contribute their computing power to the SONM marketplace. Users will, according to SONM, get cheaper computing power compared to cloud providers. The system SONM offers is Linux- and Docker-based, with payment based on Ethereum smart contracts.
Although SONM says it is for general-purpose computing, some of the examples that it gives are for a very specific use case — for instance, rendering training models in the case of machine learning and video rendering. There are also examples that are more toward the nature of fog computing (the edge) — for instance, video distribution and content distribution networks.
Would you use it?
The question is, of course, would you use it? Or even would I use it? At this moment, it’s hard to say — we get an idea about what fog computing is, but the devil is always in the details. As a company, we use cloud computing for the services we offer to clients, and we pay for what we use. We hardly have any spare computing resources that we can add to such a market — and I’m not sure if we would want to if we did.
The world is changing, and we see more and more IoT devices arising, along with the demand for edge processing. As part of a digital transformation strategy, fog computing, whether in the form SONM offers or not, could very well be part of the roadmap. Additional research on our side is necessary to see whether it would be beneficial as an alternative to cloud computing, or to identify computing use cases for our clients. But this doesn’t mean fog computing is useless; it just means we haven’t fully identified the use cases.
If you do have some spare computing power and want to recover some of its cost, look into SONM or a similar offering; and if you are on a shoestring budget, cheaper computing resources might be attractive. So, it all depends on your use case.
What do you think? Is fog computing hype? A trend? A fad? Let me know in the comments!
Preparing for battle: Stopping IoT attacks
With Gartner predicting a world filled with more than 20 billion IoT devices by 2020, we’re becoming more connected by the minute. All of these devices and sensors will improve and enhance many aspects of life, but they also present risks when the devices are infiltrated by hackers.
Many IoT devices are built without even a minimum of security controls, so they’re exposed and vulnerable from the outset. There’s also a lack of standards throughout devices, which prevents enforcing uniformity in security settings and establishing universal security parameters. Poor patch management is another area of concern, as many IoT device manufacturers require end users to update devices and obtain the latest patches.
For an example of the risks, consider the infamous Mirai botnet, which powered distributed denial-of-service attacks that knocked many internet services offline. Mirai targeted routers and other connected devices that used default usernames and passwords, and quickly spread to infect millions of devices. Shutting down botnets such as Mirai is difficult because it’s next to impossible to “lock out” infected machines from internet access, and finding and prosecuting a botnet’s creators is very challenging. IT departments and individual users are frequently unaware their device has been conscripted into a botnet, so advanced monitoring and prevention tools are recommended to help stop such intrusions proactively, before damage occurs.
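One concrete defense against Mirai-style attacks is auditing your device inventory for factory-default credentials before an attacker finds them. A minimal sketch (the credential list and device records are illustrative; a real audit would pull from your asset database and never store plaintext passwords):

```python
# A few of the factory defaults Mirai-class malware is known to try.
DEFAULT_CREDENTIALS = {("admin", "admin"), ("root", "root"), ("admin", "1234")}

def audit_devices(devices):
    """Return the hosts still configured with a known default credential pair."""
    return [d["host"] for d in devices
            if (d["username"], d["password"]) in DEFAULT_CREDENTIALS]

inventory = [
    {"host": "camera-01", "username": "admin", "password": "admin"},
    {"host": "router-02", "username": "admin", "password": "x9!kQ2#v"},
    {"host": "dvr-03",    "username": "root",  "password": "root"},
]
print(audit_devices(inventory))  # camera-01 and dvr-03 need new credentials
```

Flagged devices should have their credentials rotated immediately; the same sweep can be scheduled after every new device enrollment.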
For consumer devices, there’s a focus on features such as speed, image quality and value of the data, but consumers don’t typically ask for robust security features from their IoT devices. Companies that develop and deploy these devices should push for better security controls and begin to talk to consumers about the need for improved protections.
Understand the risks
The risks with IoT are fundamentally dynamic because IoT itself is adjusting and growing at such a rapid pace. This growth and sophistication bring with them a parallel interest in attacking IoT for financial gain or simply to cause disruption. Companies should carefully review the legal obligations that come with IoT devices, especially when sensors and other IoT components are used in settings that could potentially cause physical harm. Self-driving cars and connected factories are just two examples of IoT-based environments where a hacking incident could result in death, not just inconvenience. However, this does not mean there should be complacency about the security protections afforded to wearables or other sensors that aren’t managing life-threatening situations. IoT devices of all kinds produce data, and it is companies’ responsibility to manage and protect that data.
To combat the challenges of IoT, companies should perform risk assessments to fully understand where they might be exposed and how they can remedy those risks. They need a log of every connected device on their network, along with a way to automate patching and updating.
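The device log described above can start as simply as a registry mapping each device to its firmware version, with a sweep that flags anything behind the latest known release. A minimal sketch (the device names and version data are hypothetical):

```python
def devices_needing_patch(inventory, latest_versions):
    """Return devices whose firmware version tuple is older than the latest
    known version for their device type (the prefix before the dash)."""
    return [device for device, version in inventory.items()
            if version < latest_versions.get(device.split("-")[0], version)]

inventory = {
    "sensor-01":  (1, 2, 0),
    "sensor-02":  (1, 4, 1),
    "gateway-01": (2, 0, 0),
}
latest_versions = {"sensor": (1, 4, 1), "gateway": (2, 1, 0)}
print(devices_needing_patch(inventory, latest_versions))
# ['sensor-01', 'gateway-01']
```

Even this crude version check turns "poor patch management" from an unknown risk into a daily work queue; real deployments would drive the actual update rollout from the same registry.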
Managing the devices and the data
Corporate IT must consider the security needs of provisioning and authenticating IoT devices throughout the company. This includes accounting for the role and location of all of these devices, along with details on updates and patches. The actual data sent between the devices and the network must also be protected. Many companies rely on IoT-derived data to make impactful decisions, so the integrity and security of the data is supremely important. Considerations should include how the data is protected at rest and in transit, and if tools such as encryption should be used to render stolen IoT data unusable. Complex IoT deployments warrant the use of device manager platforms that allow users to control devices remotely, update firmware and control authentication for every device.
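Protecting data in transit, for instance, can begin with signing each reading using a key shared between device and platform, so that tampered messages are rejected on arrival. A minimal sketch using an HMAC from the Python standard library (the key and payload are illustrative; real deployments would provision a unique key per device):

```python
import hashlib
import hmac

SHARED_KEY = b"per-device-secret"  # provisioned to the device at enrollment

def sign(payload: bytes) -> str:
    """Compute an HMAC-SHA256 signature over the payload."""
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    """Check the signature; compare_digest avoids timing side channels."""
    return hmac.compare_digest(sign(payload), signature)

reading = b'{"sensor": "temp-7", "value": 21.4}'
sig = sign(reading)
print(verify(reading, sig))                                # True: untampered
print(verify(b'{"sensor": "temp-7", "value": 99}', sig))   # False: altered in transit
```

Integrity checks like this complement, rather than replace, transport encryption such as TLS, which protects the data against eavesdropping as well as tampering.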
IoT deployments are growing by orders of magnitude, and the number and complexity of attacks will follow suit. Companies should demand better products from device manufacturers, with requests for automated updating and patching and the closing of known security flaws before devices go to market. Firms that deploy IoT devices should also anticipate more stringent data management from various governments, and know they’ll need to improve how IoT data is collected, stored and transmitted. Improved visibility into IoT, enhanced devices and a security-focused mindset will all need to come together if companies want to use insights from IoT while also thwarting attacks.