Can you relate to this? You find that one elusive parking spot — and as you are backing up, another car sneaks in. A study conducted last month in the U.K. found that more than half of British drivers suffer from stress when they cannot find a parking space. Drivers in major towns and cities everywhere can probably relate to parking stress. You would think that this modern-day scourge could be addressed with IoT.
In theory, it should be simple. Smart parking sensors would be able to flag vacant parking spots to motorists on an application, such as Google Maps, Waze or Apple Maps, as they enter a town or city. Parking would be a breeze! Not so. As things stand, motorists are still far from finding a remedy for their parking headaches. The smart parking use case is a prime example of a problem holding back IoT’s full potential.
The rise of NB-IoT
The reason for this conundrum is twofold: a lack of standards and a gap in efficiently connecting parking sensors with multiple cloud vendors and central application servers. Currently, NB-IoT, especially its non-IP data delivery, provides the cellular functionality required to efficiently connect devices across large distances with prolonged battery life. A number of mobile carriers, including Vodafone, Three, China Mobile and Zain, either have deployed NB-IoT networks or are conducting trials. According to analysts, the NB-IoT chipset market could grow from $16 million in 2017 to $181 million by 2022 at a compound annual growth rate of just over 60%. Yet connecting non-IP devices, such as smart parking sensors, over NB-IoT to platforms such as Azure, Google Cloud Platform or AWS via central application servers is complex.
Currently, IoT gateways form the bridge between smart sensors or devices and internet connectivity via NB-IoT on a cellular network. 3GPP and the Service Capability Exposure Function (SCEF) set the standards for connecting the device or sensor to the IoT gateway. Things start to get murky when you want the device or sensor to connect to multiple application servers and cloud platforms over NB-IoT gateways, due to the absence of agreed standards and protocols. For example, the developer of a smart parking sensor would have to send its data to a central application server, route it through Amazon's cloud or Microsoft Azure, and then on to the likes of Waze and Google Maps, which would pick up that data from multiple clouds. Because the sensor data is not federated or easily available, the process is cumbersome and complex. This long-winded process is stifling innovation.
A bridge too far?
So, which players in the IoT ecosystem are best placed to find a solution for this? The organizations responsible for carrying the data on NB-IoT or, in other words, the mobile operators. Rather than just being the workhorse for transporting IoT data, mobile operators can play a central role by using gateways and building an open application ecosystem to foster interoperability between applications, devices and enterprise backend systems. Operators need to be the bridge into IoT systems such as AWS IoT and Azure IoT platforms. Importantly, operators also have the technology to secure the network and the IoT devices from attacks and malware, as well as provide network abstraction and enhanced connectivity.
To enable mobile operators to do this, gateways should be able to handle telco-grade distributed databases with the scale to manage millions, if not billions, of devices. Put simply, gateways should allow operators to consolidate the functionality required by standards and protocols such as SCEF/SCS, extend it to other APIs such as REST-JSON and MQTT, and federate data from multiple sources.
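To make the gateway's role concrete, here is a minimal Python sketch of how a gateway might decode a compact non-IP parking-sensor payload and republish it as a JSON message on an MQTT-style topic. The payload layout, field names and topic scheme are all invented for illustration; a real deployment would follow the operator's and cloud platform's own schemas.

```python
import json
import struct

# Hypothetical compact payload a non-IP parking sensor might send over
# NB-IoT's non-IP data delivery: sensor id (uint32), occupied flag
# (uint8) and battery millivolts (uint16) -- 7 bytes, big-endian.
PAYLOAD_FORMAT = ">IBH"

def decode_sensor_payload(raw: bytes) -> dict:
    """Unpack the binary payload into a plain dictionary."""
    sensor_id, occupied, battery_mv = struct.unpack(PAYLOAD_FORMAT, raw)
    return {"sensor_id": sensor_id,
            "occupied": bool(occupied),
            "battery_mv": battery_mv}

def to_mqtt_message(reading: dict, city: str) -> tuple:
    """Map a decoded reading to a (topic, JSON body) pair that any
    MQTT-speaking platform could consume."""
    topic = "parking/{}/{}".format(city, reading["sensor_id"])
    return topic, json.dumps(reading)

# A sensor in a hypothetical city reports spot 42 as occupied:
raw = struct.pack(PAYLOAD_FORMAT, 42, 1, 3600)
topic, body = to_mqtt_message(decode_sensor_payload(raw), "london")
```

Once data is normalized like this in one place, any number of mapping applications could subscribe to the same topics instead of each integrating with multiple clouds.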
If, as Gartner predicts, there will be over 20 billion connected IoT devices by 2020, mobile operators could secure revenues of around $48 billion by capitalizing on the IoT/M2M opportunity. To do that, they need to grasp the initiative and foster innovation. Parking might sound trivial, but it is a smart city use case that highlights a growing problem for application developers with NB-IoT connectivity — and an opportunity for mobile operators.
Smart cities are touted as the future of urban living where everything from waste bin collections to streetlights and transportation will be connected and intelligent. Of course, smart cities and IoT are still in their relative infancy and this provides an opportunity to iron out problems and fine-tune networks. We need to address the issues that are stifling innovation — namely the gap in efficient connectivity and a lack of standards. Mobile operators have the power to add the “smart” into cities and bring that bright future to life.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
October is National Cybersecurity Awareness Month. The initiative by the Department of Homeland Security and the National Cyber Security Alliance is a huge collaborative effort spanning both public and private sectors, and a good demonstration of how the industry is coming together to safeguard the digital world.
While businesses in the U.S. and globally are still reeling from the WannaCry and NotPetya ransomware attacks and the massive Equifax data breach, scrambling to update their systems to protect themselves, there is another kind of threat looming on the horizon.
The internet is today in the hands of around 3.5 billion people. And there are around 6.5 billion connected devices in use worldwide — a figure that is projected to hit 27.1 billion by 2021. What’s more, as consumers, we’re more connected today than ever: the average internet user today owns 3.64 connected devices, uses 26.7 apps and has an online presence across seven different platforms.
The ubiquitous global connectivity enabled by mobile applications and the internet of things opens up great possibilities for personal and organizational growth, from smart city advancements to transforming how industries produce goods. The industrial IoT has seen significant advancement in recent years. For example, by connecting assets in a factory, organizations can have better insight into the health of their machinery and predict any major problems with their hardware before it happens, allowing them to stay one step ahead of their systems and keep costly outages to a minimum.
Yet IoT also exposes us to more security vulnerabilities that can cause financial loss, endanger personal and public safety, and inflict varying degrees of damage on business and reputation. After all, anything that is connected to the internet is a potential attack surface for cybercriminals. For example, attackers behind distributed denial-of-service (DDoS) campaigns are getting better at exposing vulnerabilities in networks and infecting IP-enabled devices to rapidly form a botnet army that can grind a network to a standstill. Simply put, the more devices connected to a network, the bigger the potential botnet army for DDoS attacks.
Furthermore, without adequate security, innocuous items that generally pose no threat can be transformed into something far more sinister. For example, traffic lights that tell cars and pedestrians to cross at the same time, or railway tracks that change to put a commuter train on a collision course.
As the number of connected devices continues to grow and both public and private sector organizations embrace IoT, IT decision-makers must pause and think about how they can work together to create an end-to-end infrastructure that can deal with the influx of new devices and the inevitably rapid spread of cyberattacks in our increasingly connected world.
First, security must be built into IoT systems and the rest of the IT estate from the ground up, instead of retrofitting piecemeal security products as new threats emerge. Second, organizations need to adopt an adaptive security model, continuously monitoring their ecosystem of IoT applications to spot threats before attacks happen. Adaptive security means shifting from an “incident response” mindset to a “continuous response” mindset. Typically, there are four stages in an adaptive security lifecycle: preventative, detective, retrospective and predictive.
- Preventative security is the first layer of defense. This includes measures like firewalls, which are designed to block attackers before their attack affects the business. Most organizations already have this in place, but there is a definite need for a mindset change. Rather than seeing preventative security as a way to block attackers from getting in at all, organizations should see it as a barrier that makes it more difficult for them to get through, giving the IT team more time to disable an attack in progress.
- Detective security detects the attacks that have already made it through the system. The goal of this layer is to reduce the amount of time that attackers spend within the system, limiting the subsequent damage. This layer is critical, as many organizations have accepted that attackers will, at some point, encounter a gap in their defenses.
- Retrospective security is an intelligent layer that turns past attacks into future protection, similar to how a vaccine protects us against diseases. By analyzing the vulnerabilities exposed in a previous breach and using forensic and root cause analysis, it recommends new preventative measures for any similar incidents in the future.
- Predictive security plugs into the external network of threats, periodically monitoring external hacker activity underground to proactively anticipate new attack types. This is fed back to the preventative layer, putting new protections in place against evolving threats as they’re discovered.
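As a rough illustration of how the first three stages fit together, the toy Python sketch below wires a preventative blocklist, a detective anomaly check and a retrospective feedback step into one loop. The event fields, thresholds and rules are invented; real adaptive security platforms are far more sophisticated.

```python
# Toy sketch of the adaptive security loop described above; event
# fields, thresholds and rules are invented for illustration.

blocked_ips = {"203.0.113.9"}   # preventative layer: known-bad sources
suspicious_events = []          # detective layer: flagged for analysis

def handle_event(event: dict) -> str:
    """Run one event through the preventative and detective layers."""
    # Preventative: block traffic from already-known attackers.
    if event["src_ip"] in blocked_ips:
        return "blocked"
    # Detective: flag anomalous behavior that slipped past prevention.
    if event["requests_per_min"] > 1000:
        suspicious_events.append(event)
        return "flagged"
    return "allowed"

def retrospective_update() -> None:
    """Retrospective: turn confirmed incidents into new preventative
    rules, the way a vaccine turns one infection into immunity."""
    for event in suspicious_events:
        blocked_ips.add(event["src_ip"])
    suspicious_events.clear()
```

The key point of the model survives even this toy version: an attacker who is merely flagged today is blocked outright tomorrow, because the detective layer feeds the preventative one.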
For organizations to protect themselves, they need to get this mix right; all four elements improve security individually, but together they form comprehensive, constant protection at every stage in the lifecycle of a security threat. With billions of consumer and business IoT applications exchanging billions of data points every second, IT decision-makers need to map the end-to-end journey of their data, and the threats lurking around every corner.
At the start of this year’s National Cybersecurity Awareness Month, Assistant Director Scott Smith of the FBI’s Cyber Division said, “The FBI and our partners are working hard to stop these threats at the source, but everyone has to play a role.” Organizations that work with their peers and security specialists to secure their IoT ecosystem and network will be rewarded in the long run. There’s no one-size-fits-all approach to securing IoT worldwide; it will take a considered, collaborative effort to safeguard the super-connected world today and tomorrow.
My high school freshman son has been pretty stressed out these past couple of weeks. Dealing with the pressures of multiple teachers giving assignments all due on the same day has been very frustrating. As I watched him tackling mounds of math homework, I considered similar struggles that happen within IoT development. It dawned on me that the processes thwarting IoT progress could be boiled down to a simple mathematical equation: IoT = DevOps².
IoT equals DevOps squared?
Let me explain. To achieve a successful IoT application implementation, you need an awareness of the complete physical environment in your IoT scenario. At the same time, you need to recognize that you are dealing with several groups scattered throughout your organization: OT development, OT operations, IT development and IT operations.
Each of these four distinct groups, like teachers of different subjects, has specific goals and concerns. Coordination and planning among groups are required. Otherwise things will run inefficiently, or worse, at cross purposes. For things to run smoothly, you need a coordinated plan with assignments and processes, and a constant exchange of information.
Of course, it’s important to remember why each group is its own distinct entity. Differences in their objectives must be considered. Just as math and history have very different aims in what is being taught, each group within an organization has different objectives and metrics. Developers are measured on creating new and better software as quickly and efficiently as possible, while staff in IT operations are much more concerned with keeping the environment stable and running. Whatever they decide to implement must integrate safely into the current environment.
Like the goals of an educational institution, the goal of a business implementing an IoT project is to ensure everyone completes the process together and on time. If the OT development group finishes its assignment and throws it over the wall with no one there to catch it, or if the IT development group claims success for having finished “on time” while the other groups can’t keep pace, the IoT project fails. So, what can be done to avoid development pitfalls in the new atmosphere of IoT?
X can’t equal old waterfall techniques
First, let’s acknowledge that older software development methods hinder the collaborative processes required to support IoT. I’m talking about methods whereby months are spent developing software, documenting it and then testing it only to find out (too late) that the software is either too buggy or no longer meets the expectations of the customer. This timeframe and methodology won’t fly in the IoT context, where requirements of all four distinct and very important groups must be met. IoT is truly all about real time.
Strong integration and communication among all four groups is required to make IoT work. By merging these four groups into one team that works together, you can feed in requirements earlier and make sure things are supportable sooner in the process. The team can then more easily pick up on concepts and have those concepts reinforced through different tasks. It’s all about determining what is important to the greater goal and finding the pieces required in each area to support it. Development (Dev) and operations (Ops) do indeed share common requirements, and focusing on those requirements to create a DevOps platform smooths the way for greater efficiency and innovation.
Give an A+ to Agile
Agile software development, such as that enabled through DevOps and CI/CD (continuous integration and continuous delivery), improves collaboration and, thereby, innovation. When everyone is in charge of quality management and people are working together rather than in silos, the risk of breaking functionality in other areas is reduced. When you are constantly releasing products with small, incremental changes and testing them as you go, you know immediately if something has broken and can pinpoint what needs to be fixed. By catching and addressing problems quickly, the whole team produces more stable, better-quality software.
Containers are often used in an agile development process. Being able to make modifications to “layers” of code without impacting others allows for changes to be made quickly while keeping the code stable. Scrum is another important agile project management tool. Unlike traditional waterfall-style project management, scrum deals with real-time, face-to-face interactions to ensure clear communication to all involved.
Agile development environments scale quite well, which is essential in today’s quickly changing markets like IoT. These environments are far more efficient and allow updates to occur quickly and often without affecting underlying layers.
Using the right tools is important to success
You can take this whole idea of agile integration and development processes and expand it out into other aspects of IoT, including your architecture approach. My colleague, Ishu Verma, discussed this at length in his blog post, “Using Agile Integration for IoT.” As he points out, it really comes down to transforming from customized technologies to those based on standards and focusing on the ability to collaborate. This is why customers are using container platforms and other development tools to update their IT infrastructure and adopting an agile, DevOps approach to application development.
Combining agile integration with tools found in device application frameworks specialized to build and manage machine-to-machine or IoT applications, such as Eurotech’s Everyware Software Framework and Everyware Cloud, gives IoT developers on-demand, self-service capabilities while making it easier to work together.
Now, if only the freshman curriculum were this easy.
IoT-enabled products are everywhere — or at least will be soon. Gartner projects that there will be more than 20 billion connected devices by 2020. Businesses already recognize the potential of IoT to deliver value. According to a recent survey, 82% believe they will adopt some form of IoT within the next two years, in part because they believe that the information provided by IoT-embedded devices will promote other innovations.
But adopting IoT is no simple matter. And few companies presently have the skills or infrastructure needed to safely and productively deploy IoT technology.
Implementing IoT requires highly specialized skills which nearly half of all companies believe they lack. This has created a wide and troublesome skills gap — a disconnect between the demand for a technology and the know-how to apply it. Additionally, most companies lack the necessary software, security and IT infrastructure capabilities to employ IoT. Instead of undergoing the costly and time-consuming process of acquiring these competencies, businesses should turn to specialist service operators — those with the right expertise, experience and equipment — to extract optimal value from cutting-edge IoT technologies.
Managed service providers (MSPs) are well-suited to fill this important role. Throughout the business world, MSPs handle complex IT deployments, like device as a service and managed print services, to add significant value for their customers. This value comes in the form of cost-savings, efficient deployments and support services, all of which are made possible by MSPs’ familiarity with the technologies they offer.
MSPs are heavily incentivized to acquire rare and in-demand skills, such as a competency in IoT, and have a unique economic advantage when it comes to complex and cutting-edge technologies. Their business models allow them to spread the cost of knowledge acquisition across a wide customer base, making their investment cost-effective. Ultimately, this translates to expenditure savings for the customer and a transition from heavy initial Capex to cost-efficient, use-based payments.
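A back-of-envelope sketch of that economics, with entirely invented figures: a one-off investment in IoT expertise, spread across an MSP's customer base and contract terms, becomes a modest monthly fee per customer rather than heavy up-front capex for each business.

```python
# Illustrative only: all figures and the flat-margin model are invented,
# not drawn from any real MSP's pricing.

def per_customer_monthly_fee(skill_investment: float, customers: int,
                             months: int, margin: float = 0.2) -> float:
    """Amortize a one-off investment over a customer base and contract
    term, with a simple flat margin on top."""
    base = skill_investment / (customers * months)
    return round(base * (1 + margin), 2)

# $500k of training and tooling, 200 customers, 36-month contracts:
fee = per_customer_monthly_fee(500_000, 200, 36)
```

Even with a margin included, each customer pays a small recurring fee instead of funding the full competency build-out alone.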
With the broad benefits of service providers who can guide IoT projects now established, there are three key areas in which they can offer value specifically to the IoT space:
1. Advanced analytics
IoT technologies will generate a great deal of precise, real-time data on how their host devices are performing. This data can be used to improve device performance and lifespan. However, McKinsey found that only 1% of all IoT data collected is ever used. Clearly, companies aren’t capitalizing on the available data. The issue is that it’s too complex and voluminous to be easily manipulated; making use of it requires an understanding of big data analysis.
MSPs will develop the ability to analyze and interpret this important data — which will be fed directly back to them from the devices they manage — using big data principles. Properly employed, this data can offer important insights into usage, performance and physical condition — all of which can be used to optimize devices, configure them for specific end users and forecast wear and tear. These insights will greatly improve user experience.
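As a simple illustration of the kind of analysis involved, the Python sketch below aggregates device telemetry into per-device usage figures and service flags. The field names, sample data and thresholds are hypothetical, not from any particular MSP platform.

```python
# Illustrative only: field names, sample records and the service
# threshold below are invented for this sketch.
from statistics import mean

telemetry = [
    {"device": "printer-01", "pages_per_day": 120, "jam_events": 0},
    {"device": "printer-01", "pages_per_day": 450, "jam_events": 3},
    {"device": "printer-02", "pages_per_day": 60,  "jam_events": 0},
]

def wear_report(records, jam_threshold=2):
    """Aggregate raw device telemetry into per-device usage insights."""
    devices = {}
    for r in records:
        d = devices.setdefault(r["device"], {"pages": [], "jams": 0})
        d["pages"].append(r["pages_per_day"])
        d["jams"] += r["jam_events"]
    return {
        name: {
            "avg_pages_per_day": mean(d["pages"]),
            "needs_service": d["jams"] >= jam_threshold,
        }
        for name, d in devices.items()
    }
```

Real MSP analytics would run over millions of records and far richer signals, but the shape is the same: raw readings in, per-device insight out.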
2. Service innovation
MSPs can also combine this new class of device data with their existing databases of client information. Pairing macroscopic corporate insight with microscopic usage analytics will yield a range of findings and empower MSPs to develop and deliver innovative, tailored services — preventative repairs, product tailoring, product tuning and improved customer service — to make sure companies and their employees are getting the most from their products.
There are several situations and fields in which new, IoT-enabled services could provide a significant boost. In logistics, sophisticated asset tracking would enable companies to more closely monitor their supply chain, allowing for logistics and inventory optimization and stricter quality control. In manufacturing, construction and natural resources, field equipment intelligence can deliver reduced maintenance cost and downtime with preemptive repairs. In the office environment, IoT-driven customer insights could allow MSPs to plot the distribution of printers and personal systems to maximize productivity but minimize outlay.
Patterns in IoT data could reveal ways to improve business operations and efficiencies. MSPs could soon be combining recommendations on where to deploy devices with data-driven counsel on how to get the most from those deployments in pursuit of business goals.
3. Fleet security
New capabilities mean new vulnerabilities. And more data means more sensitive information. Ninety-six percent of security professionals responding to a recent survey said that they expect to see an overall increase in industrial IoT breaches this year. And attacks like the massive distributed denial-of-service attack on Dyn, which targeted insecure IoT devices such as webcams, video recorders, routers and baby monitors, will only increase in frequency and impact. In fact, security is number one in Gartner’s top 10 IoT technologies for 2017 and 2018, with Gartner noting that “IoT security will be complicated by the fact that many ‘things’ use simple processors and operating systems that may not support sophisticated security approaches.”
Diligent MSPs will gain the know-how necessary, or enlist a suitably qualified professional partner, to ready their customers for IoT-related threats. As part of their growing suite of evolved services, MSPs will provide security assessments, devices with built-in detection and recovery capabilities, automatic security updates across fleets and appropriate data destruction for retired devices. MSPs will also continue their move beyond network security into endpoint security, and will design and deploy robust but unobtrusive security protocols. Fundamentally, this means far better fleet security than would be possible for a business without significant in-house expertise.
In many unforeseen ways, IoT is a game-changer. To maximize and protect every dollar invested demands a unique skillset and a deliberate, thoughtful deployment. Managed service providers will deliver the intelligence, skills and experience to use the full potential of IoT-connected devices, while ensuring that company assets are safeguarded.
Where do CEOs come from? What roads led them to their exalted and all-powerful roles? In a world of sprinting technology, internet-enabled systems across every enterprise and increasing cybersecurity threats, this is an unexpected but existentially relevant question to ask.
According to a recent Forbes article, 75% of Fortune 100 CEOs come from operational backgrounds, and 32% were also CFOs at one point. These ladders to the top are long-established, but I’d like to pose this provocative question: Are we in a new era where CTOs should have a path to top-dog status?
The recent news stories about threats to the enterprise frame up the argument in compelling and dramatic terms.
Would a CTO as CEO have saved Equifax?
There are no guarantees. But there’s little doubt that someone with a deep and current cyber background would not have allowed the porosity, sloppiness and defenselessness that led to the Equifax breach and its far-reaching implications. Given the impact of the hack and its cascade of consequences affecting more than 100 million Americans, the inadequacy and emptiness of the response of the former CEO, Richard Smith, make clear that traditionally trained CEOs are not up to the task of running a data-first organization.
And let me underscore “former.” Smith’s inadequate testimony, and his clear inability to run a company whose very existence is based on data security, led to his departure.
I believe that a talented, sophisticated and savvy CEO with a CTO background would have constructed and managed a more aware and resilient security apparatus than Equifax had. Such a leader would have known how to ask the probing questions and organize people and processes more strategically. Indeed, CEOs without a deep understanding of today’s cybersecurity challenges, complexities and demands are at a serious disadvantage.
In fact, the challenges are so great that even a technology-led company like Facebook can still fall victim to technology threats. While the abuse of the platform by Russia was not a hack, it exploited holes in the Facebook system to mask the actual sponsor of the ads. Imagine what might have happened if Facebook were led by someone who had run a supply chain!
As the world becomes even more global and interconnected, future CEOs will be confronted with an armada of unforeseen issues and challenges. Consider the range of businesses whose strategic partnerships, consumer relationships, reputations and overall trust, as well as regulatory compliance, are contingent on cybersecurity.
Or, more accurately stated: Which ones are not? Putting aside database businesses like Equifax, CEOs of companies in industries ranging from transportation to healthcare, e-commerce, software and infrastructure all require an in-depth security background, one that goes well beyond the check-the-box experience that is part of the typical CEO track. In all these cases, the CEO is responsible for data which, in hacked hands, can cause tremendous harm to the lives of millions.
IoT adds additional layers of complexity
Previously, I reviewed the importance of cybersecurity in medical devices, where an ineffective threat management platform can lead to murder. That’s just the start, though. Autonomous vehicles, for example, pose grave threats to personal safety and reputational security that can’t be pushed down to IT departments; inadequate protections and procedures can be business-ending.
We need to have faith in the ability of our CEOs to guard the data they have been given responsibility for — meaning they need the chops to get in-depth briefings from their security IT personnel and, in turn, push back hard with pointed questions.
This is a complex mission for any CEO, who by definition needs to deal with so many different business operations simultaneously. We expect them to ask the right questions and understand and brief their board of directors, but in cybersecurity — perhaps more than in other technology-related areas — the devil is in the details (and the malware). Without a thorough understanding of the reality of today’s advanced targeted attacks and threat actors, a CEO cannot effectively frame the right questions and assess if the answers they get from their IT employees are good enough. These details are broad, related to not just the technologies their business relies on, but the entire architecture that their defenses rely on as well.
So, in this era of unprecedented cyberthreats, including the new risks attendant to IoT exposure, we need to create a new generation of CEOs who come from CTO and CISO backgrounds. This next generation can be accountable to shareholders, boards and the business community because their apposite backgrounds will liberate them from sole reliance on their IT departments.
From CTO to CEO
Of course, the era of the CTO-to-CEO track will not happen overnight. The reason CEOs come from operational and finance backgrounds is that decades of management training sit behind this process and trajectory. They are groomed for advancement early on, and are intentionally rotated through different functional areas to give them exposure to the breadth of experience that CEOs require.
This is not happening with CISOs and CTOs. Their careers typically start and stop in the same department. There are no systems or processes in place to identify CEO prospects from the ranks of technology and IT, so until those mechanisms are put in place — requiring board-level recognition of the challenges I described earlier — it is unlikely that they will ascend to CEO status.
Until then, we will have to make do with the crop of CEOs and CEO successors we have. And they have big gaps in cyber-awareness to make up. Now, don’t get me wrong: I am not saying a CEO needs to understand all the details or be involved in the day-to-day evaluation of their defense architecture. But they should have the ability to make wise decisions given today’s threat landscape and new technologies, which accelerate productivity but also extend the threat surface of the company by creating weaknesses and vulnerabilities.
CEOs must rely on informed intuition, but because virtually 100% of them have had minimal exposure to cybersecurity — with some of it probably decades old — and as a result have a check-the-box approach to this existential threat, their intuition will be handicapped.
So, while we are waiting for the next generation of CTO/CEOs — which, in my view, cannot happen soon enough — we need current CEOs to push themselves to be educated about the complexities of today’s threats. This must be an ongoing process; there is no “Cybersecurity for Dummies” silver bullet for CEOs. But as the Equifax disaster has shown us, a CEO who doesn’t fully grasp cybersecurity might soon be changing their LinkedIn profile to “Former.”
Designing a stencil for a conventional rigid printed circuit board (PCB) is challenging enough in this day and age, but designing a stencil for a much smaller IoT rigid-flex or flex circuit takes on a considerably new meaning.
On its surface, a stencil looks a bit like kitchen tinfoil, but is made of extremely thin stainless steel and not overly flimsy like tinfoil. Based on a specific stencil design, an automated laser cuts small, specified openings on that stencil for each surface mount (SMT) component and device supporting a particular IoT product.
The stencil’s job is to serve as a guide for transferring and printing the dispensed solder paste onto the PCB. In the case of smaller IoT products, solder paste is dispensed on rigid-flex or flex circuits to solder SMT joints to a bare PCB. The pick-and-place system in the assembly process then places components and devices on to the tiny boards.
At this point, fixtures come into the picture. Fixtures are small, flat metal carriers used to move the circuits along in the assembly processes. They’re key to making sure the paste is properly dispensed during the paste-printing process. Fixture sizes vary, but for IoT circuits, they’re about the size of a 3-by-5-inch notecard as IoT products are typically smaller in size. The main purpose of a fixture is to ensure surface mount pad stability and preciseness during the printing process.
So, that’s easily said. However, actually designing the stencil is another matter. Why is it so important? The answer is that we’re now dealing with smaller device packages such as the tiny 0201 and 01005 package types that are difficult to print due to their minute sizes.
This means the stencil must be so precisely designed that the exact amounts of solder paste are accurately dispensed. The goal is to ensure sturdiness and stability by making sure those micro-packages are solidly soldered on and connected to the circuit board. Here, the burden falls on the electronics manufacturing services provider to assure that certain steps are taken to properly design an IoT rigid-flex or flex circuit stencil. It calls for three major criteria:
- Make sure that the printing surface is completely flat. That’s vital for even solder paste distribution. If you’re dealing with a combination IoT rigid-flex circuit, the rigid board is generally 62 mils (0.062 inch) thick and the flex board is five to 10 mils thick. Therefore, dispensing solder paste on the pads and ball-grid array (BGA) package balls becomes challenging.
A step stencil could be an answer to resolving this difference in board-thickness printing. A step stencil is multilevel; its purpose is to either reduce or increase the thickness at certain points on the stencil. Hence, this technique dispenses the solder paste on the different thicknesses while printing is performed on the combination rigid-flex circuit.
- The stencil must also provide optimum stability and control solder paste spillage to avoid electronic connection flaws. Keep in mind that an IoT flex circuit will incur a multitude of bends and twists during the product’s routine use. If the stencil surface is not flat and stable during the assembly process, there’s a high probability of solder paste spillage occurring between pads or joints. The end result is the creation of shorts or bridges, or latent flaws that occur once the IoT product is being used in the marketplace.
- Good stencil design uses techniques like window paning and overprinting or underprinting, depending on package differences. Window paning refers to altering the paste dispensing area or openings of a stencil, typically when dealing with the center ground or thermal pads of a leadless device package like a quad flat no-lead (QFN) or dual flat no-lead (DFN).
To cap this discussion off, designing the right stencil is extremely important and calls for solid process engineering experience. Some of today’s more advanced rigid-flex and flex circuits have a number of analog components on them. The experienced process engineer must have expert knowledge about those specific components. For example, a heavily populated analog component board may require overprinting. That key aspect of the design must be factored in and calls for an extra amount of solder paste. But if the IoT rigid-flex or flex circuit is mostly populated with digital devices and micro BGA balls are close to each other, the stencil designer might consider underprinting with less solder paste to avoid shorting two adjacent balls.
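The precision this discussion demands can be quantified with the industry’s common area-ratio rule of thumb: the IPC-7525 stencil design guideline suggests an aperture area ratio of roughly 0.66 or higher for reliable paste release. A minimal sketch of that check, with hypothetical aperture dimensions:

```python
def area_ratio(length_mils: float, width_mils: float, foil_mils: float) -> float:
    """Area ratio = aperture opening area / aperture wall area.
    IPC-7525 suggests roughly >= 0.66 for reliable paste release."""
    opening = length_mils * width_mils
    wall = 2 * (length_mils + width_mils) * foil_mils
    return opening / wall

# Hypothetical 01005-class aperture: 8 x 7 mils on a 3-mil foil
ar = area_ratio(8, 7, 3)
print(f"Area ratio: {ar:.2f}")  # falls below 0.66: thin the foil or enlarge the aperture
```

A step stencil addresses exactly this tradeoff: locally thinning the foil raises the area ratio for the smallest packages without starving larger pads of paste.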
There are already more connected devices on earth than there are humans. The number of network-connected and intercommunicating devices is expected to climb well into the tens of billions within a few years. As the world grows ever more connected, how can we trust these devices? Such a network of intercommunicating objects requires a secure and efficient way to track all interactions, transactions and activities of every “thing” in the network. Interoperability, security, compliance, privacy and reliability are all barriers to IoT growth, but the paramount challenge for ubiquitous connectivity, never mind device autonomy, is a lack of trust.
Meanwhile, an emerging database technology known as blockchain is demonstrating capabilities previously absent from IoT’s development — many of which foster trust. In the simplest of terms, blockchain is an advancement in record-keeping systems. A blockchain is a construct which supports distributed and immutable record-keeping, verification and authority across all participants, rather than relying on a single point of authority to verify a transaction.
Blockchain’s role as an enabler of IoT lies in its ability to securely facilitate interactions and transactions between devices, and to use technology to eliminate corruption through immutable records.
Kaleido Insights’ analysis of the intersections between IoT and blockchain surfaces a wide range of use cases and industry applications. What follows are three scenarios in which decentralized record-keeping offers an architectural advancement to foster trusted machine interactions.
Trusted product identities: Blockchain can provide a “single truth” infrastructure for tracking a product’s lifecycle.
Just about every product undergoes a series of phases in its existence: growth, design, sourcing of components, manufacture, distribution, retail, service and repair, ownership transfer and so on. By adding sensors and network connectivity to products, we gain visibility into interactions, but to date, visibility remains highly siloed and opaque across parties.
For instance, most vehicles on the road today have numerous data streams (including sensors) tracking everything from speed to engine vibration, yet visibility into these analytics is fragmented: the manufacturer sees one slice, a mechanic another, an insurance company yet another. Even if there are data streams flowing from physical products, no single entity sees the whole life of that object — a car, in this case.
Blockchain offers a mechanism to register, record and verify the integrity of all interactions along a product’s lifecycle. In the case of a vehicle, a distributed shared ledger could underlie all interaction and transaction information associated with the car. Such information could be shared and permissioned across the myriad parties including, but not limited to:
- Parts and component manufacturers
- Vertical software providers
- Fleet providers
- Municipal services and agencies
- Insurance companies
- Car buyers
In an effort to centralize all information about a car to a shared and immutable database, immune to fraud or tampering, a company called BigChainDB is developing CarPass alongside energy company Innogy and Volkswagen Financial Services. The first phase of this “machine identity” initiative registers a car’s title, prior damage, service providers, maintenance history and inspection history to a ledger. Introducing telemetry and telematics data, sensor data, financial services data and other third-party data streams introduces all manner of possibilities in the shared visibility enabled by blockchain.
Trusted device-to-device transactions: Blockchain can provide a “single truth” for device transactions and microtransactions.
Most people don’t associate devices or machines with the ability to make transactions on their own, never mind negotiate or accumulate any kind of revenue. Yet, new revenue opportunities unlocked through machine interactions and the ability to automate those interactions are two principal value propositions of IoT. The current IoT space, however, lacks a transaction structure for a network of distributed machines.
Blockchain offers an architecture which could enable devices or machines themselves to conduct transactions on behalf of their owners or users, or potentially autonomously on behalf of themselves. Blockchain-based configurations can vary widely and such transactions could leverage digital currencies like Bitcoin or Ether or built-in assets like tokens or credits. As such, this can include “microtransactions” such as digital advertising, gaming, online storage, rights, energy and far beyond.
Oaken Innovation recently demonstrated the idea of a blockchain-enabled tollbooth in which both car and tollbooth have Ethereum nodes which use smart contracts to trigger a machine-to-machine (M2M) transaction. Piloting the concept using a Tesla, the car automatically pays as it passes through the toll booth. Sounds simple enough, but here a blockchain configuration presents significant potential cost savings compared to current server and payment infrastructure, fees and time required for traditional tollbooth transactions. Oaken submitted the pilot to UAE’s GovHack for smart city innovations, with similar technology supporting autonomous parking, short-term vehicle leasing and other services which require trusted M2M transactions.
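Oaken’s pilot runs as Ethereum smart contracts; as an illustration of the contract logic only, here is a hypothetical Python sketch of an M2M toll payment (the class, fee values and identifiers are all invented):

```python
class TollContract:
    """Hypothetical sketch of smart-contract logic for an M2M toll payment.
    In a real deployment this logic would live on-chain, not in Python."""

    def __init__(self, toll_fee: int, operator: str):
        self.toll_fee = toll_fee
        self.operator = operator
        self.balances = {}  # account -> token balance
        self.log = []       # immutable-in-spirit transaction log

    def deposit(self, car_id: str, amount: int) -> None:
        """The car's owner pre-funds the vehicle's account with tokens."""
        self.balances[car_id] = self.balances.get(car_id, 0) + amount

    def pass_gate(self, car_id: str) -> bool:
        """Triggered by the tollbooth node when it detects the car."""
        if self.balances.get(car_id, 0) < self.toll_fee:
            return False  # insufficient funds: gate stays closed
        self.balances[car_id] -= self.toll_fee
        self.balances[self.operator] = (
            self.balances.get(self.operator, 0) + self.toll_fee)
        self.log.append((car_id, self.toll_fee))
        return True

contract = TollContract(toll_fee=2, operator="toll-authority")
contract.deposit("tesla-001", 10)
print(contract.pass_gate("tesla-001"))  # True: payment settles machine-to-machine
print(contract.balances["tesla-001"])   # 8
```

The cost advantage the pilot claims comes from settling this exchange directly between the two nodes, with no payment processor or clearing house in the loop.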
Trusted product services: Blockchain can provide a “single truth” for ecosystem-based business models.
The advent of IoT has imposed a painful new reality on product companies: To deliver the value of connected products, traditional product-based business models must transform to data-driven service-based business models. More often than not, these models require diverse partnerships or marketplaces of multiple service providers — an ecosystem-based business model.
IT and OT integration may be IoT’s sweet spot, but when it comes to exchanging or generating new value across an ecosystem of service providers, transaction processes still require swaths of middlemen and complicated legal structures. Furthermore, diversity of ownership (i.e., of those controlling devices and cloud infrastructure) limits sharing and interoperability.
It is in this construct that blockchain offers the potential for ecosystems to generate and move value with vastly greater efficiency and security. One of the earliest blockchain IoT projects, ADEPT (Autonomous Decentralized Peer-to-Peer Telemetry), brought to life a distributed ledger facilitating various types of IoT transactions between devices. In it, IBM and Samsung partnered to prototype a secure and low-cost universal digital ledger to manage roles, permissions, behaviors, transactions and events — interconnecting disparate connected devices.
Consider the opportunities for these disparate manufacturers, brands, service providers, advertisers, insurance companies, energy providers and so forth to use each other’s platforms to extend and improve their services. Smart contracts embedded in the code could trigger, for instance, a TV to request its repair, a dishwasher to order detergent from the supplier, a car to run updated safety checklists — all with the ability to permission and authenticate secure interactions, to verify orders, payments and shipments.
While this example illustrates the potential in a smart home construct, we see ecosystem-based services extending across industries:
- In collaborative economy contexts (e.g., car share, autonomous vehicles, home share)
- In retail contexts (e.g., multiparty loyalty programs, subscription models)
- In labor contexts (e.g., skills-sharing, “autonomous organizations”)
- In manufacturing contexts (e.g., 3D printers, shared machinery, tools)
- In supply chain contexts (e.g., food safety, anti-counterfeit, provenance)
This broad shift from centralized networks — operational, technical and financial — to the edge marks a profound change in how economic structures work and demands new mechanisms for trust.
Questions, comments, feedback on the opportunities, challenges and use cases for IoT and blockchain intersections? Feel free to reach out! This area marks an ongoing research stream for Kaleido Insights and we welcome inputs regardless of where you sit in the ecosystem.
IoT technology may be able to save brick-and-mortar retailers — provided they have a proper security strategy in place. After all, IoT introduces risks that go beyond the typical retail payment security concerns.
It’s easy to see why IoT technology is attractive to traditional retailers. Consumers have grown accustomed to a smoother, quicker, easier shopping experience online. IoT devices can help stores look and feel more like that online experience. But as with the adoption of any new technology, businesses need to understand the risks, ensuring they’re protecting their customers’ personal information, intellectual property and payment data.
Traditional retail in decline
There are many questions today about the longevity of the brick-and-mortar retailer. More stores are closing, malls are half-deserted and commercial real estate organizations are looking for tenants that are Amazon-proof. Consumers can now purchase almost anything with a single mouse click and have it delivered directly to their homes within a matter of hours, if not minutes. How easy!
The numbers also show that brick-and-mortar retail is declining. According to a recent report by CoStar, sales per square foot (at all but a few public retailers) have declined to an average of around $325 in recent years, down from nearly $375 in the early 2000s.
Contactless payments are one way in which IoT can be widely applied to brick-and-mortar retailers. While many retailers are allowing customers to pay in-store with their smartphones, customers also have the ability to pay with their Apple Watch using the Apple Pay technology. The Apple Watch-wearing customer merely has to double click a side button while the watch display is near a store’s contactless reader. The watch will vibrate and the payment is complete.
Taking it one step further, Amazon developed an entire store based on smart shopping technology. The advanced brick-and-mortar store, Amazon Go, is designed to challenge the “boundaries of computer vision and machine learning to create a store where customers could simply take what they want and go.” The flagship store, located in Seattle, is armed with IoT sensors used to understand customers’ interactions and behaviors. Amazon’s “Just Walk Out Technology” automatically detects when products are taken from or returned to the shelves and keeps track of them in a virtual cart. When customers are done shopping, they can just leave the store. Shortly after, Amazon charges customers’ Amazon accounts and sends receipts.
Other retail IoT devices being introduced into brick-and-mortar retail may sound whimsical on the surface — like Oak Labs’ “smart mirrors,” which use a touchscreen to allow shoppers to customize their fitting room experience — but even they rely on trustworthy data to do their jobs.
Trusting the data
As businesses and individuals begin to enjoy IoT technology’s benefits, they also need to consider the security risks that are introduced. As we’ve seen many times, connected devices that were not designed with security in mind become easy targets.
For the technology to be successful, IoT devices must provide data that’s trustworthy. If you can’t trust the data, there’s no point in collecting it, analyzing it or making business decisions based on it. And that’s ultimately what retail IoT technology is all about. The concept of trust for IoT data is based on data security — from the time it is created and collected, through its wider distribution in the IoT ecosystem.
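One common building block for that kind of trust is authenticating each reading at the source, for example with an HMAC over the payload so tampering in transit is detectable. A minimal sketch, assuming a hypothetical pre-provisioned per-device key:

```python
import hashlib
import hmac
import json

SECRET = b"device-provisioned-key"  # hypothetical per-device shared secret

def sign_reading(reading: dict) -> dict:
    """Attach an HMAC tag computed over a canonical JSON encoding."""
    payload = json.dumps(reading, sort_keys=True).encode()
    tag = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"reading": reading, "mac": tag}

def verify_reading(msg: dict) -> bool:
    """Recompute the tag and compare in constant time."""
    payload = json.dumps(msg["reading"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(msg["mac"], expected)

msg = sign_reading({"shelf": "A3", "ketchup_bottles": 12})
print(verify_reading(msg))              # True: reading is authentic
msg["reading"]["ketchup_bottles"] = 99  # tampered in transit
print(verify_reading(msg))              # False: tampering detected
```

This only covers integrity and authenticity of individual readings; a full retail deployment would also need key provisioning, rotation and transport encryption.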
Think of it this way: If a smart mirror incorrectly records a customer’s size and orders an entire wardrobe that is too small, the retailer will lose credibility. Or if one of Amazon’s smart shelves incorrectly counts the number of ketchup bottles in stock and neglects to order more ketchup, a customer’s weekend BBQ might be ruined.
For now, we are seeing that an enhanced user experience can be easily enabled by IoT devices. If manufacturers get data integrity right, IoT technology might just be enough to save brick-and-mortar retail.
Over the past few years, the internet of things has exploded. Thanks to Moore’s Law, which states that the number of transistors in each chip will double about every 18 months, hardware developers have been able to fit much more functionality into the same footprint. This creates smaller computers, smaller phones and other consumer-ready electronic devices.
Everything that connects to the internet needs chips to do so, but only recently have chips become sufficiently small. This, combined with the explosion in wireless network availability, has made it easy to keep devices connected and give them remote functionalities.
That’s the summary of IoT: simple devices that can be controlled and monitored through new cost-effective chips of appropriate size. As large companies like Apple and Microsoft continue to invest heavily in the development of this technology, the question of how to build IoT becomes a question of how to manage the massive influx of data.
Oceans of information
For years, companies have relied on systems that compute and control from a relatively central location. Even cloud-based systems rely on a single set of software components that churn through data, gather results and serve them back.
The internet of things changes that dynamic. Suddenly, thousands of devices are sharing data, talking to other systems and offering control to thousands of endpoints.
This brings about new issues with data collection and analysis. Because of how these new networks share data, IoT devices are often slow, transmitting small pieces of information with no guarantee of when that data will arrive. This is particularly true in smart cities and buildings, where thousands of sensors generate data at varying intervals and leave the processing up to the cloud.
As these networks evolve, they encounter new problems made possible by now-popular computing trends. Thanks to big data and smarter networks (through mesh networking, IoT and low-power networks and computing), the older systems cannot handle the influx of information they helped create.
The answer to these problems is a blend of cloud storage and edge computing. To take advantage of both technologies, however, IT professionals must understand how they operate.
The edge and the cloud
Edge computing and cloud computing are nearly opposites in the way they’re organized. Cloud computing efficiently uses a large chunk of a network to process and store information through a centralized spot: the data center where the cloud point of presence (PoP) lives. This serves its purpose well, thanks to the tight interconnectivity of nodes sharing data with one another on a high-performance network.
With the rise of IoT, more companies want their computation capacities closer to the devices that are collecting information. Devices on IoT systems tend to be low on power and computational capability, so edge computing moves central computational power out of the cloud and closer to where the end users’ devices exist. When you work with a large number of clients, this makes processing move much more quickly.
Putting the two technologies together allows the cloud to handle general computation tasks while edge computing takes care of more client-specific needs. For example, an edge node can aggregate raw readings into a single data set and then send it to the cloud for further processing.
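The aggregation pattern described above can be sketched as an edge node collapsing a window of raw samples into one compact summary record before anything crosses the WAN to the cloud:

```python
from statistics import mean

def aggregate_window(readings):
    """Edge-side summary: collapse a window of raw samples into one record,
    so the cloud receives a handful of fields instead of every sample."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(mean(readings), 2),
    }

# 1,000 raw temperature samples shrink to a single four-field record
window = [20 + (i % 7) * 0.1 for i in range(1000)]
summary = aggregate_window(window)
print(summary)
```

The bandwidth saving compounds across thousands of sensors, and the cloud side still gets enough statistical shape to run its general-purpose analytics.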
By centralizing general workloads and handling more specific tasks on the edge of the network, IT professionals can improve user experiences while optimizing network and computational resources.
Use technology to get more from data
Edge computing is only now gaining popularity at telecommunications companies, but as more 5G networks become universally available, this technology will quickly become widespread. IT professionals should follow these three steps to prepare themselves and their companies for the approaching IoT data tidal wave:
1. Prepare network architecture
Today, early versions of edge computing are only being used in content delivery networks and some software-defined networks or telecommunications networks. For companies outside this niche, preparing to accommodate edge computing now will make adoption much easier in the future. Start thinking about current architecture and prepare for expanded edge capabilities.
2. Address data aggregation
Industries that currently control their edges, such as IoT networks and telecommunications companies, should already be aggregating their data as close to the edge as possible before transmitting in bundles back to central systems. Introduce queues and caches on the edge to prepare for the computational capabilities of merging and compressing data.
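One way to introduce those queues is a bounded edge-side batcher that buffers readings and emits a single compressed bundle once a batch fills. A hypothetical sketch (batch size and wire format are invented):

```python
import json
import zlib
from collections import deque
from typing import Optional

class EdgeBatcher:
    """Buffer readings at the edge; emit one compressed bundle per full batch."""

    def __init__(self, batch_size: int = 100):
        self.batch_size = batch_size
        self.queue = deque()

    def push(self, reading: dict) -> Optional[bytes]:
        """Queue a reading; return a compressed bundle when the batch is full."""
        self.queue.append(reading)
        if len(self.queue) >= self.batch_size:
            batch = [self.queue.popleft() for _ in range(self.batch_size)]
            return zlib.compress(json.dumps(batch).encode())
        return None  # still buffering

batcher = EdgeBatcher(batch_size=3)
bundle = None
for t in (21.0, 21.1, 21.2):
    bundle = batcher.push({"temp": t}) or bundle

print(bundle is not None)  # True: the third push emitted a bundle
payload = json.loads(zlib.decompress(bundle))
print(len(payload))        # 3
```

A production version would also flush on a timer so a slow sensor doesn’t strand a partial batch at the edge indefinitely.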
3. Seek out opportunities to optimize
Edge computing is all about efficient use of resources. Mapping architecture from a resource usage perspective is useful to find new ways to optimize.
Systems can compensate for increased cost of edge computing through the decreased cost of transmitting better data to the center and computing there. As edge processing matures, computational abilities at the edge will increase, providing more opportunities for prepared companies to leverage the technology.
Network and IoT-based companies can’t afford to wait until new technology arrives before beginning preparations to adopt. By following these steps, IT leaders can consider the progressions of IoT data and edge computing and prepare for their widespread arrival.
Rewind to roughly a year ago when the Mirai malware and associated botnet burst onto the scene with the largest distributed denial-of-service attack ever seen, hammering service providers and websites alike. It was one of those wake-up calls we all like to write about — a chill breeze that presaged the coming of an IoT winter. (Yes, that’s a Game of Thrones reference. I think it’s a fair analogy given the circumstances and the nature of the threat.)
And like most wake-up calls, we promptly hit the snooze button and went back to thinking about other priorities. Well, kind of. Over the last year, there has at least been some initial movement in making IoT a little more secure, but we clearly have a long way to go. Thankfully there hasn’t been an IoT Armageddon in the year since Mirai, which is actually a pleasant surprise. But the internet of things has continued to develop in both predictable and surprising ways.
First, it’s still growing. Gartner estimated that there are around 8 billion devices in action this year, a growth of over 30% from 2016. However many there are, the amount of traffic these devices are generating is massive. While IoT devices may not yet dominate the net in terms of bandwidth (there’s a LOT of video streaming out there), the total traffic is growing so fast we’re having to get used to new terms just to measure the size.
At the same time, IoT itself is becoming more integrated into other emerging trends. Whether it’s digital twinning, feeding the voracious data appetite of AI/machine learning development or simply accelerating the event-driven agenda of digital transformation, IoT has faded as a central element of conversation. Now, IoT is discussed in terms of its supporting role in new productions. We’ve stopped talking about IoT as though it’s an emergent trend in its own right, and are starting to see it more and more as a crucial element in other trends.
While some may view this as a reflection that the whole IoT thing was really overhyped to begin with, I believe the opposite is true. The downplaying of IoT in these conversations is not a result of its diminishing importance, but rather the result of the foundational impact IoT is having. In much the same way as we no longer ponder how businesses will react to the advent of the internet itself, so the conversation has already moved past a discussion of IoT as such, and is now reorienting around the impact that IoT is having on other aspects of information technology.
Simply put, in a year’s time, IoT has become so foundational to the digital transformation of business that we assume its presence and must now begin to plan for the effect it will have on other emerging trends. The need to collect data from, and feed information to, IoT-enabled infrastructure is changing how we think about edge computing, cloud services and the way we model and manage the world around us. You would be hard pressed, even a couple of years ago, to have found many credible predictions for that level of impact.
A year after IoT became the test bed for the largest distributed denial-of-service attack ever, we are already beginning to experience the first wave of impact of this transformative trend. IoT isn’t just changing how we think about security or service delivery or the value of products, it’s starting to change the capabilities of information technology. And it’s barely even begun to arrive yet. A year from now? Who knows.