IoT Agenda


November 2, 2017  2:31 PM

Putting the M and the C into 5G cellular IoT

Vicki Livingston Profile: Vicki Livingston
3G, 3GPP, 4G, 5G, Cellular, Communications, Connectivity, Internet of Things, iot, IOT Network, LPWAN, LTE, M2M

A majority of industry participants recognize that M2M, and now IoT, represents one of the key growth opportunities for telecommunication service providers and enterprises of various sizes in the next decade. Whereas 4G has been driven by device proliferation, bandwidth-hungry mobile services and dynamic information access, 5G will also be driven by IoT applications. There will be a wide range of IoT use cases in the future, and the market is now expanding toward both massive IoT (MIoT) deployments and more advanced technologies that may be categorized as critical IoT.

Massive IoT and critical IoT requirements

Differing requirements for massive and critical IoT applications 1

Massive IoT refers to the tens of billions of devices, objects and machines that require ubiquitous connectivity even in the most remote locations, like sensors buried deep in the ground, and that report their sensing data to the cloud on a regular basis. To reach massive scale, which is defined by global standards body 3GPP as at least 1 million devices per square kilometer2, mobile networks must more efficiently support the simplest devices, those that communicate infrequently and are so energy efficient that they can deliver an extremely long ten-year battery life. The requirement is for low-cost devices with low energy consumption and good coverage.

Massive use cases, for example smart metering, have distinct network requirements and highly cost-sensitive commercial needs:

  • While reliability and availability might be more or less important, extended coverage is often a must, and long latency is not problematic
  • Infrequent transmission of small data packets, mainly in the uplink
  • Devices need to be inexpensive (i.e., simple modules)
  • Device battery must last many years

In contrast, critical IoT applications will have very high demands for reliability, availability and low latency, which could be enabled by LTE or 5G capabilities. The volumes will be much smaller, but the business value is significantly higher.

Critical use cases, for example public safety applications, have very demanding network requirements:

  • Low latency
  • High reliability and availability

There are also many other use cases between the two extremes, which today rely on 2G, 3G or 4G cellular connectivity. For example, a third group might be defined as "enterprise applications":

  • They might need moderate bitrate, support for mobility or even potentially include voice (Voice over LTE)
  • They use smarter devices than just connected sensors (for example, personal digital assistants and insurance telematics), but these must be fairly inexpensive and have a long battery life.

The main questions asked by industry participants in the early stages of considering entering the M2M and IoT field are these: What are the key market drivers for IoT? And how will we realize the 5G vision of connecting massive IoT and critical IoT applications?

Globally, M2M connections will grow from 780 million in 2016 to 3.3 billion by 2021, a 34% compound annual growth rate and a fourfold increase, as connections evolve from 2G to 3G and 4G.3
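
As a quick back-of-the-envelope check of those Cisco figures (the inputs come from the forecast; the arithmetic below is ours):

    # Sanity check of the forecast: 780 million M2M connections in 2016
    # growing to 3.3 billion by 2021.
    start, end, years = 0.78e9, 3.3e9, 5

    cagr = (end / start) ** (1 / years) - 1
    print(f"implied CAGR: {cagr:.1%}")             # ~33.4%, in line with the cited 34%
    print(f"growth multiple: {end / start:.1f}x")  # ~4.2x, i.e., roughly fourfold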

M2M growth and migration

Global M2M growth and migration from 2G to 3G and 4G+ 4

The underlying fundamental enabler of this growth is technology evolution. We are now at the point in time where viable technologies are available at the same time as concrete needs from different stakeholders are taking shape. This is the key reason for the emerging 3GPP Release 13 standards for low-power wide area networks (LPWAN): NB-IoT, LTE for Machines (LTE-M) and Extended Coverage GSM IoT (EC-GSM-IoT).

Different IoT devices need different connectivity services — some only need to send a few bytes at long intervals, while others require continuous high-bandwidth connectivity with low latency. Operators, therefore, need to define a strategy that describes in which segments they want to compete and which will then guide them on how to tailor services for these segments and what technologies to deploy.

Frequency licenses are a key operator asset. If the operator has available FDD-LTE capacity in sub-GHz frequencies, implementing NB-IoT functionality is a natural choice. If TDD-LTE capacity is available, then both NB-IoT and LTE-M are feasible. If GSM is available, EC-GSM-IoT can be deployed, and if 3G is available, spectrum can be refarmed to support NB-IoT. All these options are possible due to the cellular standards development for IoT in 3GPP Release 13.
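
That decision logic is simple enough to sketch in a few lines of Python (an illustrative helper only; the asset names are made up, not any vendor's API):

    def lpwa_options(spectrum_assets):
        """Map an operator's licensed-spectrum holdings to the Release 13
        cellular IoT technologies it could deploy, per the logic above."""
        options = set()
        if "fdd_lte_sub_ghz" in spectrum_assets:
            options.add("NB-IoT")
        if "tdd_lte" in spectrum_assets:
            options.update({"NB-IoT", "LTE-M"})
        if "gsm" in spectrum_assets:
            options.add("EC-GSM-IoT")
        if "3g" in spectrum_assets:
            options.add("NB-IoT (refarmed)")
        return options

    print(lpwa_options({"fdd_lte_sub_ghz", "gsm"}))  # NB-IoT and EC-GSM-IoT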

Correct pricing and price differentiation are essential. As the low-power wide area (LPWA) IoT market expands, lower prices for connecting devices to the internet will support the emergence of new applications with new types of sensors.

For mobile operators, IoT connectivity is a clear differentiator from companies like Amazon, IBM and Microsoft, which all compete in the IoT arena but offer no IoT connectivity.

Beyond connectivity, operators can offer additional services such as devices and device management, as well as applications and services related to those devices.

The first cellular IoT networks supporting massive IoT applications, based on Cat-M1 and NB-IoT technologies, were launched in early 2017. LTE-M, or LTE for Machines, is the commercial term for the LTE-MTC LPWA technology standard published by 3GPP in the Release 13 specification; it refers specifically to LTE Cat-M1, suitable for IoT. LTE-M is a low-power wide area technology that supports IoT through lower device complexity and provides extended coverage, while allowing the reuse of the LTE installed base. This allows battery lifetimes as long as 10 years (or more) for a wide range of use cases, with modem costs reduced to 20-25% of those of current EGPRS modems. LTE-M IoT devices connect directly to a 4G network, without a gateway, and run on batteries.

The LTE-M network is deployed with global, 3GPP standardized technology using licensed spectrum for carrier-grade security. This differentiates LTE-M from the technologies offered by other companies with non-cellular IoT offerings. LTE-M supports large-scale IoT deployments, such as smart city services, smart metering, asset tracking, supply chain management, security and alarm monitoring, and personal wearables.

Advantages of LTE-M over traditional IoT connectivity options include:

  • Longer battery life (expected up to 10 years)
  • Better coverage for IoT devices underground and deep inside buildings
  • Reduced module size (as small as 1/6 the size of current modules)

LTE-M has been embraced by leading global carriers to build a broad base of ecosystem partners, devices and applications for global markets. Supported by all major mobile equipment, chipset and module manufacturers, LTE-M networks will coexist with 2G, 3G and 4G mobile networks and benefit from all the security and privacy features of mobile networks, such as support for user identity confidentiality, entity authentication, confidentiality, data integrity and mobile equipment identification.

Commercial launches of LTE-M networks are taking place globally in 2017-2018. As of September 2017, there were 13 commercial NB-IoT networks worldwide, including six in Western Europe and four in Asia Pacific, and six commercial LTE-M networks, including two in the U.S. and three in Asia Pacific.

The U.S. is one of the largest and most advanced IoT markets in the world, with both AT&T and Verizon launching nationwide LTE-M networks.

For example, AT&T launched LTE-M services in the U.S. at mid-year 2017, and will launch in Mexico by the end of the year to create an LTE-M footprint covering 400 million people. AT&T’s LTE-M deployment — the result of software upgrades — marks another step forward on the path to 5G and massive IoT.

Sprint launched an LTE Cat-1 IoT network nationwide at the end of July 2017, and anticipates a further upgrade to LTE-M in mid-2018, followed in turn by the rollout of LTE Cat NB1 (NB-IoT).

T-Mobile has tackled some of the biggest obstacles slowing IoT innovation with offerings for IoT and M2M. T-Mobile U.S. CTO Neville Ray said the operator will be deploying narrowband LTE (NB-LTE) and LTE-M, although as of mid-2017 a timeline had not been announced.

LTE-M benefits

Benefits of LTE-M. 5

Analyst firm ABI Research has predicted that Cat-M technology will see strong growth beginning in 2018 as network operators become more aggressive in their deployments. The initial cost advantage of non-cellular networks is likely to dissipate as cellular operators move forward with their deployments.

“Size and speed matter in the burgeoning LPWAN market,” said Steve Hilton, analyst at MachNation. “The more devices ordered for a technology like Cat 1, the lower the per unit price per device. And most assuredly the success of this market is going to depend on extremely inexpensive devices. In addition, the sooner that LPWAN solutions are available on licensed spectrum from carriers like Sprint, AT&T and Verizon, the less market opportunity there is for non-dedicated spectrum solutions like Sigfox and Ingenu.”6

With 3GPP Release 13 eMTC and NB-IoT as a basis, Release 14 added enhancements to existing features, such as mobility and Voice over LTE support, and introduced two key features: broadcast and positioning. Reductions in latency and battery consumption were also planned. New user equipment (UE) categories were added to enable UEs with expanded capabilities supporting higher data rates. The 5G Americas white paper, "LTE and 5G Technologies Enabling the Internet of Things," provides expanded information on many features for both massive IoT and critical IoT.

Looking ahead, 3GPP Release 15 introduces 5G Phase 1. While the requirements address MIoT along with ultra-reliable low-latency communications, enhanced mobile broadband (eMBB) and other new cellular capabilities, 5G Phase 1 will focus on eMBB as an early deployment option for the 5G new radio and evolved core network. 3GPP SA2 has initiated a study on the architectural impacts of MIoT in the Release 15 timeframe, which will enable the architecture and protocol enhancements for MIoT to be realized in Release 16. These enhancements will further expand IoT capabilities to take advantage of increased coverage with wide area cells and small cells, improved resource efficiency for IoT devices with limited or no mobility, and support for multiple access technologies, from LTE and 5G new radio to WLAN and even fixed access. As the market for IoT continues to expand, 3GPP will continue to support the growing demands for new and improved communications for IoT devices.

Future white papers from 5G Americas will address the advancements for CIoT in Releases 14, 15 and 16 as the market continues to grow and expand through many different use cases and services. Massive IoT and critical IoT will be key parts of 5G.

1 “Cellular Networks for Massive IoT,” Ericsson white paper. Jan. 2016.
2 “Leading the LTE IoT evolution to connect the massive Internet of Things,” Qualcomm white paper, June 2017.
3 Visual Networking Index, Cisco, March 2017.
4 Visual Networking Index, Cisco, March 2017.
5 IEEE ComSoc Technology Blog.
6 IEEE ComSoc Technology Blog.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

November 2, 2017  1:23 PM

Data context vs. content in an IIoT environment

Christopher Bergey Profile: Christopher Bergey
Artificial intelligence, Cloud storage, Data Management, Data storage, Edge computing, IIoT, Industrial IoT, Internet of Things, iot, IoT data, M2M, Machine data, Machine learning

There is a tremendous amount of raw data being generated from machines, machine sensors and robots on the factory floor as part of the new industrial internet of things revolution. Some of this data will have intelligence and value that can improve operational efficiencies, foresee maintenance requirements and deliver smarter and faster business decisions. But most of it will be somewhat wasteful, not that interesting and certainly not worth saving for any extended period of time. The challenge that many industrial companies face, or soon will, is how to extract value and intelligence from the machine data their IIoT systems generate.

Data content vs. data context

To answer this question, it is important to understand the difference between data content and data context in an IIoT environment. Automated production machinery and built-in sensors record endless hours of unfiltered operational machine data. That body of recordings is content. There is a lot of information within the recordings, but nothing that by itself extracts value or intelligence, or enables process improvement decisions.

From the data that a machine collects, wouldn't it be valuable to know more about the machine's operation and whether it has deficiencies, is running off-center or requires maintenance or immediate attention? How about information that includes peak hours of machine use, associated yields, product rejects, operator effectiveness, congestion areas, material use estimates and so forth? These entities are context: interesting or valuable data that needs to be stored for further use, analysis or mining for the unexpected at some point in the future.

As this raw content resides in machines, sensors or robots at the edge of the network, compute processing is required to add that intelligence to the data. As data changes, more real-time processing at the edge of the network will be required. Historically, humans wrote algorithms in an attempt to transform the data from content to context. These simpler, fixed algorithms were processed over a small data stream, at near real time, local to the raw data in the devices. Nowadays, the sensor-created data streams are enormous and high fidelity.
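
A minimal sketch of such a fixed, human-written rule (the vibration threshold is hypothetical) shows how a raw stream of content becomes context:

    # A simple, fixed rule of the kind historically written by hand:
    # turn a raw vibration stream (content) into maintenance alerts (context).
    VIBRATION_LIMIT_MM_S = 7.1  # hypothetical alarm threshold, tuned per machine

    def to_context(readings_mm_s):
        """Yield alert records only for readings exceeding the fixed threshold."""
        for i, value in enumerate(readings_mm_s):
            if value > VIBRATION_LIMIT_MM_S:
                yield {"sample": i, "vibration_mm_s": value, "action": "inspect bearing"}

    stream = [2.3, 2.4, 9.8, 2.2, 11.5]   # raw content from the sensor
    print(list(to_context(stream)))       # context: two alert records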

Artificial intelligence is an example of massive real-time processing at the edge, enabling machines to perform human-like tasks. In-machine sensors read, compare and physically map machine or robotic data to its environment, while analysis and intelligent algorithms look for patterns in the data and alert operators to anomalies and opportunities for process improvements that can save a manufacturing operation significant time and money.

Machine learning is the "teacher" of localized AI. Developed by learning patterns within very large data sets, it can, when applied to machines or robots, analyze behavioral machine patterns and interpret real-life operational scenarios that the machine can learn from. The more it learns, the more it can improve the localized AI algorithms to be even more accurate and effective.

IIoT edge-to-cloud storage strategies

The abundance of data generated by IIoT systems and AI-supported machine applications is creating new challenges for industrial operations. Systems must either respond to developing situations in real time or review and analyze historical data, looking for areas where the process can be improved so that artificial intelligence agents can be trained to monitor the system. The ability to respond to real-time data changes requires that the data be immediately accessible and locally available (edge storage), while data worth saving for future use, analysis or process training purposes is moved to the cloud.

Data analytics at the edge has also become more of a requirement, with the growth of "smart video" in today's surveillance systems creating business value and intelligence. Once placed in a commercial or retail setting, smart surveillance cameras can perform real-time analytics that recognize customer facial expressions, identify the number of people in the store at a given time, track where they go and how long they stay, measure the effect of sign placements and a host of other possible options. The analysis is performed locally at the edge, in real time, while the results reside in the cloud for archival or are deleted and re-recorded.

In order to generate content locally that will lead to valuable context, either the compute and storage elements reside directly within a sensor (sometimes as part of the machine), or the data is sent over a local wired or wireless network to an edge gateway located on the production floor. The IIoT storage strategy is not to funnel all of the data to the cloud, but instead to use a combination that stores data locally at the machine level as well as at an edge gateway at the factory level, so that data can be aggregated locally, kept unexposed to the outside world, analyzed at the edge for intelligence and translated into a common cloud format for long-term data storage.
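
As a rough illustration of that strategy (the field names and JSON format here are illustrative assumptions, not a specific product's API), an edge gateway might aggregate locally and ship only the summary:

    import json
    import statistics

    def gateway_aggregate(machine_samples):
        """Aggregate raw machine data locally at the factory edge gateway,
        keeping only the summary worth shipping to the cloud."""
        temps = machine_samples["temps_c"]
        summary = {
            "machine": machine_samples["machine"],
            "mean_temp_c": statistics.mean(temps),
            "max_temp_c": max(temps),
            "samples": len(temps),
        }
        # Translate into a common cloud format (here: JSON) for long-term storage.
        return json.dumps(summary)

    local_batch = {"machine": "press-07", "temps_c": [71.2, 71.9, 88.4, 72.0]}
    print(gateway_aggregate(local_batch))  # compact context, not the raw stream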

Forward-looking statements: This article may contain forward-looking statements, including statements relating to expectations for Western Digital’s embedded products, the market for these products, product development efforts, and the capacities, capabilities and applications of its products. These forward-looking statements are subject to risks and uncertainties that could cause actual results to differ materially from those expressed in the forward-looking statements, including development challenges or delays, supply chain and logistics issues, changes in markets, demand, global economic conditions and other risks and uncertainties listed in Western Digital Corporation’s most recent quarterly and annual reports filed with the Securities and Exchange Commission, to which your attention is directed. Readers are cautioned not to place undue reliance on these forward-looking statements and we undertake no obligation to update these forward-looking statements to reflect subsequent events or circumstances.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


November 1, 2017  3:46 PM

Girding the grid: The missing piece of the smart city revolution

Steven Martin Profile: Steven Martin
Electric grid, energy, Internet of Things, iot, IoT applications, Service providers, Smart cities, smart city, Smart grid, utilities

Three and a half billion people live in cities around the world today, and this number is expected to nearly double by 2050. The rapid pace of urbanization, coupled with advancements in cloud computing, artificial intelligence and sensors, is driving a smart city revolution. From San Diego to Barcelona, the latest IoT technologies are tackling pollution, traffic congestion and crime — just to name a few.

A fundamental key to scaling and advancing these IoT technologies comes down to a city's ability to transport reliable, affordable and clean power. But many developed nations — particularly the United States — are still using electrical grids constructed in the 1950s and '60s with 50-year life expectancies. Our grids are not only out of date, they were built to manage a uniform flow of electricity primarily from coal, petroleum and natural gas. The introduction of wind and solar energy, which ebbs and flows based on weather patterns, creates new stresses the grid was not designed to handle.

Combined with an uptick in extreme weather, the U.S. is experiencing a steady rise in blackouts. If this continues, the American Society of Civil Engineers estimates that U.S. gross domestic product will fall by a total of $819 billion by 2025 and $1.9 trillion by 2040. Furthermore, it predicts that the U.S. economy will end up with an average of 102,000 fewer jobs than it would otherwise have by 2025 and 242,000 fewer jobs in 2040.

While significant updates to our electrical grid infrastructure are needed, there are several emerging technologies that cities can start integrating right away to help power a more cost-effective, reliable and secure grid.

Digital twin

The heart of a smart city is its ability to anticipate a problem before it happens. Through digital twin technology, we can create digital replicas of physical assets that continuously learn from multiple sources and share what they learn with other people and machines. By combining sensor data, advanced analytics and artificial intelligence, digital twins build a bridge between the physical and digital worlds, providing a holistic view of the entire asset network.

Through simulation capabilities, digital twins can uncover historical behavioral patterns that show when, how and why a grid breaks — in turn, allowing cities to prevent and more quickly recover from power outages. Digital twins also allow cities to test multiple scenarios to better prepare for growing urban populations, extreme weather and more renewable energy sources coming online.
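
As a toy illustration of the concept (a hypothetical transformer model, not a production twin), the essence is a replica kept in sync with sensor data that can run what-if scenarios before they play out on the physical asset:

    class TransformerTwin:
        """Toy digital twin of a grid transformer: mirrors live sensor state
        and simulates load scenarios before they hit the physical asset."""
        def __init__(self, rated_load_kw):
            self.rated_load_kw = rated_load_kw
            self.load_kw = 0.0

        def ingest(self, sensor_load_kw):
            self.load_kw = sensor_load_kw  # keep the replica in sync with the field

        def simulate(self, extra_load_kw):
            """What-if: would adding this load push the asset past its rating?"""
            projected = self.load_kw + extra_load_kw
            return {"projected_kw": projected,
                    "overload": projected > self.rated_load_kw}

    twin = TransformerTwin(rated_load_kw=500)
    twin.ingest(430)
    print(twin.simulate(extra_load_kw=120))  # projected 550 kW: overload predicted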

Energy storage

While we’ve made huge technological strides over the last decade in increasing the effectiveness and efficiency of renewable energy production, the challenge has been how to reliably transport and distribute renewables given our current infrastructure.

To ensure a continual and reliable flow of energy, the grid must carefully manage electrons to ensure the right amount of electricity is on the grid at all times. Too much or too little and there’s a blackout. But what happens when the wind stops blowing or the wind turbine produces more electricity than consumers need at any one point in time? This is where energy storage comes in.

When production outpaces consumption needs, there is excess energy. Rather than allowing that excess energy to go to waste, battery storage can capture it for later use and protect the grid from blackouts. Conversely, when the wind is not blowing or the sun isn't shining, energy storage can serve as a backup to ensure a consistent flow of energy from renewables.
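
The dispatch logic is straightforward to sketch (the capacities and units here are invented for illustration):

    def dispatch(production_kw, demand_kw, battery_kwh, capacity_kwh, hours=1.0):
        """Toy storage model: charge the battery on surplus, discharge on deficit."""
        surplus_kwh = (production_kw - demand_kw) * hours
        if surplus_kwh >= 0:
            # Excess renewable output: store it instead of wasting it.
            battery_kwh = min(capacity_kwh, battery_kwh + surplus_kwh)
        else:
            # Wind lull or cloudy sky: the battery backfills the gap.
            battery_kwh = max(0.0, battery_kwh + surplus_kwh)
        return battery_kwh

    level = dispatch(production_kw=120, demand_kw=80, battery_kwh=10, capacity_kwh=50)
    print(level)  # 50: a 40 kWh surplus fills the battery to capacity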

The challenge: Current battery technology can only output electricity for a few hours, whereas a peaker power plant can run as long as it is fueled. As in the automotive industry, utility companies now have access to transitional technology whose digital control system blends electric and gas power outputs into a continuous energy flow and minimizes battery degradation, which is key to making battery technology cost-effective for the long haul. So far, this has enabled gas peaker plants to cut greenhouse gas emissions and air pollution by 60%.

Smart city technologies are already changing the way we live, work and play in many parts of the world. We have cities filled with lights that fight crime, thermostats that adapt to human behavior and cars that talk to each other. This would have sounded like a utopian pipe dream to my dad growing up in the 1950s. Fast forward more than 60 years and a lot has changed. However, in order to fully realize the benefits of a smart city revolution, we need a strong foundation and it all starts with a modern electrical grid.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


November 1, 2017  2:16 PM

Why startup-corporate collaboration is a hot topic and why you should care

Tanya Suarez Profile: Tanya Suarez
Collaboration, companies, corporations, Internet of Things, iot, iot security, Security, security in IOT, Start up, start-ups, Startup, Startups

I don’t think anyone doubts that we are in an era of disruptive innovation or that truly disruptive innovations are unlikely to come from corporates. They tend to have “innovation antibodies” that are born from the very factors that have made them successful in the first place:

  • Internal processes that mean results are repeatable and reliable across the whole organization, but can lead to inflexibility and a culture of #notinthebook
  • Legacy systems that underpin operational success, a byproduct of having been around a long time; the cost of retrofitting the legacy systems of a retail bank has been estimated at $400 million
  • Regulatory compliance requirements which are born to a certain extent from mass adoption and therefore part of the price paid for success

So, the perennial question corporates ask themselves is: Do we buy or do we build? The right answer (in so far as there is one) is that you do both. You build in areas in which you have core competencies and you buy in frontier or adjacent markets that can significantly add value to your existing portfolio.

For a Cisco or an ARM, it may or may not make sense to work with IoT startups to extend their product offering, although it does make sense to work with them as potentially scalable customers.

For industry, however, the opposite may be true. Although many firms across a wide array of industries, from travel to betting, are at their core technology firms (for example, Goldman Sachs has 30,000 employees, 10,000 of whom are developers), their expertise is narrow and directed at a specific application. Partnering with startups can help revive old business models by providing market differentiation from competitors. For example, Aviva is basing its marketing campaign in the U.K. on an app that tracks driving habits and rewards good or risk-reducing behavior. Manufacturers can move from product-based models to recurring revenue models that exploit cyber-physical opportunities. This is the model being pursued, rather polemically, by John Deere: in any real sense, ownership of the tractor is being transferred from the farmer to the company.

But there is another pressing reason why corporates need to work with IoT startups: Security.

As Bruce Schneier has said, "We no longer have things with computers embedded in them. We have computers with things attached to them." Corporates need to understand the possible ways in which their products or processes could be hacked. The IoT Reaper botnet is infecting over 10,000 devices a day, with millions more IoT devices queued for infection, according to Qihoo 360.

One of the best ways corporates can begin to understand what can affect their reputation and their business is by working with IoT startups. It’s an ecosystem approach that will not only help startups better understand how to build in security and resilience, but also help corporates better understand the chain of vulnerabilities.

Context: The above comments are based on our experience of working with 132 IoT startups with founders from countries ranging from Ukraine to the U.K. Our equity-free program, part of Startup Europe, aimed to get them market-ready and to establish solid foundations for growth. Overall, 78% were B2B or B2B2C startups, and part of the acceleration program involved supporting them in the process of getting to their first pilots and/or contracts with corporates.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


November 1, 2017  1:28 PM

Privacy: IoT’s big break or big wall?

Don DeLoach Profile: Don DeLoach
Data Analytics, Data Management, Data privacy, Data protection, Data-security, GDPR, Internet of Things, iot, IoT applications, iot security, privacy, security in IOT

Security and privacy are two prevailing concerns most people have about technology. At any given lunch meeting or even standing around with drinks at a cocktail party, it is not uncommon to hear people talking about the Equifax breach or the looming concerns about what Facebook or Google are “doing with my data.”

Along with the excitement about what technology can do for us is a not so latent fear of being hacked and an equally present concern about compromised privacy. It is no surprise that the growing concerns about privacy are leading people to call for regulations to curtail this move towards unfettered use of private data.

In fact, in the European Union, the move toward regulation has been solidifying for some time, resulting in the General Data Protection Regulation (GDPR), which goes into effect in May 2018. These regulations are seen as the first and perhaps most comprehensive approach to ensuring personal information is protected. It is largely viewed as a clarion call for organizations to anonymize data or face astoundingly large fines. In fact, the fines can be 4% of a company's annual revenue. That means a company like GlaxoSmithKline, with annual revenues in 2016 of $37.09 billion, could potentially be shelling out $1.48 billion in fines. If that doesn't get your attention, what will? So, companies based in Europe, as well as those doing business in Europe, understand they cannot drive analytics using personal information for which they don't have express permission. Among other things, that means potentially mountains of data collected in the past may become illegal to use. And therein lies the dilemma.
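
For the skeptical reader, the arithmetic behind that figure:

    revenue_usd = 37.09e9               # GlaxoSmithKline's 2016 annual revenue, per the text
    max_fine_usd = 0.04 * revenue_usd   # GDPR ceiling: 4% of annual revenue
    print(f"${max_fine_usd / 1e9:.2f} billion")  # $1.48 billion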

But let’s set that aside for the moment.

In the world of the internet of things, more and more people and organizations are beginning to understand that the real value in IoT is a function of the underlying data and the insight that can be gained from the analysis of that data. By definition, this suggests that the richer and cleaner the data set, the greater the opportunity for insight and better resulting actions. So, instead of the data from an IoT-enabled device being used primarily by the device provider, enterprises and their CIOs and chief data officers want in on the action. They are beginning to demand ownership and control of the IoT data from all devices so they can look at data from device A in the context of devices B, C, D and so on.

Furthermore, companies are combining this information with data from their enterprise systems, such as ERP, point of sale, crew scheduling and others, and then augmenting it further with external data, like weather and demographic data, as well as more and more data coming from a range of public IoT devices and data shared from increasingly IoT-instrumented partnerships. This data is cleansed and enriched, then propagated to a variety of constituents in what is being referred to as a first receiver architecture, designed to get the right data to the right constituent in the right way at the right time. In essence, this separates the creation of the data from its consumption, maximizing the utility value of the data. Everybody wins, right?
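
A highly simplified sketch of the first receiver idea (the constituents, fields and policies below are hypothetical): data is enriched once, then each consumer receives only the view it is entitled to.

    # Toy "first receiver": data is created once, enriched once, then
    # propagated to each constituent in the form it is allowed to see.
    CONSTITUENT_FIELDS = {  # hypothetical access policies
        "operations": {"device", "temp_c", "site", "site_weather_c"},
        "analytics_partner": {"temp_c", "site_weather_c"},  # no identifying context
    }

    def first_receiver(raw_event, weather_c):
        enriched = dict(raw_event, site_weather_c=weather_c)  # add external data
        return {
            who: {k: v for k, v in enriched.items() if k in fields}
            for who, fields in CONSTITUENT_FIELDS.items()
        }

    event = {"device": "hvac-3", "temp_c": 21.5, "site": "store-12"}
    print(first_receiver(event, weather_c=9.0))  # one creation, two tailored views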

Now go back to the GDPR regulations.

What happens when 50% of that data is contextually delinked and minimized? This poses a serious limitation. Then, when you consider the combination of data sets as described above, the minimization is compounded and the usefulness of the data is reduced substantially.

But the idea that you have to either delink and minimize your data or circumvent compliance at the risk of extreme penalties is a false choice. As with almost any large market opportunity, challenges that surface drive innovation, in some cases through radical breakthroughs by a specific technology and in others via the combination of existing technologies applied in new and innovative ways.

There is a team in Poland, headed by University of Warsaw Professor Dominik Slezak, doing some incredible work using statistical metadata for high-value approximation over massive amounts of underlying data. The approach has great potential both for investigative analytics and for machine learning against those huge data sets, in a fraction of the time. Looking forward, when coupled with other technologies that protect privacy, it would appear to be a capability that can yield insights equivalent to otherwise "illegal data" that does not comply with the emerging regulations.

Another, perhaps even more specific, example is a new company called Anonos. (Full disclosure: I am an occasional advisor to Anonos.) Most of the privacy companies aimed at GDPR accomplish anonymization — and therefore compliance — by permanently delinking significant context. But the GDPR regulations have certain safe-haven exceptions deemed acceptable that can, in fact, allow for greater flexibility, but they require accommodations that many of the pure-play anonymization offerings lack. Anonos is an example of a market response to a big problem, and there certainly will be others. Either the GDPR regulations will fail to go into effect (which won't happen) or the market will respond with more innovation to fill the gaps. And other countries around the world are sure to follow.

Now back to IoT.

The wall is created when organizations don't take the time to understand the goal, in this case maximum use of data, and simply rely on a path of least resistance. But whether it is taking on the issue of data primacy or data privacy, the wall — and its implied limitations — can become an opportunity with a little innovation and thoughtful planning. And since so many organizations will indeed take a path of least resistance, the magnitude of the opportunity for those getting it right is all the greater.

Many believe IoT will ultimately dwarf all other technology waves in terms of importance and especially in terms of scale. Most also agree that the real value of IoT lies in the insight gained from the underlying data. While data scientists are becoming among the most important people in the equation, it really doesn't take a data scientist to figure out that an enterprise taking a thoughtful approach to data, be it for security, privacy or primacy, makes all the sense in the world. It is the opportunity.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 31, 2017  2:49 PM

How mobile operators can bring the smart city to life

Aman Brar Profile: Aman Brar
GATEWAY, Gateways, Internet of Things, iot, IoT applications, IOT Network, MNO, Mobile connectivity, Mobility, Smart cities, smart city

Can you relate to this? You find that one elusive parking spot — and as you are backing up, another car sneaks in. A study conducted last month in the U.K. found that more than half of British drivers suffer from stress when they cannot find a parking space. Drivers in major towns and cities everywhere can probably relate to parking stress. You would think that this modern-day scourge could be addressed with IoT.

In theory, it should be simple. Smart parking sensors would be able to flag vacant parking spots to motorists on an application, such as Google Maps, Waze or Apple Maps, as they enter a town or city. Parking would be a breeze! Not so. As things stand, motorists are still far from finding a remedy for their parking headaches. The smart parking use case is a prime example of a problem holding back IoT’s full potential.

The rise of NB-IoT

The reason for this conundrum is twofold: a lack of standards, and a gap in efficiently connecting parking sensors with multiple cloud vendors and central application servers. Currently, NB-IoT, especially its non-IP data delivery, provides the cellular functionality required to efficiently connect devices across large distances with prolonged battery life. A number of mobile carriers, including Vodafone, Three, China Mobile and Zain, have either deployed NB-IoT networks or are conducting trials. According to analysts, the NB-IoT chipset market could grow from $16 million in 2017 to $181 million by 2022, a compound annual growth rate of just over 60%. Yet connecting non-IP devices, such as smart parking sensors, over NB-IoT to platforms such as Azure, CloudPlatform or AWS, via central application servers, is complex.

Currently, IoT gateways form the bridge between smart sensors or devices and internet connectivity via NB-IoT on a cellular network. 3GPP and the Service Capability Exposure Function (SCEF) set the standards to connect the device/sensor to the IoT gateway. Things start to get murky when you want the device/sensor to connect to multiple application servers and cloud platforms over NB-IoT gateways, due to the absence of agreed standards and protocols. For example, a developer of a smart parking sensor would have to send its data to a central application server, transmit it via Amazon's cloud or Microsoft Azure, and then on to the likes of Waze, Google Maps and so on, which would pick up that data from multiple clouds. Because the data from sensors is not federated and easily available, the process is cumbersome and complex. This long-winded process is stifling innovation.

A bridge too far?

So, which players in the IoT ecosystem are best placed to find a solution for this? The organizations responsible for carrying the data on NB-IoT or, in other words, the mobile operators. Rather than just being the workhorse for transporting IoT data, mobile operators can play a central role by using gateways and building an open application ecosystem to foster interoperability between applications, devices and enterprise backend systems. Operators need to be the bridge into IoT systems such as AWS IoT and Azure IoT platforms. Importantly, operators also have the technology to secure the network and the IoT devices from attacks and malware, as well as provide network abstraction and enhanced connectivity.

To enable mobile operators to do this, gateways should have the capability to handle telco-grade distributed databases with the scale to manage millions, if not billions, of devices. Put simply, gateways should allow operators to consolidate the functionality required from standards and protocols such as SCEF/SCS, extend it to other APIs like REST-JSON and MQTT, and federate data from multiple sources.
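
To make that federation step concrete, here is a minimal sketch, assuming the open source paho-mqtt client library and a hypothetical operator-hosted broker (the topic scheme and broker address are illustrative, not a standard):

    import json
    import paho.mqtt.client as mqtt  # assumes the open source paho-mqtt package

    def publish_spot_status(client, sensor_id, occupied):
        """Federate one non-IP parking sensor reading into a JSON-over-MQTT
        topic that cloud platforms and mapping apps could subscribe to."""
        payload = json.dumps({"sensor": sensor_id, "occupied": occupied})
        client.publish(f"city/parking/{sensor_id}", payload, qos=1)

    client = mqtt.Client()
    client.connect("broker.operator.example", 1883)  # hypothetical operator gateway
    publish_spot_status(client, "spot-0042", occupied=False)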

If, as Gartner predicts, there will be over 20 billion connected IoT devices by 2020, mobile operators could secure revenues of around $48 billion by capitalizing on the IoT/M2M opportunity. To do that, they need to grasp the initiative and foster innovation. Parking might sound trivial, but it is a use case nonetheless, within smart cities, that highlights a growing problem for application developers with NB-IoT connectivity — and an opportunity for mobile operators.

Smart cities are touted as the future of urban living where everything from waste bin collections to streetlights and transportation will be connected and intelligent. Of course, smart cities and IoT are still in their relative infancy and this provides an opportunity to iron out problems and fine-tune networks. We need to address the issues that are stifling innovation — namely the gap in efficient connectivity and a lack of standards. Mobile operators have the power to add the “smart” into cities and bring that bright future to life.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 31, 2017  12:34 PM

Four stages of securing the super-connected world

Srinivasan CR Profile: Srinivasan CR
cybersecurity, Data-security, DDOS, Internet of Things, iot, IoT data, iot security, security in IOT

October is National Cybersecurity Awareness Month. The initiative by the Department of Homeland Security and the National Cyber Security Alliance is a huge collaborative effort spanning both public and private sectors, and a good demonstration of how the industry is coming together to safeguard the digital world.

While businesses in the U.S. and globally are still reeling from the WannaCry and NotPetya ransom attacks and the massive Equifax data breach, scrambling to update their systems to protect themselves, there is another kind of threat looming on the horizon.

The internet is today in the hands of around 3.5 billion people. And there are around 6.5 billion connected devices in use worldwide — a figure that is projected to hit 27.1 billion by 2021. What's more, as consumers, we're more connected than ever: the average internet user owns 3.64 connected devices, uses 26.7 apps and has an online presence across seven different platforms.

The ubiquitous global connectivity enabled by mobile applications and the internet of things opens up great possibilities for personal and organizational growth, from smart city advancements to transforming how industries produce goods. The industrial IoT has seen significant advancement in recent years. For example, by connecting assets in a factory, organizations can have better insight into the health of their machinery and predict any major problems with their hardware before it happens, allowing them to stay one step ahead of their systems and keep costly outages to a minimum.

Yet, IoT also exposes us to more security vulnerabilities that can cause financial loss, endanger personal and public safety, and cause varying degrees of damage to business and reputation. After all, anything that is connected to the internet is a potential attack surface for cybercriminals. For example, distributed denial-of-service (DDoS) attacks are getting better at exposing vulnerabilities in networks and infecting IP-enabled devices to rapidly form a botnet army of infected devices that grinds the network to a standstill. Simply put, the more devices there are connected to a network, the bigger the potential botnet army for DDoS attacks.

Furthermore, without adequate security, innocuous items that generally pose no threat can be transformed into something far more sinister. For example, traffic lights that tell cars and pedestrians to cross at the same time, or railway tracks that change to put a commuter train on a collision course.

As the number of connected devices continues to grow and both public and private sector organizations embrace IoT, IT decision-makers must pause and think about how they can work together to create an end-to-end infrastructure that can deal with the influx of new devices and the inevitably rapid spread of cyberattacks in our increasingly connected world.

First, security must be built within IoT systems and the rest of the IT estate from the ground up, instead of retrofitting piecemeal security products as new threats emerge. Second, organizations need to adopt an adaptive security model, continuously monitoring their ecosystem of IoT applications to spot threats before attacks happen. Adaptive security means shifting from an “incident response” mindset to a “continuous response” mindset. Typically, there are four stages in an adaptive security lifecycle: preventative, detective, retrospective and predictive.

  1. Preventative security is the first layer of defense. This includes things like firewalls, which are designed to block attackers and their attack before it affects the business. Most organizations have this in place already, but there is definitely a need for a mindset change. Rather than seeing preventative security as a way to block attackers completely from getting in, organizations should see it as a barrier that makes it more difficult for them to get through, giving the IT team more time to disable an attack in progress.
  2. Detective security detects the attacks that have already made it through the system. The goal of this layer is to reduce the amount of time that attackers spend within the system, limiting the subsequent damage. This layer is critical, as many organizations have accepted that attackers will, at some point, encounter a gap in their defenses.
  3. Retrospective security is an intelligent layer that turns past attacks into future protection, similar to how a vaccine protects us against diseases. By analyzing the vulnerabilities exposed in a previous breach and using forensic and root cause analysis, it recommends new preventative measures for any similar incidents in the future.
  4. Predictive security plugs into the external network of threats, periodically monitoring external hacker activity underground to proactively anticipate new attack types. This is fed back to the preventative layer, putting new protections in place against evolving threats as they’re discovered.
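
For the technically minded, the four-stage loop can be sketched in a few lines (a conceptual toy, not a security product; the event names and rule set are invented):

    from enum import Enum

    class Stage(Enum):
        PREVENTATIVE = 1   # block known attacks at the perimeter
        DETECTIVE = 2      # spot attackers already inside; limit dwell time
        RETROSPECTIVE = 3  # turn past breaches into new protections
        PREDICTIVE = 4     # anticipate new attack types from external intel

    def continuous_response(event, known_bad):
        """Toy 'continuous response' loop over the four stages above."""
        if event in known_bad:
            return Stage.PREVENTATIVE  # stopped at the first layer
        print(f"{Stage.DETECTIVE.name}: suspicious activity '{event}' found in-system")
        known_bad.add(event)           # RETROSPECTIVE: vaccinate the defenses
        # PREDICTIVE would extend known_bad from external threat intelligence feeds.
        return Stage.RETROSPECTIVE

    known_bad = {"known-bad-signature"}
    print(continuous_response("known-bad-signature", known_bad))  # PREVENTATIVE
    print(continuous_response("novel-exploit", known_bad))        # RETROSPECTIVE
    print(continuous_response("novel-exploit", known_bad))        # now PREVENTATIVE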

For organizations to protect themselves, they need to get this mix right; all four of the elements improve security individually, but together they form comprehensive, constant protection for organizations at every stage in the lifecycle of a security threat. With billions of consumer and business IoT applications exchanging billions of data points every second, IT decision-makers need to map the end-to-end journey of their data and the threats lurking behind every corner.

At the start of this year’s National Cybersecurity Awareness Month, Assistant Director Scott Smith of the FBI’s Cyber Division said, “The FBI and our partners are working hard to stop these threats at the source, but everyone has to play a role.” Organizations that work with their peers and security specialists to secure their IoT ecosystem and network will be rewarded in the long run. There’s no one-size-fits-all approach to securing IoT worldwide; it will take a considered, collaborative effort to safeguard the super-connected world today and tomorrow.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 31, 2017  10:48 AM

DevOps issues multiply in IoT environments

James Kirkland Profile: James Kirkland
Agile, Agile software development, Collaboration, DevOps, Internet of Things, iot, IoT applications, IT, IT staff, Operational technology, Software development, Waterfall

My high school freshman son has been pretty stressed out these past couple of weeks. Dealing with the pressures of multiple teachers giving assignments all due on the same day has been very frustrating. As I watched him tackling mounds of math homework, I considered similar struggles that happen within IoT development. It dawned on me that the processes thwarting IoT progress could be boiled down to a simple mathematical equation: IoT = DevOps².

IoT equals DevOps squared?

Let me explain. To achieve a successful IoT application implementation, you need an awareness of the complete physical environment in your IoT scenario. At the same time, you need to recognize that you are dealing with several groups scattered throughout your organization: OT development, OT operations, IT development and IT operations.

DevOps squared

Each of these four distinct groups, like teachers of different subjects, has specific goals and concerns. Coordination and planning among groups are required. Otherwise things will run inefficiently, or worse, at cross purposes. For things to run smoothly, you need a coordinated plan with assignments and processes, and a constant exchange of information.

Of course, it's important to remember why each group is its own distinct entity. Differences in their objectives must be considered. Just as math and history have very different aims in what is being taught, each group within an organization has different objectives and metrics. Development and operations may look similar, but developers are measured on creating new and better software as quickly and efficiently as possible, while IT operations staff are much more concerned with keeping the environment stable and operating. Whatever they decide to implement must be able to integrate safely into their current environment.

Like the goals of an educational institution, the goal of a business implementing an IoT project is to ensure everyone completes the process together and on time. If the OT development group finishes its assignment and throws it over the wall with no one there to catch it, or if the IT development group claims success for having finished "on time" while the other groups can't say the same, the IoT project fails. So, what can be done to avoid development pitfalls in the new atmosphere of IoT?

X can’t equal old waterfall techniques

First, let's acknowledge that older software development methods hinder the collaborative processes required to support IoT. I'm talking about methods whereby months are spent developing software, documenting it and then testing it, only to find out (too late) that the software is either too buggy or no longer meets the expectations of the customer. This timeframe and methodology won't fly in the IoT context, where the requirements of all four distinct and very important groups must be met. IoT is truly all about real time.

Strong integration and communication among all four groups is required to make IoT work. By merging these four groups into one team that works together, you can feed in requirements earlier and make sure things are supportable sooner in the process. The team can then more easily pick up on concepts and have those concepts reinforced through different tasks. It’s all about determining what is important to the greater goal and finding the pieces that are required in each area to support that. In the image below, you can see that there are indeed common requirements between development (Dev) and operations (Ops). Focusing on these requirements to create a DevOps platform smooths the way for greater efficiency and innovation.

DevOps features

Give an A+ to Agile

Agile software development, such as that enabled through DevOps and CI/CD (continuous integration and continuous delivery), improves collaboration and, thereby, innovation. When everyone is in charge of quality management and people work together rather than in silos, the risk of breaking functionality in other areas is reduced. When you are constantly releasing products with small, incremental changes and testing them as you go, you know immediately if something has broken and can pinpoint what needs to be fixed. By catching and addressing problems quickly, the whole team produces more stable, better-quality software.

Containers are often used in an agile development process. Being able to make modifications to "layers" of code without impacting others allows changes to be made quickly while keeping the code stable. Scrum is another important agile project management framework. Unlike traditional waterfall-style project management, scrum relies on real-time, face-to-face interactions to ensure clear communication among all involved.

Agile development environments scale quite well, which is essential in today’s quickly changing markets like IoT. These environments are far more efficient and allow updates to occur quickly and often without affecting underlying layers.

Using the right tools is important to success

You can take this whole idea of agile integration and development processes and expand it out into other aspects of IoT, including your architecture approach. My colleague, Ishu Verma, discussed this at length in his blog post, “Using Agile Integration for IoT.” As he points out, it really comes down to transforming from customized technologies to those based on standards and focusing on the ability to collaborate. This is why customers are using container platforms and other development tools to update their IT infrastructure and adopting an agile, DevOps approach to application development.

Combining agile integration with tools found in device application frameworks specialized to build and manage machine-to-machine or IoT applications, such as Eurotech’s Everyware Software Framework and Everyware Cloud, gives IoT developers on-demand, self-service capabilities while making it easier to work together.

Now, if only the freshman curriculum were this easy.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 30, 2017  2:20 PM

How managed services can get the most out of IoT

Christoph Ruef Profile: Christoph Ruef
Analytics, Data Analytics, Fleet Management, Internet of Things, iot, IoT analytics, iot security, MSP, Service providers

IoT-enabled products are everywhere — or at least will be soon. Gartner projects that there will be more than 20 billion connected devices by 2020. Businesses already recognize the potential of IoT to deliver value. According to a recent survey, 82% believe they will adopt some form of IoT within the next two years, in part because they believe that the information provided by IoT-embedded devices will promote other innovations.

But adopting IoT is no simple matter. And few companies presently have the skills or infrastructure needed to safely and productively deploy IoT technology.

Implementing IoT requires highly specialized skills, which nearly half of all companies believe they lack. This has created a wide and troublesome skills gap — a disconnect between the demand for a technology and the know-how to apply it. Additionally, most companies lack the necessary software, security and IT infrastructure capabilities to employ IoT. Instead of undergoing the costly and time-consuming process of acquiring these competencies, businesses should turn to specialist service operators — those with the right expertise, experience and equipment — to extract optimal value from cutting-edge IoT technologies.

Managed service providers (MSPs) are well-suited to fill this important role. Throughout the business world, MSPs handle complex IT deployments, like device as a service and managed print services, to add significant value for their customers. This value comes in the form of cost-savings, efficient deployments and support services, all of which are made possible by MSPs’ familiarity with the technologies they offer.

MSPs are heavily incentivized to acquire rare and in-demand skills, such as a competency in IoT, and have a unique economic advantage when it comes to complex and cutting-edge technologies. Their business models allow them to spread the cost of knowledge acquisition across a wide customer base, making their investment cost-effective. Ultimately, this translates to expenditure savings for the customer and a transition from heavy initial Capex to cost-efficient, use-based payments.

With the broad benefits of a service provider who can help guide IoT projects now established, there are three key areas in which MSPs can offer value specifically to the IoT space:

1. Advanced analytics

IoT technologies will generate a great deal of precise, real-time data on how their host devices are performing. This data can be used to improve device performance and lifespan. However, McKinsey found that only 1% of all IoT data collected is ever used. Clearly, companies aren't capitalizing on the available data. The issue is that it's too complex and voluminous to be easily manipulated; it requires an understanding of big data analysis.

MSPs will develop the ability to analyze and interpret this important data — which will be fed directly back to them from the devices they manage — using big data principles. Properly employed, this data can offer important insights into usage, performance and physical condition — all of which can be used to optimize devices, configure them for specific end users and forecast wear and tear. These insights will greatly improve user experience.

2. Service innovation

MSPs can also combine this new classification of device data with their existing databases of client information. Contrasting macroscopic corporate insight with microscopic usage analytics will yield a range of findings, which will empower MSPs to develop and deliver a range of innovative and tailored services — preventative repairs, product tailoring, product tuning, improved customer service — to make sure companies and their employees are getting the most from their products.

There are several situations and fields in which new, IoT-enabled services could provide a significant boost. In logistics, sophisticated asset tracking would enable companies to more closely monitor their supply chain, allowing for logistics and inventory optimization and stricter quality control. In manufacturing, construction and natural resources, field equipment intelligence can deliver reduced maintenance cost and downtime with preemptive repairs. In the office environment, IoT-driven customer insights could allow MSPs to plot the distribution of printers and personal systems to maximize productivity but minimize outlay.

Patterns in IoT data could reveal ways to improve business operations and efficiencies. MSPs could soon also be combining recommendations on device deployment with data-driven counsel on maximizing device utilization to meet business goals.

3. Fleet security

New capabilities mean new vulnerabilities. And more data means more sensitive information. Ninety-six percent of security professionals responding to a recent survey said they expect to see an overall increase in industrial IoT breaches this year. And attacks like the massive distributed denial-of-service attack on Dyn, which targeted insecure IoT devices such as webcams, video recorders, routers and baby monitors, will only increase in frequency and impact. In fact, security is number one in Gartner's top 10 IoT technologies for 2017 and 2018; the firm notes that "IoT security will be complicated by the fact that many 'things' use simple processors and operating systems that may not support sophisticated security approaches."

Diligent MSPs will gain the necessary know-how, or enlist a suitably qualified partner, to ready their customers for IoT-related threats. As part of their growing suite of evolved services, MSPs will provide security assessments, devices with built-in detection and recovery capabilities, automatic security updates across fleets and appropriate data destruction for retired devices. They will also continue their move beyond network security into endpoint security, designing and deploying robust but unobtrusive security protocols. Fundamentally, this means far better fleet security than a business without significant in-house expertise could achieve on its own.
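
To suggest what "automatic security updates across fleets" could look like in practice, here is a minimal Python sketch of a firmware audit, assuming a hypothetical device inventory. The device types, IDs and version numbers are invented for the example, not any vendor's actual management API.

# A minimal sketch of an automated fleet security audit: compare each
# managed device's firmware against the latest known-good version and
# queue outdated units for an update. All inventory data is hypothetical.
LATEST_FIRMWARE = {"camera": "2.4.1", "printer": "7.0.3", "router": "1.9.8"}

fleet = [
    {"id": "cam-017", "type": "camera", "firmware": "2.3.0"},
    {"id": "prn-204", "type": "printer", "firmware": "7.0.3"},
    {"id": "rtr-012", "type": "router", "firmware": "1.9.5"},
]

def parse(version):
    """Turn '2.4.1' into (2, 4, 1) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

# Devices running firmware older than the latest known-good release
update_queue = [
    device for device in fleet
    if parse(device["firmware"]) < parse(LATEST_FIRMWARE[device["type"]])
]

for device in update_queue:
    print(f"Schedule firmware update for {device['id']} "
          f"({device['firmware']} -> {LATEST_FIRMWARE[device['type']]})")

A real deployment would pull the inventory from a device management system and push updates over a secure channel, but the audit logic itself remains this simple.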

In many unforeseen ways, IoT is a game-changer. Maximizing and protecting every dollar invested demands a unique skillset and a deliberate, thoughtful deployment. Managed service providers can deliver the intelligence, skills and experience to realize the full potential of IoT-connected devices, while ensuring that company assets are safeguarded.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.


October 30, 2017  2:18 PM

Where is the first CTO turned CEO?

Avi Chesla Profile: Avi Chesla
CEO, CISO, CTO, cybersecurity, Equifax, Internet of Things, iot, iot security, Security, security in IOT

Where do CEOs come from? What roads led them to their exalted and all-powerful roles? In a world of sprinting technology, internet-enabled systems across every enterprise and increasing cybersecurity threats, this is an unexpected but existentially relevant question to ask.

According to a recent Forbes article, 75% of Fortune 100 CEOs come from operational backgrounds, and 32% were also CFOs at one point. These ladders to the top are long-established, but I’d like to pose this provocative question: Are we in a new era where CTOs should have a path to top-dog status?

The recent news stories about threats to the enterprise frame the argument in compelling and dramatic terms.

Would a CTO as CEO have saved Equifax?

There are no guarantees. But there's no doubt that someone with a deep and current cyberbackground would not have allowed the porosity, sloppiness and defenselessness that led to the Equifax breach and its far-reaching implications. Given the impact of the hack and its cascade of consequences affecting more than 100 million Americans, the inadequacy and emptiness of former CEO Richard Smith's response make clear that traditionally trained CEOs are not up to the task of running a data-first organization.

And let me underscore “former.” Smith’s inadequate testimony, and his clear inability to run a company whose very existence is based on data security, led to his demise.

I believe that a talented, sophisticated and savvy CEO with a CTO background would have constructed and managed a more aware and resilient security apparatus than Equifax had, and would have known how to ask the probing questions and organize people and processes more strategically. Indeed, CEOs without a deep understanding of today's cybersecurity challenges, complexities and demands are at a serious disadvantage.

In fact, the challenges are so great that even a technology-led company like Facebook can still fall victim to technology threats. While the abuse of the platform by Russia was not a hack, it exploited holes in the Facebook system to mask the actual sponsor of the ads. Imagine what might have happened if Facebook were led by someone who had run a supply chain!

As the world becomes even more global and interconnected, future CEOs will be confronted with an armada of unforeseen issues and challenges. Consider the range of businesses whose strategic partnerships, consumer relationships, reputations and overall trust, as well as regulatory compliance, are contingent on cybersecurity.

Or more accurately stated: Which ones are not? Putting aside database businesses like Equifax, CEOs of companies in industries ranging from transportation to healthcare, e-commerce, software and infrastructure all require an in-depth security background, one that goes well beyond the check-the-box experience that is part of the typical CEO track. In all these cases, the CEO is responsible for data that, in hacked hands, can cause tremendous harm to the lives of millions.

IoT adds additional layers of complexity

Previously, I reviewed the importance of cybersecurity in medical devices, where an ineffective threat management platform can lead to murder. That's just the start, though. Autonomous vehicles, for example, pose grave threats to personal safety and reputational security that can't be pushed down to IT departments; inadequate protections and procedures can be business-ending.

We need to have faith in the ability of our CEOs to guard the data they have been given responsibility for, meaning they need the chops to get in-depth briefings from their security IT personnel and, in turn, to push back hard with pointed questions.

This is a complex mission for any CEO, who by definition needs to deal with so many different business operations simultaneously. We expect them to ask the right questions and to understand and brief their board of directors, but in cybersecurity, perhaps more than in other technology-related areas, the devil is in the details (and the malware). Without a thorough understanding of the reality of today's advanced targeted attacks and threat actors, a CEO cannot effectively frame the right questions or assess whether the answers they get from their IT employees are good enough. These details are broad, relating not just to the technologies their business relies on, but to the entire architecture their defenses rely on as well.

So, in this era of unprecedented cyberthreats, including the new risks attendant to IoT exposure, we need to create a new generation of CEOs who come from CTO and CISO backgrounds. This next generation can be accountable to shareholders, boards and the business community because their apposite backgrounds will liberate them from sole reliance on their IT departments.

From CTO to CEO

Of course, the era of the CTO-to-CEO track will not arrive overnight. The reason CEOs come from operational and finance backgrounds is that decades of management training stand behind this process and trajectory. They are groomed for advancement early on and are intentionally rotated through different functional areas to give them the breadth of experience that CEOs require.

This is not happening with CISOs and CTOs. Their careers typically start and stop in the same department. There are no systems or processes in place to identify CEO prospects from the ranks of technology and IT, so until those mechanisms are put in place — requiring board-level recognition of the challenges I described earlier — it is unlikely that they will ascend to CEO status.

Until then, we will have to make do with the crop of CEOs and CEO successors we have. And they have big gaps in cyber-awareness to make up. Now, don't get me wrong: I am not saying a CEO needs to understand all the details or be involved in the day-to-day evaluation of the company's defense architecture. But they should have the ability to make wise decisions given today's threat landscape and new technologies, which accelerate productivity but also extend the threat surface of the company by creating weaknesses and vulnerabilities.

CEOs must rely on informed intuition, but because virtually all of them have had minimal exposure to cybersecurity (some of it probably decades old) and as a result take a check-the-box approach to this existential threat, their intuition will be handicapped.

So, while we wait for the next generation of CTO/CEOs, which, in my view, cannot arrive soon enough, we need current CEOs to push themselves to be educated about the complexities of today's threats. This must be an ongoing process; there is no "Cybersecurity for Dummies" silver bullet for CEOs. But as the Equifax disaster has shown us, a CEO who doesn't fully grasp cybersecurity might soon be changing their LinkedIn profile to "Former."

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.

