IoT Agenda


September 30, 2016  11:42 AM

Who needs the internet of things and why

Tom Bradicich Profile: Tom Bradicich
Internet of Things, iot, Sensor data, Sensors

Data has value. Combined data has more value. Unlocking the data pent up in the “things” of the internet of things will bring together more data from more sources than has ever been achieved in human history. If your company doesn’t have an IoT strategy, consider getting one. For a good brief on IoT and its value, take a look at my other blogs.

A key moment in time

Boards of directors and CEOs who questioned executives about PCs in the 1980s, the internet and e-commerce in the 1990s, mobility and cell phones in the 2000s, and data centers and cloud computing in the 2010s are today asking about the internet of things. Ignoring their interest isn’t likely to end well, considering how complex IoT can be — and how essential it will be for competing in the digital economy.

IoT: Everything you want it to be

Definitions of IoT vary for the same reason that everyone’s experience with IoT varies: the types of devices to which they connect and the data they extract as a result. Pulling temperature data from a thermostat and activity logs from a furnace isn’t the same as tracking the flow of oil on an offshore drilling platform.

And those are just two examples. Thousands more are out there, adding to the rich diversity of IoT. This year alone, Gartner says that we can expect 6.4 billion things to be actively connected — a 30% increase over 2015’s total. Moving fast to glean value from the resulting streams of data should be a top priority for most global businesses. The big questions are how to do it and where to leverage it.

The first of these two questions can be tough and requires some detailed explanation concerning device connectivity, the type and variety of data, the networks, and the process of digitizing what is primarily analog information for the physical world.

Reasons your business should leverage IoT

Experienced enterprises are building platforms to handle the complexity of it all. Not surprisingly, these platforms serve a diverse set of use cases and deliver benefits such as predictive maintenance.

Consider offshore oil platforms receiving regular upgrades and repair whether they need it or not. Why? Safety requirements and cost avoidance demand that companies take necessary precautions with industrial equipment. So, in the absence of data to the contrary, energy companies order maintenance to keep crews and the environment safe. Now, if that same equipment were under constant sensor surveillance with data collected and analyzed regularly, more efficient and precise repair suddenly becomes possible. This can avoid both types of errors — withholding maintenance when it’s really needed, and performing maintenance when it’s not needed.
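To make that concrete, here is a minimal sketch of condition-based maintenance logic, assuming vibration readings streamed from a sensor on the equipment; the threshold, windows and values are illustrative, not any specific vendor’s analytics.

    from statistics import mean, stdev

    def maintenance_needed(recent, baseline, z_threshold=3.0):
        # Flag equipment for inspection when recent readings drift
        # beyond z_threshold standard deviations of a healthy baseline.
        mu, sigma = mean(baseline), stdev(baseline)
        return abs(mean(recent) - mu) > z_threshold * sigma

    # Illustrative values only: healthy bearing vibration near 2.0 mm/s.
    baseline = [2.0, 2.1, 1.9, 2.0, 2.2, 1.8, 2.1, 2.0]
    recent = [2.9, 3.1, 3.0, 2.8]

    if maintenance_needed(recent, baseline):
        print("Schedule inspection")   # repair because the data warrants it
    else:
        print("No action needed")      # skip the unnecessary teardown

Deciding from the data in this way is how both error types are avoided: drifting readings trigger maintenance, and silence from healthy equipment avoids needless work.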

Farming is a tough business. Generally, only every once in a while do we rely on a medical doctor, a lawyer or a repair person. But every day we rely on farmers. Successful farming is still largely based on gut feel and experience. What if it didn’t have to be that way? Consider what I like to call “Agriculture 3.0”: what if farm tractors and equipment, imbued with instrumentation and computer systems that place fertilizer with the precision of a lab pharmacologist, could also read data from moisture sensors in the fields, adjusting doses as they travel? Mother Nature would still have her say during the planting season, of course, but the odds of a more optimal crop yield at the time of reaping would increase dramatically.

There are many other internet of things benefits, and many more will emerge as time goes on. These benefits will surface as more business leaders answer the question raised by the “perpetual connectivity” of IoT: What could your business do if your products, locations, assets, people and customers (things) that embody value were perpetually connected to each other and to a central system? What would you do first?

That we can even ask those questions today speaks volumes for how far we’ve come since the 1980s. From the PC revolution to the cloud to smartphones, data centers and finally IoT, there’s a never-ending sea of valuable information that is becoming available to smart business leaders.


September 29, 2016  12:29 PM

Securing industrial IoT: Spotlight on DMZ and segmentation

Sravani Bhattacharjee Profile: Sravani Bhattacharjee
DMZ, IIoT, Internet of Things, iot, iot security

Industrial IoT is ushering in the era of IT and OT convergence. However, making traditional OT assets smarter with IT technologies also means a larger cyberthreat surface and, hence, more exposure to cyberattacks — which are growing smarter as well.

Unlike in IT, a breach in OT security means not just data and identity loss, but possible breakdown of critical infrastructure and even human safety.

Industrial equipment like turbine engines or industrial control systems was never built to evolve into cybersecurity-savvy IT products. But now these very OT assets are being connected to open data networks. The lack of an evolved cybersecurity stack in these “live” assets calls for immediate security countermeasures.

That’s when the spotlight turns to segmentation and the DMZ (also known as a perimeter network) as a plausible rescue.

Network segmentation started in enterprise IT networks using shared media/Ethernet as a means to improve performance and bandwidth. Over time, however, it proved itself as a proactive security tool. Segmentation of a network into separate zones or subnetworks (such as HR, operations and engineering) reduces traffic collision and improves throughput. However, a side and more compelling benefit is that it allows the containment of network traffic within a specific zone. That in itself is a very effective way to prevent malicious traffic from spreading across the enterprise network.

According to ISACA, “a common technique to implement network security is to segment an organization’s network into separate zones that can be separately controlled, monitored and protected.”
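As a toy illustration of that containment principle, the sketch below models a zone-to-zone traffic policy; the zone names, subnets and allowed pairs are invented for the example and are nothing like a production firewall configuration.

    import ipaddress

    # Invented mapping of subnets to zones and an explicit allow-list of
    # cross-zone flows; anything not listed stays contained in its zone.
    ZONE_OF = {
        "10.1.0.0/16": "hr",
        "10.2.0.0/16": "operations",
        "10.3.0.0/16": "engineering",
    }
    ALLOWED = {
        ("hr", "hr"), ("operations", "operations"),
        ("engineering", "engineering"),
        ("engineering", "operations"),   # engineering may reach operations
    }

    def zone_for(ip):
        addr = ipaddress.ip_address(ip)
        for net, zone in ZONE_OF.items():
            if addr in ipaddress.ip_network(net):
                return zone
        return "unknown"

    def permitted(src_ip, dst_ip):
        return (zone_for(src_ip), zone_for(dst_ip)) in ALLOWED

    print(permitted("10.3.4.5", "10.2.0.9"))  # True: allowed cross-zone flow
    print(permitted("10.1.2.3", "10.3.0.1"))  # False: hr traffic is contained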

Until now, DMZs were used selectively in industry, defining a few broad perimeter networks. However, with the increased sophistication of cyberattacks, and as traditional factory assets become intelligent and connected, just one line of defense isn’t enough.

In the case of an intrusion, a countermeasure is critical to prevent lateral movement of the malware. As a result, zoning inside corporate IT networks, and even between IT and OT networks, is important.

Now the question is: Are conventional segmentation principles good enough to secure industrial OT assets?

OT and IT segmentation: The dynamics differ

Zoning industrial networks to create multiple perimeters is a promising way to secure OT assets. However, to implement it we can’t simply replicate IT segmentation techniques such as VLANs, routing and firewalls as is. VLANs and routing can become complex very quickly. Implementing IP address and subnet configurations often runs into complexity, requires reconfiguration and demands specific technical skills. This may not be easy to get around in OT environments.

Most IoT devices either have too few computing resources to integrate robust security stacks or were never designed to integrate the security stacks prevalent in IT (think of turbine engines or industrial belts).

Easy virtual zoning

Large industrial systems are too big to move around. In many cases the devices sit in remote locations and are not easily accessible for physical maneuvering. Besides, OT lacks the well-defined software upgrade cycles that IT uses to integrate patches. And most importantly, these systems must run uninterrupted: industrial equipment downtime translates to disruptions which are mostly unacceptable.

That’s why when planning segmentation in OT environments, we must think differently.

A zoning solution needs to be easy and done in a centralized manner without having to move around bulky industrial gear or reengineer existing systems.

Instead of the physical segmentation used in IT, OT environments call for virtual or logical zones without any physical dependency, along with a simpler user interface through which these zones can be centrally configured and controlled.

Deep packet inspection

A DMZ defines the periphery of a zone and uses firewalls, either standalone or cascaded, for deep packet inspection and traffic filtering. Deep packet inspection and intrusion detection provide the information needed to process incoming traffic.

However, IT firewalls are designed to read IP protocols and as such cannot serve many industrial protocols such as Modbus, MTConnect and OPC. To secure OT networks, we need firewalls that, in addition to IP, can inspect these industrial packets and extract contextual information to monitor, protect and control OT zones.
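As a rough illustration, the sketch below parses the fixed seven-byte MBAP header of a Modbus/TCP frame plus the function code, and drops write requests arriving from outside a trusted zone; the zone policy is invented for the example, and a real industrial firewall would extract far more context than this.

    import struct

    # Modbus function codes that modify state; read-only codes pass.
    WRITE_CODES = {0x05, 0x06, 0x0F, 0x10}   # write coil(s)/register(s)

    def inspect_modbus_tcp(payload, src_zone):
        # Return True if the frame should be forwarded.
        if len(payload) < 8:
            return False                      # too short to be a valid frame
        _tid, proto, _length, _unit = struct.unpack(">HHHB", payload[:7])
        if proto != 0:
            return False                      # MBAP protocol id must be 0
        function_code = payload[7]
        if function_code in WRITE_CODES and src_zone != "control":
            return False                      # block writes from untrusted zones
        return True

    # A "write single register" request (0x06) is dropped when it comes
    # from the IT zone but forwarded from the control zone.
    frame = struct.pack(">HHHBBHH", 1, 0, 6, 1, 0x06, 0x0010, 0x00FF)
    print(inspect_modbus_tcp(frame, "it"))       # False
    print(inspect_modbus_tcp(frame, "control"))  # True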

Bottom line

As security technologies in industrial IoT steadily evolve, segmentation can provide considerable relief to security concerns in OT environments — so long as we develop and use tools and methodologies specific to OT. Simply replicating legacy IT practice will only go so far.



September 29, 2016  9:53 AM

Cyber-physical systems and IoT

Eduardo Krumholz Profile: Eduardo Krumholz
Internet of Things, iot, iot security

There is a fourth industrial revolution coming. Following the lean revolution of the 1970s, the outsourcing phenomenon of the 1990s and the automation that took off in the 2000s, cyber-physical systems have become the next frontier.

When it comes to IoT, the convergence of smart connected devices (26 billion by 2020 according to Gartner, 212 billion by 2020 according to IDC) and their role in cyber-physical systems surfaces several important security concerns. These include scalability (the capability of a system, network or process to handle a growing amount of work), pervasiveness (also called ubiquitous computing) and persistence (referring to the characteristic of a computing state that outlives the process that created it). These security issues may also lead to a dramatic increase in threats against a much larger attack surface, which many enterprises may not be ready to undertake.

The burning platform

Historically, the edge of a network — for example, internet-connected “things” — was not connected to an IP-based network and was out of the jurisdiction of the CIO. Advances in communication protocols, the miniaturization of electronic devices and the advent of IPv6 have enabled the expansion of the network to these new elements of the enterprise. With this increase also comes an increase in revenue opportunities, but at what cost to enterprise security?

A cyber-physical system can be defined as a large physical system that can be part of a system of systems (SoS), in which a distributed set of computing elements interacts to control, monitor and manage the exchange of information machine-to-machine, machine-to-human or with other cyber-physical systems. Characteristics of these cyber-physical systems can include physical distribution of systems; distributed control, supervision and management; subsystem autonomy; dynamic behaviors and reconfigurations; and continuous evolution of the cyber-physical system itself. Within this cyber-physical SoS there may also exist a number of partially physically or programmatically coupled elements, while other elements may be able to provide services independently.

The lack of built-in security in cyber-physical systems can result in unauthorized access to services and data, exposure of key enterprise elements, compromise of private data, denial of services, backdoors and malware, as well as loss or damage to critical infrastructure. As an example, the global market for network intrusion detection and prevention equipment and services is estimated at $95 billion and expected to reach $155.7 billion by 2019, while at the same time the role of threat actors such as cybercriminals, nation states, hacktivists, cyberterrorists and insiders continues to accelerate at an unprecedented rate in terms of cybercrime specialization, monetization schemes and tactics, and exploits.

Cyber-physical systems increase the attack surface and, as a result, an important aspect of this new cybersecurity frontier is the need to go beyond confidentiality, integrity and availability in protecting cyber assets, extending protection to physical systems to provide stability, controllability and observability (MITRE, AFCEA Conference 2016). Stability refers to the ability of a cyber-physical system to provide services within specified criteria. Controllability refers to challenges related to time- and event-driven computing, software, variable time delays, failures, reconfiguration and distributed decision support systems. Observability is used to achieve resilience within a network.

What should I do next?

Organizations should start by evaluating their cyber-physical systems in terms of six control areas: confidentiality, integrity and availability as well as stability, controllability and observability. The impact of this new paradigm will require cyber-risk practitioners to work with the physical systems engineers they support and develop new techniques to monitor and control infrastructure and enable its performance.



September 28, 2016  12:42 PM

Why you should apply rapid experimentation to your IoT projects

Johan den Haan Profile: Johan den Haan
Applications development, Internet of Things, iot, Sensor, Sensor data, Use case

Something big is happening, and it is changing everything — how we interact with each other, with our products and with the companies around us. That something is the data explosion driven by the internet of things. The hype is real: everything around us is being equipped with sensor technology that sends data to the cloud. By 2020, the world’s data is projected to double every two months.

The internet of things has become the new and shiny toy, so companies are naturally eager to make IoT part of their business in order to create new experiences for their employees, customers and partners. But it is important to emphasize the word “new” here. There is no shortage of impressive, potentially groundbreaking ideas out there, but the technologies are still very new and therefore the solutions are not well defined, requirements are loose and changing, and ideas are hard to prove.

With so few established use cases out there, it is hard for users and customers to even know what they want. This is why it is important to constantly experiment and get something into the market quickly without spending a lot of time and money. With this nascent market come uncertainty and the need to adapt quickly to changing conditions and transform your business in different ways.

For a business, adopting IoT will mean creating entirely new services and business models. Let’s say you have an idea for a digital project that would create a new revenue stream, or a better way to engage with your customers that would differentiate your company. Because these ideas represent new and uncharted territory, turning them into value-driving applications requires frequent iteration and close collaboration between developers and the business.

Enable rapid experimentation

In order to adopt a development process that allows for this rapid, low-cost experimentation, companies need to approach IoT projects with a willingness to fail often in order to figure out how to succeed sooner. Thomas Edison didn’t invent the lightbulb in a brilliant eureka moment that all of a sudden got him to the right answer. Instead, he was an experimenter. As he put it, “I have not failed, I’ve just found 10,000 ways that won’t work.”

And Google, one of the most successful and innovative companies in the world, has a failure rate of 95%, on the theory that you only need to succeed once in order to win. According to former Google CEO Eric Schmidt, “This is a company where it’s absolutely okay to try something that’s very hard, have it not be successful, and take the learning from that.”

With that idea in mind, companies need the right set of tools and processes to foster this low-cost, high-value experimentation.

Some best practices for fostering rapid experimentation in your organization include:

  • Allocate time and resources for this type of experimentation to prove to your organization that this new approach can work, and then scale it widely as a new mindset.
  • Create cross-functional teams that include the business and IT. Bring together a person with an idea and someone with the technical aptitude to bring it to life.
  • Use visual, model-driven development in order to create a common language between business and IT to allow for faster experimentation and greater collaboration.
  • Create a feedback loop. It is important to have a mechanism to continuously capture feedback from users that you can take back into the process for continuous innovation.
  • Test a minimum viable product (MVP) early in the process to ensure the ability to change direction with minimal risk based on what you learn.

From idea to reality in just one week

A media and entertainment firm has adopted these practices to rapidly bring its ideas to market with an IoT app. The firm faced the challenge that many of its clients are forced to turn away attendees at the gates of their free events due to overcapacity, without being able to take into consideration how many people have left the venue. The firm had the idea to create a solution that would leverage sensors in turnstiles to visualize attendee traffic in real time.
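Here is a minimal sketch of the core counting logic, assuming each turnstile reports simple in/out events; the event format and capacity are invented for illustration.

    class VenueOccupancy:
        def __init__(self, capacity):
            self.capacity = capacity
            self.inside = 0

        def on_event(self, direction):
            # Each turnstile event nudges the live occupancy count.
            delta = 1 if direction == "in" else -1
            self.inside = max(0, self.inside + delta)

        def can_admit(self, party_size=1):
            # Admission accounts for people who have left,
            # not just the number who entered.
            return self.inside + party_size <= self.capacity

    venue = VenueOccupancy(capacity=500)
    for direction in ["in"] * 499 + ["out"] * 20:
        venue.on_event(direction)
    print(venue.inside, venue.can_admit(10))   # 479 True: no one turned away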

With constant ideation and experimentation, the firm was able to quickly make this idea a reality and present the results within a week of starting. Thanks to its newfound collaboration between business and IT and its willingness to experiment and try again, the company was able to validate its idea quickly and continue to iterate and test frequently, evolving how it uses the incoming data — which has resulted in new business cases.

For example, the firm found that the data from these sensors could also help clients understand how many people enter the venue, optimizing how much food and beverage to provide the following night, and could support security by scanning credentials to verify that all event staff are legitimate.

The entertainment firm can now help clients optimize, secure and enhance the experience of its events.

Remember: It is important to get these ideas out there quickly in order to validate or invalidate them. Don’t get stuck in the “it must be perfect” mindset. Instead, build and deploy an MVP and continue to iterate with feedback from clients, customers, employees and partners to create the right new experiences.



September 28, 2016  12:30 PM

Massive IoT: Security is vital

Herve Pierre Profile: Herve Pierre
5G, 5G technology, Authentication, Denial of Service, Internet of Things, iot, iot security

While it is early days for 5G, one thing is clear: security and privacy will remain fundamental requirements, with the changes foreseen for 5G likely to broaden the range of attractive attack targets. We believe that massive IoT, one of the main application segments for 5G identified by standardization body 3GPP, is no exception.

The massive IoT segment will be extremely broad, covering not just M2M, but consumer-based services too. It is likely to consist of an ecosystem of potentially low-cost devices such as sensors and trackers as well as high-end home appliances, toys, some wearables, meters and alarms.

Device connectivity requirements will vary by use case. Some will require permanent connections, while others will only connect occasionally. Some devices may connect directly to a 3GPP network, whereas others will connect via a relay or gateway, using either a 3GPP or non-3GPP network. Some devices will be static, whereas others will move and will need the ability to manage network handover securely.

Data is likely to encompass geolocation data, sensor data such as meter readings, and private consumer data. Location and privacy protection for data must be enforced. For example, data from meters must not allow thieves to know if premises are occupied or not. In addition, much value in IoT comes from the integrity of the data so integrity protection is also vitally important.

Threats may include data manipulation, use of low-cost endpoints for entry into the network, rogue devices, equipment cloning and denial of service. Another major threat comes from suppliers themselves failing to invest in security for low-cost devices.

Means of mitigating these threats include secure provisioning, secure remote administration and configuration, authentication and data integrity measures. It’s also vitally important to understand that security should be proportionate to the value of the data being stored or in transit and the risk of compromise rather than to device cost.

Managing initial network connectivity securely will require secure provisioning of unique device and user identities for both network- and service-level access, network and service authentication credentials, and communication cryptographic keys as well as application identifiers.

Managing identities on the network will require identification of the application and corresponding application provider. It will also need secure storage of the unique identity on the device.

Mutual authentication of the device and network will also be necessary (it has been mandatory since 3G) as may mutual authentication for applications back to their service platforms.
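For flavor, here is a hedged sketch of symmetric challenge-response mutual authentication with a pre-shared key; it is in the spirit of SIM-based authentication but is not the actual 3GPP AKA protocol, whose key hierarchy and message flows are more involved.

    import hashlib, hmac, os

    def respond(key, challenge):
        # Prove knowledge of the shared key without revealing it.
        return hmac.new(key, challenge, hashlib.sha256).digest()

    shared_key = os.urandom(32)        # provisioned securely at manufacture

    # The network authenticates the device...
    net_challenge = os.urandom(16)
    device_answer = respond(shared_key, net_challenge)
    assert hmac.compare_digest(device_answer, respond(shared_key, net_challenge))

    # ...and the device authenticates the network with its own challenge.
    dev_challenge = os.urandom(16)
    network_answer = respond(shared_key, dev_challenge)
    assert hmac.compare_digest(network_answer, respond(shared_key, dev_challenge))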

In combination with the other major classes of use case — critical communications, enhanced mobile broadband and V2X — massive IoT poses a range of security challenges and requirements. SIMalliance believes it is crucial that security is built into 5G from the outset. It has recently published An Analysis of the Security Needs of the 5G Market, outlining its view of the security needs of each 5G segment.

SIMalliance is soon to launch a follow-up technical security requirements paper in late Q3 2016. Further articles from SIMalliance on TechTarget will draw on findings from that paper.



September 27, 2016  3:42 PM

Why enterprise IoT — not consumer — is the next big thing

Arup Barat Profile: Arup Barat
Internet of Things, iot

We hear so much about the potential of the internet of things every day in the news. For the most part, the examples and discussion are focused on consumer trends and products, but in the real world, enterprise and industrial IoT is where the cool stuff will happen … here’s why.

Our fascination with consumer products is pretty straightforward — it’s easy to identify with things we use every day at home and in our lives. This has been useful to get people educated and excited about IoT, but we need to move the discussion beyond the basic example of a coffee maker that will brew you a fresh cup of Joe when a sensor tells it you’re awake. Yes, that’s possible and there’s value in automating things that are a little tedious, but how much would you really be willing to pay to avoid pressing a button and waiting five minutes? We need to think bigger to understand where the real value of IoT will come into play.

Global IoT device forecasts

There’s lots of market research out there and it all points to IoT being big … really big. Consensus says there will be around 20 billion devices by 2020, up from around 10 billion right now. Gartner’s recent forecast says we will add about 5.5 million connected things each day!

But how will all those connected things be split between consumer and enterprise IoT? Some forecasts, like that from BI Intelligence, expect enterprise IoT to far outpace consumer while others see consumer as the force driving the number of connected devices.

Looking historically at the adoption of technology shows how rapidly things are changing. While it took 50 years for most of the population to adopt electricity, modern technologies like the smartphone reach full adoption in a five- to 10-year span. Older technologies mostly began as something available only to industry and later became accessible to consumers. With today’s near-vertical adoption curves, however, that distinction no longer holds, and new technologies are almost immediately available to both groups.

Consumers and the firms serving them, at least right now, see the primary benefit of IoT as making life easier (although sometimes the products don’t even do that). While many products and services do incrementally increase efficiency or save some money, the primary benefit is that life is a little easier and the tech is really cool! The question, however, is what we are willing to pay for this benefit. It clearly isn’t a must-have, although over time we may eventually see it that way.

Adoption of technology in the U.S. (1900 to present)

Enterprises, on the other hand, see the primary benefit of IoT adoption as a positive ROI (or NPV if you prefer). This can take the form of efficiency gains, cost reductions or even opening up new business models that were previously unattainable. When implementing IoT to automate a process could save even 1% — which, at enterprise scale, can equate to billions of dollars — it’s much easier for an enterprise to take a calculated risk and pay to implement IoT now. Certain industries are already investing.

Investments in IoT by industry

In industries like energy and agriculture, we have seen significant interest and action already. In solar energy, monitoring assets like solar panels, inverters and more to maximize uptime and performance is critical. If one panel is performing poorly, it can affect many others, and poor performance on a sunny day can be costly. Solar developers are already implementing IoT solutions for asset management that can overcome many of the challenges faced in the past like low power requirements, intermittent connectivity, real-time data needs and more. The need is real and therefore these enterprises are willing to invest in both the development of these technologies and their implementation.

Agriculture is also looking to make positive-ROI investments by implementing IoT. Companies have long looked to optimize how they use resources like water by monitoring crop yields, soil moisture and more. IoT can take this concept to the next level by enabling a flexible (and, more importantly, cost-effective) platform that can handle many data streams from sensors in the soil, silos and tractors, weather feeds, pricing signals, supply chain details and more. As more information is brought together, the system becomes more valuable and a farm can make better decisions. Each farm will be able to pick and choose which connected services make sense and layer different offerings from different vendors for key advantages including the following (a simple data-fusion sketch follows the list):

  • Operate more efficiently: Reduce fuel usage by optimizing speed, route and dispatch timing of harvesters based on real-time orders and needs
  • Open new revenue streams: Provide detailed fertilizer, water and environmental information for each piece of food to conscientious consumers
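A toy sketch of that data fusion, blending soil-moisture readings with a rain forecast into a single irrigation decision; the target, coefficient and field names are invented for illustration.

    def irrigation_mm(soil_moisture_pct, rain_forecast_mm, target_pct=35.0):
        # Millimeters of water to apply to a field today.
        if soil_moisture_pct >= target_pct:
            return 0.0                                     # already wet enough
        deficit = (target_pct - soil_moisture_pct) * 0.8   # rough mm per % point
        return max(0.0, deficit - rain_forecast_mm)        # let the rain help

    fields = {"north-40": (22.0, 5.0), "river-20": (38.0, 0.0)}
    for name, (moisture, rain) in fields.items():
        print(name, irrigation_mm(moisture, rain))   # north-40: 5.4, river-20: 0.0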

The success of enterprise IoT will be driven by the technical innovations and services that enable it. Tools like a scalable, secure IoT platform that operates in real time, low-power/low-cost hardware to power devices and modern networks to transmit data are the building blocks that will allow enterprise IoT — and consumer IoT — to change how we operate and live.

In the big picture, the technology is ready and the need is there to drive IoT forward. While consumer services will certainly improve quality of life, enterprise IoT will take on some of the biggest challenges facing the world today:

  • How are we going to feed 10 billion people?
  • How do we improve renewable energy’s uptime while reducing its opex?
  • How are we going to manage the future’s most valuable resource — water?

Ultimately, both enterprise and consumer IoT will impact our world in ways we can’t even imagine now, but it is enterprise IoT that will be the next big thing!



September 27, 2016  12:23 PM

IoT connectivity: Five steps to IoT innovation, from prototype to production

Keith O'Byrne Profile: Keith O'Byrne
Internet of Things, iot, iot security, prototype

We’ve been thinking a lot about the stages an IoT innovation goes through, from the light-bulb-over-the-head moment through to (hopefully) mass deployment. We’ve grouped these into five steps, each presenting a set of challenges and considerations about IoT connectivity. The weight of the decisions you make at each point about connecting the “thing” to the internet will intensify as you scale.

We’ve summarized in the image below the five key stages of an IoT project and the IoT connectivity decisions (Wi-Fi, LoRa, satellite, LTE, NB-IoT, etc.) you’ll need to make along the way.

IoT connectivity enabling IoT innovation

1. Idea

Every IoT project starts with an idea, not a purchase order for $100,000. Today, makers have never had it so good for quickly turning those concepts and “what-ifs” into something real. In many cases, a Raspberry Pi or Arduino will do the job fine. Connectivity also isn’t a problem at this stage. You could even use a physical cable to link the sensor to an IoT gateway such as the Dell Edge Gateway, although Wi-Fi is the more likely option.

2. Prototype

In my time in the industry, rapid prototyping has become easier and cheaper. It used to be very expensive and a big commitment to create early-stage proof-of-concept devices; now, you can get a Chinese manufacturer to build in volumes of one. But once you bring the proof of concept from your lab to the field — whether that’s to an actual farm or to an investor meeting — can you really rely on the Wi-Fi signal? Depending on the environment, you might think cellular connectivity is your best option, so traditionally, many makers have gone to their nearest mobile operator to buy a cheap SIM card.

3. Iterate

Here’s where the questions start to come thick and fast, however. Once you have an idea of what you’re trying to build, will any of the connectivity options you adopted at the earlier stage still be with you as you refine your original idea? Will you be able to source SIMs reliably at higher volume? Will the cost model of buying retail SIMs work with a 500-unit-a-month deployment? This step is where you’ll encounter issues like security, authentication or authorization on your connectivity. Can your chosen connectivity handle these issues reliably?

4. Deploy

By design, IoT devices are constrained — a term with a specific meaning that implies limited processing power, storage and bandwidth. Where you plan to physically deploy will be a factor: if you aim to put a miniature computer on every streetlight, it isn’t practical to visit every one of them every Tuesday to install a patch. And if you fit a sensor in the road which is then embedded under concrete, you only have one chance to get it right.

5. Scale

So, what type of connectivity are you going to embed in your service as you deploy it? This isn’t a trivial question. If you’re serious about building an IoT innovation to operate on a global scale, it makes sense to think about IoT connectivity from the very start. Otherwise, you’re stuck on the treadmill, continually solving the same problems over and over again as you progress through the five steps outlined here. SIMs from the EU will not roam in Saudi Arabia, and a U.S. SIM won’t necessarily work in France. In some of the biggest growth markets — Saudi Arabia, Turkey and Brazil — governments don’t permit global or permanent roaming. We’ve seen similar restrictions in places like Cambodia, Indonesia and Myanmar. So if you want to ship your “thing” all over the world, you’d better be sure at the idea, prototype and iterate stages that you’ve built in connectivity that works for your global application.

What now?

The assumption we’ve made all along is that connectivity is hard. Simply putting your devices on Wi-Fi doesn’t give you guaranteed service, besides the fact that connecting constrained or low-power devices to the public internet is not a good idea. As we like to say, there’s nothing quite as complicated or dangerous as simple connectivity.

Fortunately, technological advancements have come to our rescue yet again. In the old days, you had to pay in advance for technological capacity that you guessed you might need to use months from now. Cloud computing services like Salesforce and, later, Amazon Web Services or Microsoft Azure completely upended that model, giving enterprise-class software tools to smaller companies who only had to pay for what they used.

That same model now applies to connectivity, to let you link services and “things” over the internet at industrial scale but at volumes of one (idea), so you can start small and then flex your network as you go through the subsequent steps (prototype, iterate, deploy and scale).

When considering which technologies will let your IoT project scale its network connectivity, look to the models developed for infrastructure as a service and computing elasticity. Look for technologies that can automate the “declaration” of an IoT network topology via simple API calls and that remove the manual, time-consuming bespoke steps. Consider technologies that can aggregate different bearer networks so that you can mix and match multiple connectivity types and avoid lock-in to a single communication service provider.
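As a hedged sketch of what declaring a topology through an API might look like — the endpoint, token and payload schema below are hypothetical, not any specific provider’s API:

    import json
    from urllib import request

    topology = {
        "name": "fleet-eu-west",
        "bearers": ["cellular", "nb-iot", "satellite"],   # mixed bearer networks
        "regions": ["FR", "DE", "SA"],
        "policy": {"failover": "auto", "roaming": "permitted-where-lawful"},
    }

    req = request.Request(
        "https://api.example-connectivity.com/v1/topologies",   # hypothetical
        data=json.dumps(topology).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer <token>"},
        method="POST",
    )
    # with request.urlopen(req) as resp:       # one call declares the network
    #     print(resp.status, resp.read())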

For more information on this topic, read our ebook on IoT connectivity.



September 26, 2016  2:20 PM

How smart cities will use IoT

Brian Zanghi Profile: Brian Zanghi
connected, Internet of Things, iot, smart city, Smartphones

Futurists, tech visionaries and urban stakeholders have been talking about “smart cities” for a number of years now, as though they’re the product of a simple recipe: Take a city, add the internet of things, and voila — smart city!

Needless to say, it’s hardly that simple. Smart cities will, indeed, require connected technology to deliver on their promise. However, they aren’t created just by association with technology, but rather by intentionally applying four core principles to all civic initiatives powered by that technology.

Before we get into the how of developing smart cities along those principles, though, we should explore exactly what they are and why today’s stakeholders should be working to bring them about.

A smart city leverages the internet of things for the distinct purpose of delivering services sustainably and efficiently without overtaxing its resources. It accomplishes this by prioritizing those elements that cover functions that are absolutely necessary — emergency services, for example — and virtualizing the supporting elements that force people to perform mundane tasks but don’t contribute to their work or enjoyment. The classic example is the parking meter: give people an app that shows where available spaces are and allows them to pay digitally, and they will never again have to drive in circles hunting for an open spot, only to discover they don’t have any coins to feed the meter.
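To make the parking example concrete, here is a toy sketch of the app-side logic; the spaces, coordinates and rate are invented for illustration.

    from math import hypot

    SPACES = {  # space id -> (x, y, currently free?)
        "A1": (0.2, 0.1, False),
        "B3": (0.4, 0.9, True),
        "C7": (1.2, 0.3, True),
    }

    def nearest_free(x, y):
        # Rank free spaces by straight-line distance from the driver.
        free = [(hypot(sx - x, sy - y), sid)
                for sid, (sx, sy, is_free) in SPACES.items() if is_free]
        return min(free)[1] if free else None

    def pay(space_id, minutes, rate_per_hour=2.0):
        return round(minutes / 60 * rate_per_hour, 2)   # charged in the app

    space = nearest_free(0.5, 0.8)
    print(space, pay(space, minutes=90))   # B3 3.0 — no circling, no coins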

This approach will become increasingly important over the coming decades. According to the United Nations, by 2050 more than 6.4 billion people — representing nearly two-thirds of the world’s population — will be urban. If cities haven’t figured out how to use smart technology to streamline their infrastructure and delivery of services well before that influx, the economic consequences will be catastrophic. This is where the four smart city principles come into play.

Once the stakeholders — citizens, government agencies and enterprises — begin considering these guidelines, the internet of things will help bring their smart initiatives to fruition:

1. Smart cities are experience-oriented
In the smart city, people only show up physically to have an experience or receive a service, not to plan it or purchase it. Time-consuming activities that do not contribute to work or recreation, like standing in line to pick up tickets, will be eliminated. People will have the same experience whether they are renewing their driver’s license, booking a facility for a wedding rehearsal dinner, paying their taxes or transferring between modes of public transit.

2. They are solutions-oriented
Technology will be applied to address specific problems. There will be no one-size-fits-all solution handed down by a centralized authority or vendor. Cities will instead encourage a diverse network of innovators and developers to create best-of-breed apps to provide the solutions needed by officials and citizens to improve the urban experience.

3. They are fully digitized
All services and infrastructure are optimized for a mobile-first population, with physical backup services for those without digital access. There will still be physical infrastructure, of course — roads, transportation, power, water supply, hospitals, etc. — but it will be supported by smart technology. A combination of sensors and open data exchanges will provide constant information about these systems and allow cities to address, in real time, everything from heavy public transit demand during rush hour to emergency responses during natural disasters.

4. They are seamlessly interconnected
All services and infrastructure are open to deep links and data sharing, so that if required, they can work together through an ecosystem of best-of-breed apps. To facilitate this, civic data sources are made available through open portals that anyone can access as a resource for identifying problem areas and creating solutions to address them.

Putting IoT in everyone’s pocket

The key to all of this is the smartphone, which will be the citizen’s personal node in the internet of things and gateway to the smart city. The Pew Research Center noted in a 2016 report that smartphone adoption continues to grow across the globe, even in developing countries. It won’t be long before smartphones are ubiquitous, thereby allowing smart cities to become a global reality.

The imperative is urgent and clear: the time is now to begin building smart, tech-enabled cities, with a seamless flow between the different services provided for residents, commuters and visitors. The internet of things will empower citizens through their smartphones, creating tremendous benefits for the people who live in, work in or travel to and within smart cities.

When today’s stakeholders encourage the creation of a best-of-breed ecosystem that leverages civic data and supports the internet of things, in line with the core principles above, they are giving their cities an advantage over those that are not taking these same steps. The smartest cities of 2050 will have put down their roots today.



September 23, 2016  1:52 PM

Why geospatial professionals need the cloud

Ron Bisio Profile: Ron Bisio
cloud, Cloud Services, GEOSpatial, GNSS, gps, Sensor, Sensor data

Unlike some traditional IoT applications, which typically feature sensor-equipped endpoints as the primary source of data collection, geospatial technology leverages GPS/GNSS location networks to collect data streams before transporting that data to the cloud for processing.

In units of time, we’re not far removed from the world of paper and ink. But if we measure the distance using terms of productivity, flexibility and value, today’s geospatial business environment is light-years away from its former location.

We regard electronic data and communications as essential for modern business. But these technologies are still young and growing. Not so long ago, typical project deliverables consisted of hardcopy drawings and reports. Even when using computer-aided data collection, processing and drafting, results often came as 2D drawings accompanied by written analyses. Over the years, clients and geospatial professionals recognized that the information contained in a 2D drawing could be put to work more quickly if it arrived as a computer file. As a result, many deliverables moved to electronic formats. CAD files, spreadsheets and reports have become the norm in most enterprises.

Today many organizations employ information technology that enables stakeholders to easily share deliverables. Common data formats and visualization tools enable downstream users such as engineers, architects and planners to utilize the information. For example, in the days of hardcopy it would be unusual for a surveyor or engineer to develop a physical 3D model of a building site. If such a model were needed, architects would generally build it as part of the building’s design and approval process. Using modern instruments and software, geospatial professionals can create digital 3D models in a fraction of the time and at a fraction of the cost. And instead of physically moving drawings and computer storage media, deliverables now arrive via 1990s technology: the internet.

Using cloud-based systems for data collection and management, field and office teams can share common workflows across multiple platforms.

The internet now plays a core function in the operation of most geospatial businesses. Organizations use the internet to communicate with their clients, contractors and employees via email, intranets and social media. Proposals can be delivered, contracts negotiated and results conveyed electronically, shortening up-front processes and producing tighter feedback loops.

Cloud solutions build on the internet’s foundation of connectivity and interaction. In addition to moving information, remote servers can provide powerful computing capabilities. By tying handheld and desktop computers to cloud services, it’s possible to bring sophisticated data processing to more users and locations. As a result, cloud-based systems for geospatial information management and analysis are poised to provide new flexibility in enterprise operations.

Cloud-based access to utility information enables maintenance crews to quickly assess conditions in the field.

While these trends have produced significant gains in productivity, the geospatial industry has yet to realize the full value of the cloud. Let’s look at some examples.

Real-time GNSS positioning: Precision from the cloud

Many geospatial users gained their first experience with cloud solutions by using groups of interconnected global navigation satellite system (GNSS) receivers known as real-time GNSS networks (RTN). Using RTN, networks of GNSS reference stations streamed data to a powerful server where the information could be merged and analyzed. Then customized data streams could be sent to individual GNSS rovers for use in real-time kinematic positioning. Freed from the need for a reference station, surveyors could work quickly and freely over large geographic areas.
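A simplified sketch of that blending step: the server weights per-station corrections by distance to produce a customized correction for a rover’s approximate position. Real network RTK modeling is far more sophisticated, and these stations and numbers are illustrative.

    from math import hypot

    # reference station -> (easting_km, northing_km, correction_m)
    STATIONS = {"ref-a": (0.0, 0.0, 0.12), "ref-b": (30.0, 0.0, 0.18),
                "ref-c": (0.0, 40.0, 0.09)}

    def blended_correction(rover_e, rover_n):
        # Inverse-distance-weighted blend of the station corrections.
        num = den = 0.0
        for east, north, corr in STATIONS.values():
            d = max(hypot(east - rover_e, north - rover_n), 1e-6)
            num += corr / d
            den += 1.0 / d
        return num / den

    print(round(blended_correction(10.0, 10.0), 3))   # correction in meters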

The speed, ease and flexibility of the RTN technology helped fuel a dramatic increase in the use of real-time GNSS positioning. Today, cloud-based positioning services support applications in surveying and engineering, construction, agriculture and more. For example, structural or geotechnical monitoring solutions utilize cloud positioning and web interfaces to deliver critical real-time information to stakeholders in remote locations.

Cloud solutions for geospatial enterprises

There are now cloud-based platforms of software, data and services for the geospatial community. Focused on applications in surveying, engineering and GIS, these platforms use the cloud to support work in geospatial data management, field data collection and transfer, equipment management and spatial data catalogs. By combining cloud services with technologies for positioning, communications and data analysis, companies can deliver the information geospatial professionals need at the point of work, in the field and in the office.

As an illustration, a surveyor typically needs multiple types of data to plan and execute a project. This includes previous surveys, government data, maps and other information. Some of the data may be held in the surveyor’s own records, while other information may come from government agencies and private suppliers such as DigitalGlobe or Intermap. There are now tool providers that enable the surveyor to quickly discover and use the geospatial information specific to the project. Rather than manually searching through multiple information sources and formats, users can streamline the most common and time-consuming pre-survey tasks. The surveyor can quickly find the pertinent information and download it to their desktop.

Libraries of geospatial data can be accessed via cloud-based services, enabling stakeholders to retrieve information from any location with an internet connection.

On the job site, field crews can use cloud-based applications and hosting services to exchange information to simplify workflows and data management. Additional services include the ability to track and manage the location and status of field equipment, including warranty information, software and firmware. Software offerings now allow users to customize workflows to streamline data collection on multiple platforms including iOS, Android and Windows. The system can automatically sync field information from multiple crews to a central server.

The shared foundation

The accurate and feature-rich data that geospatial professionals collect can be very valuable for other purposes. The cloud offers an ideal platform for individuals and organizations to exchange or sell their data as they feel appropriate for their business.

Data is the cornerstone of any geospatial workflow. By enabling professionals to easily discover, access and utilize different types of data, the cloud will soon become an essential part of the daily processes of data collection, processing, modeling and analysis.



September 22, 2016  6:37 PM

Intuitive Machines brings space tech down to Earth

Mark Bunger Profile: Mark Bunger
Connected Health, cybersecurity, Drones, Internet of Things, iot, iot security, NASA

We recently spoke with Mark Gittleman, Executive Vice President at Intuitive Machines, a “product development think tank” in Houston, Texas, that offers engineering services based on a deep NASA pedigree. Gittleman was previously head of the space division of Oceaneering, a large oil and gas services company specializing in offshore and other harsh environments, where NASA was his primary customer. There he met Steve Altemus, who was the Director of Engineering at Johnson Space Center (and later Deputy Director). After a successful and exciting skunkworks-type project developing a new lunar lander, Altemus and his team left NASA seeking a more entrepreneurial, permanently skunkworky environment, and started Intuitive in 2012. It currently has 45 employees, pulled from NASA, U.S. National Labs, industry and academia.

The company’s main revenue-generating activity is engineering services, but Gittleman told us that Intuitive’s differentiator is rapidly commercializing space technologies for Houston’s three major industries: space, energy and medicine. As an example of the company’s speed to market, Gittleman described the company’s development of an autonomous orbit-to-earth sample return vehicle in less than 12 months — the blink of an eye in aerospace time — using open-source software from NASA. It applied similar fundamental navigation principles and software to create:

  • Software for cybersecurity. Intuitive is developing three cybersecurity technologies derived from aerospace flight control systems, including a “next generation” antivirus scanner, endpoint (e.g., mobile phone or IoT device) protection and network communication security. For example, its Ender endpoint protection system is based on an artificial neural network that detects deviations from typical user behavior — which could mean the device is being used by an unauthorized person, say, someone who has stolen the phone — and shuts down the device before harm can be done (see the conceptual sketch after this list).
  • Drones for inspection, surveillance, and security. Most adjacent to its aerospace heritage, the company has two drones — one fixed-wing (called Tiburon), one tethered (similar to CyPhy Works, named OSPREI). The company claims that its “Tiburon Jr.” can carry payloads up to about 10 kg, with an 80-knot cruise speed (and “even faster dash”) and a 15- to 40-hour loiter endurance; it has a modular architecture that allows for 15-minute assembly or disassembly.
  • Precision drilling equipment for horizontal wells. Building on its sensing and navigation expertise, Intuitive has used two conventional downhole magnetic ranging assemblies in test wells with a leading oilfield services provider. Gittleman said that with better software on existing hardware and sensor data, Intuitive can look five meters to eight meters ahead of the drill bit in passive magnetic well ranging, and 15 meters in active ranging. He claimed the technology was able to deliver a 90% improvement in the time it takes to start acquiring data, and can ultimately “cut several days from the time it takes to drill a well.” Gittleman said that due to the current low oil price, interest in the technology has cooled, and the company is only expending minimal effort to take it forward until an industry partner appears.
  • Medical devices for IV insertion, child safety and suicide prevention. Intuitive has a prototype of an automated intravenous needle insertion device, capable of finding veins, moving to the proper point above the skin and inserting the needle to the appropriate depth. It has also developed a “clip-on” safety device meant to be attached to a child’s car seat, which connects with an app that alerts the parent if they have left their child inside the car. A third device monitors the gestures and other user-defined behaviors of a patient in a hospital room (or prisoner in a jail cell), looking for signs that foretell potentially violent or suicidal actions. The latter two applications are particularly interesting as they align with the growing monitoring technologies segment of digital health and wellness. Gittleman said that these technologies are also non-core, and taking them to the next step will require a yet-to-be-identified industry partner.
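A conceptual sketch of the behavior-scoring idea described for Ender; Intuitive’s system reportedly uses an artificial neural network, whereas this toy version uses simple z-scores over invented usage features.

    from statistics import mean, stdev

    FEATURES = ["typing_speed", "apps_per_hour", "night_usage"]

    def anomaly_score(profile, observed):
        # Average z-score of current behavior against the learned profile.
        score = 0.0
        for f in FEATURES:
            mu, sigma = mean(profile[f]), stdev(profile[f])
            score += abs(observed[f] - mu) / max(sigma, 1e-9)
        return score / len(FEATURES)

    profile = {"typing_speed": [52, 48, 50, 55, 49],    # learned per user
               "apps_per_hour": [6, 7, 5, 6, 8],
               "night_usage": [0.1, 0.0, 0.2, 0.1, 0.1]}
    observed = {"typing_speed": 20, "apps_per_hour": 25, "night_usage": 0.9}

    if anomaly_score(profile, observed) > 3.0:
        print("Lock device")   # behavior suggests an unauthorized user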

Gittleman said that of 20 inventions the company has started to develop, about half are currently active projects. Intuitive hopes to spin off both the back-burnered efforts and the ones still going, but needs partners to do so (as mentioned above, it needs a new oil and gas partner willing to do additional research in test wells, since the original one fell to belt-tightening in today’s low-price oil environment). In the meantime, the company can sustain itself on revenue from engineering services, like it is doing with Lockheed Martin on the Orion space program, Halliburton, and Axiom on a private commercial space station module.

The company’s biggest asset is its breadth of applications, in our view, but this is also its biggest challenge: because its fundamental science has so many potential areas of application, it has not yet chosen a focus on the business side (which is impossible to scale like the science and engineering work). While such hopes for creating a “platform technology” leading to limitless products are common among scientist-led startups, success is extremely rare. Navigational algorithms can credibly be used in positioning spacecraft, drones, drills and medical devices, but a team of fewer than 50 people can’t attend all the industry events, client meetings, networking cocktail events and pitch meetings required to build the trust and commitment that will lead to shared goals and signed deals, and turn prototypes into products. The company’s thinly spread business development efforts are thus pursuing a mixed-model business strategy: Intuitive wants to be paid for contract research; but also, according to Gittleman, “develops, owns and in the future will operate its own, internally funded” technologies (for example, using the drones to develop a data services business). This would require even more staffing, skills and passion, since entrepreneurial businesses can’t be a sideshow, and Intuitive has only one CEO.

Gittleman did emphasize that the company’s focus is on commercializing the cybersecurity technologies first, and believes that they are closest to commercialization. Still, any investor putting venture capital into the technology would pressure the company to drop all other efforts (including independent contract research), which it seems unlikely that Intuitive would be willing to do.

Clients attracted to Intuitive’s inventions should attempt to negotiate a licensing deal, since the company’s attentions are spread across so many projects that getting additional development support could prove challenging. Particular attention during the due diligence phase should be paid to the uniqueness of claims and patent priority dates, as other companies across some of the target industries are developing similar solutions — for example, Vasculogic and Veebot are both developing automated venipuncture systems based on navigational algorithms paired with application-specific robotics. Given the widespread reach of the core technology and the relative lack of specific industry expertise within the company, there is a reasonable chance for unintentional IP infringement, which can potentially spell trouble for licensees.

Alternatively, those interested should contractually stipulate resources and milestones for joint development agreements where Intuitive’s apparently capable engineering staff will actively participate. For its own part, Intuitive may shift its strategy, aligning with an incubator like the Houston Technology Center, or taking a page from the few organizations like SRI International (full disclosure: a Lux Research partner), UCSF’s QB3 and Otherlab that execute the research-to-spinout strategy reasonably well.



