I’ve written about some of the changes coming to the workplace, workforce and overall economy because of the continued proliferation of IoT devices. One area I’ve found especially interesting as of late is how IoT is helping something near and dear to my heart: food. Well, food production technically. My father-in-law is a farmer in rural Indiana and, through conversations with him, I’ve learned about all the pressures being placed on today’s farmers. They include:
- Availability of water: One of our customers, a large seed company, is investing heavily in agriculture that needs less and less water to grow because farmers must rely on increasingly unpredictable weather and battle urban demand for underground aquifers.
- Land: As Mark Twain quipped, “Buy land, they’re not making it anymore.” As urban sprawl continues, the price of land continues to rise. Sadly, many farmers are being pressured to sell their land as subdivisions and businesses move in around them. If they don’t sell now, who will want to buy their property when it’s land-locked between a neighborhood and a shopping center?
- Government/environmental regulations: The federal government (and often foreign governments) is putting more and more restrictions on how food can be produced. These can affect pesticide use, genetically modified organisms, land management, labor practices or animal welfare. Even customers are demanding their own standards from farms these days.
- Managing pests: Controlling insects and disease is something that has plagued (pun intended) humans since the first days of farming. It takes time and money to ensure that whatever measures are used to drive away or exterminate pests are safe for the crops and those who consume them.
- Maintaining a profit: Like any business owner, farmers struggle to maintain a profit in a market where prices are almost always stable or falling while expenses continue to rise.
- Increasing yields: Improving the yield (amount of output per acre) is important because it not only offsets increasing costs, but also helps accommodate an increasing population on less and less land.
While the average farmer may not be held in high regard by a layperson, the truth of the matter is that these are individuals who often run multimillion-dollar businesses and handle all the complexities and challenges that come with them.
Increasingly, both the agriculture industry and individual farmers must rely on technology to overcome business challenges. IoT is a key part of these plans — connected devices are being implemented by millions of farmers across America.
Looking at the challenges listed above, IoT and smart farming can help farmers in all of these areas:
- Availability of water: Deployed field sensors can create data points that monitor things like rainfall in specific areas or water requirements for crops to help drive irrigation strategies and reduce water consumption.
- Land availability: While IoT isn’t able to control land prices, it can help make farms better neighbors. IoT technologies can look for things like disease outbreak in livestock, allowing sickened animals to be separated from the herd and treated.
- Government/environmental regulations: Increased regulation means farmers must now provide data points from farm to fork and every step in between. Capturing this data lets farmers demonstrate compliance with government and import requirements, making their products available to a wider market overall.
- Managing pests: Sensors can monitor and scan the environment for infestations to pinpoint pest hotspots, allowing for more targeted applications of insecticides and other pest controls. This not only controls costs, but also minimizes potential negative environmental impacts.
- Maintaining a profit: Most people have read about the Silicon Valley’s self-driving vehicle obsession by now. Companies like Google, Uber and Tesla are all working on this technology, but people may not realize that self-driving technology and IoT sensors are already being applied to tractors and other farm implements. The ability to reduce the need for labor is creating direct cost savings for farmers, allowing them to spend resources on other aspects of their business instead.
- Increasing yields: As mentioned above, having access to real-time data that aids crop and animal monitoring allows a farmer to quickly identify and resolve problems, improving their overall yield. Data points that can be drilled down to a very specific location help immensely. Even tractors are helping by monitoring real-time yields as they plow, fertilize and harvest.
As we demand more and more from our farmers, harnessing IoT technology for smart farming appears to be the only conceivable way they’ll be able to succeed. With a world population projected by the United Nations to be 9.8 billion in 2050 and 11.2 billion in 2100, IoT in agriculture is an absolute necessity.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
For any currency to be usable for IoT, there are some non-negotiable requirements:
- Support for transactions, especially small ones
- Stable store of value
- Predictable low latency
- Support for very high volumes of traffic
At first sight, bitcoin seems like an ideal choice for IoT. But if you look closer, bitcoin and other virtual currencies are deeply problematic.
There’s lots of information out there on how bitcoin works, but for our purposes it can be summarized as follows:
- Bitcoin is a ledger of transactions.
- A decides that B now owns one of his bitcoins.
- The change of ownership is put into a queue.
- Lots of people working in lots of places take that queue and race to solve a computational puzzle over it.
- When somebody solves the puzzle, the batch of changes is baked in using a cryptographic hash that prevents people from altering older entries in the queue, and that hash is fed into the next batch of transactions.
Because many different people are processing the same queue at the same time, and the process never stops, it’s more or less impossible to make a retroactive change. The ledger is in multiple places and is updated constantly with changes which build on prior modifications.
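The steps above can be sketched in a few lines of code. This is a deliberately minimal illustration of a hash-chained ledger, not the real bitcoin protocol (there is no proof of work, no network and no mining reward here); it only shows why a retroactive change to an old entry is immediately detectable, since each batch's hash is computed over the previous batch's hash.

```python
import hashlib

def block_hash(prev_hash: str, transactions: list) -> str:
    """Hash this batch of transactions together with the previous block's hash."""
    payload = prev_hash + "|".join(transactions)
    return hashlib.sha256(payload.encode()).hexdigest()

# Build a tiny three-block chain.
chain = []
prev = "0" * 64  # genesis placeholder
for batch in (["A pays B 1 BTC"], ["B pays C 1 BTC"], ["C pays D 1 BTC"]):
    h = block_hash(prev, batch)
    chain.append({"prev": prev, "txs": batch, "hash": h})
    prev = h

def verify(chain) -> bool:
    """Recompute every hash; any edit to an old block breaks the chain."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block_hash(prev, block["txs"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True

assert verify(chain)                    # untampered chain checks out
chain[0]["txs"] = ["A pays X 1 BTC"]    # retroactive edit to an old block...
assert not verify(chain)                # ...is immediately detected
```

Even if an attacker recomputed the edited block's hash, the next block's stored `prev` value would no longer match, so the forgery would still surface on verification.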
These concepts are broadly similar to how high availability works in transaction processing systems, but there are some very important differences anyone looking to implement bitcoin for IoT needs to consider.
The first issue is latency. For a transaction to finish in bitcoin, we need to know that the bitcoin network has finished processing it, i.e., that the transaction has been confirmed in the ledger.
In IoT, credit card fraud detection and telecommunications operations, latency is measured in either milliseconds or microseconds. A typical transaction, such as checking someone’s mobile phone credit when connecting a call, will take from 350 microseconds to 3.5 milliseconds in the real world, depending on the circumstances.
Now consider that a transaction is not just a one-way “fire and forget” operation. It is an event whose outcome matters and is not known in advance. If we’re just recording usage, the end result may not matter. If we’re spending some resource (money, for example), the outcome does matter: we may be told we can’t complete a call due to low minutes, can’t complete a purchase due to a low account balance and so on. This is where latency suddenly becomes very important. Until we get a yes or no response, the client’s application can’t do its job.
In an IoT context, where we’re making decisions at computer speed (milliseconds) not human speed (seconds), poor latency can render a good idea unworkable. As I write this, it takes 31 minutes for a bitcoin transaction to be confirmed, which makes it utterly useless for automated transactions. Bear in mind that if we’re trying to implement some kind of iterative micropayment approach, IoT devices could be spending money every couple of minutes, which means we either deny services until the payment goes through, or we could have a massive queue of transactions in the system and be carrying the risk they will start to fail. There’s also an issue with predictability. The transaction confirmation time can, in practice, be anything due to the way bitcoin works.
The second major issue is support for the anticipated traffic levels we will see in IoT. A single IoT application could easily generate well over 10,000 transactions per second (TPS). In comparison, Visa is capable of handling over 56,000 TPS globally. Bitcoin is limited to 7 TPS. Not 7,000 TPS. 7 TPS. Think about that. Not only is it implausible for real-time transaction processing in an IoT app, it also calls into question whether bitcoin’s architecture is even relevant when we’re planning on hundreds of thousands of TPS.
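The throughput gap is easy to quantify. A quick back-of-the-envelope calculation, using the figures above and an illustrative 10,000 TPS workload, shows how fast unconfirmed transactions pile up when offered load exceeds what the ledger can absorb:

```python
# Back-of-the-envelope: queue growth when an IoT workload offers more
# transactions than the ledger can confirm. All figures are from the
# discussion above; the offered load is illustrative.
offered_tps = 10_000   # transactions generated per second by one IoT app
bitcoin_tps = 7        # bitcoin's rough throughput ceiling
visa_tps = 56_000      # Visa's stated global capacity

backlog_per_second = offered_tps - bitcoin_tps
backlog_per_hour = backlog_per_second * 3600

print(backlog_per_hour)        # 35974800 unconfirmed transactions after one hour
assert offered_tps < visa_tps  # a conventional network absorbs the same load
```

In other words, a single such application would leave bitcoin roughly 36 million transactions behind after just one hour, while the same load sits comfortably inside Visa's headroom.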
This brings us to our next issue: small transactions. In April 2013, the developers of bitcoin introduced a limitation on what they referred to as “dust” transactions. Transactions less than $0.007 were no longer accepted. This in turn raises two problems. The first is that some IoT applications might plausibly need transactions of 7/10 of a cent or less. The second is a broader governance issue. If the people who run the currency can decide to change the minimum transaction size with no notice, what’s to stop them from doing it again?
Finally, we come to stability. Bitcoin has weekly price swings of up to 60%. While it’s been an excellent platform for speculation, who could plausibly price a product or service in it when its value is continuously changing? Bear in mind that a lot of IoT plays will be operating on razor-thin margins that could be wiped out by a currency swing.
While it’s clear that IoT will require some form of micropayments architecture, the current state of bitcoin renders it more or less useless as an IoT payment mechanism. What’s equally disturbing is that as a consequence of bitcoin’s lack of central control — despite issues concerning latency and scalability being obvious to anyone who cared to look — no substantive progress has been made in resolving them. We could, at this point, start looking at other digital currencies, but that opens up a further rat’s nest of questions about convertibility and survivability. The internet of things presents enough of a challenge as it is without bringing bitcoin on board.
Here at our firm, we are at the leading edge of developing many, many IoT products and potential products. With IoT, it is important to recognize the enduring principles that lead to long-term, successful products. Just as importantly, you need to know when IoT doesn’t make sense. Here are our insights on how to evaluate that:
The value proposition is not there
Consider the example of an IoT product around a coffee mug that senses how much coffee is left in the mug and the temperature. The mug can also be location-aware via GPS. This data can then be wirelessly communicated to a cloud system connected to other web- and server-based data sources. For this exercise, let’s think of the mug as connected through a low data rate cellular link. At the server, the location and mug status (volume, temperature) can be combined with other GPS data to now link the mug to the nearest available service provider to deliver coffee. This would trigger a concierge to get a message on his smartphone app telling him to come to the mug and refill with fresh hot coffee. It is the “Uber Coffee Mug.”
Ridiculous premise, right? Of course. One can create the Uber Coffee Mug; it is technically achievable. However, it is unlikely this creates sufficient value to warrant development of an IoT coffee mug. There are many cases where investments are being made and products are being developed for very dubious user value.
The cost/benefit trade-off is unattractive
There is no way around it: an IoT-enabled device (and service) will cost more than a “dumb” device. While costs for the hardware elements are dropping, they still represent an increase in bill-of-materials cost. In addition, the cost and time required to create an IoT system are significant and often underestimated. To be successful, an IoT offering requires hardware technology and integration as well as significant software development and data analytics expertise.
If a company is going to make this investment and incorporate the increased product expense, is there enough value to the user to warrant the price increase? Is there sufficient margin in the product for the company to balance against the significant cost and time required to create it? Is the risk/reward high enough to justify the exposure? These are tough questions requiring a thorough understanding of the opportunity and the value proposition to all constituents. In the mug example, the benefit is increased sales of coffee. The question is: Can you move enough incremental coffee to warrant the development and support of such an IoT delivery model? It takes a lot of coffee.
The company is not ready to capitalize on a new business model
Using the Uber Coffee Mug example, one can imagine a local deli as a business that could theoretically find a value proposition in delivering coffee by the mug. The way to capitalize on that value might be to have users subscribe to the service: for a simple monthly fee of $20, the deli will deliver you coffee any time during business hours, five days per week. But is the deli ready to accept a new business model? Is it ready to hire concierge delivery people? Will it adopt a model where coffee is sold not by the cup but by monthly subscription? The same sorts of questions apply to any IoT solution: the company has to be willing to adopt an alternative revenue recognition model.
Insufficient funding to go the distance
Many companies are not cognizant of the full cost related to realizing lofty IoT goals. Certainly, a small startup of a few people may be able to put in sweat equity for some amount of time, but may not be able to do it for a year, 18 months or two years. The team may also not have all the skills necessary to realize the complete technology and may not have the resources to bring in development partners to help fill the gaps.
Unrealistic vision of the development cost and time
Boards like the Arduino are ubiquitous. They are great for hacking together a quick prototype that “sort of” functions like a real product. Getting a mockup together can be done quickly using such development kits. But a mockup or rough prototype based on such technology is far from a real product. The demo feels like a major milestone, but getting a demo running is 20% of the project. The other 80% is designing it to be small, reliable and cheap enough to be a real product. Too many startups have moved down the path thinking that when the demo was finished in a couple of months, the product would be ready to launch in just a couple more months.
There is much to consider in heading down an IoT-enablement path. Certainly, there are many new product categories and value-added products which can be created using this capability. Careful consideration of the factors involved in successful IoT product development is warranted. The cost of a mistake can be very high, but so too are the rewards.
Clouds are tricky things. It’s hard to tell where the foundations of a cloud reside. You could point at the physical infrastructure. Some of the best side-channel attacks target hardware. There is the operating system that runs everything. And there is the middleware, billing, hypervisors, drivers and web front ends. The potential attack surface of a cloud service provider (CSP) or consumption market platform is gigantic.
The internet of things is even worse.
Building in the trailer park
In part one of this series, I wrote about how a “cloud” project I worked on a few years ago went sideways. Of course, CSPs build their platforms on a master plan. Like a planned community, the platform is architected down to the minutest detail, and a phased rollout of services is thought out and integrated with identity management suites. CSPs protect the management plane through layers of proxied services and deploy open source or open security architecture technologies.
Unfortunately, IoT devices have none of this. Of course, since the clouds they are connected to are consumption-based, they meet some of the basic requirements. But protecting the management plane is impossible because it exists on the same network that people are passing their data through.
Simply put, there is no separation of the data and management planes. Which is a problem.
That’s because cameras, home routers, wireless access points, DVRs and a myriad of locks, thermostats and lighting mechanisms are all accessible through the same connection you browse YouTube through. I call that “building in the trailer park.”
I understand that there is a need for inexpensive automation and services at home. Just like some people need inexpensive housing with convenient access to the city, with the ability to move the building if necessary.
There is nothing necessarily wrong with that. The problem is that tornadoes seem to be especially adept at finding trailer parks.
What can stop a tornado?
I’m writing the last of this blog as we prepare for a tropical storm to hit the South Coast. There are details and preparations and plans that need to be made to defend even a well-made home in a neighborhood of well-made and cared-for properties. Then there is the care and feeding of the occupants that needs to be planned for. The analogy between trying to protect a home network designed around a vulnerable architecture and defending a dwelling that is measured in single and double widths seems appropriate.
Finding an alternative way to mitigate IoT security issues continues to elude most manufacturers. The home firewall market, for example, is its own worst enemy. The code it uses is old, unpatched and often contains more holes than the cheese found in Western Europe. There is no concept of separating IoT devices from the computers on the network, for example, which ought to be a fundamental procedure. Modern CSPs separate tenants, and even many functions and applications. But the commingling of the public cloud with the IoT cloud is a huge gamble. When combined, they make up something that has the potential destructive capabilities of a really bad storm cell over the Great Plains.
It is not dividing by zero as much as it is multiplying by zero.
The looming tornado threat found its first form with the Mirai botnet. Mirai is a collection of “hacked” home routers, DVRs, IP-based CCTV cameras and many other small devices that run operating systems sharing many of the same weaknesses. Hackers are able to remotely load them with tools that enable them to attack other, similar devices. Because these devices have unpatchable vulnerabilities built in, even if a user reboots a device to clear the malware, it will likely be reinfected very quickly.
This network of infected devices can then be used to attack webservers, load balancers, DNS servers and all the other parts of the internet people tend to take for granted because they are “just supposed to work.” Like a tornado, these network storms have managed to take down large portions of the internet over the past year. Verisign, an owner of many of the top-level DNS servers, noted that its largest attack was clocked at 680 gigabits per second.
That is nearly the entire bandwidth of long-haul connections for the West Coast.
Preventing this would only require a handful of manufacturers to decide to emphasize defense rather than joining the race to the bottom, with its exclusive focus on speed and cost. If the public understood that their home routers can be used by criminals to snoop on them, jump into their unprotected baby monitors and grab at bank account information left in text files on their computers, IoT manufacturers would be forced to clean up their business.
What other natural phenomena should you worry about?
Did you know that the internet has a weather report?
The internet can experience floods, tornadoes and even scorching droughts. Each of these can affect adjacent areas. Drought, for example, can be caused by gigantic firewalls and government limitations placed on the internet in various regions, causing those portions of the internet to fall into disuse or start to die.
Tornadoes are caused by botnets that can fill areas of the internet that are the equivalent of the American Midwest, such as broadband networks. Like the over-farming that created the dust bowls of the 1930s, broadband networks are built purely for consumption rather than serving, which discourages the upstream serving of data. That means IoT devices with basic server functionality built in can create a problem that escalates rapidly, with little chance to shut it down.
Perfect storms are rare, resulting from a confluence of the worst possible causes. Tying a home to a cement pad, for example, helps it survive severe weather; likewise, networks need to be built on a secure foundation. Of course, that means paying a little more for those devices, and more planning for security will need to take place if a person plans, for example, to automate their home or provide services for their family. Beyond a secure foundation, practice, training, drilling and standard operating procedure testing are the only things that can truly prevent a disaster, though that may be a generation away from possible. To start, however, IoT device manufacturers need to commit to better security so that the internet weather can be predictable and sunny, with blue skies every single day.
Even by the hysterical standards of the media, the past few months have seen an absolute avalanche of data breach stories. From hospitals to burger chains, credit bureaus to law firms, the headlines have been saturated with news of personal and financial data being lost, stolen or accidentally publicized. The big question is: Do we really care anymore?
While these cyberincidents certainly have major repercussions for the breached organizations and their customers, we need to put them in perspective. Because there’s another, potentially much more serious security challenge facing us: the internet of things. And when we talk about IoT flaws, we’re not just talking about lost data. We could be talking about loss of life.
Google the words “data breach.” The results, going back just a short while, are incredible. Wendy’s, Kiddicare, Equifax, infamous Panamanian law firm Mossack Fonseca, Heath-Allen hospital in Iowa, the Ohio Department of Mental Health and Addiction Services, Chelsea and Westminster Hospital NHS Foundation Trust, and, ironically, even Google all suffered a breach or were hit with fines. And that was just a cursory online search.
Even more headlines were devoted to breach-related stories: ID Experts research claimed data breaches cost the healthcare industry some $6.2 billion; a FireEye study revealed incidents are destroying trust in brands; even the British government is getting in on the act, claiming 65% of large firms detected a breach or cyberattack in the past year.
There’s no doubt that raising awareness about breaches is important. The impact of information-stealing cyberattacks on privacy and the economy is undoubtedly huge. But for individuals, identity theft and fraud are not much more than an irritant and an inconvenience. And for organizations, they amount to an economic and reputational hit, however major. But the fact still remains: none of these are life or death matters.
By contrast, reporting of IoT flaws has been relatively sparse, despite some major research appearing of late which has begun to shed light on the potentially life-threatening nature of security problems in embedded computing systems. The most famous case was Miller and Valasek’s demo at Black Hat which showed how hackers could move laterally inside the computing environment of a 2014 Jeep Cherokee, reflash firmware on a chip controlling the CAN bus and remotely control the brakes and steering wheel. It doesn’t take a genius to work out the potentially fatal repercussions of such a hack if carried out with malice.
There are four key problems at the heart of most embedded computing systems like the one in the Jeep, exposing them to hacking attempts:
- They are proprietary in nature.
- They are connected to the internet.
- The firmware is not signed, making it possible to reverse-engineer the code, modify it, reflash and reboot to execute arbitrary code.
- The silicon allows for lateral movement.
A tipping point
Be in no doubt, we’re at a tipping point here. These flaws haven’t been exploited on a large scale yet because they require a great deal of time and effort to exploit. But there are already signs this is changing. Governments in particular have both the time and resources. The power outage attack on the Ukrainian grid just before Christmas involved hackers overwriting firmware at multiple substations, rendering them unable to receive commands. It has been widely blamed on Russian state actors, although definitive attribution remains difficult.
It’s clear these IoT flaws are no longer theoretical. And that’s why we have produced new guidance to help the industry build more secure embedded computing devices. “Security Guidance for Critical Areas of Embedded Computing” outlines our new hardware-based answer to these fundamental weaknesses. The key to securing these systems lies in focusing on the silicon — because security becomes harder to interfere with at that level. So we’re espousing a root of trust anchored in the hardware, which means the firmware becomes tamper-proof. And hardware-level virtualization to keep critical components isolated and containerized, so even if one were compromised, it couldn’t allow lateral movement. The whole premise is based on open source and interoperable standards — to focus on the best quality code possible and force an end to “security by obscurity.”
Let’s not wait for the next major incident involving exploitation of these IoT weaknesses. We don’t want to see an airliner downed by a fleet of hacked and remotely controlled drones. Or key firmware inside a nuclear power station overwritten to carry out the wishes of a black hat group. It’s time to get serious about IoT security. This means changing the mindset at the development and manufacturer levels from “it works, now let’s try and secure it” to “if it isn’t secure, it doesn’t work.” Only then will we have a connected world that is safe for all.
Industrial investment in IoT is first and foremost about creating value. Whatever your specific scenarios may be, opportunities begin with the ability to know the state of your machines and the world around them at each moment in time. This data becomes fuel for predictive maintenance, yield optimization and new revenue streams.
Unlocking the value of data
Raw data must be collected, cleansed and transformed into forms the engines of enterprise can put into action. Cars don’t run on pools of petroleum buried miles underground. Drilling, pumping and refinement turns sludge into speed. Similarly, data collected from “things” must be normalized, contextualized and integrated with enterprise ERP, CRM, BI and other systems to create value.
Protecting your network
IoT systems face the challenge of the 3 V’s — velocity, variety and volume. Errors range from the obviously corrupted to the well-formed but highly inaccurate. Unauthorized data and commands from malicious actors create additional risks to the system. How will you track data provenance and verify a chain of custody for all events? Can you ensure only trusted events are pulled into your enterprise systems, machine learning, analytics and other tools? How are you keeping dirty data from polluting your enterprise? These challenges must be addressed upfront for real learning to occur.
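One common way to answer the provenance question is to have each device sign its events so the ingestion layer can reject anything unauthenticated or malformed before it reaches enterprise systems. The sketch below is purely illustrative (the field names, shared key and required schema are assumptions, not a specific product's API), using a per-device HMAC as the trust check:

```python
import hashlib
import hmac
import json

SHARED_KEY = b"per-device-secret"  # hypothetical key provisioned to one device

def sign_event(event: dict) -> dict:
    """Attach an HMAC so downstream systems can verify where the event came from."""
    body = json.dumps(event, sort_keys=True).encode()
    mac = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return dict(event, mac=mac)

def is_trusted(event: dict) -> bool:
    """Admit an event into the pipeline only if it is well-formed and authentic."""
    required = {"device_id", "timestamp", "value"}
    if not required <= event.keys() or "mac" not in event:
        return False  # malformed: reject before it pollutes the enterprise
    body = json.dumps({k: v for k, v in event.items() if k != "mac"},
                      sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(event["mac"], expected)

good = sign_event({"device_id": "pump-7", "timestamp": 1700000000, "value": 3.2})
bad = dict(good, value=999.9)  # tampered in transit: stale signature

assert is_trusted(good)
assert not is_trusted(bad)
```

A real deployment would layer on per-device key management, timestamp checks against replay, and range validation for the "well-formed but highly inaccurate" case, but the gatekeeping pattern is the same: verify first, ingest second.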
Implementing continuous learning loops
As systems collect and process data, organizations must also operationalize what they discover. Each consecutive insight should fuel the next in continuous learning loops. Adaptable data management services can ensure a reliable supply of clean, trusted data for building and training predictive models, turning these derived insights directly into improved system performance. Such systems turn incoming data into consistent, immutable and normalized records for machine learning tools, analytics tools and data scientists to process, and then deliver intelligent, operationalized insights to your enterprise integrations and IoT applications.
This method of data management enables your intelligence tools and teams to train and validate models for machine learning. When outputs of real-world integrated systems don’t match the predicted model, failure data is cycled back into the model retraining and validation process.
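This feedback cycle can be sketched as a simple drift monitor: compare each prediction against the observed outcome, bank the failures, and trigger retraining when the rolling error climbs past a threshold. Everything here is a hypothetical illustration (class name, window size and threshold are made up), and the retraining step is a placeholder for a real refit-and-validate pipeline:

```python
from collections import deque

class LearningLoop:
    """Cycle real-world outcomes back into the model when predictions drift."""

    def __init__(self, model, window=5, threshold=0.5):
        self.model = model
        self.errors = deque(maxlen=window)  # rolling window of recent errors
        self.threshold = threshold
        self.failures = []                  # failure data cycled into retraining
        self.retrain_count = 0

    def observe(self, features, actual):
        """Score one prediction against the observed outcome."""
        error = abs(self.model(features) - actual)
        self.errors.append(error)
        if error > self.threshold:
            self.failures.append((features, actual))
        window_full = len(self.errors) == self.errors.maxlen
        if window_full and sum(self.errors) / len(self.errors) > self.threshold:
            self._retrain()

    def _retrain(self):
        # Placeholder: a real system would refit on self.failures and
        # re-validate before redeploying the updated model.
        self.retrain_count += 1
        self.errors.clear()
        self.failures.clear()

# A deliberately wrong "model" so drift accumulates and retraining fires.
loop = LearningLoop(model=lambda x: 0.0, window=5, threshold=0.5)
for reading in range(10):
    loop.observe(features=reading, actual=1.0)

assert loop.retrain_count >= 1  # drift was detected and fed back
```

The essential point is that the loop closes automatically: mismatches between prediction and reality are not discarded but become the training data for the next model iteration.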
Save the humans
With IoT learning loops and predictive models, you can replace “scheduled maintenance” with “likelihood of impending failure,” thereby eliminating unnecessary visits and emergency repair calls. As equipment uptime and customer satisfaction increase, new sales tend to follow a similar growth pattern. Thus, adoption of IoT and automated monitoring of machines does not remove the need for human technicians. While it does mean the number of technicians needed per machine decreases, IoT enables organizations to increase the number of machines in the field along with their overall revenue potential.
Taking advantage of tomorrow
Will the IoT technology you invest in today deliver value over time as the world changes around you? That depends on whether your team assembles an application conforming strictly to the current model, or adopts a flexible interface atop adaptable data management services not bound to specific machines, data types or processing tools. Artificial intelligence and machine learning processes are improving at fantastic rates, faster than your internal systems and teams will be able to keep up with. Your chosen architecture should provide a stable interface for your business and data layer while enabling your organization to reap the benefits from improvements in IoT analytics and infrastructure.
To realize significant value over time, your organization’s IoT offering should be designed to learn and evolve, enabling your team to quickly adapt to new business opportunities and take advantage of higher performing technologies without requiring costly architectural updates.
As more and more ATMs become connected to the internet of things, the need to protect communications between disparate ATMs and bank processing centers is critical. Though the first ATM was unveiled 50 years ago, the basic components that make up an ATM have not changed significantly. Many banks still have 20th century ATMs in everyday use, which unfortunately increases the risks of cyberattacks. The use of outdated, insecure software is widespread, and mistakes in network configuration are common while critical physical components are often not properly guarded.
Search engines for internet-connected devices, such as Shodan, only exacerbate security risks, allowing anyone to find the ATMs that are the most vulnerable. Without properly secured connections, stealing money remotely from ATMs is the cybercrime equivalent of taking candy from a baby.
With the number of touchless attacks on ATMs on the rise, secure remote connectivity is vital for machine-to-machine (M2M) environments. Last year, several banks around the world were attacked by malware that allowed cybercriminals to take full control of cash machines. This technique, known as touchless jackpotting, requires no physical tampering. Instead, it allows cybercriminals to attack poorly protected ATMs remotely via the global ATM network completely undetected by security services.
Mitigating risks with VPNs
Despite some of the strictest regulatory obligations and their attractiveness to cybercriminals, it appears that retail banking is no different than any other sector in quickly moving forward with IoT while comprehensive security measures lag. Older ATMs that have recently been connected to M2M environments are particularly at risk.
Although most bank ATM networks use advanced encryption to protect the sensitive financial data being exchanged, the rise of remote ATM attacks shows that many banks still have protective measures to take. The first step in protecting connections between large numbers of disparate ATMs and bank processing centers is to utilize virtual private networks (VPNs), firewalls and MAC authentication.
Securing ATMs with VPNs comprises four essential components: automatic/always-on connectivity, authentication, central management and high availability.
With automatic/always-on connectivity, the VPN client is set to connect to the VPN automatically and remain connected. If a disconnect occurs, due to network downtime for example, the VPN client reestablishes the session as soon as the data connection comes back up.
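The reconnect behavior described here can be sketched as a retry loop with exponential backoff. The `connect` callable below stands in for a real VPN client's session call; everything in this example is illustrative:

```python
import time

# Hypothetical sketch of "always-on" reconnect logic: retry until the
# tunnel is reestablished, backing off exponentially between attempts.

def reconnect(connect, max_backoff=60, sleep=time.sleep):
    """Retry connect() until it succeeds; returns the number of attempts."""
    backoff, attempts = 1, 0
    while True:
        attempts += 1
        try:
            connect()          # reestablish the VPN session
            return attempts
        except ConnectionError:
            sleep(backoff)     # wait before the next attempt
            backoff = min(backoff * 2, max_backoff)

# Simulate a network that is down for two attempts, then recovers.
delays = []
failures = [ConnectionError, ConnectionError, None]
state = {"i": 0}

def flaky_connect():
    exc = failures[state["i"]]
    state["i"] += 1
    if exc:
        raise exc()

attempts = reconnect(flaky_connect, sleep=delays.append)
print(attempts, delays)  # 3 [1, 2]
```

A production client would cap total retries and alert operations staff, but the core loop, automatic retry with backoff and no human intervention, is what "always-on" means in practice.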
When it comes to authentication, ATM transactions use two or three authentication factors, such as the customer’s ATM card, their unique PIN and, in some cases, their fingerprint or retina scan. In modern ATMs, the customer’s smartcard, in combination with a smartcard reader inside the machine, provides another layer of security to assist the digital side of the authentication process.
Ultimately, ATM VPN connections should be centrally managed. A VPN management tool allows IT administrators to update configurations, upgrade software and manage certificates remotely. The only alternative is to perform the updates manually using a memory stick or CD, which requires giving someone physical access to every machine. Unfortunately, this can give those with criminal intent an opportunity to gain access to the machine, inject malicious software or attach a device inside the machine and take control over it.
Lastly, since connections between individual ATMs and the main network cannot afford downtime, high network availability provided by a professional VPN system and supported by several backup systems is essential.
As the internet of things starts to permeate every aspect of business, the need to protect both old and new ATMs in M2M environments is urgent. The age of some traditional ATMs and the primitive nature of the software they run on leaves additional security loopholes for cybercriminals to exploit.
The deployment of VPNs, coupled with the prompt patching of every server on the network, is essential to secure interactions between thousands of ATMs communicating with their data centers. Comprehensive VPN software fits easily into existing infrastructure and requires no additional hardware. Moreover, data traffic is secured at the device itself so that no unencrypted traffic ever leaves the endpoint.
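As a minimal sketch of securing traffic at the endpoint itself, consider a client-side TLS context built with Python's standard `ssl` module. The configuration shown is illustrative, not any vendor's actual setup:

```python
import ssl

# Hypothetical sketch: configure a TLS context on the endpoint so every
# byte is encrypted before it leaves the device. Settings are illustrative.

def build_endpoint_context() -> ssl.SSLContext:
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols
    ctx.check_hostname = True                     # verify the server's identity
    ctx.verify_mode = ssl.CERT_REQUIRED           # no unauthenticated peers
    ctx.load_default_certs()                      # trust the system CA store
    return ctx

ctx = build_endpoint_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

The key design point mirrors the article's claim: encryption is applied before data leaves the device, so even a compromised network segment between the ATM and the data center sees only ciphertext.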
Financial institutions can also stay protected by ensuring every device accessing their network has up-to-date firmware and by implementing network security technologies, such as intrusion prevention systems and firewalls, within a defense-in-depth framework to minimize potential attack vectors.
As analysts predict the number of M2M connected devices will grow from 12 billion to 50 billion by 2020, securing connections must be a top priority. Using a VPN enables endpoint devices to communicate through a secure encrypted tunnel, which makes it nearly impossible for an attacker to access an IoT device and breach a financial network.
I recently had the pleasure of speaking with Prabhu Ramachandran, who previously led WebNMS, the IoT division of Zoho Corporation. He was instrumental in building its IoT platform and shaping WebNMS’s strategy for enterprise IoT technologies. With his 18 years of experience in the telecom and IoT spaces, he shares his valuable views with us today.
You were a driving force in bringing IoT technologies to enterprises all across the globe. In your experience, what do you think about the scope of enterprise IoT in India?
Prabhu Ramachandran: Thank you. I feel the scope of IoT technology at enterprise scale is not exploited well enough in India. One of the main reasons would be the mindset of leaders who still feel that spending on IoT means just spending on devices. There is still some resistance to believing in the results.
In your opinion, what is the best strategy to advocate the use of IoT in an enterprise scale and attract customers?
Ramachandran: My theory here is you must make your value addition clear. This becomes easier if you analyze your customers’ spending patterns closely. You have to do your homework right by considering their quarterly revenue. Considering all such factors, you have to draw them a clear path to adoption. If you just showcase your IoT technology like a sci-fi movie at a conference, it won’t work.
What is your advice to practice leaders starting to offer IoT-based services?
Ramachandran: You have to define your market well and design industry-specific offerings. Say you design a general internet-connected device tracking system and now want to get this working. Instead of calling it an IoT product, you can rebrand creatively and call it a “next-generation cloud-based supply chain technology.” This creates more traction than a generic IoT service. Once you penetrate and prove your value addition, you can scale up.
What are the must-have technical know-hows these leaders must possess? What technical background helps these leaders the most?
Ramachandran: I feel developing IoT services is more about connecting the dots. There is no need to reinvent the wheel and build technology from scratch. Leaders should be aware of the latest cutting-edge technologies so that they don’t spend time rebuilding services that already exist. PubNub is a good example, as it provides complete infrastructure plumbing for real-time apps. The pros and cons of third-party APIs and platforms such as AWS, IBM Bluemix or Google Cloud should be well understood. Moreover, these companies add new features frequently. Leaders who keep themselves updated will win.
Everyone’s busy talking about what the next smartphone will do… when will we have “wireless” charging (that isn’t actually wireless) or an edge-to-edge buttonless touchscreen with curved glass. Everyone’s hyper-focused on this indispensable phone that they could never imagine a world without, but they’re all missing the point. Ten years ago, the technology world was upended when Apple released the iPhone. Many predicted its failure because they couldn’t foresee the convenience of having a computer and an internet connection (even if inferior to a laptop) in their pocket at every moment. Similarly, we’re all so blinded by being tethered to our smartphones that no one is planning for the future.
In the future, data connectivity will be ubiquitous and processing power will be off the charts (literally). You won’t have a super-powerful smartphone or laptop that can do everything because your watch will be powerful, your glasses will be powerful, your fridge will be powerful and who knows, maybe even your kitchen drawers will be fully automated. In a world truly surrounded by smart devices, including roads, automobiles, buildings and restaurants (not just smart registers or smart elevators), we won’t care about buying the latest iPhone the day it comes out because it won’t be our most important technology.
Time and time again, consumers have told us that power and specs aren’t everything. They prefer learning and consuming content from a mobile device that is dwarfed by their desktop because it’s right there. Similarly, as other devices become more intelligent and more connected, their convenience will diminish the utility of your smartphone. In the same way that it’s easier to ask Alexa when the bus is coming than to pull out your phone, launch an app and wait, it’s easier to open your smart fridge and have it suggest you make broccoli beef because your broccoli is wilting and you bought some meat at the butcher yesterday.
Today, it’s hard to imagine a future where you won’t need your smartphone because most connected IoT devices aren’t really connected. Yes, they have a data connection, but it’s dependent on your Wi-Fi or smartphone. And yes, they collect information, do things “automagically,” and are generally better than their “dumb” unconnected counterparts, but they don’t talk to each other. When you build a connected home today, you’ll probably set it up with Ring, Nest, Hue, Alexa, Sonos or HomePod speakers, and a Withings scale. Individually, they all make your life a little better, but when they clash, it can be frustrating. This is definitely a first-world problem, but it’s pretty annoying when you’re blasting music on your Sonos, you can’t hear the caller on your Ring and it takes you 30 seconds to mute your music.
In the future, your connected home will actually be connected. When your Ring sees someone approaching your door, even before they ring the doorbell, it’ll fade your music so you can hear your phone ring and see who’s there. Your fridge will know what’s in it and when your food is expiring, and your oven will be able to warn you before you burn your dinner. In fact, it’ll be connected to all the other sensors in your home, so it’ll know you’re in your bedroom and give you a verbal reminder only in that room, without disrupting your guests who are throughout the rest of your home. That’s what a connected experience should be.
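A toy event bus illustrates the kind of device-to-device coordination this scenario depends on. The device names and events below are invented for the example:

```python
# Hypothetical sketch of the connected-home scenario above: a tiny event
# bus where the doorbell camera's "person approaching" event automatically
# fades the speakers. Devices and events are illustrative.

class EventBus:
    def __init__(self):
        self.handlers = {}

    def subscribe(self, event, handler):
        self.handlers.setdefault(event, []).append(handler)

    def publish(self, event, **data):
        for handler in self.handlers.get(event, []):
            handler(**data)

speaker = {"volume": 80}

def fade_music(**_):
    speaker["volume"] = 15  # duck the music so you can hear the door

bus = EventBus()
bus.subscribe("person_approaching", fade_music)

# The doorbell camera detects someone before they even ring:
bus.publish("person_approaching", camera="front_door")
print(speaker["volume"])  # 15
```

The design choice that matters is that the camera knows nothing about the speaker; each device only publishes and subscribes to shared events, which is what lets heterogeneous products cooperate.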
We’re living in a world with smart sensors and smart devices that are all individually smart, but don’t yet know how to communicate with one another. Yes, you can import your data from Fitbit and Withings and combine it, but it isn’t automated, and it definitely isn’t easy for most people. To create a world with a ubiquity of unique smart devices that actually work cohesively, we need to first empower these devices to communicate with one another more easily. Connected platforms like HomeKit will enable these integrations through single sign-on, just like Apple did with its new TV platform.
As IoT devices and platforms continue to improve, everyone will soon be adopting these smarter devices at home and elsewhere. Each of these IoT devices will individually become more useful than pulling out your smartphone. In the same way many users report their Apple Watch reduces how often they check their phone, each new smart device will reduce the user’s dependency on their phone. In 10 years, this inflection point will begin when users start relying less and less on their smartphones, just as the iPhone did to the PC 10 years ago. In 20 years, we’ll think of the smartphone like an iPod today… a legacy piece of hardware that’s forgotten in the back of some drawer.
Healthcare is an area that holds great potential and is witnessing the growth of AI and machine learning technologies. Patient care and medical research and diagnosis are two areas that are ripe for such solutions.
Artificial intelligence to the aid of patient care
Patient care forms an integral and important part of hospitals. Apart from good doctors and facilities, how well a patient is taken care of becomes a huge differentiator. It is, however, a costly proposition for hospitals, as it involves dedicated time from nurses and other medical staff.
According to Bret Greenstein, vice president of Watson IoT platform at IBM, medical staff spend around 10% of their time answering basic questions from patients. The questions could be about lunch, visiting hours, hospital rules, doctors’ details and so on. On top of this, mundane activities such as lowering the lights, adjusting the room temperature, opening or drawing the curtains, adjusting the bed and probably a thousand other things all add up quickly to make the job of healthcare staff pretty demanding.
AI is stepping in to take over these mundane-yet-important tasks and free up some of the medical staff’s time, which they can then use more productively elsewhere.
IBM is using its Watson IoT platform to collaborate with Philadelphia’s Thomas Jefferson University Hospitals and Harman to develop smart speakers that respond to a patient’s commands. Upon hearing the patient say, “Watson, dim the lights,” the lights or other features of the room will adjust based on the command. Through voice commands, patients can also control the thermostat, ask the speakers to play soothing music and so on.
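The command handling in such a smart speaker can be sketched as simple intent routing. The phrases, wake word handling and room state below are hypothetical illustrations, not the actual IBM/Harman implementation:

```python
import re

# Hypothetical sketch of routing patient voice commands to room controls,
# in the spirit of the smart-speaker scenario above.

ROOM = {"lights": 100, "thermostat_f": 72, "music": None}

COMMANDS = [
    (re.compile(r"dim the lights"), lambda m: ROOM.update(lights=30)),
    (re.compile(r"set (?:the )?thermostat to (\d+)"),
     lambda m: ROOM.update(thermostat_f=int(m.group(1)))),
    (re.compile(r"play (.+)"), lambda m: ROOM.update(music=m.group(1))),
]

def handle(utterance: str) -> bool:
    """Dispatch the first matching command; returns True if handled."""
    text = utterance.lower().removeprefix("watson, ")
    for pattern, action in COMMANDS:
        match = pattern.search(text)
        if match:
            action(match)
            return True
    return False

handle("Watson, dim the lights")
handle("Watson, set the thermostat to 68")
print(ROOM["lights"], ROOM["thermostat_f"])  # 30 68
```

A real system would use a speech-to-text and natural-language pipeline rather than regular expressions, but the routing pattern, matching an utterance to a room action, is the same.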
AI healthcare systems are evolving to help patients even after they have been discharged and sent home. AI can predict much more accurately how long a patient will take to recover and what the best course of treatment is for a specific patient, and can also prompt patients to walk and exercise more, and so on.
Diagnosing medical conditions with AI
It’s not just patient care; AI can bring major benefits in other associated medical areas as well. Studying the vast abundance of information and diagnosing specific cases quickly, and as well as a human doctor does, is one such area.
Still a work in progress, IBM’s Watson is a great effort in that direction. Watson’s intelligence hinges on the data it absorbs. It’s a giant with an insatiable appetite for data and information. Watson began as an AI that was learning to read and write, and then was making educated guesses on Jeopardy. It is now gaining a better understanding of cancer, training with 20 top cancer institutes to learn more about oncology and genomics. Watson may be an AI in training, but it can read as many as 25 million papers in a week. Around 8,000 new research papers are published every day, a volume impossible for doctors to go through even over several months.
AI doesn’t just pick up information fast, it also learns to make appropriate analyses based on the information it has absorbed, just like a medical student. A test was done to see if Watson would identify the same genetic mutations as the Molecular Tumor Board. Based on an analysis of 1,000 patients, the AI provided the same recommendations in 99% of cases. In 30% of the patients, Watson offered a new insight that the physicians had not identified.
IBM’s Watson isn’t the only upcoming major healthcare AI innovation. Big companies such as Apple and Google and healthcare giants like GE Healthcare have made investments in the industry, and their technologies will further bring innovative disruptions to healthcare.
Challenges and future outlook
Not everything, however, is rosy. The leading healthcare AI platform, IBM’s Watson, still has a long way to go before establishing unquestionable credibility. The prime reason is the unavailability of data that can be fed to train Watson. Specific diseases need specific types of data, along with thousands of other contextual variables to consider. Gathering such data, which is both credible and exhaustive, is a huge challenge for any machine learning technology.
The challenges are plenty, but the rewards are going to be exponential. Patients could rely on smart AI-based assistants so that they do not need to depend on anyone to carry out basic tasks. AI-assisted diagnosis, similarly, could help doctors a great deal by delivering useful, quality advice in time.