When you take a train, you expect it to drop you off on time and without being squashed like a sardine in a can. That doesn’t sound like much to ask, yet it is. The public transport sector struggles to have its trains, buses and ferries run on time and prevent breakdowns and accidents, all while keeping travelers well informed and satisfied. Many of these problems could be avoided by better maintenance. Sadly, maintenance done right takes time and money, and maintenance done wrong takes more time, more money and sometimes even lives. If only there were a way to tailor your repairs and shorten downtime. Oh wait. There is. Welcome to the era of the internet of things.
Public transport vehicles are taken off the road for maintenance regularly. This downtime is crucial, as deterioration such as worn wheels and bad brakes can cause delays or even fatal accidents. To avoid any risk, vehicles are inspected every couple of months, even if there’s nothing wrong. Mechanics check them against a standard checklist, clear them for the road and send them on their way. This scheduled approach to predictive maintenance has two major disadvantages. First, a lot of time and money is spent maintaining trains and buses that don’t need it. Second, a standard checklist is not always sufficient, as public transport vehicles are subject to many different circumstances. They differ in rides per day, occupancy level and the kind of service they provide. City buses, for example, will show different wear and tear compared to long-distance buses. Moreover, weather conditions have a large impact on the state of vehicles and can differ per region. Regularly scheduled maintenance may help public transport companies fix the larger part of the defects; it won’t help them improve their services.
Don’t predict the state of the engines, know the state of the engines.
The internet way of fixing things
There’s nothing wrong with predictive maintenance; there’s something wrong with the way people make the predictions. Today, most decisions in public transport maintenance are based on past experience and historical data. But as I pointed out, there’s just too much variation in the way public transport vehicles are used. If you really want to know the state of your buses, trains or whatever it is you’re driving, you should look at your data in real time. Don’t predict the state of the engines, know the state of the engines. With this information, you can schedule tailor-made maintenance for the vehicles that need it and leave the rest alone. How do you do it? You simply do what everybody else does when there’s a problem: you include the internet. By attaching sensors to the different parts of your vehicles (think engines, brakes, batteries), you can request information about them wherever and whenever you want. APIs will gather the input and translate it into workable data that you can use to set the follow-up in motion. To do so, you determine thresholds per vehicle part, so that when a parameter goes out of the normal range, the vehicle is taken off the road for maintenance.
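The threshold step described above can be sketched in a few lines of Python. This is a minimal illustration; the part names and limit values are invented for the example, not real maintenance parameters.

```python
# Minimal sketch of per-part threshold monitoring for a vehicle.
# Part names and limits are invented example values, not real fleet data.

THRESHOLDS = {
    "brake_pad_mm": (3.0, None),     # flag if pad thickness drops below 3 mm
    "engine_temp_c": (None, 110.0),  # flag if temperature rises above 110 C
    "battery_v": (11.5, None),       # flag if voltage drops below 11.5 V
}

def parts_needing_service(readings):
    """Return the parts whose latest sensor reading is out of its normal range."""
    flagged = []
    for part, value in readings.items():
        low, high = THRESHOLDS[part]
        if (low is not None and value < low) or (high is not None and value > high):
            flagged.append(part)
    return flagged

# One bus's latest readings: worn brake pads, everything else healthy.
bus_42 = {"brake_pad_mm": 2.4, "engine_temp_c": 95.0, "battery_v": 12.6}
print(parts_needing_service(bus_42))  # ['brake_pad_mm']
```

In a real deployment, the readings would arrive through the sensor APIs described above, and a flagged part would trigger a maintenance booking rather than a print statement.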
The long list of benefits
There’s more to this new way of predictive maintenance than you might think. First of all, tailoring your services based on real-time data will help you save costs. When mechanics know what needs to be done beforehand, they can skip the standard checklist and move straight to the actual defects. This shortens the downtime of the bus or train and gets it back on the road faster. Second, it will improve the reliability of the vehicle thanks to better-targeted maintenance and real-time insights. From now on, you will know about low tire pressure before the tire deflates, which results in fewer unplanned stops and less unexpected downtime. Another advantage is that you don’t need as many mechanics and parts as you used to, because you only need them when there’s an actual defect or a crossed threshold. Lastly, you can optimize the way you communicate with travelers. When you use the internet to monitor the state of your vehicles and to schedule maintenance, you can also use it to inform people about available buses and trains and where to find them.
IT makes it happen
What does internet-based predictive maintenance look like in real life? I personally like the story of Trenitalia. It uses the internet of things, sensors and data analysis to keep its trains in top shape, and it has been able to shorten downtime, cut costs and make better predictions. It is one of many success stories, and it proves that we’re close to completely changing the public transport sector. And the sector needs it: no other industry moves more people from one place to another. Travelers want to be transported in the safest and fastest way possible, with as much information about their journey as possible. I truly believe that IT can make that happen. Switching to such a digital strategy requires a lot of thought and planning, though. If you want to disrupt your sector by staying one (or two, maybe three) steps ahead of everyone else, you need technology, knowledge and, most importantly, a cultural switch within your organization. Take all that, mix it with IoT and a great (data) infrastructure, and you will rule public transport.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
UPS has saved 10 million gallons of gas, emits 22,000 tons less carbon dioxide, and delivers 350,000 more packages every year — simply by avoiding left turns. Sounds simple, right? The concept was introduced by George Dantzig in 1959, and since then, only 10% of turns made by drivers are left turns, resulting in fewer miles traveled, less time idling and fewer accidents from waiting for a gap in traffic.
Dantzig was able to find a point of friction in his business — left turns — and apply a new approach that saves time, money and valuable gas. Here’s how you can find those points of friction in your business and eliminate them with creative thinking and strategic problem solving.
1. Approach the search with an open mind
Finding areas of waste, whether that’s gas, like it is for UPS, or time, like it might be in your business, isn’t the easiest news to take in. You want to believe you’re doing everything right already. Checking your ego at the door is a helpful step in making this process effective (and in being able to acknowledge where some areas might be underperforming, even if it’s not the most welcome information).
Look at the areas of your business that you struggle with most. Maybe it’s parts fulfillment, maybe it’s scheduling, maybe it’s hiring. Can you break that down and approach it in a different way? Then, invite new minds into the conversation. Hootsuite appointed a “director of getting shit done” with the goal of eliminating painful processes (like streamlining the process for sending a promotional t-shirt — before, once everyone on the org chart had approved the purchase, that shirt was costing the company over $200 in time and effort).
Medical device and service company Elekta is a great example of this. The company spotted a few areas of potential improvement around the service of its equipment — and using field service management, Elekta has been able to predict outages and address them proactively. In many cases, repairs on equipment could be made prior to equipment failure, which reduced downtime and addressed patients’ needs for quicker, more efficient service. Saving UPS-scale resources over time requires making some unexpected decisions.
2. Listen to your customers
Break out those NPS scores — customers are good at telling you what they do and don’t like. Maybe they aren’t getting responses fast enough. Maybe the on-site service is lacking. Maybe they hate that it takes forever to get an invoice. See which of those problems you can solve on a tactical level. Think about the one thing your customers complain about that you feel like you can’t fix. For example, maybe your techs are always late — you can’t solve that, you aren’t in the car with them. But maybe they’re late because they have to fill out a long series of pre-repair forms before walking in to look at a machine. Follow the customer’s nose to the problem — that’s the best place to apply “what would UPS do?” thinking.
But don’t ignore the positives. If there’s something customers love, like your knowledgeable account managers, try applying that to other lacking areas of the business through education or training resources.
3. Look to the data
Analytics has moved from the preserve of old-school business intelligence firms to a more democratized view of the numbers. Now, every business has data on itself; it’s your duty to use it. Start with your improvements over time. What new factors (new hires, new tech, new systems) affected the rate of change? Which ones didn’t work out as planned? Digging into these unexpected data points can lead to new kinds of thinking and improved processes.
Another good place to start looking at your data might be the extremes — look at the activities of your best tech and your worst tech, your best site and your worst site. Where are the numbers farthest apart? That’s where there are areas for improvement. If one tech takes two minutes to file a report and one takes 20, consider that an outlier. If one takes two and one takes six, they’re both closer to the baseline. Dramatic differences are the best place to look for new processes and solutions.
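That extremes-first habit is easy to automate. As a toy illustration (the numbers and names here are made up), flag any value that sits far from the group’s baseline:

```python
# Flag "dramatic differences": values far above the group's median baseline.
from statistics import median

def outliers(times, factor=2.0):
    """Return entries more than `factor` times the median value."""
    baseline = median(times.values())
    return {name: t for name, t in times.items() if t > factor * baseline}

# Minutes each tech takes to file a report (invented numbers).
report_minutes = {"tech_a": 2, "tech_b": 3, "tech_c": 20, "tech_d": 4}
print(outliers(report_minutes))  # {'tech_c': 20}
```

The tech taking 20 minutes stands out against a 3.5-minute baseline; the twos, threes and fours do not. That is exactly the "two minutes versus 20" outlier described above.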
And don’t be afraid to look to emerging technology like AI and predictive analytics to learn about and optimize your business. These new technologies can add layers of productivity that eliminate wasted time or resources. For example, AI allows companies to invest in remote training, rather than flying people around the country, saving thousands of dollars in flight costs and lost productivity.
The truth is, every business has areas it can improve on. But finding them is a matter of knowing how, and where, to look. Be reflective and unbiased in the scrutiny of your business, and take note of the positives too. Finally, understanding your data can yield huge business insights. The internet of things means everyone has access to it, not just business intelligence firms, so make sure you leverage it properly. You might have to quit making left turns soon, but it will be worth it.
Often, I get the feeling that the future is already here. My father’s 1970 Ford Escort was manual; it didn’t have air conditioning, or even seat belts in the back seat, because who wore those back in the day? For some reason, though, it never crossed his mind that there was anything unsafe about driving around in his brand-new, shiny white Ford without a seat belt on. Who would have thought that 45 years later electric cars with Tesla-grade technology would be on the market, and that the next big thing in automotive would be autonomous vehicles?
The futuristic revolution unfolding in the automotive industry holds great promise, but it also poses significant threats. Not just to the individual, but to entire nations on a global scale. Now, before you accuse me of exaggerating, let me explain. These smart and connected cars are essentially mobile IoT devices that remain an integral part of the automotive manufacturer’s organizational network long after they leave the dealership.
This translates into technological dependence on the manufacturer to confirm that the car is secure, that its software is patched and that there are no ways for hackers to carry out attacks that could potentially put lives at risk. To put some of these concerns into context, in 2015 two hackers, Charlie Miller and Chris Valasek, were able to hack the Uconnect system in a Jeep Cherokee, cutting the vehicle’s transmission and brakes while it was in motion. The duo completed the hacks, which also included remotely commandeering the wheel while the vehicle was in reverse (terrifying!), and presented the severe vulnerabilities they had discovered in this “smart car” to the automotive community. The hack was carried out in a test environment, but if the vehicle had been on the road, it could have had severe, life-threatening consequences.
It’s safe to say that the latter example is only a preview of the level of risk inherent in self-driving cars. Because the entire process and skill of driving is automated, there is significantly more room for dangerous hacks. What if the passenger falls asleep? They might end up causing a major accident or tragedy without even knowing it. Could these drivers be held accountable for negligence? Who is to blame in such a terrible situation — the passive passenger or the automobile manufacturer?
Charlie Miller, the same researcher who discovered the smart car vulnerability, was later tasked with researching potential security breaches in autonomous vehicles. His conclusion, which we should probably heed: “Autonomous vehicles are at the apex of all the terrible things that can go wrong.” Because autonomous cars are at the mercy of computers, there is even more room for hacking operations, potentially on the scale of a full-force terror attack.
Given the myriad security challenges inherent in automotive transformation, how can car manufacturers safely drive their networks? Protecting cars against these ever-evolving threats must be an on-going and active pursuit. Public key infrastructure (PKI) authentication is one way to address the security of automotive networks. PKI’s role in IoT is to provide robust authentication, using appropriate certificates that systems, devices, applications and users need to safely interact and exchange sensitive data. If connected cars are not developed with proper security measures, they will not stand a chance against the attackers waiting beyond the assembly line.
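As a small illustration of the PKI idea, here is how a connected car’s client software might set up certificate-based authentication using Python’s standard ssl module. This is a sketch, not a real automotive stack; the commented-out file paths are placeholders for whatever CA and per-vehicle certificates a manufacturer would actually provision.

```python
# Sketch of certificate-based (PKI) authentication for a connected car,
# using Python's standard ssl module. File paths are placeholders only.
import ssl

def vehicle_context():
    """TLS context a car could use to talk to the manufacturer's backend."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    # A client context verifies the server's certificate chain and hostname
    # by default, so an impostor backend is rejected outright.
    # ctx.load_verify_locations("manufacturer_ca.pem")            # fleet CA
    # ctx.load_cert_chain("vehicle_cert.pem", "vehicle_key.pem")  # car's identity
    return ctx

ctx = vehicle_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED, ctx.check_hostname)  # True True
```

The same pattern runs in reverse on the backend, which demands a valid vehicle certificate before accepting telemetry or pushing a software update.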
Connected and autonomous vehicles are here, heralding the most significant technological revolution since the invention of the automobile itself. However, the thought that they pose significant security threats — not just to information, but to physical people — makes me want to get back in the back seat of my father’s old Ford. True, we didn’t have seat belts back then, but at least I trusted the driver.
Mesh networking is key to the success of the internet of things. Compared with traditional wired counterparts, mesh networks are less expensive to implement, more adaptable and scalable, and offer greater reliability for high-traffic implementations. While niche technologies such as Thread and ZigBee have standardized mesh support, their limited install base and unpredictable performance, reliability, scalability and interoperability have kept an industrial-grade technology from emerging for large-scale mesh networking scenarios like building automation.
In contrast, Bluetooth mesh, released in July 2017, was specifically designed to address the unique requirements of commercial and industrial networks, and offers a wider range of applications than current technologies. In a recent white paper, Ericsson tested the capabilities of Bluetooth mesh and its ability to provide the unique features that commercial industries require. The goal of this test was to illustrate the impact of configuration and deployment strategies on a Bluetooth mesh network to determine if thousands of nodes could potentially communicate with no single point of failure, and guarantee interoperability. Specifically, Ericsson looked at the managed flooding communication model of Bluetooth mesh.
Ericsson describes managed flood as a message relay technique in which “all nodes are asynchronously deployed and can talk to each other directly. After provisioning them, the network simply starts working and does not require any centralized operation — no coordination is required and there is no single point of failure. A group of nodes can be efficiently addressed with a single command, making dissemination and collection of information fast and reliable.” This approach offers flexibility in deployment and operation.
Using a capillary network, Ericsson executed a large-scale building automation test case and a full-stack implementation of Bluetooth mesh in a system-level simulator to determine whether high congestion, which may cause packet loss for contention-based access in the unlicensed spectrum, is a legitimate concern. This office automation scenario included a total of 879 devices, including window sensors, occupancy sensors, HVAC sensors and actuators, light switches and lightbulbs, all deployed in an area of 2,000 square meters. The network performance evaluation covered three traffic setups: a low-traffic case with aggregate application throughput of ~150 bps, a medium-traffic case at ~1 Kbps, and a high-traffic case at ~3 Kbps.
The most crucial performance metric of any mesh network is its quality of service (QoS), which Ericsson defines as the ratio of transmitted packets that reach their end destination within humanly perceivable time (300 milliseconds in this case, a typical requirement for lighting applications). The QoS of the Bluetooth mesh network reached the expected level of >99.9% for five of the six tested cases and 99.1% for the final dense relay deployment with high traffic. In all cases, all endpoints were reached within the 300-millisecond delay. Overall, Ericsson saw the best performance when deploying six relays every 1,000 square meters, corresponding to roughly 1.5% of the total number of nodes.
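The relay-density figure is easy to sanity-check from the numbers reported in the white paper:

```python
# Back-of-the-envelope check of Ericsson's relay-density figure.
area_m2 = 2000           # office deployment area from the test case
nodes = 879              # total devices in the scenario
relays_per_1000_m2 = 6   # best-performing relay density

relays = relays_per_1000_m2 * area_m2 / 1000
fraction_pct = 100 * relays / nodes
print(f"{int(relays)} relays = {fraction_pct:.1f}% of nodes")  # 12 relays = 1.4% of nodes
```

That works out to about 1.4% of the nodes acting as relays, in line with the roughly 1.5% figure Ericsson cites.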
Ericsson concluded that, with proper deployment and configuration of relevant parameters of the protocol stack, Bluetooth mesh supports the operation of dense networks with thousands of devices, overcoming any initial concerns regarding high congestion. Ericsson also argued that Bluetooth, with the release of the Bluetooth mesh specification, is a strong candidate to become the dominant short-range technology to connect edge nodes in a capillary network that uses short-range radio-access technologies to provide groups of devices with wide-area connectivity.
As Ericsson explained in the white paper, “Bluetooth mesh is standardized and interoperable by design. Qualification and interoperability testing is rigorous and involves all aspects of the protocol stack, including security. There is no risk of companies developing separate processes for different parts of the stack. Moreover, the specifications are open and can be tested by the community.” This, coupled with the larger Bluetooth footprint, as well as the value-added capabilities of Bluetooth technology for providing localized information, asset tracking, way-finding services and more, gives Bluetooth mesh the potential to be quickly adopted by the market.
You can learn more about the architecture and network configuration options used in the building automation test case by reading the full white paper on the Ericsson site. To learn more about the technical details of Bluetooth mesh, download “Bluetooth Mesh Networking — An Introduction for Developers.” This comprehensive technology overview examines the key concepts and terminology, system architecture and security mechanisms, as well as the unique message publication and delivery technique behind Bluetooth mesh networking.
In the United States, around 200,000 manned U.S. general aviation aircraft have been registered over the last 50 years. By contrast, 750,000 unmanned aircraft systems — aka drones — have now been registered, including more than 40,000 in the last two weeks of December 2016 alone. And that’s just one country. It exemplifies the dramatic influx of “things,” which carries unprecedented opportunity for digital disruption. They’re typically full of sensors, increasingly connected, produce enormous amounts of data and can be the source of newer, smarter business models that touch every industry. For example, in the past decade, wind turbines have quickly evolved from isolated standalone machines to connected, sensor-laden, intelligent devices. One of the largest suppliers of wind energy, Vestas, has 60,000 turbines, and the newest ones have 1,000 embedded sensors each, all capable of emitting real-time streams of data. How can we harness the power of such massive amounts of real-time, streaming sensor information? How do we manage computation at drone scale?
Drone-scale computing requires new thinking. Over a decade ago, sensor-based computing research began through funding by the United States intelligence agencies at MIT; in parallel, academics at Stanford and Cambridge also explored how sensors would change the computing physics of moving data. That academic research spawned commercial and open source stream computing technologies, and some have evolved into streaming AI engines. Just this past year, Forrester included streaming analytics as part of its next-generation business intelligence category, which it calls “systems of insight.” While last-generation business intelligence is focused on putting data on a graph, systems of insight are focused on generating insight and directing action — and are being deployed for transportation and logistics systems, digital customer engagement and intelligent industrial IoT applications.
A system of insight is like the human nervous system: AI is the brain, IoT sensors are your senses, middleware is your skeletal system and streaming analytics complete the autonomic nervous system’s function.
The brain is fueled by sensory input. In a system of insight, IoT sensors are like the nerve endings in your fingertips. Signals are captured and distributed throughout the nervous system. But our IoT sensor nervous system is evolving. Today, many IoT applications must rely on decades-old SCADA networks. Thanks to drone-scale requirements and improving networks, that archaic IoT fabric is being replaced by a new infrastructure. AI is now used to produce algorithms, which must be injected into the network nervous system to enhance its capability. Firms like GE, Siemens, Rolls-Royce, Syniverse and QIO are all in a sensor network arms race to create smarter IoT networks that can extract and transmit data from sensors in real time. The Vestas implementation captures terabytes of sensory input every day from its wind turbines to continuously train algorithms that instruct the turbines on how to react to wind and atmospheric conditions and optimize power production. Vestas considers that system of insight one of its most valuable — and secret — corporate assets.
Streaming analytics is analogous to your autonomic nervous system. You don’t think about taking your hand off a hot stove, you just do it. Athletes rely on their nervous system to strike a golf ball, fake out a defender or make a behind-the-back pass to a teammate. Similarly, drone-scale systems require autonomic reactions to conditions. Streaming analytics running trained algorithms provide this automatic intelligence in action for IoT systems and guide delivery drones to, say, avoid colliding with one another. A drone doesn’t have to “think” about it, it just has to move. Such prowess is achieved when network operators and businesses combine multiple sources of streaming data — for example, correlating mobile transactions with drone movements and deliveries. This creates real-time “game awareness” like that of great athletes and coaches who see the whole picture and make smart choices: Do I run a different play, choose a different receiver or call a timeout? Algorithmic muscle memory and algorithmic game awareness depend on correlated streaming AI analytical models.
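A minimal sketch of that autonomic reflex: each reading is processed the instant it arrives and a reaction is emitted immediately, with no batch step in between. The separation threshold and window size are invented example values, not parameters from any real drone system.

```python
# Autonomic streaming reaction: act on each sensor reading as it arrives.
# The separation threshold and window size are invented example values.
from collections import deque

MIN_SEPARATION_M = 10.0  # hypothetical safe distance between two drones

def monitor(distance_stream):
    """Yield an evasive command the moment smoothed separation drops too low."""
    recent = deque(maxlen=3)  # short window to smooth noisy readings
    for d in distance_stream:
        recent.append(d)
        smoothed = sum(recent) / len(recent)
        yield "evade" if smoothed < MIN_SEPARATION_M else "hold"

readings = [40.0, 30.0, 9.0, 7.0, 6.0]  # two drones closing on each other
print(list(monitor(readings)))  # ['hold', 'hold', 'hold', 'hold', 'evade']
```

The point is the shape of the computation: no database round trip, no overnight batch job, just a reflex applied to the live stream.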
Our body’s skeletal system differs somewhat from IoT skeletal systems — for one, our bodies don’t have to worry about hackers and security breaches (yet!). All automated IoT systems face this challenge, as the importance of a connected business nervous system makes for a big target. IoT systems must be as secure as they are smart, from the edge (on a device) to the core (for network-wide visibility). To function at drone-scale, security must be algorithmic and automated, and incredibly diligent as well. In 2012, the automated trading system for Knight Capital lost $440 million in 40 minutes due to an automation flaw. Algorithmic security is one of the most important new fields of our IoT-powered future.
The final question is, with all this intelligence and automation built into the new drone-scale nervous system, are humans still needed? The answer is absolutely yes! But the role of the human shifts. While repetitious, manual tasks become more automated, new, even smarter roles for humans emerge. For example, on Wall Street, where 80% of the world’s trading activity is algorithmic, humans are more essential than ever. The trading pits of the ’90s may be gone, but humans now play higher-value customer relationship roles — even average-sized trading firms manage billions of events a day, so algorithms augment the ability of human staff to decide whom to help, whom to upsell and who is at risk. Those customer advisory algorithms, like the trading automation itself, are one reason for the dramatic rise in demand for data scientists. So, in a drone-powered world, the role of humans will be more important than ever, but the job descriptions will change to best utilize the new tooling.
The next generation of IoT-powered business will emulate the human nervous system from the edge to the core of the IoT fabric in the cloud. The opportunities to leverage this nervous system are endless across all industries. “Things” that were once isolated — from your coffeemaker to wind turbines to drones — now represent the biggest disruption opportunity of the future. We are entering a world where automated, AI-driven action happens faster than the blink of an eye, embedded in streaming IoT sensor data and in flight all around us. Are you cleared for takeoff?
The Illinois Technology Association (ITA) has been around for a while, effectively promoting technology in Chicago and greater Illinois. Having been a CEO of several software companies and living in Chicago, this was an organization that attracted my attention a long time ago and I subsequently joined the board. Around 2010, shortly after the sale of my second company, I joined the executive board of the ITA. Around that time, I also became CEO of a company called Infobright, offering an analytic database. The interesting thing about Infobright was that it excelled in storing and analyzing machine data, which meant a lot of time with adtech firms and in networking and telecommunications, but it also pointed me towards the internet of things. This became my passion. The combination of my role on the ITA board and my increasing infatuation with IoT led to championing the ITA Midwest IoT Summit in 2013. On the executive board, we all agreed that IoT was unique in that it was the intersection of information technology with operational technology, so where IoT “happened” was important. As it happens, the Midwest has a great industrial, automotive, medtech, transportation, agricultural and even retail footprint, and was even beginning to gear up efforts around smart city initiatives in Chicago. In other words, this was a prime region where IoT should happen. So we set out to embrace and communicate that to all on a cold, late November day in 2013.
The first Summit had five panels, was held at a law firm and, by all accounts, was a big success. Who didn’t want to learn more about IoT in late 2013? It was the cool buzzword, and we all wanted to know more. So naturally, we held the Summit again in 2014, only this time it moved to a new venue and doubled in size to 250 attendees (the facility limit, thank you fire marshals). We had seven panels in an all-day IoT binge, but it was great. We explored various domains, such as smart connected cars, industrial IoT, smart health, smart homes and smart grids, but also looked deeper into security and data privacy. This was clearly a thought-leadership conference. No sales presentations. No ridiculous panels where each panelist shows “a few introduction slides” lasting 50 minutes, followed by a discussion for the remaining five. It was healthy, thoughtful discussion and debate by people who knew their craft.
In the wake of the 2014 Summit, we knew IoT was becoming the thing in the Midwest. We also believed that, given the nature of the industries in Chicago and the broader Midwest, IoT could and would play an increasingly important role in ensuring the robustness of the region, from businesses to government to quality of life. This realization manifested itself in the formation of the Midwest IoT Council in early 2015. The Council has a board, which includes both smaller and larger companies across a variety of disciplines focused on IoT. We also have working committees that put together a Midwest IoT company inventory and case studies, track Midwest-based IoT research, work to bring IoT investment capital into the Midwest, analyze companies’ IoT talent and education needs alongside the associated offerings from universities throughout the Midwest, assess IoT-related policies at the federal, state and local levels (at times providing positions and support for certain initiatives), and run the Midwest IoT Summit, held each year since the Council’s inception. We have also begun to establish remote affiliate chapters of the Council and will have six affiliates up and running around the Midwest by mid-2018. Last, we are beginning to establish “domain-specific groups,” pairing several startups in a particular vertical (for example, smart healthcare) with larger established organizations in that same vertical; this effort will naturally extend to the affiliate chapters as well. Hundreds of companies and over a thousand people have gotten involved in these efforts. The 2015 and 2016 Summits each grew larger than the last. We also added the First Analysis Capital Conference as an element of the overall Summit, furthering our effort to attract IoT-related capital to the region.
On October 18 and 19, we will hold the fifth annual Midwest IoT Summit. The momentum is palpable, and there is a growing list of success stories. This year we will look at data ownership and governance and the related shift in the market from a focus on IoT-enabled products to becoming an IoT-enabled organization. We will explore the evolution of IoT security from the chip to the cloud and everywhere in between, as well as the growing recognition that blockchain may well become an integral part of IoT. We will debate the changing communications landscape, from 5G to LoRa and beyond. We will explore the role of machine learning and the status and direction of analytics for IoT. We will discuss the role ROI has in moving IoT from pilots to mainstream. We will chart the evolution of IoT platforms and where they are going in the future. We will look at industrial IoT, connected and autonomous cars, smart cities, smart health, smart homes and buildings, and agriculture. We will look at some of the newly emerging technologies, like voice, and their widespread use. We will discuss Council findings on talent development and policy analysis. And we will look back at the path we have traveled, and showcase numerous success stories of IoT in the Midwest. We will hear from two of the top gubernatorial candidates for Illinois in 2018 on their views on the role of technology in Illinois, and we will close with a discussion on the future, focusing on the democratization of IoT.
This has been a long but gratifying road for me personally. I think I am safe in speaking as well for co-chair Brenna Berman, the executive director of City Digital, and the rest of the Council board in saying that the excitement around this conference, and around IoT in the Midwest, has grown year after year. The Illinois Technology Association and the Midwest IoT Council have come a long way since we began to explore, promote and develop IoT in the region in 2013. It is truly a collective effort of a variety of companies from a variety of industries working to promote IoT and make the region a better place. And there should be no better place to be, on the 18th and 19th of October, than the Summit. We hope to see you there.
For generations, flying cars have been a fixture of science fiction and thus a benchmark of the future. And they’ve remained largely just that: science fiction. But if we broaden what we mean by flying cars, then they technically exist, and have for some time.
Take the Terrafugia Transition, which, though a hybrid concept, is basically a private plane whose wings can fold up for driving on the road. Likewise, the Flight Design CT series, with its characteristic tricycle undercarriage, has been around since the ’90s. Most recently, the ultralight Kitty Hawk Flyer V1 has demonstrated its vertical take-off and landing (VTOL) capabilities over bodies of water — a harbinger, perhaps, for more to come.
But these aren’t the enclosed-cabin vehicles that hurtle along three-dimensional highways and seamlessly land in heavily populated areas, as foretold by The Jetsons. Enter Uber Elevate and its endeavors to accomplish all of these feats, more or less.
Bringing the space age to this age
In April, the company unveiled plans to test flying cars in Dallas-Fort Worth, Texas, and Dubai by 2020, as part of its new project, Uber Elevate.
Its skyward initiative hopes to shuttle riders through the air using VTOL technology, covering about 50 miles in 15 minutes. Unlike the proposed aircraft’s closest proxy, the helicopter, these machines will operate without the associated noisiness and emissions, and far more of them will be in the sky.
It goes without saying that Uber has its work cut out for it, especially given that the technology necessary to support its efforts doesn’t currently exist. On top of that, the company faces myriad other obstacles, including infrastructure, operational costs, required licensing, use of airspace and general safety.
But rather than going it alone, Uber intends to prod manufacturers and regulatory bodies alike — in effect, catalyzing innovations so that on-demand aviation becomes a reality sooner rather than later. While the deployment of such a system may seem a tad too ambitious to some, Uber is uber-confident in its success.
Will these actually be flying cars?
The Elevate initiative, however, doesn’t actually use the term “flying car” anywhere in its white papers. What it proposes are VTOL aircraft akin to drones, which employ distributed electric propulsion (DEP) technology to address issues like noise and vehicle performance, plus autonomous systems to eliminate driver error and enable safe air-taxi operations in dense urban areas.
Of late, VTOL and DEP technologies have become all the rage among developers, many of whom are already flying early prototypes. Such an emerging ecosystem of electric VTOL tech — which includes the likes of Lilium, Airbus A³’s Vahana and the Joby S2 — may inspire a degree of optimism in the face of naysayers.
But isn’t it expensive?
In short, yes. Uber is working with its partners to find ways to make VTOL aircraft safe, efficient and, of course, affordable. In addition, it has joined forces with Dubai’s Roads and Transport Authority to conduct studies on how best to optimize traffic flow and establish pricing models.
The company admits that it’ll be costly to get the ball rolling (or, in this case, soaring), but reasons that its proven rideshare model for ground vehicles will similarly assuage the initially high expenses of air taxis. It surmises that once the ride-sharing service begins, the resulting “positive feedback loop” should help reduce operational costs and thereby air-taxi fares for customers.
The Elevate project also boasts the potential cost-effectiveness of VTOL networks as compared to traditional roadway infrastructure. Such a system, the company argues, wouldn’t require roads, bridges, rails, parking garages or sound buffers.
Instead, it raises the idea of simply repurposing, say, the tops of parking garages, helipads or other similarly unused real estate or land, as vertiports. Roads, tracks and all else wouldn’t be necessary to get from point A to point B.
Won’t they be loud?
Noisiness has always been a big concern for aviation. Both airplanes and helicopters are inherently loud, and property values in neighborhoods close to airports are typically lower due to the incessant sonic disturbance. The prospect of aircraft assaulting the skies a hundredfold may feel rather daunting.
Fortunately, DEP also shows promise as a noise buffer. The biggest issue with helicopters — as yet the closest existing equivalent to Uber’s intended aircraft — is their rotor design, which produces loud noise. DEP enables alternative design considerations that still bolster vehicle efficiency while drastically reducing noise.
One objective is to set a minimum cruising altitude of 500 feet, at which the sound should be “about one-fourth as loud as the smallest four-seat helicopter currently on the market.” But given the sheer number of aircraft in the sky, that alone may not suffice to curb noise.
Luckily, there are existing methods for measuring sound “annoyance,” established by the FAA, which could be leveraged to analyze and tailor aircraft and vertiports around “acceptable operational noise levels.”
Further, Uber is teaming up with Siemens to run noise comparisons between electric and combustion engines. Siemens recently unveiled an electric plane that is not only significantly quieter than comparable combustion-engine aircraft, but also broke the world speed record for battery-powered aircraft at over 200 mph.
Is it safe?
In order for any of this to work, regulations regarding vehicle design standards, maintenance, pilot licensing and inter-jurisdictional travel will need to be established. Not to mention, flying en masse from on high requires reducing driver error to near zero. And, of course, the only way to eliminate driver error is to, in fact, eliminate drivers.
As mentioned earlier, a key ingredient of the Elevate initiative is autonomous technology. Just as Tesla is developing its self-driving tech in phases, VTOL autonomy would be integrated over time. Compared with cars on the ground, however, urban airspace is largely open and unobstructed, which could make the process faster.
When it comes to urban airspace, Uber maintains that existing air traffic control (ATC) systems would be able to accommodate hundreds of aircraft. Still, to operate at the much higher frequency Uber intends, ATC systems would need to scale up significantly and implement new systems.
Airspace density isn’t the only hurdle; there is also bad weather, which could erratically disrupt takeoff times for large fleets. Given that Uber’s value prop is “saving time,” that is an area that needs heavy consideration.
Will this really happen?
Uber plans on having air taxis in operation by as early as 2023, provided all critical elements successfully come together.
And if massive VTOL networks — or more colloquially, “flying cars” — do become part of our daily lives, the results could be staggering. After all, Elevate’s main goal is to alleviate congestion on the ground, which, in turn, may alleviate pollution, environmental strain, highway accidents, blood pressure and even auto insurance premiums. These, in addition to a slew of other unforeseen effects — positive or otherwise.
Unlike in the 1950s, the Golden Age of Futurism, when optimism was both hopeful and fanciful, Uber has laid down a serious blueprint for how to make flying cars a real thing.
Next steps are to get key players on board, from regulators and cities, to designers and network operators. To be sure, it’s a project rife with excitement and possibility, exhibiting promise as a major game-changer alongside the self-driving car. For now, though, it all remains to be seen.
Last year, we saw the Mirai botnet conscript IoT devices such as digital cameras into a distributed denial-of-service (DDoS) attack on DNS servers on the East Coast. This attack disrupted services from Netflix, Amazon and others for several hours. Mirai should be a wake-up call for U.S. businesses and central commerce systems. If devices can be compromised and turned into a botnet army that causes systems to stop functioning, we can start to imagine what type of damage a systematic attack on businesses and U.S. commerce could trigger. Such botnet attacks could prove as costly to the U.S. economy as the Category 5 hurricanes we have seen battering the country.
As we think about cyberattacks in the IoT space, markets and verticals all share the same vulnerabilities. The attacker’s intention is to make a platform behave differently than it was designed to. Whether the vector is malware similar to Mirai that conscripts devices into a DDoS, or a chipset vulnerability, the attacker can remotely control device functions such as Wi-Fi, or use the IoT device as a beachhead to deploy malware and exfiltrate data from the network.
The technology industry has two priority areas of concentration: device design and device monitoring. In device design, security by design must be a priority in all IoT devices, regardless of value. As an example, an IP camera produced in large volume for mass distribution and a sensor for our critical infrastructure should both, in theory, have the same security objective: preventing third-party attacks. To be fair, a sensor monitoring a nuclear power plant may be subject to both manufacturing compliance and security evaluation. However, the basic concepts of device protection and attack prevention should apply to any commercial IP-connected device. We can argue that IP-connected lightbulbs are just as big a cyber-concern as a heart monitor, because attackers look for the most vulnerable connected device from which to launch their attacks.
For example, an article this week mentioned that an infusion pump used in critical care and neonatal care was subject to a communications attack that allows the pump’s communications to be hijacked. The manufacturer’s response fell short of real remediation; the corrective action was network segmentation, static IP addresses and complex passwords. Similarly, there was a recent recall of pacemakers in which patients were asked to visit their physicians for a firmware update to correct a life-threatening flaw that let an attacker send malicious commands to the device.
These examples are as serious for our businesses as they are for medical devices. Another concerning example is in the GPS communications segment. Cyberattacks on ships’ navigation could intentionally cause maritime accidents; whether the targets are military, shipping-lane or commercial, the damage would be severe. There are examples of GPS hacks for tollway fee evasion, hacks of employee tracking in telematics systems, and even more complex nefarious attacks involving GPS jamming or spoofing. Attacks on radio-frequency signals differ from attacks on IT systems; even so, defending GPS devices resembles defending IT systems in that assessment and monitoring are good tactics.
Attackers can issue malicious commands or enter networks to exfiltrate data. As mentioned above, secure design of IoT devices is critical, as are timely patching and continuous monitoring. By watching device behavior, we can tell when a device starts acting differently, which is often the first sign of a cyber-event.
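As a minimal sketch of what that behavioral monitoring might look like, the snippet below learns a per-metric baseline from known-good telemetry and flags readings that fall outside it. The device metrics, values and the three-sigma threshold are illustrative assumptions, not any vendor’s API.

```python
from statistics import mean, stdev

def build_baseline(history, k=3.0):
    """Learn per-metric normal ranges (mean +/- k standard deviations)
    from a window of known-good telemetry readings."""
    baseline = {}
    for metric, values in history.items():
        m, s = mean(values), stdev(values)
        baseline[metric] = (m - k * s, m + k * s)
    return baseline

def check_reading(baseline, reading):
    """Return the metrics in a new reading that fall outside their
    normal range, i.e. behavior change that may signal a cyber-event."""
    anomalies = {}
    for metric, value in reading.items():
        lo, hi = baseline.get(metric, (float("-inf"), float("inf")))
        if not lo <= value <= hi:
            anomalies[metric] = value
    return anomalies

# Hypothetical telemetry from an IP camera: outbound packets per minute
# and CPU load, sampled during a known-good period.
history = {
    "packets_per_min": [120, 130, 125, 128, 122, 127, 124, 126],
    "cpu_load": [0.20, 0.22, 0.19, 0.21, 0.20, 0.23, 0.21, 0.20],
}
baseline = build_baseline(history)

# A Mirai-style infection often shows up as a surge in outbound traffic.
suspect = {"packets_per_min": 9500, "cpu_load": 0.21}
print(check_reading(baseline, suspect))  # flags packets_per_min only
```

In a real deployment the baseline would be refreshed continuously and an out-of-range reading would trigger an alert or quarantine rather than a print, but the core idea is the same: define what normal looks like per device, then watch for departures from it.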
A recent report by the American Public Transportation Association (APTA) found that many public transit systems in major cities suffered a decline in ridership, with metropolitan areas such as Atlanta and Baltimore seeing decreases of over 10%. Most cities — with the notable exceptions of Boston and New York — have fewer people riding trains, subways and buses, with buses experiencing the largest decline. In comparison, ride-sharing services like Uber and Lyft are seeing significant increases in ridership. Lyft reported a “breakout” 2016, with its ride-hailing service tripling rides from 53 million in 2015 to 160 million in 2016. The question before transit agencies and city officials is now a critical one: what can cities learn from innovative, technology-enabled transit alternatives like Uber and Lyft to reverse the decline in transit ridership?
Start with the rider experience
The rise of apps in the transportation sector means that riders are increasingly able to make decisions based on the mode of transit that provides the best experience. Uber and Lyft put an emphasis on customer experience with their apps, enabling riders to seamlessly plan trips, pay, share arrival times with others and rate drivers. Short of being able to control traffic, these apps have eliminated many sources of transportation friction and anxiety for riders — a goal that every transit agency needs to strive for. By enabling new technologies, transit agencies have the opportunity to offer riders an experience comparable to that of ride-sharing services without being restricted to the roads, which are sometimes subject to uncontrollable conditions like traffic or construction detours.
Rather than viewing ride-sharing apps as a threat, transit agencies should use them as inspiration to increase transit ridership. In its “Shared Mobility and the Transformation of Public Transit” report, APTA found that the more people use shared modes of transportation, the more likely they are to use public transit, own fewer cars and spend less on transportation overall. By integrating transit services with alternative modes of transportation, such as car- or bike-sharing services, and streamlining ticketing through mobile applications, transit agencies can offer smooth and uninterrupted experiences to riders. Transit agencies should replicate the behaviors that have made ride-hailing services successful, integrating technologies that streamline the rider experience through software development kits (SDKs) or by deep linking between services, creating seamless experiences on people’s mobile phones. Through an SDK, transit agencies can integrate functionality from another app, like mobile ticketing, directly into new and existing applications already used by riders in their city ecosystems. This means that riders would be able to plan their routes using an agency-supplied or widely deployed journey-planning application and then purchase and validate tickets all in a single place. Linking services in this way will allow transit agencies to adapt to the evolving transportation landscape and provide a streamlined travel experience for riders.
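To make the deep-linking half of that integration concrete, the toy snippet below builds a link that a journey-planning app could use to hand a rider off to a ticketing app with the trip already filled in. The tickets:// URL scheme, its parameters and the station names are all hypothetical; a real integration would follow the ticketing vendor’s published SDK or URL scheme.

```python
from urllib.parse import urlencode

def ticketing_deep_link(origin, destination, fare_class="adult"):
    """Build a deep link that opens a (hypothetical) ticketing app
    with the rider's trip pre-filled, so planning and purchase feel
    like a single experience instead of two separate apps."""
    params = urlencode({
        "origin": origin,
        "destination": destination,
        "fare": fare_class,
    })
    return f"tickets://purchase?{params}"

link = ticketing_deep_link("North Station", "Providence")
print(link)
# tickets://purchase?origin=North+Station&destination=Providence&fare=adult
```

On the rider’s phone, tapping such a link would open the ticketing app directly on the purchase screen for that trip, which is exactly the kind of friction removal the ride-hailing apps got right.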
Bet on innovative technology
The appeal of services like Uber and Lyft stems from streamlined digital processes, technology and innovation. In March, the popular app Transit, which shows upcoming arrival times for various public transit modes, announced a partnership with Uber. It will allow riders to view public transit options and wait times near their destination directly in the Uber app while they’re riding — eliminating the need to toggle between apps. Technology advances such as open data and integration through open APIs are giving rise to transport innovations that promise a brighter transportation future for all — agencies, commuters and private tech innovators.
Embrace best-of-breed mobile-first technologies
Agencies must adapt to and adopt new technologies that give riders a more streamlined travel experience. Strides have already been made in places like New York City, where underground Wi-Fi was introduced on the subway system, allowing commuters to check email and send messages in what used to be service dead zones. Other cities have also updated technology to make the daily commute a better experience. Boston’s commuter rail introduced mobile ticketing, removing the need to wait in line at ticket vending machines to catch the train. Kansas City is in the process of implementing a data collection system for its streetcar line, updating it with information about things like utilities and current conditions. These types of improvements make a big difference in boosting ridership.
Recently, Oakland-based transit authority Alameda-Contra Costa Transit (AC Transit) awarded a contract to Iteris that includes the implementation of transit signal priority, improvements to bus stops, an adaptive signal control technology system and a real-time passenger information system. These updates aim to improve the Bay Area’s transportation system by reducing delays and average travel times and improving schedule reliability. AC Transit prioritized these technological improvements understanding that they would lead to an improved rider experience. Cities experiencing a downward trend can learn from San Francisco, Boston, New York and Kansas City by leveraging mobile-first, best-of-breed technologies that address the end-to-end needs of their riders and promote seamless public transit use — something essential for smart cities to flourish.
Local governments must be involved
In conjunction with innovative transit agencies, it is crucial to have government officials who are equally attentive to the needs of their city’s riders. Understanding where opportunities for improvement lie will allow governments to be at the forefront of strengthening their transportation systems. Recent successes of various transportation systems have stemmed from the support of government agencies, Kansas City being one example. A $15.7 million initiative in 2016 helped Kansas City become a smart-city leader. Surrounding its 2.2-mile streetcar line are 328 Wi-Fi access points, 178 smart lighting video nodes and 25 smart kiosks, enabling the ongoing collection of data. In February, Kansas City released its open data platform, giving citizens live access to streetcar arrival times, traffic flow and parking availability. Through this data collection, local governments can not only analyze trends, but also drive innovation. Collaboration between transit systems and local governments is imperative to the success of both entities, and to the continued satisfaction of the riders themselves.
A successful city is one that moves its people effectively and efficiently. One way cities can do this is by integrating apps and services for a more seamless commute. It is also crucial to have local governments and transit authorities working together to improve the rider experience. The downward trend in ridership does present a challenge, but also an opportunity for cities to become smarter, innovate and revolutionize rider experiences.
Only 35% of millennials own homes. Millennials are also the ones most likely to adopt new IoT technologies. Still, the apartments in which they live aren’t equipped for this new wave of connectivity. As the on-demand and digital economies continue to boom, millennials will continue to demand greater services from their apartment communities, and to meet these demands, building designers need to change the way they approach multifamily housing.
When you tour a new apartment building, you see similar amenities: a gym, bike storage and maybe even a dog washing station, like many buildings here in Portland, Ore., have to attract canine-loving renters. While these are all great, they rarely meet the needs of renters’ other daily activities. Building developers need to look at what a millennial typically relies on, such as on-demand services like Amazon Prime or Seamless, and shared services like Lyft. With applications like these becoming more and more popular, renters will no longer be enticed by a gym as a selling point. They’ll be looking for an Amazon drop-off location or a designated ride-sharing zone.
To continue accommodating this new market of renters, building designs will need to change. Some of the biggest amenities affected will be traditional mailrooms and parking garages. If, in the future, I’ll be receiving packages from Amazon drones, does that mean an IoT-enabled dumbwaiter will bring my package to my door? Or, if ride-sharing companies truly are focusing on driverless fleets, how long will it be until my parking garage is filled with autonomous Lyft vehicles awaiting passengers?
The first adaptations for IoT we’re beginning to see may seem minor, but represent extensive projects for building developers. These come in the form of updated smart-enabled door locks, thermostats and light switches, as well as the installation of sensors. Each of these applications takes a major step in enabling the smart apartment.
Say you’re leaving on a business trip and realize you forgot to lock your door. With a simple command, you can quickly tell your smart lock to secure your apartment — anytime, anywhere. With the addition of sensors, your at-home experience changes dramatically. Lights will turn on and off as you move throughout your apartment, or the radio will turn on as you enter the kitchen. Over time, these sensors learn your habits, cater to your preferences and may even adjust furniture and appliances based on your mood. This creates some fantastic conveniences, including energy-saving benefits: electronics are no longer left on when you’re not in the room, and a comfortable temperature can be maintained for your pet while you’re away at work.
It may sound far-fetched, but it wasn’t all that long ago that the idea of controlling your thermostat from your smartphone seemed more like an episode of The Jetsons than reality. As IoT continues to make its way into our lives, multifamily housing developers need to have the infrastructure in place to meet the demand of renters. It’s only a matter of time before your leasing agent tells you the prospective renter was looking for her unit’s smart thermostat, instead of the building’s dog washing station.