Last October I talked about how I think of The Internet of Things. Today I want to explore what happens when something goes wrong in technology we depend on.
My first post mentioned a few attributes and a way to define these things that, for the most part, have become part of our everyday lives. Right now, IoT is mostly a consumer novelty section full of watches that will tell you how far you’ve walked in a day and web-connected refrigerators. Plenty of people use the devices, but the functionality isn’t exactly mission critical. Lots of other, more complex things are in the works, though – medical devices and cars.
Up until December, it had been a pretty mild winter here in Nashville. It got cold a few times, lows in the 30s, but nothing too crazy. It is officially not a mild winter anymore. It was 3 degrees Fahrenheit when I woke up last Thursday. Luckily, our house is pretty well insulated.
But that insulation doesn’t matter much when the heater stops working in the middle of the night. Sometime overnight on Friday, our Nest thermostat stopped working. I woke up, noticed it was abnormally cold, and leaned over to grab my phone to turn up the heat with the Nest app. All the app had to say was that the thermostat was offline.
After a few internet searches, I learned that there was (probably) a software bug in the Nest that caused it to get into a loop of chatting back and forth with our wireless router. That is probably what ran the battery down and left us with a cold house.
Sometimes Things Go South
Sure, the example I gave is trivial. It’s not that big a deal (at least in my climate) when a thermostat goes offline, when a watch stops counting the number of steps a person walks, or when your fridge doesn’t tweet you when you run out of milk.
But what if it was something that does matter?
Google, along with a few carmakers, is moving self-driving cars from a concept out of a sci-fi novel to the streets. Mostly the streets of California. These cars get around by looking at their surroundings and creating models of that world. This includes relatively static things like traffic lanes and signals, as well as things that change really fast, like all the other cars moving around on the same road.
A Practical Philosophy Question
Here is a slightly dark scenario. Place yourself sometime in the future when self-driving cars are a normal part of life. Here on Earth, not a Mars colony; this will happen sooner than that. Anyway, your car is driving along and something weird happens on the road. The result is a scenario where either you, the passenger of the self-driving car, will likely die, or multiple pedestrians will.
The car is driving; you don’t have control over it at this point.
Whom should the programmers of the car favor? Should they err on the side of self-preservation and protect the car (and you, the passenger)? Or should they minimize the total number of fatalities instead?
I don’t know the answer to this question. Who would want to be a passenger in a car that didn’t strive to protect the lives inside? Who would want to live around a car that wasn’t ‘aware’ of the humanity outside?
This example is complex in some ways and dead simple in others. Human drivers have to make important decisions multiple times a minute during a drive. But we don’t have to worry about moving into a cellular dead zone where we lose all information about our environment, or about software bugs (maybe the occasional biological bug) that cause us to slam down on the accelerator or brakes at random.
Don’t get me wrong, I love that this technology is developing. I just won’t be one of the people on the forefront of the consumer wave.