Uncharted Waters

Jan 12 2015   9:35AM GMT

The Downside of The Internet of Things

Profile: Justin Rohrman

Last October I talked about how I think of The Internet of Things. Today I want to explore what happens when something goes wrong in technology we depend on.

My first post mentioned a few attributes and a way to define these things, which have mostly become part of our everyday lives. Right now, IoT is largely a consumer novelty section full of watches that tell you how far you've walked in a day and web-connected refrigerators. Plenty of people use these devices, but the functionality isn't exactly mission critical. More complex things are in the works, though: medical devices and cars.

Until December, it had been a pretty mild winter here in Nashville. It got cold a few times, with lows in the 30s, but nothing too crazy. It is officially not a mild winter anymore: it was 3 degrees Fahrenheit when I woke up last Thursday. Luckily, our house is pretty well insulated.

But that insulation doesn’t matter all that much when the heater stops working in the middle of the night. Sometime overnight on Friday, our Nest thermostat stopped working. I woke up and noticed it was abnormally cold, so I leaned over to grab my phone to turn up the heat with the Nest app. All the app had to say was that the thermostat was not online.

After a few internet searches, I learned that there was (probably) a software bug in the Nest that put it into a loop of chatting back and forth with our wireless router. That constant chatter is most likely what ran the battery down and left us with a cold house.
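The actual Nest firmware isn't public, so this is purely an illustration of the failure mode: a device that retries a failed connection in a tight loop keeps its radio busy all night, while exponential backoff with jitter (a standard fix for exactly this kind of bug) spreads the retries out. Everything here, including the function name and the delay parameters, is an invented sketch, not Nest's code.

```python
import random

def reconnect_delays(attempts, base=1.0, cap=300.0):
    """Delays for successive reconnect attempts: exponential backoff
    with "full jitter". Each retry waits up to roughly twice as long
    as the last, capped at `cap` seconds."""
    delays = []
    delay = base
    for _ in range(attempts):
        delays.append(random.uniform(0, delay))  # jitter avoids synchronized retries
        delay = min(cap, delay * 2)
    return delays

# A naive loop that retries every second keeps the radio awake
# indefinitely; with backoff, ten failed attempts stretch from
# seconds apart out to minutes apart.
for i, d in enumerate(reconnect_delays(10), 1):
    print(f"attempt {i}: wait up to {d:.1f}s")
```

The point of the sketch is only that retry policy is a design decision: get it wrong and a "smart" device quietly drains its own battery.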

Sometimes Things Go South

Sure, the example I gave is trivial. It's not that big a deal (at least in my climate) when a thermostat goes offline, a watch stops counting a person's steps, or a fridge doesn't tweet you when you run out of milk.

But what if it was something that does matter?


Google, along with a few carmakers, is moving self-driving cars from a concept out of a sci-fi novel to the streets, mostly the streets of California. These cars get around by observing their surroundings and building models of that world. The models include obvious things like lanes of traffic and traffic signals, as well as things that change really fast, like all the other cars moving around on the same road.

A Practical Philosophy Question

Here is a slightly dark scenario. Place yourself sometime in the future when self-driving cars are a normal part of life (here on Earth, not a Mars colony; this will happen faster than that). Your car is driving along and something weird happens on the road. The result is a scenario where either you, the passenger of the self-driving car, will likely die, or multiple pedestrians will.

The car is driving; you don't have control over it at this point.

Whom should the car's programmers favor? Should they err on the side of self-preservation and protect the car (and you, the passenger)? Or should they minimize the total number of fatalities instead?
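Whatever the programmers decide, it ends up encoded as something like a cost function over possible actions. A toy sketch makes the trade-off concrete; every number and name here is an invented assumption, not any manufacturer's real policy:

```python
def choose_action(actions, occupant_weight=1.0, pedestrian_weight=1.0):
    """Pick the action with the lowest expected-harm score.
    `actions` maps an action name to a pair:
    (expected occupant fatalities, expected pedestrian fatalities)."""
    def cost(name):
        occupants, pedestrians = actions[name]
        return occupant_weight * occupants + pedestrian_weight * pedestrians
    return min(actions, key=cost)

# Swerving risks the passenger; staying the course risks three pedestrians.
scenario = {"swerve": (0.9, 0.0), "stay": (0.0, 2.7)}
print(choose_action(scenario))                        # equal weights -> "swerve"
print(choose_action(scenario, occupant_weight=5.0))   # self-preserving -> "stay"
```

The code is trivial; the philosophy lives entirely in the weights, and someone has to pick them before the car ever leaves the lot.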

I don’t know the answer to this question. Who would want to be a passenger in a car that didn’t strive to protect the lives inside? Who would want to live around a car that wasn’t ‘aware’ of the humanity outside?

This example is complex in some ways and dead simple in others. Human drivers have to make important decisions several times a minute at any point during a drive. But we don’t have to worry about moving into a cellular dead zone and losing all information about our environment, or about software bugs (maybe the occasional biological bug) that make us slam down on the accelerator or brakes at random.

Don’t get me wrong, I love that this technology is developing. I just won’t be one of the people on the forefront of the consumer wave.

3 Comments on this Post

  • Radagast95
    Sharpen the philosophy question to make it a bit more practical. Let's say that two years of studies in a state full of self-driving cars show total fatalities in that state way down.

    Let's say the stats indicate that cars that privilege occupants of other vehicles and/or pedestrians contribute to a higher fatality rate for their *own* occupants, but result in the lowest overall fatality rates.

    On the other hand, cars that privilege the lives of their occupants have extraordinarily low fatality rates for those occupants, but result in an overall fatality rate somewhat higher than in the previous scenario, though still a "win" over human-driven cars.

    Which car would you rather be in?

    What is the goal of the self-driving car project? If it definitely reduces fatalities, should we buy into it no matter what?
  • UseCaseMaven
    The basic political philosophy question concerns the protection of the rights of individuals, versus the benefits to the commonweal.   The idea of American democracy was that unless we protect individual rights, the state will ultimately start to protect itself even against the commonweal. 

    The more fundamental philosophy question is the source of truth on which these matters are based.  Is it whatever some computer says, or is it based on complex histories of events that includes what people have said and done and what many computers have said.  For example, does a customer service representative say "I'm sorry sir, but we do not have a record of an insurance policy for you in our system." or "You do not have an insurance policy with us."  (See, for example, the movie Brazil.)

     When this is applied to the internet of things, matters get much worse.  Does the (always) imperfect software have absolute say in  what really happened in that accident, or is there some court of appeals that might take into account information collected in other ways.

    Is there not a need, in this new world, for chains of trust between sequences of actions among various cyber systems and people and their institutions, and a way of establishing relative non-repudiability based on these chains of trust?

  • PerpetualRocket
    My concern is more about flexibility to future change

    The cycle of business runs from centralisation, to facilitate growth and efficient delivery of a known (currently successful) item, to decentralisation, as working within an integrated, efficient system built around an old product becomes uncompetitive and agility and tailoring to the customer become more important.
    That in turn cycles back to centralised systems to better integrate the new norm and make it more efficient and effective... and so on.

    Integration and centralisation by its nature devolves to the lowest/simplest denominator. Critical systems by their nature also tend to sit at the 'tried and true' end of the innovation spectrum.
    We would need 'loose linkages' between IoT devices to enable development to happen with different items at different speeds.
    Typically the 'loose linkage' in systems development is the enablement of some element of human intervention and decision making. Without loose linkages we not only feel that we lose our locus of control, but we also lose our ability to effect local and incremental change.

    Radagast95 raises a really good/valid point. Govts, local authorities and insurers will (quite rightly) be focussed on the greater-good/lower-cost/most-reliable options, but govts, local authorities and insurers always represent the late-adopter and majority interests of their constituents.

    Most of us don't innovate while driving, but it is our individual approaches (safe and unsafe) that lead to changes in automotive designs.
    Could standardised and very reliable IoT devices lead to a slowdown in change?
