I’ve been enjoying the series on career directions, but it’s time for a bit of a break — so let’s take a couple of posts for something completely different.
Have you heard of the Singularity?
It’s an interesting sort of technological apocalypse; you can see its ideas on the big screen in The Terminator or The Matrix: that eventually machines will become self-aware and rise up to rule over their human overlords. To introduce the subject properly would take a book; Amazon tells me that there are thousands of books on the subject, so I’ll make mine very brief.
It is also easy to dismiss the Singularity as Science Fiction inspired by an Arnold Schwarzenegger movie from the 1980s, but let’s dig in a little bit.
The most compelling observation I know of for the Singularity is that computer clock speeds and processing power keep increasing, roughly doubling every few years in accordance with Moore’s Law, while we humans face, at best, small gains over millions of years through natural selection.
The thinking goes like this: Eventually computers will catch up, have some sort of “divine spark” event, and go past us.
That thinking has been smacked down in the media: Right now, today, we could wire together a three-dimensional 100x100x100 grid of the strongest CPUs in the world, something like the Department of Energy’s ASCI Red project, and get a machine that powerful, but we know what it would do: execute the instructions it was fed, in serial order. The “divine spark” is a question mark, a huge one, one that can’t be ignored. Without it, we have no singularity.
But here’s the thing: We don’t need one.
The Turing Test
The classic test for artificial intelligence is something proposed by Alan Turing called the “Turing Test.” According to Turing, if you are corresponding with someone by, say, teletype or instant messenger, and you can’t tell the human from the computer, well, then, that’s artificial intelligence. For a one-sentence litmus test, I’d say that’s pretty good.
The first computer program that approached the Turing Test was probably Eliza, the program that simulates a psychotherapist. Eliza was good at taking any statement you made and asking you about it, thus “I feel sad” became “Why do you feel sad?” or “Do you often feel sad?” Yet it was easy enough to trick Eliza into revealing itself as a computer program by using common idioms or terms it did not understand. (There are several free web-based versions of Eliza; play around with one yourself.)
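The trick behind Eliza is simpler than it sounds: a short list of pattern-and-response rules, tried in order, with a canned fallback when nothing matches. Here’s a minimal sketch of that style of substitution in Python; the rules, function name, and fallback line are my own invention for illustration, not the original program’s:

```python
import re

# A few Eliza-style rules: a regex pattern paired with a response
# template that echoes the captured fragment back as a question.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

# The canned reply for unfamiliar input -- exactly what gives the
# program away when you feed it an idiom it doesn't understand.
FALLBACK = "Please go on."

def respond(statement: str) -> str:
    """Return the first matching rule's response, or the fallback."""
    text = statement.strip().rstrip(".!?")
    for pattern, template in RULES:
        match = pattern.match(text)
        if match:
            return template.format(match.group(1))
    return FALLBACK
```

Run it and you can see both behaviors: “I feel sad” gets reflected back as a question, while anything outside the rule list falls through to the same stock phrase every time.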
We’ve come a long way since 1966. Today, one of the better-known applications of AI is to ask a question of thousands, millions, or tens of millions of people and have the computer develop a set of connected ideas, called a neural network, loosely modeled on how the human brain works. Researchers have done this with Twenty Questions, and, within a very specific domain, you can get the computer to play as well as or better than a human. You can do the same thing with chess, Jeopardy, and, if you have all the domain information right, medical diagnosis — though I can’t help but notice that a human still needs to sign off on the prescription.
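To make the crowd-training idea concrete, here is a toy sketch — not any real Twenty Questions engine, and far simpler than an actual neural network. Every finished game nudges a count linking an object to each yes/no question, and a guess picks the object whose accumulated counts best agree with the player’s answers. The class name, objects, and questions are all invented for the example:

```python
from collections import defaultdict

class TwentyQuestions:
    """Toy crowd-trained guesser: counts stand in for learned weights."""

    def __init__(self):
        # counts[obj][question] = net "yes" votes from past games
        self.counts = defaultdict(lambda: defaultdict(int))

    def learn(self, obj, answers):
        """Record one finished game: answers maps question -> True/False."""
        for question, said_yes in answers.items():
            self.counts[obj][question] += 1 if said_yes else -1

    def guess(self, answers):
        """Return the known object whose learned counts best agree."""
        def score(obj):
            return sum(self.counts[obj][q] * (1 if yes else -1)
                       for q, yes in answers.items())
        return max(self.counts, key=score)
```

After a few thousand games, the counts encode what the crowd collectively knows — which is exactly why these systems only work “under a very specific domain”: they can only guess objects someone has already played.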
And, with that, we have the makings for something very much like the coming singularity.
Domain Specific Languages Are Enough
You know that strange email you got from the person in Nigeria? You probably didn’t fall for it because it was just a little bit … off, right? Likewise, computer programs haven’t been good enough (yet) to create free accounts on websites because the websites use a CAPTCHA – The thing that scrambles words and letters so that only a human can recognize them.
But computers are getting faster and neural network programs more precise.
To wreak major havoc, computer programs don’t need to become “self-aware”; they just need to get good enough at seeming human to create logins or to send you spam email pretending to be a friend asking for confidential information.
In an era where our friendships and networks are increasingly public, publicly available, and accessible to programmers directly through APIs, that possibility looks increasingly likely.
Long before the singularity comes, long before the computer can be self-aware, I expect to have a computer program that can recognize words on a page better than a human, or fake being human enough to do serious damage to our technological infrastructure. Moreover, I expect a bunch of open-source kits to evolve that will allow people to run tests and target specific networks. (I don’t want to be too specific, but here’s one: Target a bank’s website by URL, simulate it, put that up on a server, send a bunch of emails in a phishing attack, then use the response to create more-effective emails. I know. It’s not good.)
The first good news is, if those things haven’t happened yet, well, we don’t have to worry about the singularity any time soon.
The second good news is, when those things happen, we will have an increasingly complex set of security techniques to offset them — often cribbed from the same technologies.
Your spam filter, for example, is probably a neural network, collecting similar emails in a “bad” list and strengthened by humans who decide what goes where. CAPTCHAs aren’t cracked or not cracked in simple binary fashion; instead, they are constantly evolving and improving. When someone develops an algorithm to break words on a screen that even humans barely understand, I expect we’ll move to idioms and audio challenges.
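That human-in-the-loop strengthening is easy to sketch. Here’s a minimal version using a naive Bayes word filter rather than a neural network — a simpler classifier, but one that learns the same way, from every “this is spam” click. The class name and training phrases are invented for the example:

```python
import math
from collections import Counter

class SpamFilter:
    """Naive Bayes word filter, strengthened by human labeling."""

    def __init__(self):
        self.words = {"spam": Counter(), "ham": Counter()}
        self.totals = {"spam": 0, "ham": 0}

    def mark(self, text, label):
        """A human decides what goes where; each click trains the filter."""
        for word in text.lower().split():
            self.words[label][word] += 1
            self.totals[label] += 1

    def spam_score(self, text):
        """Log-probability ratio; positive means 'probably spam'."""
        score = 0.0
        for word in text.lower().split():
            # Add-one smoothing so unseen words don't zero out the ratio.
            p_spam = (self.words["spam"][word] + 1) / (self.totals["spam"] + 2)
            p_ham = (self.words["ham"][word] + 1) / (self.totals["ham"] + 2)
            score += math.log(p_spam / p_ham)
        return score
```

Mark a handful of messages as spam or ham and the score starts separating them — which is the whole point of the arms race: the same feedback loop that trains the filter is what the spammers are probing from the other side.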
While it appears at first that humans will be overrun by the techno-savvy, I expect that the good guys will deploy their own tools and we’ll keep going, neck and neck. Your great Aunt Sally might get taken in by a phishing attack, sure, but that’s the same category of person who might fall for a Nigerian scam email right now.
It’s 2012. It is unlikely that the robots will rise up to overcome their human overlords anytime soon … but keep your eyes on the bad guys who want to drive the robots. Those are the dudes to look out for, at least for the time being — and the career outlook for the people who do the look-out-ing is surprisingly good.