Professional training in computing.
I’m back home now, having just spent the last three weeks in India visiting Infosys, a large multinational corporation based there, with offices all over the world. The home office is in Bangalore, but I was in Mysore, a couple of hours away. The Mysore site is the home of their Global Education Center, where they bring thousands of young university graduates from around India to be trained in advanced computing skills. The scale and quality of what’s going on there is very impressive, enough to lend credence to the often-repeated refrain that India will soon surpass the West in its development of cutting-edge software technology.
Today: universities in the US.
U.S. companies are far less likely to make this sort of investment in their young people. I am a professor of computer science, and over the years, I have seen a wide gulf develop between the narrowly-scoped, somewhat formal computing courses offered by universities and the vast world of complex software components that a modern programmer must master. Very little is built from scratch today in the software world, but university students see little of that vision. They also don’t really learn that very few development efforts sit nicely inside a traditional computing area like programming languages, databases, algorithms, or distributed computing. They don’t see the interdisciplinary nature of emerging computing applications, in areas like medicine, science, and engineering.
In sum, computing students certainly need a strong, conceptual foundation so that they can develop an intuitive understanding of just how to attack computing problems, but academic computing continues to move further and further away from the real world.
Yesterday: my training.
When I was a young college graduate, long ago, I had a BA in Mathematics, with a sprinkling of computing courses, and had very little in the way of marketable skills. I was hired by EDS (Electronic Data Systems), which at the time was still owned and run by its founder, Ross Perot. I went through an intense training program, something that was admittedly too applied and lacked a sound, broad-based abstract substrate. I learned to program, not to understand the world of computing technology as a whole. I lacked the big picture. It was the opposite problem of what we see in universities today.
But that applied training at EDS was still critical in getting me started in computing, and later when I went back to grad school to finally get some formal training, my applied experience, knowledge, and intuition developed in the EDS training program served me very well as I entered the research world.
Today, U.S. companies are finding it too expensive to make substantive investments in training their young people. Graduates from undergraduate computer science programs are expected to hit the ground running – or rather, coding. Coding, coding, coding, attacking real problems and building real solutions, often in team environments and using a broad swath of existing software technology they often have never seen before. My students come back from job interviews telling me they were grilled on software technology, and often expected to sit right down and build a solution to a real problem – as part of the interview process itself.
But Infosys is doing what U.S. companies are finding it hard to do. Here’s the background. India is covered with engineering schools. Hundreds of them. Some say a thousand or more. It’s an industry there, with young engineers being pumped out by private colleges far faster than India can make use of them. These students are bright, determined, respectful, and extremely hard-working.
But no one is training Indian students to be software people, at least not in the universities. The engineering schools don’t have the faculty to teach computing.
So Infosys has decided to recruit the top graduates from the top engineering schools in India. Then Infosys sends them to Mysore, to the GEC, as it’s called, where they are given several months of nonstop, day and night training. I was there helping to train some of their instructors on the process of teaching database management and related technologies.
The GEC facility sits on a many-acre campus that is beautiful. It is densely landscaped with countless species of tropical trees and bushes. The grounds are manicured nonstop. The building I taught in is reputed to be the largest single building built in India since it gained its independence from Great Britain. The ceilings are vaulted, the floors are polished granite, there is an atrium several stories high in the center, and the outside has arches and pillars and a dome. Inside and out, it looks like an oversized palace. It is elegant, and not at all garish.
Again, Infosys is doing things that aren’t done that much anymore in the West. They have built a campus and a building that are truly works of art. I lived on campus in a beautiful room, ate a couple of hundred yards away at a fine restaurant, went to an equally close gym every morning, and rode golf carts to work. The students live and work in this same environment. Okay, they don’t get to ride golf carts, I admit it.
Visitors are blown away by what has been built there.
Infosys has tapped a large, impressive generation of young people, and as a result, India is building up a vast, powerful human technology machine. Meanwhile, in the U.S., we have much better computing programs in universities, but we are struggling to get students interested in computing and enrolled in our classes. And we neglect the hands-on side of software education. And, compared to Infosys, U.S. companies are not investing anywhere near as heavily in grooming the next generation of computing professionals.
Imagine what will happen when Indian engineering colleges start adding true computing curricula. Combined with Infosys’s efforts, these will be extraordinarily skilled young people. And there are thousands and thousands of them.
India will do big, big things.
In this posting, we look at the Duct Tape Phenomenon.
As a researcher, I have worked with biologists in the past. Big biologists, not the microbiologists who tinker with DNA. The folks I worked with study macroscopic things mostly, species, in particular. They search for as-yet undocumented species. They tend to have appointments at major universities around the world, and then take extended field trips to study life. Most of them go to rain forests because that’s where biodiversity is its greatest.
Each scientist has a chunk of the world and a kind of animal they specialize in. I know the butterfly man of Costa Rica, a fellow who has documented several thousand varieties of butterflies, some of which have wing spans of several inches. I know the bug man of the Amazon, who builds long tunnel-like things from the floor of the forest up to the canopy, fills the tunnels with bug killer, and then looks among the dead for bugs that are yet unheard-of.
Here’s the interesting part, at least from a computing perspective: a lot of the scientists I came into contact with store their data in Excel. This is a phenomenon that crosscuts the entire spectrum of computer users. They had to learn Excel at some point, maybe in school or at some workplace, and the next time they needed an application to do something, they found a way to make Excel do the job. For most people, learning the “right” application to use is far too much work, even if it’s hard to query Excel the way we would a database, even if Excel spreadsheets get way out of control size-wise, given the large amount of data many of us collect.
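To make that contrast concrete, here’s a small sketch of what “querying the way we would a database” buys you, using Python’s built-in SQLite engine. The species data is invented for illustration:

```python
import sqlite3

# Hypothetical field data of the kind a biologist might keep in a
# spreadsheet: one row per specimen sighting.
sightings = [
    ("Morpho peleides", "Costa Rica", 2004),
    ("Morpho peleides", "Costa Rica", 2005),
    ("Caligo eurilochus", "Costa Rica", 2005),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sightings (species TEXT, country TEXT, year INTEGER)")
conn.executemany("INSERT INTO sightings VALUES (?, ?, ?)", sightings)

# One line of SQL answers a question that takes manual sorting and
# counting in a spreadsheet.
rows = conn.execute(
    "SELECT species, COUNT(*) FROM sightings GROUP BY species ORDER BY species"
).fetchall()
print(rows)  # [('Caligo eurilochus', 1), ('Morpho peleides', 2)]
```

As the data grows from hundreds of rows to millions, this is the kind of query that a spreadsheet makes painful and a database makes trivial.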
Excel, in many ways, is the duct tape of desktop and notebook computing.
Firefox (or your favorite browser).
But what about developers of desktop apps? What do they use as a design paradigm when building the interface to an app, even if it’s not meant for the Web?
Indeed, there is a merging of desktop GUI and web app interface technologies, and now you could sit down in front of a running app and not be sure which of the two you are seeing. In fact, the design impact is not the end of it. We actually use browsers now to interface with some desktop apps, but not often, not yet. However, at least as a user interface paradigm, the browser is becoming the duct tape of GUI design.
For developers of interfaces, Firefox has become a sort of duct tape.
The new Web.
These are the two things that underlie much of computing: the need to store and compute (as with Excel) and the need to interface (as with Firefox). But when the new Web (in the form of the Semantic Web and truly advanced Web 3.0 apps) begins to arrive, will a new paradigm emerge?
Perhaps it will take the form of extra-smart browsers that can process content written with XML, namespaces, and other semantic technology, so they can do more than just look for pages according to the English keywords on them.
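As a rough illustration of what “processing namespaces” means, here’s a tiny sketch using Python’s standard XML library. The vocabularies and element names below are invented for the example, not a real standard:

```python
import xml.etree.ElementTree as ET

# A made-up page fragment mixing two vocabularies via XML namespaces.
doc = """<page xmlns:bio="http://example.org/biology"
              xmlns:geo="http://example.org/geography">
  <bio:species>Morpho peleides</bio:species>
  <geo:region>Costa Rica</geo:region>
</page>"""

ns = {"bio": "http://example.org/biology",
      "geo": "http://example.org/geography"}
root = ET.fromstring(doc)

# A namespace-aware client can pull out typed facts, not just words.
species = root.find("bio:species", ns).text
region = root.find("geo:region", ns).text
print(species, "/", region)  # Morpho peleides / Costa Rica
```

The point is that the markup tells the client *what kind of thing* each value is, which is exactly what keyword matching cannot do.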
In other words, we could imagine them as extensions of what our browsers do for us now. They’re very stupid now, really. They’re not at all smart like Excel.
How does it work now? Crawlers commissioned by search engines like Google constantly search the Web and “invert” every static page they find by building an index on every word in them. And then later, we can search this gigantic index store according to the words that appear on the pages that the crawler has found. Once we find URLs of interest, we click on them and go visit the actual pages. These searchers are far, far less than “semantic” in nature.
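The “inverting” step can be sketched in a few lines. This toy indexer (with made-up pages) maps every word to the set of pages it appears on, and a keyword search then becomes a set intersection over the index:

```python
import re
from collections import defaultdict

# Stand-ins for pages a crawler has fetched.
pages = {
    "http://example.org/a": "butterflies of the rain forest",
    "http://example.org/b": "database systems for the forest service",
}

# "Invert" the pages: word -> set of URLs containing it.
index = defaultdict(set)
for url, text in pages.items():
    for word in re.findall(r"\w+", text.lower()):
        index[word].add(url)

def search(*words):
    """Return the URLs containing ALL of the given words."""
    results = [index.get(w, set()) for w in words]
    return set.intersection(*results) if results else set()

print(sorted(search("forest")))          # both pages match
print(sorted(search("rain", "forest")))  # only the first page matches
```

Notice there is nothing semantic here at all: “forest service” and “rain forest” are just strings of characters to the index.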
Our smart browsers will also have to let us build up organized libraries of specialized web content we have found, including documents, images, video, sound, animation, and such specialized data as medical treatment advice. We might maintain these in virtual space, or we might download frozen copies of pages to store on our machines. Our smart browsers could constantly look for updated versions of pages we have copied and downloaded.
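Detecting that a copied page has changed could be as simple as comparing content fingerprints. Here’s a minimal sketch using a cryptographic hash; the page contents are placeholders:

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Hash a downloaded page so a later fetch can be compared cheaply,
    without storing or diffing the full old copy."""
    return hashlib.sha256(content).hexdigest()

saved = fingerprint(b"<html>treatment advice, version 1</html>")
fetched = fingerprint(b"<html>treatment advice, version 2</html>")
print("page changed" if saved != fetched else "page unchanged")  # page changed
```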
These smart browsers will also have to interrelate data of a wide variety of sorts, so that a description of certain symptoms can be accurately hooked up with the specifics of a diagnosis and a medical treatment plan. Our browsers will have to isolate conflicting information, as well.
So, in the future, we’ll need browsers with smarts. We’ll look at this much more carefully in a future posting of this blog, but for now, here’s the lesson. Applications do two things for us: they let us store and search things, and they let us compute things.
And what about viewing all this information? How will so much complex, multimedia information be presented? Not as simple webpages with images, text, and things you can click on. Perhaps the new browsers will lay out multimedia presentations of complex, integrated information that has been synthesized from many, many different sources.
So, what does this imply? That these two things underlie computing apps of almost all sorts: (1) storing and searching, and (2) viewing and manipulating.
And they will underlie the most complex and sophisticated end-user applications of the future.
In a vague, somewhat analogous fashion, most apps are a blend of Excel and Firefox.
Things change radically over time. And things never really change at all.
Ambient Intelligence: A Powerful Enhancer of Advanced Web Technology.
In this blog entry, we’ll look at another new technology, called “ambient intelligence”, and how it might dovetail with the new Web. Like other software advances, it is not directly related to the Web, but it will mesh beautifully with new Web technology.
We consider how ambient intelligence will make the Web radically better at serving individuals.
Ambient Intelligence: Just What Is It?
The term refers to computerized devices that tailor their behavior according to the nature of each user. First of all, though, we should make it clear that this is not a particularly new term, that it does not have a highly specific definition, and that there are lots of other terms that have been used to describe similar concepts. But there is something focused that is emerging under the banner of this name.
Ambient intelligence is commonly discussed in the context of embedded devices, machines that have processors in them and that perform specific information-based tasks, as opposed to being general purpose programmable computers. Embedded computers are in our cell phones, our automobiles, and our “smart cards”. Sometimes, they can indeed be programmed to do almost anything, like the ones inside cell phones. But even then, it’s assumed that very few people will do so. The point is that they generally do not have displays, keyboards, or mice dedicated to their use. They are found inside small and large devices, as well as in the smarts of complex systems, like assembly lines. Mass-produced but sophisticated items like insulin meters have computers in them.
As an example, you could imagine that the vending machine you put money into tomorrow might already know that you drink nothing but 20 ounce Pepsis. Maybe every vending machine in your complex at work knows your habits. Maybe if you switch to Sprite on one machine, it will tell the rest. Maybe the machines will offer you one or the other until a new pattern seems to emerge and it appears that you will never again drink Pepsi. Or you might be able to enter your “favorites” on the corporate website, and declare what you prefer to drink. The machines will know – and so will the company that services those machines. All of this could happen without human intervention.
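The pattern detection in that scenario could be as simple as the following sketch. The three-purchases-in-a-row threshold here is an arbitrary assumption, chosen just for illustration:

```python
from collections import deque

class PreferenceTracker:
    """Toy model of the vending-machine idea: keep offering the old
    favorite until a new pattern clearly emerges (here, several
    consecutive picks of something else)."""

    def __init__(self, favorite, threshold=3):
        self.favorite = favorite
        self.threshold = threshold
        self.recent = deque(maxlen=threshold)  # last few purchases

    def record_purchase(self, drink):
        self.recent.append(drink)
        # Only switch when the last `threshold` purchases all agree.
        if len(self.recent) == self.threshold and len(set(self.recent)) == 1:
            self.favorite = self.recent[0]
        return self.favorite

tracker = PreferenceTracker("Pepsi")
for drink in ["Sprite", "Pepsi", "Sprite", "Sprite", "Sprite"]:
    tracker.record_purchase(drink)
print(tracker.favorite)  # Sprite
```

A real system would of course share this state across machines over the network; this just shows that the “learning” involved can be very modest.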
Ambient devices don’t have to specifically target individuals. You could imagine a computing system in an airport that can smoothly transition between human languages, customs, and regulations, to better serve a global audience. We’re very close to this sort of thing right now, actually.
Ambient Intelligence at the Fingertips of the Web.
But wait. Let’s get back to those vending machines. How do they communicate with each other to pass on the critical news that you’re a Sprite person now? How do you enter your favorites? How does the vending machine company get the news so they know what to order?
The Web. Those ambient vending machines use the Web.
On the Web, embedded devices can be engaged by web applications and web services. (Remember that web services are programmatic interfaces to services; i.e., they don’t have to be activated by a human using a browser.) Embedded machines can also initiate web services, as well as trigger “push” tasks, whereby a user on a client machine somewhere is told that something is happening and it’s time to get to work. The embedded device and the user could be on opposite sides of the world, thanks to the Web.
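As a sketch of the device side, here is the kind of JSON body an embedded machine might assemble for such a machine-initiated web service call. The endpoint URL and field names below are invented for illustration, and the actual HTTP transport is omitted:

```python
import json
from datetime import datetime, timezone

# Placeholder endpoint; a real deployment would have its own service URL.
SERVICE_URL = "https://example.org/api/machine-events"

def make_event(machine_id, event_type, detail):
    """Build the JSON body for a machine-initiated web service call."""
    return json.dumps({
        "machine": machine_id,
        "event": event_type,
        "detail": detail,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

# In a real device this body would be sent with an HTTP POST (e.g. via
# urllib.request or an embedded HTTP stack); here we just build it.
body = make_event("bldg4-floor2", "preference-change",
                  {"user": "u123", "drink": "Sprite"})
print(body)
```

The key point is that no human and no browser is involved: one program talks to another over ordinary web plumbing.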
RFID Technology: Tracking Things.
We’ve already looked at RFID technology.
As a reminder, the goal of RFID-based systems is to help us coordinate and carefully control the use of various objects. Of particular interest are mobile objects. One of the key components behind this idea is the RFID tag. RFID stands for “radio frequency identification”. A tag can be attached to almost anything. Once tags are deployed, an RFID reader can send out a signal, which is picked up by the tags, which then respond. As things move around, as things are used in concert to perform tasks, they can be carefully tracked and managed.
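Here’s a toy simulation of that reader/tag interaction, purely for illustration (a real RFID system involves radio protocols and hardware this sketch ignores):

```python
class RFIDTag:
    """Passive tag: responds to a reader's signal with its ID."""
    def __init__(self, tag_id, attached_to):
        self.tag_id = tag_id
        self.attached_to = attached_to

    def respond(self):
        return self.tag_id

class RFIDReader:
    """Reader: broadcasts a signal; every tag in range responds."""
    def __init__(self, tags_in_range):
        self.tags_in_range = tags_in_range

    def scan(self):
        return sorted(tag.respond() for tag in self.tags_in_range)

crate = RFIDTag("TAG-001", "shipping crate")
badge = RFIDTag("TAG-002", "patient wristband")
reader = RFIDReader([crate, badge])
print(reader.scan())  # ['TAG-001', 'TAG-002']
```

The tracking-and-managing part is then just software: each scan tells a backend system which objects are where, right now.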
There’s another aspect of ambient intelligence. When people talk about a device that has ambient intelligence, often they are referring to a dedicated device with a simple display, not a general purpose computer. By this quality, the soda machine example is a bit rudimentary, in that it probably doesn’t have any true native display at all, and the indirect way of accessing it, at least according to our example, is too general purpose – a website that is accessed with a full-blown computer.
Consider something that is a major topic of discussion now, and a subject we will return to in this blog in the near future: electronic health records. The idea is that we would have life-long electronic medical information bases that would be accessible to medical providers (with our approval). This way, the fact that I had some disease as a child that makes me vulnerable to some other disease later in life would become apparent to my family doctor, and the necessary screening exam would be scheduled periodically. Otherwise, how am I supposed to know about the consequences of something that happened when I was a toddler? My “EHR” would also hold prescription records, imaging data, and anything else related to my health. It would, of course, be a web-based app.
But various sorts of doctors – not to mention non-medical types like me – need information displayed and abstracted in special ways. My family doc might want to see everything in its raw form, if for no other reason than my doctor would be expected to know my medical history, if it were readily available. (And yes, if I had a chronic disease or were the caregiver for someone with a chronic disease, the immense size of the EHR would be truly overwhelming. I imagine that doctors might be afraid of being expected to process huge EHRs belonging to new patients.)
Now, consider an emergency room doctor. If I were lying on a bed in an emergency room, barely conscious, having just collapsed, complaining of a horrendous headache and of nausea, and unable to answer questions, the doctor would need data fast. The display that the ER doc uses would not be on a general purpose desktop computer, would not provide that massive raw data view, and would present information in a highly readable form.
Most importantly, that computer would have to be instantly adaptable to suit the needs of an emergency, and then later, go back to a non-emergency mode, to be of help in further treatment.
Or, it might be that the web server, and not the machine in the ER, contains the ambient software. The machine in the ER might be a very simple client. But either way, the combined web application and local client would have to be capable of searching my online EHR, to look for possible problems, and to display them. It might deliver up the fact that just this morning, I had minor surgery on the baby finger on my left hand – and since I was so squeamish, I was given general anesthesia.
Boom. The ER doc figures out that my headache is from high blood pressure, which, along with nausea, is a common side effect of anesthesia, and it can hit hours later. The doc now knows that if I’m given a blood pressure reducing drug, I’ll be fine. But I might have to first be given an anti-nausea drug, and obviously, I wouldn’t be able to swallow that and keep it down, and so it would be administered at the other end of my food processing subsystem.
Wait, one more thing. What about RFID tags? Maybe I have one around my neck, and that’s how the doc figured out who I was in the first place, since I was stumbling around with no driver’s license. The machine in the ER scanned the tag – and voila.
The Reach of the Web.
If you think about it, by leveraging the Web, ambient devices can be empowered in incredible ways – and in the years to come, we’ll see a new generation of such web applications emerge.
(Finally, if my medical scenario is ridiculous, and you are a medical professional, then I’m sorry.)