Making faux-intelligent searches more effective.
In this posting, we look at a key method for making these simulated intelligent searches more accurate: using human experts to train the database search facility. To do this, we need four main components: a media base for which we want to develop an effective search facility, a feedback cycle involving skilled experts, a body of media artifacts to use during the training process, and an initial search facility that we want to train. We don't need the first item in order to engineer our first cut at a media search facility.
The initial search facility.
There is a wide class of techniques that are used to search advanced media, like images, sound, and video.
One approach is to base the search facility on a hierarchy of classifications. Consider a database of digital photographs. We might have two main categories: interior and exterior. These might form the first two branches of a hierarchy. Exterior shots might be subdivided into shots in sunlight on land, shots in sunlight on water, shots at night on land, and shots at night on water. These would be further divided, and clearly, the categories would be more sophisticated than the somewhat silly ones I am suggesting here.
Importantly, this hierarchy might be very broad and very deep, thus forming a huge inverted tree, with the top node being called photographs.
Also importantly, the words in this hierarchy are likely to come from a namespace shared by professional photographers.
It isn't enough to simply have a nice, standardized hierarchy for classifying photographs. We need to be able to automatically place photographs in their proper categories in the hierarchy. Each one will be assigned a term that comes from the photographer namespace and appears on a leaf (a node with no branches below it) of our inverted tree. That term, along with all the terms on the branches leading down to that leaf, would apply to a given photograph.
How do we do this? We do it with image processing techniques, something we will discuss in a subsequent blog posting. For now, we'll just say that there is a large body of existing software that can classify images and video and sound, using a variety of heuristics. This software can judge the amount and nature of light in a scene and use it to decide if a photograph was taken indoors or out of doors, for example.
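To make this concrete, here is a minimal sketch of such a heuristic in Python, using the Pillow imaging library. The brightness cutoff and the leaf names are assumptions made purely for illustration; real classifiers use far richer features than average brightness.

```python
from PIL import Image, ImageStat

def classify_photo(path, brightness_cutoff=110):
    """Assign a photo to a (hypothetical) leaf of our hierarchy using a crude light heuristic."""
    img = Image.open(path).convert("L")        # grayscale copy of the image
    brightness = ImageStat.Stat(img).mean[0]   # average pixel brightness, 0-255
    if brightness >= brightness_cutoff:
        return ["photographs", "exterior", "sunlight"]     # path from the root down to a leaf
    return ["photographs", "interior", "artificial light"]

print(classify_photo("vacation_042.jpg"))
```

The list returned is the path from the root of the inverted tree down to a leaf, which is exactly the set of terms we said should apply to the photograph.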
A body of training media artifacts.
This might be a subset of the media artifacts that we want to put into our database when it is deployed for use by non-experts. Or it might consist of a well-understood set of test media artifacts with which our experts are familiar and which is used specifically for training our system. (Again, in our example, these are digital photographs.)
The feedback loop.
This is often called “learning”, and it refers to the process of allowing experts to provide accuracy feedback on the results of search attempts. Essentially, the feedback loop provides a way for experts who are familiar with our photography namespace to reclassify a photo if the search facility has it associated with an inappropriate or non-optimal leaf in our tree.
During the training process, we let the system automatically classify the photos in our training set, but every one of them is carefully analyzed by our experts and reclassified as necessary. The search facility doesn’t simply accept the new classifications. It responds by altering (and perhaps extending) its method for deducing the proper classification of a given photo. We will look at this again, as well, in a subsequent posting of this blog.
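Here is a toy sketch of that loop, continuing the brightness example from above. When an expert reclassifies a photo, the classifier nudges its cutoff; a real system would adjust a far more elaborate model, but the shape of the loop is the same. All names and numbers here are illustrative assumptions.

```python
def train(cutoff, training_set, expert_label):
    """Nudge the brightness cutoff each time an expert reclassifies a training photo."""
    for photo, brightness in training_set:
        machine_label = "exterior" if brightness >= cutoff else "interior"
        correct_label = expert_label[photo]            # the expert's (re)classification
        if machine_label != correct_label:
            # move the cutoff a little toward the value that would have been correct
            cutoff += 5 if correct_label == "interior" else -5
    return cutoff

new_cutoff = train(110,
                   [("p1.jpg", 120), ("p2.jpg", 90)],
                   {"p1.jpg": "interior", "p2.jpg": "interior"})
print(new_cutoff)   # 115: the cutoff drifted upward in response to the expert's feedback
```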
The (always growing) database of media.
Once the search engine has been put into place, and once it has been trained (at least enough to use it in production mode), it's time to load the entire body of media artifacts. Quite likely, the feedback loop will be left alive and the training process will continue indefinitely on a selective basis, depending on how well the search facility seems to be performing. But the larger body of media artifacts will be classified automatically from here on out. This is the only way to create a media search facility that scales to truly vast libraries of media artifacts.
More to come…
Searching by semantics.
Here, we look closely at one specific issue related to managing complex media: How to categorize and search advanced forms of media by their “meaning” or “semantics”. This is extraordinarily difficult, and in fact, in general, it is impossible. This is why we usually rely on relatively low-level heuristics and can only simulate search-by-semantics in simplistic ways.
Consider a library of soundless video clips. Let’s assume there are many thousands of them, and they vary in length from seconds to hours. First of all, the only clips we can afford to download and actually view in real time are the ones that are only seconds or minutes in length, and we can do this only if we are somehow able to limit the search space to a small handful of candidates. Keep in mind that a video can consist of twenty to forty images per second.
So what do we do?
We could search tiny samples of our video clips, perhaps taken from the beginning, the middle, and the end of each clip, but this doesn't actually work well, either. We need something automated, something that can scale.
The dominant technique is to extract information concerning low level attributes of the video clips (such as their format and pixel count) automatically, and then have experts add more tagging information by using widely adopted, formal namespaces. We might use a geography namespace to mark clips as having rivers and mountains in them.
These two forms of tagging information might be encoded together using the very popular MPEG-7 language. This creates a very indirect way of searching video clips. We don’t actually search them. We search the hierarchically constructed MPEG-7 tag sets that describe the videos. This at least allows us to use SQL in a reasonably straightforward way to do the searching.
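Here is a sketch of what that indirect search looks like, using Python's built-in sqlite3 module. The table layout and tag values are hypothetical stand-ins for a real MPEG-7 description store; the point is that the query touches only the tags, never the video itself.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE clip (id INTEGER PRIMARY KEY, filename TEXT, pixel_count INTEGER);
    CREATE TABLE tag  (clip_id INTEGER, namespace TEXT, term TEXT);
    INSERT INTO clip VALUES (1, 'andes_flyover.mp4', 2073600);
    INSERT INTO tag  VALUES (1, 'geography', 'mountain'), (1, 'geography', 'river');
""")

# Find clips tagged, in the geography namespace, with both 'river' and 'mountain'.
rows = con.execute("""
    SELECT c.filename
    FROM clip c
    JOIN tag t1 ON t1.clip_id = c.id AND t1.namespace = 'geography' AND t1.term = 'river'
    JOIN tag t2 ON t2.clip_id = c.id AND t2.namespace = 'geography' AND t2.term = 'mountain'
""").fetchall()
print(rows)   # [('andes_flyover.mp4',)]
```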
Searching for specific images.
There is very good technology for processing images for fixed pixel-based subcomponents like individual faces. We can also search for video clips that have any faces at all in them.
In general, it’s easier to search for things made by people because they tend to be more angular and regular in shape. These include specific buildings and types of aircraft.
Searching for colors and shapes.
We can also search for more abstract subcomponents of images, like polygons, circles, and the like. Despite the fact that video images are pixel-based (or “raster”), there is good technology for isolating the lines that form the boundaries of subcomponents.
And we can look for colors and compare the relative location and dominance of various colors, finding, say, images where 63% of the pixels are a particular shade of orange.
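Here is a minimal sketch of such a color-dominance check, in Python with Pillow and NumPy. The RGB range used for "orange" is an arbitrary assumption; production systems work in more careful color spaces.

```python
import numpy as np
from PIL import Image

def fraction_of_color(path, lo=(200, 80, 0), hi=(255, 170, 80)):
    """Return the fraction of pixels whose RGB values fall inside a crude 'orange' box."""
    pixels = np.asarray(Image.open(path).convert("RGB"), dtype=np.uint8)
    mask = np.all((pixels >= lo) & (pixels <= hi), axis=-1)
    return mask.mean()

if fraction_of_color("sunset.jpg") >= 0.63:
    print("at least 63% of this image is (roughly) our shade of orange")
```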
Searching for change over time.
We can also search for pattern changes in the series of images that make up a video clip.
But none of this has much to do with the real meaning or semantics of images and the video clips they form. Taking this next step is a huge challenge.
How can we look for a setting sun or a ball moving across a tennis court, without knowing the details of the sunset or the particular tennis court in advance?
We can use the colors and shapes approach to look for a big orange ball descending below a possibly-jagged horizontal line. We could look for a small, white or yellow spherical object moving across a big green rectangle.
One way to raise the bar a bit is to use domain-specific knowledge about the images being processed. It’s a whole lot easier to spot that tennis court if we know that’s what we’re looking for. Then we can fill our searching software with lots of detailed information about the various sorts of tennis courts. We can also more easily isolate the tennis court in a larger image if we know it’s there somewhere. This gives us an extra edge, so we can perhaps find the court, even if it turns out to be brown and not green, or if the surrounding terrain is almost the same color as the court.
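As a toy illustration of that extra edge, here is a hedged Python sketch that accepts either a green or a brown playing surface, because our domain knowledge tells us a court is in the frame somewhere. The color ranges and the 20% coverage rule are assumptions, not values from any real system.

```python
import numpy as np
from PIL import Image

# Domain knowledge: the tennis courts we care about tend to be green or brown.
SURFACE_RANGES = {
    "green": ((20, 90, 20), (110, 200, 110)),
    "brown": ((90, 50, 20), (190, 130, 90)),
}

def find_court_surface(path, min_fraction=0.20):
    """Report which known court color (if any) covers a sizable block of the image."""
    pixels = np.asarray(Image.open(path).convert("RGB"), dtype=np.uint8)
    for name, (lo, hi) in SURFACE_RANGES.items():
        mask = np.all((pixels >= lo) & (pixels <= hi), axis=-1)
        if mask.mean() >= min_fraction:
            return name
    return None

print(find_court_surface("match_point.jpg"))
```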
We of course never get away from searching by heuristics that only simulate the process of determining the true meaning of a series of images. We can never truly search by semantics.
But we can do something else: we can get humans into the loop and train our software to do a better job. We'll look at this next.
Managing advanced forms of media, such as images, sound, video, natural language text, and animated models, has been discussed a number of times in this blog in the past. Traditional information systems, such as relational databases, have been engineered largely to handle the sorts of data we have in business applications, primarily simple numeric and character string data. To the SQL database programmer, the nice part is that the data speaks for itself. If a field is called Name, and the value is Buzz King, the semantics of “Buzz King” is pretty obvious, and it can be processed in a largely automatic fashion. The same goes for a field called Age, with a value of “97”.
Searching advanced media: far, far more difficult.
But modern media is far more complex than this. “Blob” data, like images, and continuous data, like sound, video, and natural language text, are very difficult to search and interpret automatically. There are two approaches that have been taken to resolve this dilemma.
Tagging: the simple approach.
The first is tagging. Descriptive terms, often taken from large, shared vocabularies, are attached to pieces of media. These vocabularies can be very domain-specific, dedicated to areas like medicine, law, and engineering.
Intelligent processing software: the second approach.
The second technique is the automatic processing of pieces of media using image processing, natural language, and other highly intelligent software. These applications are very sophisticated and understood only by experts. And, these applications often demand a lot of processing time, and this makes bulk processing impossible. It’s also true that the results can be haphazard. Some pieces of media can be interpreted precisely, others not so precisely – and dramatic mistakes are frequent. A tennis court might be mistaken for an airplane runway. There’s a huge trust factor involved in cranking up image or sound processing software or natural language software.
Often, we can provide feedback so that these applications can learn, over time, the way we want media to be interpreted. We can help the software learn the difference between a tennis player and a member of a ground crew on a small runway. All of this is hugely expensive, in terms of the cost of developing the software, and in terms of the physical resources needed to run the software.
A middle ground? Not really.
So, is there some middle ground? Something simple, yet more “intelligent”? Yes, and the answer is to take a sophisticated approach to what otherwise might be very simple tagging techniques. However, the core problem with tagging remains: we search and process tags – and not the actual data. It is an indirect, but fast process. The goal is to come as close as we can to simulating the results of such things as image processing, but to do it with a simple, yet comprehensive, accurate tag-based technology.
We’ve looked at some of the solutions that have been proposed. They include Dublin Core, MODS, and MPEG-7. The first is very simplistic. The second is more sophisticated, in that the terminology used is broader and far more precise. The third is very aggressive in that it supports the complex structuring of tag data elements.
So, what are we really doing?
In essence, we build a hierarchy of metadata and then instantiate it for every piece of media we want to catalogue and later search. What we are doing is creating a parallel database, one where every piece of blob or continuous data is accompanied by a possibly very large tree of structured tagging information. The parallel database has its own schema and an instance of it is created for every piece of media in the original media database.
The end result? Instead of creating some sort of media-centric query language, like an SQL-for-video, we give up on trying to search the media database itself. The query language remains largely ignorant of the nature of blob and continuous media. We can continue to refine and expand the schema of the parallel database until search results are satisfactory.
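Here is a miniature sketch of that parallel database, again using Python's sqlite3 module. The schema is a hypothetical simplification: the original store keeps only a pointer to the blob, and a tree of description nodes sits beside it.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- the media store keeps only a pointer to the blob or continuous data
    CREATE TABLE media (id INTEGER PRIMARY KEY, uri TEXT);

    -- the parallel database: one tree of description nodes per piece of media
    CREATE TABLE description (
        id        INTEGER PRIMARY KEY,
        media_id  INTEGER,
        parent_id INTEGER,   -- NULL for the root node of the tree
        term      TEXT
    );
""")
con.execute("INSERT INTO media VALUES (1, 'file:///clips/harbor.mp4')")
con.executemany("INSERT INTO description VALUES (?, ?, ?, ?)", [
    (10, 1, None, "scene"),
    (11, 1, 10, "exterior"),
    (12, 1, 11, "water"),
])

# Queries run against the description trees, never against the clip itself.
print(con.execute("""
    SELECT m.uri FROM media m JOIN description d ON d.media_id = m.id
    WHERE d.term = 'water'
""").fetchall())
```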
The Semantic MediaWiki.
In this posting, we look at the Semantic MediaWiki, something else that Greg told me about. It is an extension of MediaWiki, the application that the Wikipedia is built out of. You can learn all about it at the Semantic MediaWiki website. The idea behind Semantic MediaWiki is to provide a more powerful wiki tool, namely one that supports more than just human-readable things like text and images.
RDF and namespaces: creating machine-readable, web-based information.
The idea is to allow entries in wikis that contain machine-readable information, so that searching can be performed in a largely automatic fashion. Specifically, the Semantic MediaWiki allows users to export information from a wiki in RDF format. An RDF specification consists of “triples” that form “assertions”. Consider the following:
Assertion 1: Joe is tall.
Assertion 2: Tall People should try out for Basketball.
The idea is for terms in triples (“Joe”, “tall”, “is”, “Tall People”, etc.) to be taken from predefined and globally accessible namespaces. This would ensure that everyone who uses a given term (like “tall” or “Should try out for”) will have the same meaning in mind. In this way, rather than having to painfully search for information that pertains to Tall People, for example, a smart search engine could do the searching for us.
Building locally, growing globally.
There is more to this. These namespaces can be available on the Web, and RDF statements can point to the relevant namespaces. This means that software searching the Web, and processing these triples, can easily find the relevant namespaces.
Also, the things on the left and right sides of a triple (like “Joe” and “tall”) can themselves be Web-based resources. This means that information scattered around the Web can be interconnected – but all the work can be done locally. No one has to manually integrate millions of websites. The job can be done little by little, in a quiet way, as people start to store their information in an RDF-compatible fashion.
This is how the Semantic Web will scale. Everyone will use shared namespaces and shared protocols like RDF. This will, in essence, turn the Web into one big website that can be searched in a partly automatic fashion.
SPARQL: querying RDF-based information.
How will we interrelate data scattered around the Web?
There is a query language out there, called SPARQL, that can be used to search the Web. SPARQL can follow RDF connections around the globe. How is this done? It has to do with being able to “infer” new things. Consider a fact that can be automatically deduced from the two assertions above:
A new inference: Joe should try out for Basketball.
Assertion 1 could be on a server in Detroit, and assertion 2 could be on a server in Miami, and SPARQL could do the job of making the leap that leads to the new inference.
This means that we could figure out what Joe should be doing right now without having to find the two pieces of information manually (the fact that he is tall, and that tall people should play basketball), and without having to make the inference ourselves.
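Here is a small sketch of that leap in Python, using the rdflib library. The namespace URL and property names are made up for illustration; the SPARQL query simply chains the two assertions together to produce the new inference.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/basketball#")   # a hypothetical shared namespace
g = Graph()
g.add((EX.Joe, RDF.type, EX.TallPerson))                   # Assertion 1: Joe is tall
g.add((EX.TallPerson, EX.shouldTryOutFor, EX.Basketball))  # Assertion 2: tall people should try out

query = """
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX ex:  <http://example.org/basketball#>
SELECT ?person ?sport WHERE {
    ?person rdf:type ?group .
    ?group  ex:shouldTryOutFor ?sport .
}
"""
for person, sport in g.query(query):
    print(person, "should try out for", sport)   # the new inference: Joe, Basketball
```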
This is a big deal. This sort of automation is what the Semantic Web is all about.
So what do real people do with the Semantic MediaWiki? We'll look at this next.
It's called DBpedia. A former graduate student at my university, Greg Ziebold, pointed me toward it. The goal of the DBpedia is to transform data from the Wikipedia into a chunk of the Semantic Web. To do this, DBpedia is using RDF technology, something we have discussed in past postings of this blog. Behind RDF is an extremely simple concept, but one that has proven extremely powerful and versatile.
The general idea is to break knowledge up into “triples” that describe relationships between pieces of information. These triples can be chained together to discover new relationships. And, importantly, triples must make use of widely shared sets of terminology, called namespaces, in order for knowledge from different places on the Web to be properly chained together.
RDF, triples, assertions, and inferences.
A thorough example can be found in a previous posting of this blog.
Here is a very simple example of triples (also known as “assertions”) and how they can be put together into “inferences”.
Assertion 1: Joe is tall.
Assertion 2: Tall People should try out for Basketball.
A new inference: Joe should try out for Basketball.
Keep in mind that we would want to make sure that the words used in these assertions have precise, global meanings. We might take the terms in these two assertions from a basketball namespace, one that would carefully dictate exactly what “tall” means in the basketball world. Certainly, it would be quite different from the meaning of “tall” in a kindergarten namespace.
More on DBpedia.
There’s a fancy word for sets of triples that use namespaces and represent various areas of knowledge. They are called “ontologies”, taken from the term used by philosophers to argue about the existence of various things, like God. The DBpedia is essentially a vast ontology, formed from triples and namespaces. Most of the knowledge defined by this ontology comes from the Wikipedia. The folks behind the DBpedia have been given direct access to the flow of information into the Wikipedia, so that the DBpedia can stay current.
One way to look at the DBpedia is that it takes the Wikipedia and reforms it into something that can be searched far more effectively. Right now, to search the Wikipedia, most of us simply type in terms (either into Google/Yahoo or into the Wikipedia search page). We try various terms and follow links inside the Wikipedia until we find what we think we are looking for. With the DBpedia, users can search with SPARQL, a language based on the structure of SQL and engineered specifically for searching large bases of triples. SPARQL allows us to traverse networks that consist of triples linked by inferences.
That way, if we were a coach looking for promising candidates for our team, we could use SPARQL to make the connection between Joe being tall and the fact that tall people should try out for basketball. This is clearly much faster and more accurate than googling things like “tall”, “basketball”, etc., until we happen to find Joe in one of the web pages that pop up.
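For a flavor of what such a query might look like against the real DBpedia endpoint, here is a hedged Python sketch using the SPARQLWrapper library. The class and property names come from the DBpedia ontology as I understand it, and the height cutoff is arbitrary; treat this as a sketch, not a recipe.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setReturnFormat(JSON)
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT ?player ?height WHERE {
        ?player a dbo:BasketballPlayer ;
                dbo:height ?height .
        FILTER (?height > 2.10)    # only the very tall candidates
    }
    LIMIT 10
""")

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["player"]["value"], row["height"]["value"])
```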
The DBpedia website, by the way, claims to have a triple base that consists of 274 million RDF triples.
More on this in the next posting.
Assertions and Inferences.
A key concept is that of an “inference”, a fact that is created by putting together two or more pieces of information that we might call “assertions”. We used the following example in a previous posting. The two assertions might be posted on the Web somewhere.
Assertion 1: THE BALL is ORANGE.
Assertion 2: ORANGE is an UGLY COLOR.
An inference created by putting the two assertions together: THE BALL is an UGLY COLOR.
We have also discussed the fact that terminology used in inferences must be very carefully defined and widely shared.
What is a Surrogate?
The word surrogate, in the programming world, refers to a measure or model that is used to approximate the “real” measure or model. If I am trying to estimate the depth of the ocean at some point, but don't have a direct way of measuring the distance to the ocean floor, I might judge the depth by using a table that associates distance from the shore with the depth of the ocean. The assumption is that all points that are a particular distance from the shore will have more or less the same depth.
Here's the important point for us: The Semantic Web will make very heavy use of surrogates. Let's be precise about this. We're not talking about approximations. An approximation would be this: we search the Web for all banks that provide accounts that earn 5%, and our smart search engine points us to banks that, on average over the past two years, have paid at least 5.0% on their accounts. A surrogate is something different. Suppose we wanted to find all banks that never cheated their customers. This might be impossible to answer precisely, so we might look for banks that are in the bottom 10% when it comes to the number of formal complaints filed against them. That would be a surrogate.
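A tiny Python sketch of the bank surrogate, just to pin the idea down. The complaint counts are invented; the point is that "bottom 10% by complaints filed" stands in for the unanswerable question "has this bank ever cheated anyone?"

```python
# Hypothetical complaint counts; the real question cannot be answered directly,
# so we rank by this surrogate measure instead.
complaints = {"First Trust": 3, "Oak Savings": 41, "Harbor Bank": 7,
              "Metro Credit": 95, "Union Mutual": 12, "Lakeside": 1,
              "Pioneer": 60, "Summit": 28, "Granite": 16, "Cedar": 33}

ranked = sorted(complaints, key=complaints.get)   # fewest complaints first
cutoff = max(1, len(ranked) // 10)                # keep the bottom 10% of the list
print(ranked[:cutoff])                            # -> ['Lakeside']
```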
Surrogates on the New Web.
Now, let’s consider the Web. It doesn’t matter if we are talking about the Web today or the emerging Semantic Web.
In fact, what we are concerned with here is general to computing: when we take a chore normally performed by a human using an interactive interface and turn that chore over to a computer program, we often turn a real-world decision into a decision based on very simplified surrogates. A human can look at a bunch of information and, although it may take a very, very long time, make a “perfect” decision based on that data. But computer programs cannot think like a human. We can only crudely simulate with software the process of thinking that goes on in the mind of a real person.
Now, back to the Web, the new Semantic Web. Suppose we build a next-generation website and use an official namespace (a structured set of terms) to specify assertions about our content. What we're doing is providing a surrogate for the smart search engine to use so that it can do the filtering of URLs and the integrating of information from multiple sites.
Consider our two assertions from above, along with the inference derived from them:
Assertion 1: THE BALL is ORANGE.
Assertion 2: ORANGE is an UGLY COLOR.
An inference created by putting the two assertions together: THE BALL is an UGLY COLOR.
Maybe we are shopping for a ball online. We might have to follow hundreds of URLs and search hundreds of websites to find just the right ball. But who said the ball is orange? It's an approximation made by the vendor of the ball in question. It has been labeled orange. But maybe it's a shade of orange that we would actually have liked if we had looked at the picture of the ball ourselves instead of leaving it to the search engine.
Well, we might argue that the word orange, if it is precisely defined, won’t be confused with some other color. We can be confident that our notion of orange is the same as the vendor’s notion of orange. We do know how to express colors very precisely by using numbers.
So, let’s change the assertions and the inference a bit:
Assertion 1: DOROTHY THE DOLL is PRETTY.
Assertion 2: WE want a PRETTY DOLL.
An inference created by putting the two assertions together: WE might want DOROTHY THE DOLL.
Now, how could the notion of pretty ever be globally and uniformly defined?
Maybe we should shop for our own dolls and not leave it to a next generation search engine.
The Semantic Web will trade accuracy for speed. No way around it.
The Semantic Web.
In this posting, we will focus on the Semantic Web, which is a global effort at radically improving our ability to search the Web.
Currently, to search the web, we type keywords into a search engine like Google, which then searches its vast index of webpages for pages that have these keywords in them. Because this sort of search is very low-level, and not at all tied to the true meaning or purpose of the information stored in webpages, searching is painfully iterative and interactive. A user must chase down countless URLs returned by a search engine to see if any of them are relevant. Quite frequently, they are not. And so, the user must refine the set of keywords and try again. It might take many attempts before a satisfactory result is obtained.
One of the primary goals of the Semantic Web is to automate the process of searching the Web. There are two stages to this. First, people who post information on the Web must capture knowledge about the meaning of their information; this knowledge is commonly called “metadata”. The metadata is then stored with the posted information.
The second stage happens when users search the Web. Rather than using the low level keyword search approach, the search is at least partly automated. The iterative process is sharply reduced by employing a smart search engine that knows how to find relevant information by searching for metadata that pertains precisely to whatever it is that the user is seeking.
The bottom line.
The Semantic Web would be able to ease the burden of searching for information, as well as find vast stores of “hidden data” that reside in databases that are accessible via webpages, but whose contents right now are not seen by search engines.
Ultimately, we would want the Web to be entirely searchable by software, without any humans guiding the process. This would be the true Semantic Web.
Namespaces and triples.
In past postings of this blog, we have discussed a handful of key approaches to implement the Semantic Web. One idea is to tag information with standardized sets of terminology called “namespaces”.
We have also looked at the idea of embedding these tags in things called “triples”. In this posting, we look at this concept more closely and consider an existing language that would allow people to specify these triples.
RDF and SPARQL.
The most well-known standard for specifying triples is RDF, which stands for the Resource Description Framework. SPARQL is a query language, heavily influenced by SQL, that can be used to search data that has been structured using RDF.
This is the first of a series of blog postings in which we will first look at RDF, and then at SPARQL. Then, we’ll consider the big issue: will RDF and SPARQL enable the development of the true Semantic Web?
So, what is RDF? At its highest level, RDF is used to describe anything that can be found on the Web. RDF has an XML syntax; in other words, RDF can be written as an XML document, using a set of predefined “element” and “attribute” tags. (XML and XML languages were discussed in an earlier posting of this blog, as were XML and declarative languages.)
We might remember that on its own, XML is impotent. It is not in itself a programming language. It is simply a language standard for taking a set of tags and using them as “elements” and “attributes” in declarative, data-intensive languages. A good example is SMIL, which is used to define multimedia presentations.
Here is a fragment in RDF, using its XML syntax. Note that XML languages are embedded languages, with opening tags written as <tag> and closing tags written as </tag>.
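It looks something like this (the someurl.org and awebsite.org addresses are placeholders, as noted below):

```xml
<rdf:RDF
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:zx="http://someurl.org/ns#">
  <rdf:Description rdf:about="http://www.awebsite.org/index.html">
    <zx:topic>funstuff</zx:topic>
  </rdf:Description>
</rdf:RDF>
```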
This looks complicated, but it’s not. This simple example illustrates the power of RDF. It uses a set of standardized RDF-specific tags, and the second line of code tells us where these tags come from: the w3.org site, which contains a vast store of information about advanced web technology. In other words, we can go to w3.org to find the precise definition of RDF specific tags.
RDF is engineered to also use other sets of tags, in particular, domain-specific tags. In this example, these tags come from a (non-existing) url called someurl.org. The tags themselves are prefaced with “zx:” in the rest of the code, so we know which tags are native RDF and which come from a domain-specific set of tags (called a namespace).
The xml “element” called Description is an RDF-specific tag that tells us we are giving the description of some resource on the Web, namely one at a (non-existing) website called awebsite.org.
The whole piece of code is one triple: It says that the topic of the resource at www.awebsite.org/index.html is funstuff. Here it is as a triple, with all the xml syntax and the namespace information removed:
www.awebsite.org/index.html <topic> funstuff.
Let’s overview this again. RDF is an XML language, so it uses the syntax of XML. One of the primary concepts in XML is that of an “element”, and Description is an XML element, one defined in the RDF standard. The piece of code begins with two namespace statements, one telling us which RDF specification we are using, and the second telling us that we will also be using some tags from another, domain-specific specification, which includes the tag “topic”. Then there is the guts of the triple, telling us that we are listing the topic of a Web-resident resource.
More on this in the next posting…
The Semantic Web – a primary topic of this continuing blog series – will help us search the web with greater ease. One of the things it will (hopefully) do is expose a vast sea of information that is currently invisible to our web browsers. In fact, some say that right now, we can see less than 1% of what’s out there. I cannot vouch for this number, but I can say that what we cannot see right now includes large volumes of extremely valuable data.
Perhaps you have heard of the mysterious “Hidden Web”? So, what is this stuff and where is it?
Forms, Databases, and Interactive Interfaces.
The Hidden Web refers to data that is out there on the web, publicly accessible – but only via webpage interfaces that are opaque to the indexing software of search engines like Google.
Let’s step back for a moment.
The way search engines work, in case you don't know, is by constantly searching the web, looking for new webpages. When a new page is found, it is added to the search engine's index, meaning that now, when people search the web with Google, they might get the URL for that page in their search results.
The important thing to note is that the primary source of information that Google uses when it indexes a page is the page itself. What words are on it?
This sounds great for static webpages that are stored as-is on websites and delivered as-is to the Google user.
But suppose we want Google to find dynamic pages? A typical dynamic page has content that isn’t known until an interactive user types some words into a web “form”. A web form is a page where the browser user fills in blanks and then lets the browser send the completed page back to the server. There, the information in the form is used to select other information, which is plugged into a “dynamically” created page that is sent to the client machine and viewed by the browser user.
So, I might visit Amazon. I navigate to their search page, which is a form, and I type in the title of the book I want. That information goes back to the server. A description of this book, including its cost, is plugged into a dynamically created page, which is then downloaded to my machine so that I can read the material with my browser.
Indexing Dynamic Pages.
So, if I have information that is not sitting in static pages, how can I get Google to index this information? There are multiple ways. For example, if the primary job of your website is to create large volumes of dynamically created pages, you might want to create a special directory page for your site – a static page – loaded with all the right words, and that contains links to the pages and forms you want the user to discover.
On the future Semantic Web, you might want to make sure that those magic words come at least in part from globally accessible namespaces, so that people who are using next-generation browsers, and who will be using these namespaces as a source of search keywords, will find your static page. As we have discussed, namespaces will provide us with detailed sets of terms, which will be tied to specific domains. This will make the search for static pages far more efficient than it is now.
As an example, a namespace concerning books might have words like ISBN-10 and ISBN-13. If the web designer uses these terms to describe static pages about books, and if the user of the browser can specify that they are looking for ISBN numbers, the browser will have a much more detailed idea of what is meant by those 10 and 13 digit numbers the user types in.
Here's the critical part. Right now, Amazon lets you search by these numbers on their specialized web form page, but imagine if you could at any time tell your browser to look for ISBN numbers on whatever webpages it searches.
An example of a namespace that is used to describe documents on the web is the Dublin Core, by the way.
So, that's one way to make your dynamic pages somewhat visible. Create a web page that is static and leads to the pages you want users to see, and to make it all the more powerful, use terms from a globally accepted namespace like the Dublin Core. This is something that is already partly doable. The Dublin Core, along with other namespaces, is in wide use.
Where Does that Information Come From?
Is there a better way, though? This technique will only point users to our static web directory, which will then enable interactive users to find our web forms. The users must then use our forms to get detailed data. Could the searching for dynamic pages be made more automatic?
Well, where does data in dynamic pages come from? Often from large databases built with such database management systems as Oracle, SQL Server, MySQL, PostgreSQL, and DB2. This is why some folks conjecture that the amount of information in the Hidden Web is vastly bigger than the web we see today. Databases can be BIG.
Imagine that all the information on the ancient Pharaohs, genetic diseases, investments, philosophy, and countless other topics is sitting inside databases that right now are only accessible via web forms. Right now, we Google keywords like “pharaoh” and the first things we see are static, highly condensed Wikipedia pages, and perhaps some static pages posted by museums and academics.
What Will the Semantic Web Do?
A primary challenge for the Semantic Web will be to let us ask for information and know that the search space will include information tucked away in databases dotted all around the globe.
This is a very complex problem. Right now, we need a human sitting at the keyboard of the client machine to navigate to the correct URL and then type terms into a web form. In the future, web designers will need ways of capturing information about what is contained in databases, and to specify that information in a fashion that browsers can access. And this information will have to be very detailed, sometimes very intricate.
The browser will also have to take information specified by the user and match it up with the information that describes databases on the web. This means that we will need some automatic way to search databases without a user interactively and incrementally screening tens or hundreds or thousands of URLs. In an earlier blog posting in this series we described one possible technique called “triples” that might, combined with namespaces, provide a partial solution to this problem.
We will look at this again, more closely, in a future blog posting.
This blog concerns advanced Web technology, in particular, Web 2.0/3.0 and the Semantic Web. Each blog entry should be fully understandable on its own, but the blog as a whole tells a continuing story.
Very roughly, we’ve defined the Web 2.0/3.0 as the class of emerging web applications that are highly responsive, to the point of being competitive with desktop apps. Another characteristic is that they can manage large volumes of very complex media, like images, sound, and animation, as well as interconnected forms of media. We’ve looked at some specific advanced web applications.
Our concern here, in this blog entry, is the Semantic Web, which we have also roughly defined. The Semantic Web is something that does not yet exist, but would meet the very aggressive goal of supporting largely automatic web searches, freeing us from excruciatingly interactive, manual Google and Yahoo sessions. And we’ve seen that we would use such things as shared namespaces, intelligent full text searching, and XML-based markup languages to embed information in websites that could be used by smart browsers to perform far more accurate searches.
Web services would help a lot, too, by taking humans out of the loop when providing powerful web-based capabilities; one website can now provide a vast amount of information, for example, by silently using web services to collect information from many other web-based sources.
(By the way, we have also looked at precisely what we mean by “semantic” in the Semantic Web.)
The way we pay.
This all sounds very good. The Web would be far more useful, with automatically searchable Semantic Web-sites. But there’s a bad side to all of this, and it has to do with how we often pay for Web use.
The problem is that we often do not pay at all. At least not directly, with money. We pay by putting up with ads. Free email services, such as those hustled by Yahoo, Hotmail, AOL, and Mail.com, are generally accessed via web browsers, and we find the main pages of these email accounts stuffed with ads.
Some free email accounts even stick ads in your outgoing mail!
Often, the only way to get the ads stripped from a web mail interface is to pay a fee. We might also get more than just ad-free web mail pages; paying sometimes allows users to access their email with POP or IMAP protocols, via desktop clients (like Outlook and Apple Mail), thus avoiding ads in another way.
(As an aside, there are free email sites that either have no ads in them, or only very subtle ones. Try Gmail.com and Inbox.com. My favorite, with its clean interface and growing set of accompanying capabilities, is GMX.com.)
As it turns out, folks looking to buy ad space online find that they have a vast array of choices, and this drives down the cost of ad space. But these two things, an ever-growing list of free online services and cheap ad space, are related. This is because it is all too easy to build useful web applications. Like browsers, bulletin boards, calendar apps, blogging services, and stickies applications, email servers are cheap to build and maintain. Vendors can use canned, largely free software components.
And, transmission costs on the Internet are effectively free, and the bandwidth is huge. Free email accounts often offer a gigabyte or several gigabytes of storage, because disk space is dirt cheap, too.
There is a lot of rebranding going on, too, where someone seems to be offering free email (or some other service), but it is actually being provided by a large email provider.
So, the way things have shaken out is that free web apps like email servers look like NASCAR racing cars, covered with colorful ads. Many of these ads consist of video, and so we have to battle distracting, flashing colors so we can focus on our mail.
The trick behind online ads.
There is something happening in the online ad world: folks who provide these free, pay-for-it-with-ads services are learning to carefully target ads. There is specialized software available for this, and by plugging in some smarts, folks can make the ads that appear on your screen far more likely to be of interest to you.
How is this done? By watching what you type into search engines, by taking advantage of personal information you supply when you sign up for free email accounts and other services, and by carefully examining the content of the messages you send and receive, that’s how it’s done.
It's important to point out that this works. The “click through” rate on ads can be radically improved, just by using some simple heuristics in choosing your ads. Folks who pay for ads love this, and it has allowed individuals who don't even provide free web applications to turn themselves into ad space sellers. Your blog, your specialized website, can now host ads carefully targeted toward the visitors to your blog or your website.
But just wait for the Semantic Web.
But it will really kick in when the semantic web is here. The same technology that would make browsers far, far smarter about finding good URLs for you will make the targeting of ads at you extremely precise.
This slowly-emerging technology is badly needed by the folks who sell ad space and by the people who buy that ad space. That's because you and I are starting to get used to this world of NASCAR websites. We are looking through or past or around the ads. They need to be made a lot smarter, in order to get our attention back.
But by using Semantic Web technology to radically increase click-through rates, by getting us interested in ads again, impulse shopping on the Web might skyrocket. It’s very easy to go from seeing an ad for a product you have never heard of before to having bought it.
Like little kids watching commercials for sugar-heavy cereals on Saturday cartoon shows, we will be manipulated like we have never imagined before. That’s the bad side to the Semantic Web.
The impact of the new Web.
This posting addresses a non-technical question: What has been the impact of this technology on our society?
Technological advancement can be very roughly broken into two groups: incremental and radical. Which of these is Web 2.0/3.0? Is it a radical advance?
Consider what highly responsive, multimedia web applications have done for us. They have enabled the development of:
* Wikis: These are web applications that allow us to collaboratively develop sophisticated, easily searchable information bases. These can range from dictionaries for specialized disciplines to vast databases containing DNA information. Data can be vetted by experts and/or challenged by random users.
Everybody knows about Wikipedia, but like blog and bulletin board software, wiki software can be easily installed and configured for deployment on almost any web server, whether it is publicly accessible, or used privately within a corporation or by a professional organization.
* Social networking sites: These are web applications that allow us to actively participate in a myriad of communities based on professional and personal interests. We find work, develop contacts, share music and photographs and video, and develop lifelong collaborations with people we would never have met otherwise.
They are also used by people who are in daily physical contact, but who find they can deepen their relationships by posting personal information on public sites like MySpace and Facebook. The interesting thing about these sites is that new and successful ones keep emerging.
* Tagged content vendor sites: Volunteers and paid individuals can contribute multimedia content and collaboratively tag it, using both freeform and highly sophisticated tagging protocols, such as the MPEG-7 standard. (We will look at MPEG-7 in a future posting of this blog.) These include images and sound and video, and many taggers are highly trained professionals who can carefully categorize content according to its detailed meaning. This technology makes a vast sea of otherwise-unknown assets available to us. It also makes these assets searchable, thus transforming a completely intractable task into something we easily perform.
In particular, this has radically enhanced the creative power of both professional and hobbyist animators by giving them complex scenery and character components to work with. Check out thoughtequity.com for an example of a content vendor. Take a look at daz3d.com for animation content.
* Mashups: These are portal or second tier web applications that take content from other web sources, such as Google Maps, investment information, medical advice, and scientific data. Often mashups take data from several or hundreds of other sites and create complex, highly valuable multimedia assets.
Take a look at woozor.com. It combines Google Maps and weather data.
* Distance learning: Universities, corporations, professional organizations, and lone instructors can develop and sell effective, multimedia educational packages that bring education to anyone who has Internet access. This allows us to retrain ourselves for new occupations, stay current in our professional skills, and find employment that is satisfying, steady, and high paying.
I teach on my university's distance learning site, and we use video, sound, desktop video capture, slide presentations, and software demonstrations – and they can all be edited into a unified product. There are online universities now, where you can get a college degree. Take a look at jonesuniversity.com.
* Hybrid applications that support things like email, calendar, collaboration, RSS feeds, etc.
A good example of a hybrid application is zenbe.com, which provides a combined web-based email, list making, and calendar application, and in that sense is similar to many other email providers. But Zenbe also provides a collaborative tool called Zenbe Pages, which can be used by collaborators to organize their activities. A Zenbe page can have notes, calendars, lists, RSS feeds (not new ones, but existing RSS feeds) on them. Zenbe also provides quick access to Twitter, Google Talk, and Facebook.
By the way, it’s important to point out that the categories I list above are not as clear-cut as one might think. Many modern web apps contain elements from more than one of these categories.
The software building blocks.
From a programming perspective, what specific Web 2.0/3.0 software has allowed all of this to come about? We’ve discussed much of this already in previous postings of this blog. It includes XML and the exploding class of XML languages, namespaces, IDE’s (Integrated Development Environments), large code bases (such as the vast library of ready-made Java components), web service software development tools, and AJAX web page optimization technology. It also includes web development frameworks like Ruby on Rails, and newer ones, engineered toward high responsiveness, like Flex and Silverlight.
Also included are powerful media formats, codecs, players, and editors, which allow web users to do more than upload and search media; we can edit it and reform video, images, and sound, without leaving the simple world of our browsers. And of course, modern mega media apps enable us to build media assets. The list of contributing software tools goes on, but we’ll stop here.
And there is something subtle, but important that gives advanced web technology extraordinary power: it scales. We manage shared resources that are truly gigantic in size, and are spread across countless machines around the world. We leverage global user bases, cheap server technology, and wide open Internet bandwidth to give media stores belonging to Web apps astonishing growth rates.
The bottom line.
Yep. Web 2.0/3.0, as a whole, is a truly radical advancement. It has fundamentally and globally changed society in a big way.