You may remember a while ago, when everybody was talking about where the web might go as a kind of "next step" after the web we have all come to know. It was more or less generally agreed that the web would become "machine usable", i.e. that web content would become "semantic". What people meant by this was that the information contained in web pages would be structured in ways that convey the meaning of the pages to machines, automating information processing. Terms like ontologies and taxonomies were heavily used.
Today we are still far away from that situation. In fact, the term Web 2.0 has been coined to denote the next generation of web technology. Nobody really knows what it means specifically; however, a couple of things are consistently mentioned in the respective forums and discussions. These include AJAX, rich clients (Google Earth, for example), involvement of the masses (e.g. del.icio.us), and "grass roots" publishing (i.e. web logs).
Semantic Web? Nowhere to be seen. Instead of making the web more semantically rich, search engines and other tools are using computing power and clever algorithms to extract information from the web as it is (e.g. Google can translate from one language to another without understanding either language, based purely on huge amounts of text and a couple of clever algorithms). The only "semantics" that we put into the web today comes in the form of tags, e.g. in del.icio.us or in flickr.
So what does this tell us (and why do I tell you about it)? I think it is interesting to see that things often don't come as the "experts" predict. And also that the web has become a real mass medium, where sophisticated things like ontologies, RDF and XML only play a niche role.