Article: “The Semantic Web: The Origins of Artificial Intelligence Redux”

Just read a very interesting article by Harry Halpin, whose work sits at the borderline between the history of science (of computer science especially, I gather) and (S)Web development. I think it should be a must-read for all SW practitioners, so as to understand where we (yes – I’m one of them..) stand in relation to the past…

The article dates back to 2004, but the insights you’ll find there are (unfortunately) still valid today. For example, the problems the SW inherits from AI, but hardly recognizes as such in this newer community (here I just outline them – have a look at the paper for their full descriptions):

  • knowledge representation problem
  • higher order problem
  • abstraction problem
  • frame problem
  • symbol-grounding problem
  • problem of trust

None of these has been solved yet – although apparently the ontologies in the SW keep increasing in both number and size… how come? Of course, that’s what research is about, trying to solve unsolved problems, but what the heck, shouldn’t we already be aware of their status as “VERY OLD PROBLEMS”?
Think of an ideal world where, as you polish up your SW paper, alongside the ACM category descriptor, you also had to state explicitly which of these problems you are tackling. Mmmmm, too dangerous… I don’t know how many papers would still be classified as “novel” or “interesting”, then…
As Halpin says (quoting Santayana) “those who do not remember the past are condemned to repeat it”.
I agree. And I also agree on this conclusion, which I entirely report:

Engineering or Epistemology?

The Semantic Web may not be able to solve many of these problems. Many Semantic Web researchers pride themselves on being engineers as opposed to artificial intelligence researchers, logicians, or philosophers, and have been known to believe that many of these problems are engineering problems. While there may be suitable formalizations for time and ways of dealing with higher-order logic, the problems of knowledge representation and abstraction appear to be epistemological characteristics of the world that are ultimately resistant to any solution. It may be impossible to solve some of these problems satisfactorily, yet having awareness of these problems can only help the development of the Web.

In a related and *extremely* funny rant, Drew McDermott, in a 1981 paper called Artificial Intelligence meets natural stupidity, is certainly not aware of the forthcoming semantic-web wave of illusions, but he points out a few common mistakes we can still recognize nowadays… it’s a relaxing read, but a very competent one. I just report the final “benediction”, in which he describes the major “methodological and substantive issues over which we have stumbled”:

1. The insistence of AI people that an action is a change of state of the world or a world model, and that thinking about actions amounts to stringing state changes together to accomplish a big state change. This seems to me not an oversimplification, but a false start. How many of your actions can be characterized as state changes, or are even performed to effect state changes? How many of a program’s actions in problem solving? (Not the actions it strings together, but the actions it takes, like “trying short strings first”, or “assuming the block is where it’s supposed to be”.)
2. The notion that a semantic network is a network. In lucid moments, network hackers realize that lines drawn between nodes stand for pointers, that almost everything in an AI program is a pointer, and that any list structure could be drawn as a network, the choice of what to call node and what to call link being arbitrary. Their lucid moments are few.
3. The notion that a semantic network is semantic.
4. Any indulgence in the “procedural-declarative” controversy. Anyone who hasn’t figured this “controversy” out yet should be considered to have missed his chance, and be banned from talking about it. Notice that at Carnegie-Mellon they haven’t worried too much about this dispute, and haven’t suffered at all.
5. The idea that because you can see your way through a problem space, your program can: the “wishful control structure” problem.

I couldn’t resist adding a reference (suggested by KMi’s mate Laurian) to a paper by Peter Gardenfors written for FOIS 2004, titled “How to make the Semantic Web more semantic”. He proposes a novel, less symbolic approach to knowledge representation, and the overall spirit of the paper matches the quote from Santayana mentioned above. The conclusion reads as follows:

It is slightly discomforting to read that the philosopher John Locke already in 1690 formulated the problem of describing the structure of our semantic knowledge in his Essay Concerning Human Understanding: “[M]en are far enough from having agreed on the precise number of simple ideas or qualities belonging to any sort of things, signified by its name. Nor is it a wonder; since it requires much time, pains, and skill, strict inquiry, and long examination to find out what, and how many, those simple ideas are, which are constantly and inseparably united in nature, and are always to be found together in the same subject.” ([25], book III, chapter VI, 30) Even though our knowledge has advanced a bit since then, we still face the same problems in the construction of the Semantic Web.


