artificial intelligence – Parerga und Paralipomena
http://www.michelepasin.org/blog
At the core of all well-founded belief lies belief that is unfounded - Wittgenstein

Victim of the Brain
http://www.michelepasin.org/blog/2010/03/23/victim-of-the-brain/
Tue, 23 Mar 2010

If you’ve heard about the ‘brain in a vat’ thought experiment but never had the time to read more about it, this movie is quite a pleasant dramatisation of the argument!

Victim of the Brain, 1:30:14 – 1988

[wikipedia article]

Victim of the Brain is a 1988 film by Dutch director Piet Hoenderdos, loosely based on The Mind’s I, a compilation of texts and stories on the philosophy of mind and self, co-edited by Douglas Hofstadter and Daniel C. Dennett.

It features interviews with Douglas Hofstadter and Dan Dennett, and Dennett also stars as himself. The original was acquired from the Center for Research in Concepts and Cognition at Indiana University.

A Sneak Preview of Wolfram Alpha
http://www.michelepasin.org/blog/2009/05/02/a-sneak-preview-of-wolframalpha/
Sat, 02 May 2009

It’s the new brainchild of Stephen Wolfram, author of Mathematica. It does look impressive in my opinion – I can’t wait to try it live (it is due to launch some time in May)!

It is defined as a Computational Knowledge Engine. It does an awful lot of number-crunching, but it looks more like a giant closed database than a distributed Web of data, or even a ‘Semantic Web’.

The reaction of competitor Doug Lenat is interesting: although he claims that Wolfram|Alpha is not AI (and therefore Cyc has nothing to fear), imho he is realizing he didn’t get it right when he set out to capture ‘common sense’. After all, all we find in Wolfram|Alpha is symbolic reasoning. Is that so far from the way Cyc works? This might be a nice departure point for an interesting discussion…

Lenat’s blog post contains some interesting comments on the things that Wolfram|Alpha can’t do (yet):

When it returns information, how much does it actually “understand” of what it’s displaying to you?  There are two sorts of queries not (yet) handled: those where the data falls outside the mosaic I sketched above — such as:  When is the first day of Summer in Sydney this year?  Do Muslims believe that Mohammed was divine?  Who did Hezbollah take prisoner on April 18, 1987? Which animals have fingers? — and those where the query requires logically reasoning out a way to combine (logically or arithmetically combine) two or more pieces of information which the system can individually fetch for you.  One example of this is: “How old was Obama when Mitterrand was elected president of France?”  It can tell you demographic information about Obama, if you ask, and it can tell you information about Mitterrand (including his ruleStartDate), but doesn’t make or execute the plan to calculate a person’s age on a certain date given his birth date, which is what is being asked for in this query.  If it knows that exactly 17 people were killed in a certain attack, and if it also knows that 17 American soldiers were killed in that attack, it doesn’t return that attack if you ask for ones in which there were no civilian casualties, or only American casualties.  It doesn’t perform that sort of deduction.  If you ask “How fast does hair grow?”, it can’t parse or answer that query.  But if you type in a speed, say “10cm/year”, it gives you a long and quite interesting list of things that happen at about that speed, involving glaciers melting, tectonic shift, and… hair growing.
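To make Lenat’s example concrete, here is a minimal sketch in Python of the kind of plan he says the engine doesn’t make or execute: fetch two independent facts and then combine them arithmetically. The little fact table below is purely illustrative (the keys mimic Lenat’s “ruleStartDate” wording and the dates are only approximate); it is not Wolfram|Alpha’s actual data model or API.

    from datetime import date

    # Toy stand-in for facts the engine can fetch individually.
    # Keys and dates are illustrative, not Wolfram|Alpha's real data model.
    FACTS = {
        ("Barack Obama", "birthDate"): date(1961, 8, 4),
        ("Francois Mitterrand", "ruleStartDate"): date(1981, 5, 21),
    }

    def age_on(person, event_date):
        """Combine two independently fetched facts: a birth date and an event date."""
        born = FACTS[(person, "birthDate")]
        years = event_date.year - born.year
        # Subtract a year if the birthday hadn't yet occurred on the event date.
        if (event_date.month, event_date.day) < (born.month, born.day):
            years -= 1
        return years

    # "How old was Obama when Mitterrand was elected president of France?"
    print(age_on("Barack Obama", FACTS[("Francois Mitterrand", "ruleStartDate")]))  # -> 19

The point is not the arithmetic, which is trivial, but that answering the question requires planning the combination of two separately retrievable facts – exactly the deduction step Lenat says is missing.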

There is also some nice coverage of it on ReadWriteWeb and UMBC.

What can the Analytical Engine do? Ask Charles Babbage…
http://www.michelepasin.org/blog/2007/06/25/what-can-the-analytical-engine-do-ask-charles-babbage/
Mon, 25 Jun 2007

I recently read the original article by Charles Babbage – which deals with the work he and Lady Ada Augusta, Countess of Lovelace, did on the Analytical Engine, one of the (mechanical) predecessors of the modern computer – thanks to a blog post from David Dodds. I guess that his position is representative of that of many people in the XML community – people who, when facing the ontologists/AI/intelligent-agents/SW prophets, always try to keep things down to earth. But anyway, the original passage from the Countess of Lovelace is quite interesting:

The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with. This it is calculated to effect primarily and chiefly of course, through its executive faculties; but it is likely to exert an indirect and reciprocal influence on science itself in another manner. For, in so distributing and combining the truths and the formulæ of analysis, that they may become most easily and rapidly amenable to the mechanical combinations of the engine, the relations and the nature of many subjects in that science are necessarily thrown into new lights, and more profoundly investigated. This is a decidedly indirect, and a somewhat speculative, consequence of such an invention. It is however pretty evident, on general principles, that in devising for mathematical truths a new form in which to record and throw themselves out for actual use, views are likely to be induced, which should again react on the more theoretical phase of the subject. There are in all extensions of human power, or additions to human knowledge, various collateral influences, besides the main and primary object attained.

 

Article: “The Semantic Web: The Origins of Artificial Intelligence Redux”
http://www.michelepasin.org/blog/2007/06/07/sw-vs-ai-new-and-old-stuff/
Thu, 07 Jun 2007

I just read a very interesting article by Harry Halpin, whose work sits at the borderline between history of science (of computer science especially, I gather) and (S)Web development. I think it should be a must-read for all SW practitioners (yes – I’m one of them..), so as to understand where we stand in relation to the past…

The article dates back to 2004, but the insights you’ll find there are (unfortunately) still valid today. For example, the problems the SW inherits from AI but hardly recognizes as such in this newer community (here I just outline them – have a look at the paper for the details):

  • knowledge representation problem
  • higher order problem
  • abstraction problem
  • frame problem
  • symbol-grounding problem
  • problem of trust

None of these problems has been solved yet – although apparently the ontologies on the SW keep growing in both number and size… how come? Of course, that’s what research is about, trying to solve unsolved problems, but what the heck, shouldn’t we at least be aware of their status as VERY OLD PROBLEMS?
Think of an ideal world where, as you polish up your SW paper, alongside the ACM category descriptor you also had to state explicitly which of these problems you are tackling. Mmmm, too dangerous… I don’t know how many papers would still be classified as “novel” or “interesting” then…
As Halpin says (quoting Santayana) “those who do not remember the past are condemned to repeat it”.
I agree. And I also agree with this conclusion, which I report in full:

Engineering or Epistemology?

The Semantic Web may not be able to solve many of these problems. Many Semantic Web researchers pride themselves on being engineers as opposed to artificial intelligence researchers, logicians, or philosophers, and have been known to believe that many of these problems are engineering problems. While there may be suitable formalizations for time and ways of dealing with higher-order logic, the problems of knowledge representation and abstraction appear to be epistemological characteristics of the world that are ultimately resistant to any solution. It may be impossible to solve some of these problems satisfactorily, yet having awareness of these problems can only help the development of the Web.

In a related and *extremely* funny rant, the 1981 paper Artificial Intelligence meets Natural Stupidity, Drew McDermott is obviously not aware of the forthcoming Semantic Web wave of illusions, yet he points out a few common mistakes we can still recognize nowadays… it’s relaxing reading, but very competent… I just report the final “benediction”, in which he describes the major “methodological and substantive issues over which we have stumbled”:

1. The insistence of AI people that an action is a change of state of the world or a world model, and that thinking about actions amounts to stringing state changes together to accomplish a big state change. This seems to me not an oversimplification, but a false start. How many of your actions can be characterized as state changes, or are even performed to effect state changes? How many of a program’s actions in problem solving? (Not the actions it strings together, but the actions it takes, like “trying short strings first”, or “assuming the block is where it’s supposed to be”.)
2. The notion that a semantic network is a network. In lucid moments, network hackers realize that lines drawn between nodes stand for pointers, that almost everything in an AI program is a pointer, and that any list structure could be drawn as a network, the choice of what to call node and what to call link being arbitrary. Their lucid moments are few.
3. The notion that a semantic network is semantic.
4. Any indulgence in the “procedural-declarative” controversy. Anyone who hasn’t figured this “controversy” out yet should be considered to have missed his chance, and be banned from talking about it. Notice that at Carnegie-Mellon they haven’t worried too much about this dispute, and haven’t suffered at all.
5. The idea that because you can see your way through a problem space, your program can: the “wishful control structure” problem.

I couldn’t resist adding also a reference (suggested by KMi’s mate Laurian) to a paper by Peter Gardenfors written for FOIS 2004, titled “How to make the Semantic Web more semantic”. He proposes a novel and less symbolic approach to knowledge representation, and the overall spirit of the paper matches the quote from Santayana mentioned above. The conclusion reads as follows:

It is slightly discomforting to read that the philosopher John Locke already in 1690 formulated the problem of describing the structure of our semantic knowledge in his Essay Concerning Human Understanding: “[M]en are far enough from having agreed on the precise number of simple ideas or qualities belonging to any sort of things, signified by its name. Nor is it a wonder; since it requires much time, pains, and skill, strict inquiry, and long examination to find out what, and how many, those simple ideas are, which are constantly and inseparably united in nature, and are always to be found together in the same subject.” ([25], book III, chapter VI, 30) Even though our knowledge has advanced a bit since then, we still face the same problems in the construction of the Semantic Web.

 

A doctor hidden in your mac
http://www.michelepasin.org/blog/2006/03/06/a-doctor-hidden-in-the-mac/
Mon, 06 Mar 2006

I had never heard about this before, but as it turns out OS X comes with a simulated therapy program, Eliza, pre-installed. Eliza was written at MIT by Joseph Weizenbaum between 1964 and 1966, and it was one of the earliest examples of natural language processing.

On OS X, Eliza is ‘hidden’ inside the Emacs application. Here’s how you can open it: in the Terminal application type emacs and hit enter. When the editor has loaded, hit the Escape key followed by x (this brings up Emacs’ M-x prompt), then type doctor and hit return. Answer the first question and proceed from there. When you’re done, Control-X Control-C will quit the editor.

[Screenshot: the Eliza ‘doctor’ session running in Emacs]

Janet Murray, in her classic cyber-narrative book Hamlet on the Holodeck describes the invention with these words:

The resulting persona, Eliza, was that of a Rogerian therapist, the kind of clinician who echoes back the concerns of the patient without interpretation.
[…]
To Weizenbaum’s dismay, a wide range of people, including his own secretary, would “demand to be permitted to converse with the system in private, and would, after conversing with it for a time, insist, in spite of [Weizenbaum’s] explanations, that the machine really understood them.”
[…]
Weizenbaum had set out to make a clever computer program and had unwittingly created a believable character. He was so disconcerted by his achievement that he wrote a book warning of the dangers of attributing human thought to machines.
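Murray’s description makes the trick easy to picture: a handful of keyword patterns plus a pronoun swap, and the program sounds like it is listening. Here is a minimal, purely illustrative sketch in Python; the three rules below are made up and bear no resemblance in scale to Weizenbaum’s actual DOCTOR script, which used a much larger keyword set with ranked decomposition and reassembly rules.

    import re

    # A few toy rules illustrating the Rogerian 'echo back' behaviour.
    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

    RULES = [
        (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
        (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
        (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
    ]

    def reflect(fragment):
        # Swap first- and second-person words so the echo reads naturally.
        return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

    def respond(utterance):
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                return template.format(reflect(match.group(1)))
        return "Please, go on."  # no keyword matched: just invite the patient to continue

    print(respond("I feel nobody listens to my ideas"))
    # -> Why do you feel nobody listens to your ideas?

Trivial as the mechanism is, the reflected echo is enough to produce the “it really understands me” effect that so dismayed Weizenbaum.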

 
