cognitiveScience – Parerga und Paralipomena
http://www.michelepasin.org/blog
At the core of all well-founded belief lies belief that is unfounded - Wittgenstein

Mathematics and our body
http://www.michelepasin.org/blog/2010/10/07/mathematics-and-our-body/
Thu, 07 Oct 2010 08:28:23 +0000

“Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being” [amazon link] is a recent book by cognitive scientists George Lakoff and Rafael Nuñez, in which they argue that the origin of our mathematical ideas (even the most abstract and immaterial) is to be found in the materiality of our everyday experience. That is to say, in the body.


Not an entirely new idea, but the detailed analysis of the two authors (who are world-recognised experts in their fields) makes the argument particularly poignant and eloquent. A few key excerpts from the book’s preface (bold font is mine):

Mathematics is seen as the epitome of precision, manifested in the use of symbols in calculation and in formal proofs. Symbols are, of course, just symbols, not ideas. The intellectual content of mathematics lies in its ideas, not in the symbols themselves. In short, the intellectual content of mathematics does not lie where the mathematical rigor can be most easily seen — namely, in the symbols. Rather, it lies in human ideas. But mathematics by itself does not and cannot empirically study human ideas; human cognition is simply not its subject matter. It is up to cognitive science and the neurosciences to do what mathematics itself cannot do — namely, apply the science of mind to human mathematical ideas. That is the purpose of this book.

[..]

The central question we ask is this: How can cognitive science bring systematic scientific rigor to the realm of human mathematical ideas, which lies outside the rigor of mathematics itself? Our job is to help make precise what mathematics itself cannot—the nature of mathematical ideas. Rafael Nunez brings to this effort a background in mathematics education, the development of mathematical ideas in children, the study of mathematics in indigenous cultures around the world, and the investigation of the foundations of embodied cognition. George Lakoff is a major researcher in human conceptual systems, known for his research in natural-language semantics, his work on the embodiment of mind, and his discovery of the basic mechanisms of everyday metaphorical thought.

[..]

One might think that the best way to understand mathematical ideas would be simply to ask mathematicians what they are thinking. Indeed, many famous mathematicians, such as Descartes, Boole, Dedekind, Poincaré, Cantor, and Weyl, applied this method to themselves, introspecting about their own thoughts. Contemporary research on the mind shows that as valuable a method as this can be, it can at best tell a partial and not fully accurate story. Most of our thought and our systems of concepts are part of the cognitive unconscious (see Chapter 2). We human beings have no direct access to our deepest forms of understanding. The analytic techniques of cognitive science are necessary if we are to understand how we understand.

[..]

One of the great findings of cognitive science is that our ideas are shaped by our bodily experiences — not in any simpleminded one-to-one way but indirectly, through the grounding of our entire conceptual system in everyday life. The cognitive perspective forces us to ask, Is the system of mathematical ideas also grounded indirectly in bodily experiences? And if so, exactly how? The answer to questions as deep as these requires an understanding of the cognitive superstructure of a whole nexus of mathematical ideas. This book is concerned with how such cognitive superstructures are built up, starting for the most part with the commonest of physical experiences.

Quotation from Gregory Bateson
http://www.michelepasin.org/blog/2008/07/10/quotation-from-gregory-bateson/
Thu, 10 Jul 2008 08:13:50 +0000

Gregory Bateson (9 May 1904 – 4 July 1980) was an English anthropologist, social scientist, linguist, visual anthropologist, semiotician and cyberneticist whose work intersected that of many other fields. In the 1940s he helped extend systems theory/cybernetics to the social/behavioral sciences, and spent the last decade of his life developing a “meta-science” of epistemology to bring together the various early forms of systems theory developing in various fields of science.

From: Bateson, G., 1978, ‘Afterword’, in J. Brockman (Ed.), About Bateson, London: Wildwood House, pp. 244-245

Consider for a moment the phrase, the opposite of solipsism. In solipsism, you are ultimately isolated and alone, isolated by the premise “I make it all up.” But at the other extreme, the opposite of solipsism, you would cease to exist, becoming nothing but a metaphoric feather blown by the winds of external “reality”. (But in that region there are no metaphors!) Somewhere between these two is a region where you are partly blown by the winds of reality and partly an artist creating a composite out of the inner and outer events.

 

How semantic is the semantic web?
http://www.michelepasin.org/blog/2008/01/13/how-semantic-is-the-semantic-web/
Sun, 13 Jan 2008 17:09:54 +0000

Just read this article thanks to a colleague: I share pretty much everything it says about the SW, so I thought it would be worth passing it on to the next reader. Basically, it is about some very fundamental issues: what do we mean by semantics? Does a computer have semantics? If not, what’s the point of the name ‘Semantic Web’? I think it’s quite uncontroversial that the choice of the name ‘semantic’ web is itself controversial.

I guess that many of the people originally supporting the SW vision didn’t really have the time to worry about these sorts of questions, as they came from different backgrounds, or maybe were just so excited about the grandiose idea of an intelligent world wide web interconnected at the data level. Quite understandable, but as the idea is now reaching the larger public and maybe connecting to the more bottom-up Web 2.0 movement, I think it would be great to re-think the foundations of the initial vision, together with some rigorous clarification of the terms we use. Chiara Carlino’s article reaches an interesting conclusion:

So-called semantic web technologies provide the machine with data, like Chinese symbols, and with a detailed set of instructions for handling them, in the form of ontologies. The computer simply follows the instructions, as the person in the Chinese room does, and returns useful information, sparing us the task of processing a big set of data on our own. These technologies in fact have nothing to do with semantics, because they never refer to anything in the real world: they never have any meaning, except in the mind of those expressing their knowledge in a machine-readable language, the mind of those preparing the Chinese symbols for the person in the Chinese room. The person in the room – the machine – never gets this meaning. Such technologies, ultimately, deal not so much with semantics as with knowledge, and its automatic processing through informatics. It therefore seems misleading and unfitting to keep using the word semantic for a technology that is not semantic at all. It looks quite necessary to find a new term, one capable of capturing the core of this technology without giving rise to misunderstandings.

The article was also posted on the W3C SW mailing list some time ago, and generated an interesting discussion. But then, if we have to throw away the overrated ‘semantic web’ term, what should we call it instead? Without any doubt, this research strand has generated lots of interesting results, both theoretical and practical. Mmm, maybe mainly practical – see the many prototypes, ontologies and standards for manipulating ‘knowledge’. So, the author continues, what people are doing is not really dealing with ‘semantics’, but building very complex systems and infrastructures for dealing with ‘knowledge structures’:

There is a word that seems to serve this purpose, and that is epistematics. Its root – epistéme – points to its strict connection with knowledge; nonetheless, it is not a theoretical study, not an epistemology: it is rather the automatic processing of knowledge. The term informatics was created to denote the automatic processing of information: similarly, the term epistematics is quite fitting for the automatic processing of knowledge that the technologies we are speaking about make possible. The term also recalls informatics, and this is fitting as well, since this processing happens by means of informatics. Finally, the current – though not much used – meaning of epistematic is perfectly coherent with the technologies we would like to denote with it: epistematic, in fact, means deductive, and one of the most advanced features of these technologies is precisely the possibility of processing knowledge deductively, using automatic reasoners that build into software the deductive rules of formal logic. The formerly so-called semantic web now looks like a new science, no longer bound (and narrowed) to the world of the web, as the semantic web term suggested: epistematics is a real evolution of informatics, evolving from raw information processing to structured knowledge processing. Epistematic technologies are those technologies allowing the automatic processing, performed through informatic instruments, of knowledge expressed in a machine-accessible language, so that the machine can process it, according to a subset of first-order logic rules, and thus extract new knowledge.
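The “automatic reasoners” the quote refers to can be illustrated with a toy sketch. The following Python snippet is not any real semantic-web stack (no OWL, no actual reasoner library): it just forward-chains one deductive rule – transitivity of a subclass relation – over invented triples, showing how a machine can mechanically derive new “knowledge” from symbols it attaches no meaning to, which is exactly the Chinese-room point made above.

```python
def forward_chain(triples):
    """Repeatedly apply the subclass-transitivity rule until no new
    triples can be derived (a fixpoint): if A subClassOf B and
    B subClassOf C, then A subClassOf C."""
    derived = set(triples)
    changed = True
    while changed:
        changed = False
        for (a, p1, b) in list(derived):
            for (c, p2, d) in list(derived):
                if p1 == p2 == "subClassOf" and b == c and a != d:
                    new = (a, "subClassOf", d)
                    if new not in derived:
                        derived.add(new)
                        changed = True
    return derived

# Invented example data, in the spirit of rdfs:subClassOf entailment.
facts = {
    ("Dog", "subClassOf", "Mammal"),
    ("Mammal", "subClassOf", "Animal"),
}

inferred = forward_chain(facts)
# ("Dog", "subClassOf", "Animal") is now in the set, although the program
# attaches no meaning whatsoever to "Dog" or "Animal".
```

The machine never refers to actual dogs or animals; it only shuffles strings according to a rule. Whether that counts as “semantics” is precisely what the quoted article disputes.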

I like the term epistematics – and even more I like the fact that the ‘web’ is just a possible extension of it, not a core part of its meaning. Semantic technologies, based on various groundbreaking works the AI pioneers did some twenty or thirty years ago (mainly in knowledge representation), were in use well before the web. Now, does the advent of the web make such a big difference to them? They used to write knowledge-based systems in KIF – now they do them in OWL – we change the language, but aren’t the functionalities we are looking for the same? They used to harvest big companies’ databases and intranets to build up a knowledge base – now we also harvest the web – is that enough to claim the emergence of a new science, with new problems and methods? Or is it maybe just a different application of a well-known technology?

I must confess, the more I think about such issues, the more difficult and intricate they seem. For sure the web is evolving fast – and the amount of available structured information is evolving fast too. Making sense of all this requires a huge amount of clarity of thought. And presumably, this clarity of thought will eventually lead to some clarity of expression. Wittgenstein wasn’t the first to claim it, but for sure he did it well: language plays tricks on us. Or better, in his words:

Philosophy is a battle against the bewitchment of our intelligence by means of language.

 

 
