Following on from my earlier post, straight after reading Mixing Memory on Keil's work on our shallow knowledge, I happened to read the chapter of Semantic Cognition by Rogers & McClelland (2004) that attempts to model our causal knowledge systems with a connectionist model. Except that it doesn't!
All the other chapters give explicit simulations of phenomena in category acquisition and knowledge. Using a model originally proposed by David Rumelhart, they show a plausible mechanism for the progressive differentiation of concepts in infancy and for the patterns of loss in semantic dementia. They explicitly demonstrate the natural emergence of a favoured basic level and how this is changed by expertise. Their model generalizes its knowledge to new instances and learns faster when categories are coherent. All in line with the human data. Albeit these are demonstrated with a toy model that learns very simplified sets of concepts about the natural world, of the type:
{sunflower, salmon, maple, etc.} > {HAS, CAN, IS, ISA} > {legs, grow, move, yellow, animal, dog, etc.}
Nonetheless, the specific details are less important than the explicit demonstration of particular principles: the network learns better where it has meaningful, interrelated bits of information (e.g. things with fur can move, are animals, are alive, etc.). This is not new; it is based on Hinton's work from the early 80s and Rumelhart's from the early 90s.
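For concreteness, here is a minimal sketch of the kind of network they build on, written in Python/NumPy. It is my own reconstruction for illustration, not the authors' simulation: the item set, relation set, attribute list and training facts below are invented, there are no bias units, and the book's actual model is larger and trained on its own corpus. The architecture, though, follows the Rumelhart scheme: a one-hot item layer feeds a small "representation" layer, which is joined with a one-hot relation layer, passed through a hidden layer, and trained by backpropagation to switch on the right attribute units.

```python
# Sketch of a Rumelhart-style item/relation -> attribute network.
# Toy data and layer sizes are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)

items = ["sunflower", "salmon", "maple", "canary"]
relations = ["ISA", "IS", "CAN", "HAS"]
attributes = ["plant", "animal", "pretty", "yellow", "grow",
              "move", "swim", "leaves", "scales", "wings"]

# (item, relation) -> set of true attributes. The facts covary coherently:
# animals can move and have body parts; plants grow and have leaves.
facts = {
    ("sunflower", "ISA"): {"plant"},
    ("sunflower", "IS"):  {"pretty", "yellow"},
    ("sunflower", "CAN"): {"grow"},
    ("sunflower", "HAS"): {"leaves"},
    ("salmon", "ISA"):    {"animal"},
    ("salmon", "CAN"):    {"grow", "move", "swim"},
    ("salmon", "HAS"):    {"scales"},
    ("maple", "ISA"):     {"plant"},
    ("maple", "CAN"):     {"grow"},
    ("maple", "HAS"):     {"leaves"},
    ("canary", "ISA"):    {"animal"},
    ("canary", "IS"):     {"yellow", "pretty"},
    ("canary", "CAN"):    {"grow", "move"},
    ("canary", "HAS"):    {"wings"},
}

def one_hot(index, size):
    v = np.zeros(size)
    v[index] = 1.0
    return v

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# item -> representation -> (representation + relation) -> hidden -> attributes
n_rep, n_hid = 4, 8
W_rep = rng.normal(0, 0.1, (len(items), n_rep))
W_hid = rng.normal(0, 0.1, (n_rep + len(relations), n_hid))
W_out = rng.normal(0, 0.1, (n_hid, len(attributes)))

lr = 0.5
for epoch in range(2000):
    for (item, rel), true_attrs in facts.items():
        x_item = one_hot(items.index(item), len(items))
        x_rel = one_hot(relations.index(rel), len(relations))
        target = np.array([1.0 if a in true_attrs else 0.0 for a in attributes])

        # Forward pass
        rep = sigmoid(x_item @ W_rep)
        hid = sigmoid(np.concatenate([rep, x_rel]) @ W_hid)
        out = sigmoid(hid @ W_out)

        # Backward pass: squared-error gradients through the sigmoids
        d_out = (out - target) * out * (1 - out)
        d_hid = (W_out @ d_out) * hid * (1 - hid)
        d_rep = (W_hid[:n_rep] @ d_hid) * rep * (1 - rep)

        W_out -= lr * np.outer(hid, d_out)
        W_hid -= lr * np.outer(np.concatenate([rep, x_rel]), d_hid)
        W_rep -= lr * np.outer(x_item, d_rep)

# Inspect the learned item representations: plants and animals end up
# with similar patterns within their group and different ones across groups.
for item in items:
    rep = sigmoid(one_hot(items.index(item), len(items)) @ W_rep)
    print(item, np.round(rep, 2))
```

The point of the toy run is the principle above: nothing in the network is told about plants or animals, yet the representation layer tends to group them, because the attributes in the training set covary coherently.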
Alas, on the interesting challenge of applying this framework to causal knowledge, they skate across the surface. They do not present any simulation data but instead sketch out how their model might be extended to deal with causes, and how awareness of these could develop empirically rather than relying on some innate 'theory' modules. But of course, without the data they are just making up a plausible story of how the world works, like the participants in the original studies. And like me! I am reading their book and pretending to myself that I understand it, but my interpretation is sketchy and superficial (as you can see above). Of course, being intelligent scientists, they acknowledge their failure to grapple with reality. As do I (most likely because I want to believe I am an intelligent scientist).
And by this stage we are up to three levels of Socratic embedding: Keil & co.'s claim to understand causal knowledge, Rogers & McClelland's claim that they have an intuition of a better understanding of this, and my own belief that this is what is going on. Now, by reading this far, you have just added a fourth level.