Wednesday, 19 December 2012

After all, it is a "Chinese" room, not a Japanese one. What do you expect?

Spaun (the Semantic Pointer Architecture Unified Network) is a cognitive neural model of a perception-memory-motor system.

"...The model consists of 2.3 million spiking neurons whose neural properties, organization, and connectivity match that of the mammalian brain. Input consists of images of handwritten and typed numbers and symbols, and output is the motion of a 2 degree-of-freedom arm that writes the model’s responses. Tasks can be presented in any order, with no “rewiring” of the brain for each task. Instead, the model is capable of internal cognitive control (via the basal ganglia), selectively routing information throughout the brain and recruiting different cortical components as needed for each task."
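The "semantic pointers" of the architecture's name are compressed vector representations, and in the SPA structured representations are bound together with circular convolution (in the style of Plate's holographic reduced representations). Here is a minimal NumPy sketch of binding and unbinding; the dimensionality and vector names are illustrative, not taken from the paper:

```python
import numpy as np

def circ_conv(a, b):
    # Circular convolution binds two vectors into a third of the same
    # dimension; computed efficiently via the FFT.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def involution(a):
    # Approximate inverse element: first component kept, rest reversed.
    return np.concatenate(([a[0]], a[:0:-1]))

d = 512
rng = np.random.default_rng(0)
# Random vectors with variance 1/d stand in for semantic pointers.
role = rng.normal(0, 1 / np.sqrt(d), d)
filler = rng.normal(0, 1 / np.sqrt(d), d)

bound = circ_conv(role, filler)                  # bind: role * filler
recovered = circ_conv(bound, involution(role))   # unbind with role's inverse

# `recovered` is a noisy copy of `filler`; a clean-up memory would then
# snap it back to the nearest known vector.
sim = np.dot(recovered, filler) / (np.linalg.norm(recovered) * np.linalg.norm(filler))
print(sim)
```

The unbound result is only approximately the original filler, which is why architectures like this pair the binding operation with an associative clean-up memory.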

The tasks it can perform are:

1. Digit Recognition: a handwritten digit is given to it as input, and Spaun recognizes it with accuracy comparable to that of humans.

2. Tracing from memory: Spaun can not only recognize a digit and draw it as output, it can also mimic the style of the input (the way it is written, for instance).

3. Serial Working Memory: it can repeat a sequence of numbers in the same order it was given.

4. Question Answering: it can identify a digit's position within a sequence. This differs from the previous task, in which it only repeats the sequence.

5. Addition by Counting: Spaun can perform mental addition via counting. This might seem an easy task, given that a calculator does it trivially, but here the whole process, from recognizing and decoding the input through to the computation itself, is carried out by the network.
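As a toy illustration of what "addition by counting" means at the behavioural level (this shows only the counting procedure, not Spaun's neural implementation of it):

```python
def add_by_counting(a, b):
    # Start from the first addend and increment once per unit of the
    # second, the way a child counts up on their fingers.
    total = a
    for _ in range(b):
        total += 1
    return total

print(add_by_counting(4, 3))  # counts 5, 6, 7 -> prints 7
```

The point in the paper is that every step of this loop, including reading the digits off the input images, happens inside the spiking network.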

6. Pattern Completion: given a training set such as:

Input: Biffle biffle rose zarple. Output: rose zarple.
Input: Biffle biffle frog zarple. Output: frog zarple.
Input: Biffle biffle dog zarple. Output: dog zarple.

It can generalize the pattern and produce the correct output for a novel input:

Input: Biffle biffle quoggie zarple. 
Output: quoggie zarple.
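The induction involved can be made concrete with a symbolic toy sketch: infer which input positions the output copies from the training pairs, then apply that rule to a novel input. (This is a stand-in of my own, purely symbolic, whereas Spaun does the equivalent over vector representations.)

```python
def induce_rule(examples):
    # Start with the input positions whose tokens appear in the output of
    # the first pair, then keep only positions consistent with every pair.
    inp, out = examples[0]
    positions = [i for i, tok in enumerate(inp) if tok in out]
    for inp, out in examples:
        positions = [i for i in positions if i < len(inp) and inp[i] in out]
    return positions

examples = [
    ("biffle biffle rose zarple".split(), "rose zarple".split()),
    ("biffle biffle frog zarple".split(), "frog zarple".split()),
    ("biffle biffle dog zarple".split(), "dog zarple".split()),
]
rule = induce_rule(examples)  # positions copied to the output: [2, 3]
novel = "biffle biffle quoggie zarple".split()
print(" ".join(novel[i] for i in rule))  # -> quoggie zarple
```

The generalization step is exactly the interesting part: "quoggie" never appears in training, yet the induced rule transfers to it.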

I admit this is impressive. But from a computational-semantics point of view I wonder: if such a model simulated a network with the neural properties and organization required for understanding and producing language, and were trained the same way a human infant is, would it "understand" and produce language appropriately?

I came across this paper through Conscious Entities' post, which, in my opinion, shouldn't be missed.
Here are some videos demonstrating Spaun's performance on each task.
