Rumelhart & McClelland
---
Rumelhart and McClelland (1986) trained a neural network to learn the past tense of both regular and irregular verbs. Words were presented to the network at the level of phonemes: each word was decomposed into a sequence of phonemes, which were presented one at a time. The network was hardwired to turn these phonemes into the Wickelfeature representation of the input word. This Wickelfeature was then mapped to the Wickelfeature of the past-tense form of the verb. Finally, the network was hardwired to turn this second Wickelfeature back into the phonemes of the past-tense verb.
Figure 6: Sketch of the network used by Rumelhart & McClelland (Rumelhart & McClelland, 1986).
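The first stage of this pipeline can be illustrated with a small sketch. Wickelfeatures are distributed features defined over *Wickelphones*: context-sensitive triples consisting of a phoneme together with its predecessor and successor, with a boundary symbol marking the edges of the word. The helper below (a hypothetical illustration, not the authors' code) shows only this Wickelphone decomposition; the further step of mapping each Wickelphone onto a set of binary Wickelfeatures is omitted.

```python
def wickelphones(phonemes):
    """Decompose a phoneme sequence into Wickelphones:
    (predecessor, phoneme, successor) triples, with '#'
    marking the word boundaries."""
    padded = ["#"] + list(phonemes) + ["#"]
    # One triple per phoneme, centered on that phoneme.
    return [tuple(padded[i - 1:i + 2]) for i in range(1, len(padded) - 1)]

# A word such as /kaem/ ("came") yields:
#   [('#', 'k', 'ae'), ('k', 'ae', 'm'), ('ae', 'm', '#')]
print(wickelphones(["k", "ae", "m"]))
```

Because every triple carries its local context, the unordered set of Wickelphones is usually enough to reconstruct the original phoneme sequence, which is what lets the network operate on a fixed-size representation of a variable-length word.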
Rumelhart and McClelland managed to produce a network that correctly processed all 460
verbs in the training set and generalized to some new verbs. Their network also passed
through stages of memorization, over-generalization, and re-learning.
Pinker and Prince (1988) discussed several inadequacies of the linguistic representation used by Rumelhart and McClelland. Among them: two different words can have the same phonetic representation (such as *break* and *brake*), so the model cannot distinguish them. The Rumelhart and McClelland model can also learn mappings that have no analog in human language processing.