Modern machine learning techniques use essentially unreflective parameters – you might say the output of a neural net means 'bus', but the actual symbol "bus" does not exist within that system.

Part of the key to human one-shot learning (e.g. showing a child a cartoonish picture of a bus and telling them it's a bus – frequently that's enough for them to recognize a bus later, if the style is similar enough; school buses, maybe, are the level of precision required) is those symbols. We point at a picture of a school bus, call it a school bus, and the child observes many features – wheels, yellow, black stripe, long shape, windshield with a driver, strange door. Importantly, those traits map to labels – "wheels", "yellow", etc. Some of them ("wheels" and "windshield + driver") overlap with another label, "car", so we can learn that school buses are probably similar to cars. But the mapping to traits is done from the beginning – we teach kids what blue and yellow are, and that helps them with the bus later. Modern machine learning techniques require millions of examples to learn "bus", but I suspect you could get "blue" with far fewer.
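To make that concrete, here's a minimal sketch in Python – the trait names and the `one_shot_learn` helper are made up for illustration, not a proposed design. The point is only that a single labelled example can lean on traits that already have labels, and that overlap with a known concept ("car") falls out for free:

```python
# Minimal sketch (all names hypothetical): each concept is a set of named
# traits, and a concept learned from one example borrows structure from
# already-known concepts whose traits overlap.

KNOWN = {
    "car": {"wheels", "windshield", "driver", "doors"},
    "sky": {"blue", "large", "above"},
}

def one_shot_learn(name, observed_traits, known=KNOWN):
    """Register a new concept from a single example; report trait overlaps."""
    overlaps = {
        other: observed_traits & traits
        for other, traits in known.items()
        if observed_traits & traits
    }
    known[name] = set(observed_traits)
    return overlaps

# One cartoonish school bus: the traits are labels the child already has,
# so later recognition only needs a rough match against this set.
bus = {"wheels", "yellow", "black stripe", "long", "windshield", "driver", "strange door"}
print(one_shot_learn("school bus", bus))
# e.g. {'car': {'wheels', 'windshield', 'driver'}}
```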

The symbolic net reduces the strain on the association network, making it much, much more efficient. But how would that work in practice? I'd be interested to see some code reflecting this – in particular, symbolic nodes connecting in unusual ways: the connection between "color" and "blue" helps you learn "yellow", but it's a massively different kind of connection than the one between "blue" and "sky". Maybe connections in the symbolic net themselves have symbols, so each link is an asymmetric (fully ordered, actually) 3-connector. That part definitely needs adapting from the human process to code (obviously, when you first learn a connection between two things, you'd just create a dummy connector – learning the nature of that connection means connecting it to other things, and so can be done independently).
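Here's one way that might look, as a rough sketch rather than a worked-out implementation: every link is a (source, connector, target) triple, the connector slot is itself a symbol, and an unknown connector starts life as a dummy. The `Symbol` class and `connect` helper are names I'm inventing for the example:

```python
import itertools

_dummy_ids = itertools.count()

class Symbol:
    """A symbol is just a named node; even connectors are symbols."""
    def __init__(self, name):
        self.name = name
        self.edges = []               # every (source, connector, target) it's part of

    def __repr__(self):
        return self.name

def connect(source, target, connector=None):
    """Link two symbols. If the nature of the link isn't known yet,
    create a dummy connector symbol that can be refined later."""
    if connector is None:
        connector = Symbol(f"dummy_{next(_dummy_ids)}")
    triple = (source, connector, target)   # fully ordered: direction matters
    for sym in triple:
        sym.edges.append(triple)
    return connector

color, blue, yellow, sky = map(Symbol, ["color", "blue", "yellow", "sky"])
is_a = Symbol("is_a")

connect(blue, color, is_a)     # "blue is_a color"
connect(yellow, color, is_a)   # reusing the connector is what lets the
                               # blue-to-color link help with yellow
link = connect(sky, blue)      # association noticed first; its nature is a dummy
# Learning what `link` *is* later just means connecting `link` to other
# symbols; the original triple never has to be rebuilt.
```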

It's also possible that pure associative nets are non-global: "color" has its own private associative net, and once we learn that "color" is a good connector, we use that private net to find the symbolic net's connections to further things.
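A sketch of that, again with purely illustrative names: each symbol carries its own association strengths, and a symbol that has proven to be a good connector is the first place to look for candidate links.

```python
# Sketch of per-symbol ("non-global") associative nets, assuming nothing
# beyond the idea above. All names and numbers are illustrative.

from collections import defaultdict

class SymbolWithPrivateNet:
    def __init__(self, name):
        self.name = name
        self.assoc = defaultdict(float)   # private associative net: name -> strength

    def associate(self, other_name, strength=1.0):
        self.assoc[other_name] += strength

    def suggest(self, n=3):
        """Most strongly associated symbols: candidates for new symbolic links."""
        return sorted(self.assoc, key=self.assoc.get, reverse=True)[:n]

color = SymbolWithPrivateNet("color")
for name, s in [("blue", 3.0), ("yellow", 2.5), ("red", 2.0), ("sky", 0.5)]:
    color.associate(name, s)

# If "color" has been a good connector before, its private net is where we
# search for the symbolic net's connections to further things.
print(color.suggest())   # e.g. ['blue', 'yellow', 'red']
```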

I'd be interested in sharing code with anyone who's willing to do the same; this isn't a trivial problem, and it's unlikely we'll know how useful this is until we have a good training set for this paradigm.