Deep Neural Networks Abstract Like Humans

Alex Gain, Hava Siegelmann

Dec 18, 2019

Received: 28 November 2019

Deep neural networks (DNNs) have revolutionized AI due to their remarkable performance in pattern recognition, through their ability to memorize complex training sets and generalize to previously unseen data (test sets); it is in this ability to generalize that their computational intelligence lies. The high generalization performance of DNNs has been explained by several mathematical tools, including optimization, information theory, and resilience analysis. In humans, it is the ability to abstract concepts from examples that facilitates generalization; this paper therefore studies DNN generalization from that perspective. A recent computational neuroscience study revealed a correlation between abstraction and particular neural firing patterns. We express these brain patterns in a closed-form mathematical expression, termed the "Cognitive Neural Activation" (CNA) metric, and apply it to DNNs. Our findings reveal parallels between the mechanism underlying abstraction in DNNs and that in the human brain. Beyond simply measuring similarity to human abstraction, the CNA is able to predict and rate how well a DNN will perform on test sets, and to determine the best network architecture for a given task, in a manner not possible with extant tools. These results were validated on a broad range of datasets (including ImageNet and randomly labeled datasets) and neural architectures.
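The abstract does not give the CNA's closed form. Purely as an illustration of the kind of quantity involved, the sketch below computes a rank correlation between layer depth and mean activation magnitude over a toy batch; the function name `cna_score`, the choice of mean absolute activation, and the use of Spearman correlation are assumptions for this sketch, not the authors' definition.

```python
# Hypothetical sketch of a depth-vs-activation correlation measure,
# loosely inspired by (but NOT identical to) the CNA described above.
import numpy as np
from scipy.stats import spearmanr

def cna_score(layer_activations):
    """layer_activations: list of arrays, one per layer (shallow -> deep),
    each of shape (batch, features). Returns the Spearman rank correlation
    between layer depth and mean absolute activation magnitude."""
    depths = np.arange(len(layer_activations))
    magnitudes = [np.mean(np.abs(a)) for a in layer_activations]
    rho, _ = spearmanr(depths, magnitudes)
    return rho

# Toy usage: three "layers" whose activity grows with depth -> rho near 1.
acts = [np.random.randn(32, 64) * scale for scale in (0.5, 1.0, 2.0)]
print(cna_score(acts))
```

In a real network, the per-layer activations would be collected from forward passes on held-out data rather than generated synthetically as here.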

Read the preprint in full at arXiv.

This is the abstract of a preprint hosted on an independent third-party site. It has not been peer reviewed, but it is currently under consideration at Nature Communications.


Nature Communications

Nature Research, Springer Nature