Conference Highlights – Cognitive Computational Neuroscience 2017

Last week, the 2017 Cognitive Computational Neuroscience conference was held at Columbia University. The meeting was studded with star speakers and offered a mix of cognitive, brain-based, and machine learning talks throughout each day. From the outside, the conference looked like a meeting of researchers using roughly similar techniques and methods – or at the very least, a large collection of people who like math. Over the course of the conference, however, two cohorts of researchers emerged: those who use computational models to study cognition and the brain, and those who take cues from neuroscience to improve models for machine learning or AI.

As an example of the former, Nicole Rust of UPenn described a theory of visual memory in which familiarity is signaled by a reduction in population spike counts in macaque inferotemporal cortex. Representing each image as a multidimensional vector, where each dimension is the spike count of one neuron in the population, Rust finds that a weighted linear classifier – effectively a threshold on the length of the population vector – best predicts whether an image will be judged novel or familiar. In Rust’s research, we see advanced computational and statistical techniques put to work characterizing behavior-linked neural responses.
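To make the idea concrete, here is a minimal sketch of treating a population of spike counts as a vector and passing it through a weighted linear readout. This is not Rust’s actual analysis: the spike counts are simulated, and the uniform weights and mean-split threshold are illustrative stand-ins for a fitted classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

n_neurons = 100   # size of the (simulated) IT population
n_images = 200    # number of presented images

# Hypothetical spike-count vectors: one row per image, one column per neuron.
# Familiar images are modeled with suppressed (lower) counts, following the
# repetition-suppression idea described in the talk.
novel = rng.poisson(lam=20, size=(n_images // 2, n_neurons))
familiar = rng.poisson(lam=14, size=(n_images // 2, n_neurons))

X = np.vstack([novel, familiar]).astype(float)
y = np.array([1] * (n_images // 2) + [0] * (n_images // 2))  # 1 = novel, 0 = familiar

# Weighted linear readout: score each image by a weighted sum of spike counts
# and threshold the score. With uniform positive weights, this is roughly a
# threshold on the magnitude of the population vector.
w = np.ones(n_neurons) / n_neurons   # uniform weights, for illustration only
scores = X @ w
threshold = scores.mean()            # simple split point for this sketch
predicted_novel = scores > threshold

accuracy = (predicted_novel == (y == 1)).mean()
print(f"familiarity readout accuracy: {accuracy:.2f}")
```

In the real experiments the weights and threshold would be fit to the recorded data rather than assumed, but the structure – spike-count vector in, thresholded weighted sum out – is the same.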

On the other hand, Yann LeCun (NYU, Facebook) described ways we can take inspiration from the brain to improve the architecture of artificial intelligence systems. Artificial intelligence researchers seem obligated to begin every talk with examples of tasks that the human brain handles quickly and effortlessly but that our most advanced computers fumble awkwardly or fail at completely. These examples seem to be attempts to define the boundaries of artificial intelligence. For example, perhaps if we had a computer that could quickly identify the relationships among the people depicted in a picture, that computer would be artificially intelligent. Alona Fyshe of the University of Victoria opened her talk with a description of Ben Goertzel’s “Robot College Student Test,” which proposes that the boundary of artificial intelligence might be a computer that could independently complete a college degree.

But if we want to make a machine that is as intelligent as a human, then we must be able to characterize the intelligence of humans themselves. The math-loving crowd laughed when LeCun admitted on a panel that he is “not interested in vision” – instead, his research prioritizes the development of machine learning and artificial intelligence tools, taking only scant cues from the known workings of the brain. In fact, LeCun warned against being too inspired by nature, arguing that we would never have created airplanes if we had taken inspiration from bird flight too literally. The core principles of aerodynamics led to far greater technological advances than the natural details of bird feathers ever did.

Still, there exist core principles that the brain can teach us, if only we have the perspective to recognize them. Daniel Wolpert concluded his talk with a funny slide showing a tight linear relationship between the edition number of the textbook “Principles of Neural Science” by Kandel et al. and its page count. Over 30 years and five editions, the classic text has grown from around 750 pages to over 1500. Wolpert said that it is the experimentalists who push the curve up, adding pages with every new finding; to reverse that trend, theorists are needed to push the curve back down by synthesizing the data into fewer principles of neural science.
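For a rough sense of scale, here is a back-of-the-envelope calculation using only the rounded figures quoted above – not the actual page counts of any particular edition.

```python
# Rounded figures from the paragraph above: roughly 750 pages in the first
# edition, over 1500 in the fifth, across about 30 years.
first_edition_pages = 750
fifth_edition_pages = 1500
n_editions = 5
n_years = 30

pages_per_edition = (fifth_edition_pages - first_edition_pages) / (n_editions - 1)
pages_per_year = (fifth_edition_pages - first_edition_pages) / n_years

print(f"~{pages_per_edition:.0f} additional pages per edition")  # roughly 190
print(f"~{pages_per_year:.0f} additional pages per year")        # roughly 25
```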

From convolutional neural networks to post-mortem anatomical MRI, the CCN conference showcased a broad range of advances in the study of brains and intelligence. Our own Shannon Tubridy and David Halpern presented a poster on progress in the OMNI project, and many other NYU labs were represented as well. An interesting theme was the prevalence of presentations about learning – on the human side, reinforcement learning and memory, and on the machine side, the many techniques for training or “teaching” neural networks. After all, the brain’s greatest success is its ability to learn, and one could just as easily define intelligence as a capacity for learning.

Overall, it was an interesting conference, and hopefully the dialogue between these two cohorts of researchers will continue.