About the series
Deep Learning and Cognition
In the past few years, deep learning, the re-emergence of neural networks, has succeeded stunningly as an approach to artificial intelligence. In many fields, including my own field of computational linguistics or natural language processing, deep learning approaches have largely displaced earlier machine learning approaches, mostly because of the superior performance they provide. I will discuss some of the results in speech and language which support the preceding claims, but also wish to move on to some bigger questions: Why and how do deep learning methods manage to be so successful? What does this new perspective suggest about human cognition and the language of thought? Are there good, more scientific uses of a deep learning perspective? What are the opportunities for deep learning to move beyond its core application of prediction to become a broader tool for artificial intelligence?
Christopher Manning is a professor of computer science and linguistics at Stanford University. He works on software that can intelligently process, understand, and generate human language material. He is a leader in applying Deep Learning to Natural Language Processing, including exploring Tree Recursive Neural Networks, sentiment analysis, neural network dependency parsing, the GloVe model of word vectors, neural machine translation, and deep language understanding. He also focuses on computational linguistic approaches to parsing, robust textual inference, and multilingual language processing, including being a principal developer of Stanford Dependencies and Universal Dependencies. Manning is an ACM Fellow, an AAAI Fellow, an ACL Fellow, and a Past President of the ACL. He has coauthored leading textbooks on statistical natural language processing and information retrieval. He is a member of the Stanford NLP Group (@stanfordnlp) and manages development of the Stanford CoreNLP software.
To view the webinar, please register at: http://www.tvworldwide.com/events/nsf/170309/