Machine learning is a subfield of computer science that evolved from the study of pattern recognition and computational learning theory in artificial intelligence. It explores the study and construction of algorithms that can learn from and make predictions on data. Such algorithms build a model from example inputs in order to make data-driven predictions or decisions, rather than following strictly static program instructions.
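As a minimal illustration of "building a model from example inputs" rather than hard-coding a rule, the hypothetical sketch below fits a line y = w·x + b to example (input, output) pairs with plain gradient descent, then uses the learned parameters to predict on unseen input. The data and hyperparameters are invented for the example.

```python
# Example (input, output) pairs; they happen to follow y = 2x + 1,
# but the program is never told that rule explicitly.
examples = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

w, b = 0.0, 0.0   # model parameters, to be learned from the data
lr = 0.01         # learning rate (chosen for this toy example)

for _ in range(5000):  # gradient descent on mean squared error
    grad_w = sum(2 * (w * x + b - y) * x for x, y in examples) / len(examples)
    grad_b = sum(2 * (w * x + b - y) for x, y in examples) / len(examples)
    w -= lr * grad_w
    b -= lr * grad_b

# The model has recovered the pattern from data: w ≈ 2, b ≈ 1,
# and can now make a data-driven prediction for a new input.
prediction = w * 4.0 + b  # ≈ 9.0
```

The point is the shape of the computation: the "program instructions" (the update loop) are fixed, but the behavior of the resulting predictor is determined entirely by the example data.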
Artificial intelligence (AI) is the intelligence exhibited by machines or software. It is also the name of the academic field that studies how to create computers and computer software capable of intelligent behavior. Major AI researchers and textbooks define the field as “the study and design of intelligent agents”, in which an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1955, defined it as “the science and engineering of making intelligent machines”.
Guest Speaker: Mat Kelcey - Distributed Representations of Text
Neural networks fundamentally handle text by mapping symbolic representations (e.g. the characters “foo”) to a distributed representation (i.e. a point in high-dimensional space). In this talk we’ll start with a couple of classic natural language processing problems and their traditional symbolic solutions, followed by a discussion of how neural networks solve similar tasks using distributed representations. We’ll cover a range of building blocks, including recursive/recurrent networks, 2D convolutions, differentiable memory, and attention mechanisms, both from a purely theoretical standpoint and in terms of the tricks we use to make these practically trainable at scale.
Mat is a software engineer in the Machine Intelligence group at Google. His job involves building neural networks for knowledge representation.