The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several improvements that make the Skip-gram model more expressive and enable it to learn higher-quality vectors more rapidly. We show that by subsampling frequent words we obtain a significant speedup, and also learn higher-quality representations as measured by our tasks. We also introduce Negative Sampling, a simplified variant of Noise Contrastive Estimation (NCE) that learns more accurate vectors for frequent words compared to the hierarchical softmax. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada". Motivated by this example, we present a simple and efficient method for finding phrases, and show that their vector representations can be accurately learned by the Skip-gram model.
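Two of the ingredients mentioned above can be made concrete with a short Python sketch: the frequent-word subsampling rule (discard a word w with probability 1 - sqrt(t / f(w)), where f(w) is its corpus frequency and t is a small threshold) and the bigram score used to promote collocations such as "Air Canada" to single phrase tokens. This is a minimal illustration, not the authors' released implementation; the function names and the default values of t and delta are illustrative assumptions.

import math
import random
from collections import Counter

def keep_probability(count, total, t=1e-5):
    # Keep a word with probability sqrt(t / f(w)), i.e. discard it with
    # probability 1 - sqrt(t / f(w)), where f(w) = count / total is the
    # word's corpus frequency and t is a small threshold (e.g. 1e-5).
    freq = count / total
    return 1.0 if freq <= t else math.sqrt(t / freq)

def subsample(tokens, t=1e-5, rng=random.random):
    # Randomly drop frequent tokens before training the Skip-gram model.
    counts = Counter(tokens)
    total = len(tokens)
    return [w for w in tokens if rng() < keep_probability(counts[w], total, t)]

def phrase_score(bigram_count, count_a, count_b, delta=5.0):
    # score(a, b) = (count(a b) - delta) / (count(a) * count(b)).
    # Bigrams scoring above a chosen threshold are merged into a single
    # token; delta discounts bigrams made of very infrequent words.
    return (bigram_count - delta) / (count_a * count_b)

In practice the phrase-finding pass can be repeated a few times with decreasing score thresholds so that longer multiword phrases are allowed to form.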
Author Information
Tomas Mikolov (Google Research)
Ilya Sutskever (Google)
Kai Chen (Google Research)
Greg Corrado (Google Health)
Jeff Dean (Google Research)
Jeff joined Google in 1999 and is currently a Google Senior Fellow. He leads Google's Research and Health divisions, where he co-founded the Google Brain team. He has co-designed and implemented multiple generations of Google's distributed machine learning systems for neural network training and inference, as well as multiple generations of Google's crawling, indexing, and query serving systems, and major pieces of Google's initial advertising and AdSense for Content systems. He is also a co-designer and co-implementer of Google's distributed computing infrastructure, including the MapReduce, BigTable, and Spanner systems, protocol buffers, LevelDB, systems infrastructure for statistical machine translation, and a variety of internal and external libraries and developer tools. He received a Ph.D. in Computer Science from the University of Washington in 1996, working with Craig Chambers on compiler techniques for object-oriented languages. He is a Fellow of the ACM, a Fellow of the AAAS, a member of the U.S. National Academy of Engineering, and a recipient of the Mark Weiser Award and the ACM Prize in Computing.
More from the Same Authors
- 2022: Jeff Dean - Invited Talk (Jeff Dean)
- 2019: Invited Speaker: Jeff Dean (Jeff Dean)
- 2019: Climate Change: A Grand Challenge for ML (Yoshua Bengio · Carla Gomes · Andrew Ng · Jeff Dean · Lester Mackey)
- 2019: Jeff Dean (Google AI) (Jeff Dean)
- 2017: Future Hardware Directions (Gregory Diamos · Jeff Dean · Simon Knowles · Michael James · Scott Gray)
- 2017: Greg Corrado, Google (Greg Corrado)
- 2017: Invited Talk: Machine Learning for Systems and Systems for Machine Learning, Jeff Dean, Google Brain (Jeff Dean)
- 2016: Invited Talk: Scaling Machine Learning Using TensorFlow (Jeff Dean, Google Brain) (Jeff Dean)
- 2016: Jeff Dean – TensorFlow: Future Directions for Simplifying Large-Scale Machine Learning (Jeff Dean)
- 2016 Poster: An Online Sequence-to-Sequence Model Using Partial Conditioning (Navdeep Jaitly · Quoc V Le · Oriol Vinyals · Ilya Sutskever · David Sussillo · Samy Bengio)
- 2016 Poster: InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets (Xi Chen · Peter Chen · Yan Duan · Rein Houthooft · John Schulman · Ilya Sutskever · Pieter Abbeel)
- 2016 Poster: Improving Variational Autoencoders with Inverse Autoregressive Flow (Diederik Kingma · Tim Salimans · Rafal Jozefowicz · Peter Chen · Xi Chen · Ilya Sutskever · Max Welling)
- 2015: TensorFlow: A system for machine learning on heterogeneous systems (Jeff Dean)
- 2015 Poster: Grammar as a Foreign Language (Oriol Vinyals · Łukasz Kaiser · Terry Koo · Slav Petrov · Ilya Sutskever · Geoffrey Hinton)
- 2015 Tutorial: Large-Scale Distributed Systems for Training Neural Networks (Jeff Dean · Oriol Vinyals)
- 2014 Poster: Sequence to Sequence Learning with Neural Networks (Ilya Sutskever · Oriol Vinyals · Quoc V Le)
- 2014 Oral: Sequence to Sequence Learning with Neural Networks (Ilya Sutskever · Oriol Vinyals · Quoc V Le)
- 2013 Workshop: Deep Learning (Yoshua Bengio · Hugo Larochelle · Russ Salakhutdinov · Tomas Mikolov · Matthew D Zeiler · David Mcallester · Nando de Freitas · Josh Tenenbaum · Jian Zhou · Volodymyr Mnih)
- 2013 Poster: DeViSE: A Deep Visual-Semantic Embedding Model (Andrea Frome · Greg Corrado · Jonathon Shlens · Samy Bengio · Jeff Dean · Marc'Aurelio Ranzato · Tomas Mikolov)
- 2013 Demonstration: Distributed Representations of Words and Phrases and their Compositionality (Tomas Mikolov · Kai Chen · Greg Corrado)
- 2012 Poster: Large Scale Distributed Deep Networks (Jeff Dean · Greg Corrado · Rajat Monga · Kai Chen · Matthieu Devin · Quoc V Le · Mark Mao · Marc'Aurelio Ranzato · Andrew Senior · Paul Tucker · Ke Yang · Andrew Y Ng)