Squashing Computational Linguistics

The computational linguistics and natural language processing community is experiencing an episode of deep fascination with representation learning. Like many other presenters at this conference, I will describe new ways to use representation learning in models of natural language. Noting that a data-driven model always assumes a theory (not necessarily a good one), I will argue for the benefits of language-appropriate inductive bias for representation-learning-infused models of language. Such bias often comes in the form of assumptions baked into a model, constraints on an inference algorithm, or linguistic analysis applied to data. Indeed, many decades of research in linguistics (including computational linguistics) put our community in a strong position to identify promising inductive biases. The new models, in turn, may allow us to explore previously unavailable forms of bias, and to produce findings of interest to linguistics. I will focus on new models of documents and of sentential semantic structures, and I will emphasize abstract, reusable components and their assumptions rather than applications.