Doing Semantics with a Probabilistic Type Theory
Shalom Lappin
Classical model-theoretic semantics uses an impoverished type theory, possible worlds, and truth conditions. Lexical meanings are generally expressed through meaning postulates (constraints on the set of possible models). This approach offers an elegant formal representation of meaning, but it suffers from a variety of computational and empirical difficulties. These include (inter alia) the intractability of intensional representations, problems in handling fine-grained distinctions in meaning, brittle and limited coverage of the semantic properties of natural language, and the absence of a plausible account of semantic learning. Both recent and earlier revisions of this approach have addressed some of these problems, but others remain. Distributional semantics applies vector space modelling to express semantic relations in a computationally tractable and data-driven way. It also suggests a viable theory of semantic learning. However, it continues to encounter problems in moving beyond lexical meaning to a robust compositional semantics. It also does not provide a direct account of language-world connections, which are at the core of semantic relations. I present current research, with Robin Cooper, Simon Dobnik, and Staffan Larsson, on a probabilistic type theory for natural language semantics. This approach formulates a probabilistic version of Cooper's (2012) Type Theory with Records. It specifies a direct connection between Bayesian learning of predicative classifiers and the compositional semantics of natural language. The proposed account models semantic judgements as a case of reasoning under uncertainty. In this way it incorporates semantic learning and representation into more general patterns of cognition.
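To give a rough sense of the connection between Bayesian classifier learning and probabilistic type judgements, the following sketch estimates p(s : T), the probability that a situation s is of a type T, from labelled observations, and combines two such judgements for a meet type under an independence assumption. It is a minimal illustration only: the class names, the feature representation of situations, and the naive Bayes model are assumptions made here for exposition, not the formulation in the papers by Cooper, Dobnik, Lappin, and Larsson.

```python
from collections import defaultdict


class BayesianTypeClassifier:
    """Illustrative learner for judging p(s : T) for a single type T.

    A situation is a dict of discrete feature/value pairs; labels are True
    (the situation is judged to be of type T) or False. Laplace smoothing
    keeps estimates well-defined for unseen feature values. This is a toy
    stand-in for a learned predicative classifier, not the authors' model.
    """

    def __init__(self, smoothing=1.0):
        self.smoothing = smoothing
        self.label_counts = {True: 0.0, False: 0.0}
        self.feature_counts = {True: defaultdict(float), False: defaultdict(float)}

    def observe(self, situation, is_of_type):
        """Update counts with one labelled situation."""
        self.label_counts[is_of_type] += 1.0
        for item in situation.items():
            self.feature_counts[is_of_type][item] += 1.0

    def prob(self, situation):
        """Naive Bayes posterior probability that the situation is of the type."""
        total = sum(self.label_counts.values())
        scores = {}
        for label in (True, False):
            # Smoothed prior p(label) times smoothed likelihoods p(feature | label).
            score = (self.label_counts[label] + self.smoothing) / (total + 2 * self.smoothing)
            for item in situation.items():
                score *= (self.feature_counts[label][item] + self.smoothing) / (
                    self.label_counts[label] + 2 * self.smoothing)
            scores[label] = score
        return scores[True] / (scores[True] + scores[False])


# Hypothetical example: learn a classifier for the type Apple from a few
# perceptual observations, then combine it with a classifier for Red to
# approximate p(s : Apple ∧ Red), assuming the two judgements independent.
apple = BayesianTypeClassifier()
apple.observe({"shape": "round", "colour": "red"}, True)
apple.observe({"shape": "round", "colour": "green"}, True)
apple.observe({"shape": "long", "colour": "yellow"}, False)

red = BayesianTypeClassifier()
red.observe({"colour": "red"}, True)
red.observe({"colour": "green"}, False)

s = {"shape": "round", "colour": "red"}
p_apple = apple.prob(s)
p_red = red.prob({"colour": "red"})
print(f"p(s : Apple) = {p_apple:.2f}")
print(f"p(s : Apple ∧ Red) ≈ {p_apple * p_red:.2f}")  # independence assumed for illustration
```

The point of the sketch is only that a type judgement becomes a graded value learned from observation rather than a Boolean fact; composition over record types in the actual framework is richer than the simple product of probabilities used above.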