Author(s): Parneet Kaur; Vandana Pushe
Natural language learning is the process of acquiring the semantics of natural language from relevant perceptual input. Toward this goal, computational systems are trained on data consisting of natural language sentences paired with relevant but ambiguous perceptual contexts. Under such ambiguous supervision, the learner must resolve the ambiguity between a natural language (NL) sentence and a corresponding set of possible logical meaning representations (MRs); this is especially challenging in Punjabi, where many words share the same pronunciation but differ in meaning. This research focuses on devising effective models that simultaneously disambiguate such supervision and learn the underlying semantics of the language, mapping NL sentences to their proper logical forms. Specifically, we present two probabilistic generative models for learning these correspondences; the models reduce the ambiguity in the training data and predict results based on the history of previously processed data.
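To make the setting concrete, the following is a minimal illustrative sketch (not the paper's actual models) of how ambiguous supervision can be resolved with a simple generative model: each sentence is paired with several candidate meaning representations (MRs), and an IBM-Model-1-style word/symbol co-occurrence model trained with EM learns, from cross-sentence statistics alone, which candidate is most probable. All data, symbols, and function names below are invented for illustration.

```python
from collections import defaultdict

# Toy ambiguous supervision (all data invented for illustration): each
# sentence is paired with candidate meaning representations (MRs); only
# one candidate per sentence is the correct logical form.
data = [
    (["dog", "runs"],  [("run", "dog")]),                    # unambiguous anchor
    (["dog", "barks"], [("bark", "dog"), ("run", "dog")]),   # ambiguous
    (["cat", "runs"],  [("run", "cat"), ("bark", "dog")]),   # ambiguous
]

vocab = {w for words, _ in data for w in words}
# t[(word, symbol)] approximates P(word | MR symbol), initialised uniformly.
t = defaultdict(lambda: 1.0 / len(vocab))

def score(words, mr):
    """P(sentence | MR) under a bag-of-words generative model."""
    s = 1.0
    for w in words:
        s *= sum(t[(w, sym)] for sym in mr) / len(mr)
    return s

for _ in range(20):  # EM over the latent choice of the correct MR
    counts, totals = defaultdict(float), defaultdict(float)
    for words, mrs in data:
        scores = [score(words, mr) for mr in mrs]
        z = sum(scores) or 1.0
        for mr, sc in zip(mrs, scores):   # E-step: posterior over candidates
            for sym in mr:
                for w in words:
                    counts[(w, sym)] += sc / z
                    totals[sym] += sc / z
    # M-step: re-estimate P(word | symbol) from the expected counts.
    t = defaultdict(lambda: 1.0 / len(vocab),
                    {(w, sym): c / totals[sym] for (w, sym), c in counts.items()})

def disambiguate(words, mrs):
    """Pick the most probable candidate MR for a sentence."""
    return max(mrs, key=lambda mr: score(words, mr))
```

For example, `disambiguate(["dog", "barks"], [("bark", "dog"), ("run", "dog")])` selects `("bark", "dog")`: the unambiguous anchor sentence ties the symbol `run` to the word "runs", so co-occurrence statistics accumulated across sentences break the ambiguity in the other examples.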