Computational Construction Grammar for Visual Question Answering

Research output: Contribution to journal › Article


In order to answer a natural language question, a computational system needs three main capabilities. First, the system needs to be able to analyse the question into a structured query, revealing its component parts and how these are combined. Second, it needs to have access to relevant knowledge sources, such as databases, texts or images. Third, it needs to be able to execute the query on these knowledge sources. This paper focuses on the first capability, presenting a novel approach to semantically parsing questions expressed in natural language. The method makes use of a computational construction grammar model for mapping questions onto their executable semantic representations. We demonstrate and evaluate the methodology on the CLEVR visual question answering benchmark task. Our system achieves 100% accuracy, effectively solving the language understanding part of the benchmark task. Additionally, we demonstrate how this solution can be embedded in a full visual question answering system, in which a question is answered by executing its semantic representation on an image. The main advantages of the approach include (i) its transparent and interpretable properties, (ii) its extensibility, and (iii) the fact that the method does not rely on any annotated training data.
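To make the idea of an "executable semantic representation" concrete, the sketch below shows how a CLEVR-style question such as "How many red cubes are there?" can be mapped onto a chain of primitive operations and executed against a symbolic scene description. This is a minimal illustration only, not the authors' construction-grammar system: the scene data, operation names, and helper functions are hypothetical.

```python
# Illustrative sketch: execute a CLEVR-style functional program
# on a toy symbolic scene. All names here are hypothetical.

scene = [  # toy scene: one dict of attributes per object
    {"color": "red", "shape": "cube"},
    {"color": "blue", "shape": "sphere"},
    {"color": "red", "shape": "cube"},
]

def filter_attr(objects, attr, value):
    """Keep only the objects whose attribute matches the value."""
    return [o for o in objects if o[attr] == value]

def count(objects):
    """Count the objects in the current set."""
    return len(objects)

# Semantic representation of "How many red cubes are there?"
# as an ordered chain of primitive operations.
program = [
    ("filter_attr", "color", "red"),
    ("filter_attr", "shape", "cube"),
    ("count",),
]

def execute(program, scene):
    """Run each operation in turn, threading the intermediate result."""
    result = list(scene)
    for op, *args in program:
        if op == "filter_attr":
            result = filter_attr(result, *args)
        elif op == "count":
            result = count(result)
    return result

print(execute(program, scene))  # prints 2
```

The key property illustrated here is that the question's meaning is a transparent, inspectable program rather than an opaque learned mapping, which is what makes the approach interpretable and extensible.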
Original language: English
Pages (from-to): 1-16
Number of pages: 16
Journal: Linguistics Vanguard
Issue number: 1
Publication status: Published - 2019


  • artificial intelligence
  • computational construction grammar
  • visual question answering

