Cross-Media Learning for Visual Question Answering (VQA)

  Posted on June 3, 2020

Information for the Special Issue

Submission Deadline: Mon 31 Aug 2020
Journal Impact Factor: 2.671
Journal Name : Image and Vision Computing
Journal Publisher: Elsevier
Website for the Special Issue: https://www.journals.elsevier.com/image-and-vision-computing/call-for-papers/cross-media-learning-visual-question-answering
Journal & Submission Website: https://www.journals.elsevier.com/image-and-vision-computing

Special Issue Call for Papers:

Visual Question Answering (VQA) is a recent and active research topic that involves multimedia analysis, computer vision (CV), natural language processing (NLP), and, more broadly, artificial intelligence, and it has attracted considerable interest from the deep learning, CV, and NLP communities. The task is defined as follows: a VQA system takes as input an image and a free-form, open-ended natural-language question about that image, and produces a natural-language answer as output by integrating information from both modalities. For a machine to answer a question about a specific image in natural language, it must have some understanding of the image content, of the meaning and intent of the question, and of any relevant background knowledge. VQA therefore touches AI technologies in multiple aspects: fine-grained recognition, object recognition, behavior recognition, and understanding of the text of the question (NLP). Because VQA is closely tied to content in both CV and NLP, a natural solution is to integrate a CNN with an RNN, the models used successfully in CV and NLP respectively, into a composite model. In short, VQA is a learning task that links CV and NLP.
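To make the composite-model idea concrete, the following is a minimal sketch of a CNN+RNN VQA baseline in PyTorch. It is illustrative only, not a method proposed in this call: the choice of ResNet-18, the element-wise product fusion, and parameters such as vocab_size and num_answers are assumptions for the example.

    import torch
    import torch.nn as nn
    import torchvision.models as models

    class SimpleVQA(nn.Module):
        """Minimal CNN+RNN baseline: encode the image with a CNN,
        the question with an LSTM, fuse both, and classify over a
        fixed answer vocabulary."""

        def __init__(self, vocab_size, embed_dim=300, hidden_dim=1024,
                     num_answers=1000):
            super().__init__()
            # Image encoder: ResNet-18 with its classifier head removed
            # (untrained here for brevity; a pretrained backbone is typical).
            resnet = models.resnet18(weights=None)
            self.cnn = nn.Sequential(*list(resnet.children())[:-1])
            self.img_fc = nn.Linear(512, hidden_dim)
            # Question encoder: word embeddings followed by an LSTM.
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            # Answer classifier over the fused joint representation.
            self.classifier = nn.Linear(hidden_dim, num_answers)

        def forward(self, image, question):
            # image: (B, 3, 224, 224); question: (B, T) token ids
            img_feat = self.cnn(image).flatten(1)          # (B, 512)
            img_feat = torch.tanh(self.img_fc(img_feat))   # (B, H)
            emb = self.embed(question)                     # (B, T, E)
            _, (h_n, _) = self.lstm(emb)
            q_feat = torch.tanh(h_n[-1])                   # (B, H)
            fused = img_feat * q_feat                      # element-wise fusion
            return self.classifier(fused)                  # answer logits

Element-wise multiplication is one common fusion choice in early VQA baselines; concatenation or attention-based fusion (one of the topics listed below) are frequent alternatives.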

The task of VQA is challenging because it requires comprehending the textual question, analyzing the visual elements of the image, and reasoning over both modalities; in some cases, external or commonsense knowledge is also needed as a basis. Although progress has been made in VQA research, the overall accuracy achieved by current models remains limited. Because existing VQA models are relatively simple in structure, and the content and form of their answers are limited, they struggle to produce correct answers for even moderately complex questions that require prior knowledge and simple reasoning. This Special Issue of Image and Vision Computing therefore solicits original technical papers with novel contributions on the convergence of CV, NLP, and deep learning, as well as theoretical contributions relevant to the connection between natural language and CV.

The topics of interest include, but are not limited to:

  • Deep learning methodology and its applications to VQA, e.g., human-computer interaction, intelligent cross-media querying, etc.
  • Image captioning, indexing, and retrieval
  • Deep Learning for big data discovery
  • Visual Relationship in VQA
  • Question Answering in Images
  • Grounding Language and VQA
  • Image target localization using VQA
  • Captioning Events in Videos
  • Attention mechanism in VQA system
  • Exploring novel models and datasets for VQA

SUBMISSIONS AND REVISIONS

Submitted papers should present original, unpublished work relevant to one of the topics of the Special Issue. All submitted papers will be evaluated on relevance, significance of contribution, technical quality, and quality of presentation by at least two independent reviewers, following the Journal's standard peer-review procedures; each paper will be reviewed rigorously, possibly over two rounds. Prospective authors should follow the formatting instructions in the Image and Vision Computing Guide for Authors at https://www.elsevier.com/journals/image-and-vision-computing/0262-8856/guide-for-authors and are invited to submit their papers directly via the online submission system at https://www.editorialmanager.com/IMAVIS/default.aspx. When submitting your manuscript, please select the article type “VSI: VQA”. Please submit your manuscript before the submission deadline.

DEADLINES

Submission Deadline: August 31, 2020

First Review: October 31, 2020

Revisions Due: December 31, 2020

Final Decision: February 28, 2021

Guest Editors:

Prof. Shaohua Wan, Zhongnan University of Economics and Law, China. shaohua.wan@ieee.org

Prof. Alexandros Iosifidis, Aarhus University, Denmark. ai@eng.au.dk

Prof. Chen Chen, University of North Carolina at Charlotte, USA. chen.chen@uncc.edu
