Special Issue: Integrating Vision and Language for Semantic Knowledge Reasoning and Transfer

Posted on June 15, 2019

Special Issue Call for Papers:

Aims and Scope:

Due to the explosive growth of visual and textual data (e.g., images, videos, blogs) on the Internet and the pressing need to jointly understand such heterogeneous data, integrating vision and language to bridge the semantic gap has attracted considerable interest from the computer vision and natural language processing communities. Great efforts have been made to study the intersection of vision and language, with notable applications including (i) generating image descriptions in natural language, (ii) visual question answering, (iii) retrieval of images based on textual queries (and vice versa), (iv) generating images/videos from textual descriptions, (v) language grounding, and many other related topics.

Though booming recently, the field remains challenging because reasoning about the connections between visual content and linguistic words is difficult. Such reasoning is grounded in semantic knowledge: a person's understanding of a word (for example, “swan”) involves reasoning over external knowledge about it (e.g., what swans look like, the sounds they make, how they behave, and what they feel like to the touch). Although reasoning ability is often claimed in recent studies, most “reasoning” simply uncovers latent connections between visual elements and textual/semantic facts during training on manually annotated datasets with large numbers of image-text pairs. Furthermore, recent studies are often specific to certain datasets and lack generalization ability; that is, the semantic knowledge obtained from one dataset cannot be directly transferred to another, as each benchmark has characteristics of its own. One potential solution is to leverage external knowledge resources (e.g., social-media sites, expert systems, and Wikipedia) as an intermediate bridge for knowledge transfer. However, it remains unclear how to appropriately incorporate such comprehensive knowledge resources for more effective knowledge-based reasoning and transfer across datasets. From a broad application perspective, integrating vision and language for knowledge reasoning and transfer has yet to be well exploited in existing research.

Topics of Interests:

This special issue invites researchers and practitioners from both academia and industry to explore how advanced learning models and systems can be leveraged to address the challenges in semantic knowledge reasoning and transfer for jointly understanding vision and language. It provides a forum to publish recent state-of-the-art research findings, methodologies, technologies, and services in vision-language technology for practical applications. We invite original, high-quality submissions addressing all aspects of this field, which is closely related to multimedia search, multi-modal learning, cross-media analysis, cross-domain knowledge transfer, and so on.

Topics of interest include, but are not limited to:

· Big data storage, indexing, and searching

· Deep learning methods for vision and language

· Transfer learning for vision and language

· Cross-media analysis (retrieval, hashing, transfer, reasoning, etc.)

· Multi-modal learning and semantic representation learning

· Learning knowledge graph over multi-modal data

· Generating image/video descriptions using natural language

· Visual question answering/generation

· Retrieval of images based on textual queries (and vice versa)

· Generating images/videos from textual descriptions

· Language grounding

Submission Guideline:

Authors are encouraged to submit high-quality, original work that has neither appeared in, nor is under consideration by, other journals. Manuscripts should be prepared according to the “Instructions for Authors” of the Journal of Visual Communication and Image Representation, available at the journal website: https://www.journals.elsevier.com/journal-of-visual-communication-and-image-representation. When submitting via the journal website, please select “VSI: Vis-Lang” as the Article Type.

All papers will be peer-reviewed following the journal's regular reviewing procedure. Each submission should clearly demonstrate evidence of benefit to society or to large communities. Originality and impact on the research field, in combination with a media-related focus and the innovative technical aspects of the proposed solutions, will be the major evaluation criteria.

Important Dates:

• Paper Submission: August 1, 2019

• First Notification: November 1, 2019

• Revised Manuscript: January 1, 2020

• Notification of Acceptance: March 1, 2020

• Final Manuscript Due: April 15, 2020

• Publication Date: Mid-2020

Guest Editors:

• Xing Xu, University of Electronic Science and Technology of China, China; Afanti AI Lab, Lejent Ltd., China. Email: xing.xu@lejent.com

• Lianli Gao, University of Electronic Science and Technology of China, China. Email: lianli.gao@uestc.edu.cn

• Lamberto Ballan, University of Padova, Italy. Email: lamberto.ballan@unipd.it

• Zi Huang, The University of Queensland, Australia. Email: zi.huang@uq.edu.au

• Alan Hanjalic, Delft University of Technology, The Netherlands. Email: A.Hanjalic@tudelft.nl
