Special Section on Meta-learning: Theories, Algorithms and Applications

Posted on January 19, 2021

Information for the Special Issue

Submission Deadline: Fri 15 Oct 2021
Journal Impact Factor: 1.275
Journal Name : Frontiers of Computer Science
Journal Publisher: Springer
Website for the Special Issue: https://www.springer.com/journal/11704/updates/18769800
Journal & Submission Website: https://www.springer.com/journal/11704

Special Issue Call for Papers:


Although the idea was raised in the machine learning community decades ago, meta-learning has attracted increasing research attention in recent years. The main aim of this learning paradigm is to learn how to specify a machine learning methodology for a new learning task by training on a series of learning tasks, each of which contains a complete pair of a training set (support set) and a testing set (query set). The learned methodology can be an effective initialization of the learning model, a proper network architecture, an accurate learning-rate schedule for an SGD algorithm, or a fine specification of the hyper-parameters in the learning objective. The distinguishing characteristic of meta-learning is that it does not aim to learn a specific deterministic function for predicting the labels of future testing data, as conventional machine learning does, but instead learns the principle of how to set up a machine learning implementation so that it can be readily applied to future learning tasks. This capability is the so-called "learning to learn" of meta-learning. Nowadays, this learning regime has been widely attempted in various application tasks, including few-shot learning, neural architecture search, learning to optimize, domain generalization, robust machine learning, and so on. It has shown great potential for alleviating many bottleneck issues of traditional machine learning from a higher perspective, and for expanding the frontier of current machine learning to make it more automatic and applicable to a wider range of fields.
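To make the support/query-set setup concrete, the following is a minimal sketch (not any specific published method, though it follows the first-order MAML-style pattern) of learning a model initialization across tasks: each task provides a support set for an inner adaptation step and a query set for the outer meta-update. The toy task distribution, learning rates, and one-parameter linear model are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task():
    """A toy task: fit y = a*x with a task-specific slope a.
    Returns (support_x, support_y, query_x, query_y)."""
    a = rng.uniform(0.5, 1.5)
    xs, xq = rng.normal(size=10), rng.normal(size=10)
    return xs, a * xs, xq, a * xq

def loss_grad(w, x, y):
    """Squared-error loss and its gradient for the model y_hat = w*x."""
    err = w * x - y
    return np.mean(err ** 2), np.mean(2 * err * x)

# Meta-training: learn an initialization w that performs well on the
# query set after a single inner gradient step on the support set
# (a first-order approximation of the meta-gradient).
w, inner_lr, meta_lr = 0.0, 0.1, 0.05
for _ in range(500):
    xs, ys, xq, yq = make_task()
    _, g_support = loss_grad(w, xs, ys)
    w_adapted = w - inner_lr * g_support   # inner (task-level) update
    _, g_query = loss_grad(w_adapted, xq, yq)
    w -= meta_lr * g_query                 # outer (meta-level) update

# The learned initialization settles near the mean task slope (~1.0),
# so one adaptation step suffices for a new task from this distribution.
```

The key design point is that the outer update is driven by the query-set loss *after* adaptation, so the learned initialization is optimized for fast adaptation rather than for fitting any single task.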


This special issue intends to bring together researchers to report their latest progress and exchange experience in meta-learning research, covering fundamental theories, basic models and algorithms, and various application areas. In addition, by sharing understandings of, and research attempts at, meta-learning from diverse perspectives, this special issue aims to inspire more researchers from both industry and academia to work on this promising research direction and jointly promote its advancement.


Topics of interest include, but are not limited to, the following aspects:

Statistical learning theory on meta-learning

Fundamental models on meta-learning

Optimization algorithms and theories on meta-learning

Applications related to meta-learning, including:

Few-shot learning

Learning to optimize

Neural architecture search

Robust machine learning on biased training/test data

Dynamic/hyper networks

Continual learning

Domain generalization

Meta-reinforcement learning

Hyper-parameter learning

and so on.

Timeline for submission, review, and publication:

Full paper due: May 1, 2021

First notification:  July 1, 2021

Revised manuscript: August 31, 2021

Acceptance notification: September 30, 2021

Final manuscript due: October 15, 2021

Publication of the special section: November 15, 2021

Credentials of the guest editors:

Deyu Meng

Xi’an Jiaotong University, China


Xiang Bai

Huazhong University of Science and Technology, China


Zhi-Hua Zhou

Nanjing University, China


Online submission:


The template can also be found at this site.
