Developing high-performance machine learning models is a difficult task that usually requires expertise from data scientists and knowledge from domain experts. To make machine learning more accessible and to ease the labour-intensive trial-and-error search for the most appropriate algorithm and the optimal hyperparameter setting, Automated Machine Learning (AutoML) has emerged and become a rapidly growing research area in recent years. AutoML aims to automate the machine learning process and make it efficient across domains and applications. AutoML is commonly defined as solving the combined algorithm selection and hyperparameter optimization (CASH) problem: given a learning task, a set of algorithms with associated hyperparameter domains and a loss function, the goal is to find the algorithm and hyperparameter setting that minimize the loss and produce the best-performing model for the task, under the assumption of a fixed distribution for the training and validation data.
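For concreteness, the CASH objective stated above is commonly written as the following k-fold formulation (as in the Auto-WEKA line of work); the notation here (algorithm set, per-algorithm hyperparameter domains, loss, and fold splits) is our own shorthand rather than taken from this call:

```latex
% CASH: jointly select an algorithm A^{(j)} from the set \mathcal{A}
% and a hyperparameter setting \lambda from its domain \Lambda^{(j)}
% so as to minimize the average validation loss over k folds.
\[
A^{\star}, \lambda^{\star} \in
\operatorname*{arg\,min}_{A^{(j)} \in \mathcal{A},\; \lambda \in \Lambda^{(j)}}
\frac{1}{k} \sum_{i=1}^{k}
\mathcal{L}\!\left(A^{(j)}_{\lambda},\, D_{\mathrm{train}}^{(i)},\, D_{\mathrm{valid}}^{(i)}\right)
\]
```

Here \(\mathcal{L}(A^{(j)}_{\lambda}, D_{\mathrm{train}}^{(i)}, D_{\mathrm{valid}}^{(i)})\) denotes the loss achieved on the i-th validation split by algorithm \(A^{(j)}\) trained with hyperparameters \(\lambda\) on the i-th training split. The fixed-distribution assumption mentioned above is implicit in treating these splits as drawn from one stationary distribution.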

However, this distribution is subject to change as data continue to stream in and the environment evolves. Nowadays, data are commonly collected over time and are susceptible to change, for example in Internet-of-Things (IoT) systems, mobile phone applications and healthcare data analysis. A model trained under a false stationarity assumption is bound to become obsolete over time, and thus to perform sub-optimally or even disastrously. This affects most existing AutoML techniques and systems, which fail to adapt to changes in the data. Making AutoML adaptive to non-stationary data is challenging. Firstly, finding an AutoML solution can be computationally costly: producing accurate models for large-scale data requires substantial resources, so it is inefficient to repeat the AutoML process from scratch every time new data arrive. Furthermore, data non-stationarity is often unknown in advance. Change detection and adaptation mechanisms are therefore needed in the AutoML pipeline to detect and handle changes in time, which, however, increases the computational burden. Interesting research questions thus arise around whether, when and how to deal effectively and efficiently with non-stationary data in AutoML.

Current AutoML research has several active strands, including new techniques for deep learning, model fairness and interpretability, and hyperparameter and architecture optimization. There has been a limited but rising number of works on AutoML for non-stationary data. Targeting the emerging challenges described above, this special issue will encourage and bring together original and innovative research related to the topics listed in the next section, discussing, sharing and exploring both traditional and novel adaptive AutoML solutions. The outcomes will benefit a wide range of academic fields (e.g. theoretical machine learning and optimization) and industrial fields (e.g. IoT, edge computing, biometrics) where more flexible AutoML models that remain robust over time are sought. A goal of the special issue is to integrate the growing international community of researchers working on AutoML and to gather a collection of high-quality papers on the evolution and future development of AutoML in the IEEE Transactions on Artificial Intelligence.

List of Topics

This special issue invites papers making original contributions to the theory, methodology and applications of AutoML for non-stationary data. Potential topics for contributions to this special issue include, but are not limited to:

AutoML solutions for temporal data and data streams:

  • Change detection techniques
  • Adaptation strategies in AutoML pipelines (e.g. adaptive algorithm selection and configuration)
  • Online and incremental learning in AutoML
  • Meta learning and lifelong learning
  • Model evaluation methods
  • Dynamic hyperparameter optimization
  • Automated construction of configuration space

Contemporary AutoML:

  • Hyperparameter and architecture optimization for non-stationary data
  • Multi-objective AutoML
  • AutoML fairness, interpretability and robustness
  • Transfer learning in AutoML
  • Human-in-the-Loop AutoML
  • Automated model exploration approaches
  • AutoML in distributed learning environments
  • Automated class imbalance learning

Applied and cross-disciplinary topics:

  • Spatio-temporal modeling for geoinformatics
  • Sensory data analysis, including IoT and edge computing applications
  • Computer vision
  • Biometric identification and recognition
  • Automated software engineering systems
  • Healthcare
  • Development of open-source adaptive AutoML systems


Authors should prepare their manuscripts according to the "Guide for Authors" available from the online instruction page. All papers will be peer-reviewed following the IEEE TAI reviewing procedures.

Important Dates

  • Paper submission due: 2 June 2023
  • First notification: 31 August 2023



Information for Authors (including paper template and submission site) at:

Please select this special issue in the Manuscript Central System when submitting your paper at Step 1.

List of Guest Editors

(in alphabetical order)

Dr. Ran Cheng

Southern University of Science and Technology, China.
Google scholar:

Dr. Hugo Jair Escalante

National Institute of Astrophysics, Optics and Electronics, Mexico.
Google scholar:

Dr. Shuo Wang

University of Birmingham, UK.
Google scholar:

Conflict of Interest Statement

The guest editors of this special issue certify that they have no conflict of interest arising from this special issue, including any organizational funding or research collaborations. If any of the guest editors is involved in a paper submission that falls within the scope of this special issue, the paper will be submitted to the main track of TAI for the review process.