Research Highlights

[IEEE TEVC] Neural Architecture Search as Multiobjective Optimization Benchmarks: Problem Formulation and Performance Assessment

Zhichao Lu, Ran Cheng, Yaochu Jin, Kay Chen Tan and Kalyanmoy Deb

Abstract:

The ongoing advancements in network architecture design have led to remarkable achievements in deep learning across various challenging computer vision tasks. Meanwhile, the development of neural architecture search (NAS) has provided promising approaches to automating the design of network architectures for lower prediction error. Recently, the emerging application scenarios of deep learning (e.g., autonomous driving) have raised higher demands on network architectures considering multiple design criteria, such as the number of parameters/weights, the number of floating-point operations, and inference latency. From an optimization point of view, NAS tasks involving multiple design criteria are intrinsically multiobjective optimization problems; hence, it is reasonable to adopt evolutionary multiobjective optimization (EMO) algorithms to tackle them. Nonetheless, there is still a clear gap confining related research along this pathway: on the one hand, there is a lack of a general problem formulation of NAS tasks from an optimization point of view; on the other hand, there are challenges in conducting benchmark assessments of EMO algorithms on NAS tasks. To bridge the gap: (i) we formulate NAS tasks into general multiobjective optimization problems and analyze their complex characteristics from an optimization point of view; (ii) we present an end-to-end pipeline, dubbed EvoXBench, to generate benchmark test problems for EMO algorithms to run efficiently, without requiring GPUs or PyTorch/TensorFlow; (iii) we instantiate two test suites comprehensively covering two datasets, seven search spaces, and three hardware devices, involving up to eight objectives. Based on the above, we validate the proposed test suites using six representative EMO algorithms and provide some empirical analyses. The code of EvoXBench is available at https://github.com/EMI-Group/EvoXBench.
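Concretely, the paper casts a NAS task with m design criteria as a multiobjective optimization problem. A sketch of the general form (illustrative notation paraphrasing the abstract, not necessarily the paper's exact symbols):

\[
\min_{x} \; F(x) = \big( f_1(x), f_2(x), \ldots, f_m(x) \big)^{T},
\quad \text{s.t.} \; x \in \Omega_x,
\]

where x encodes a candidate architecture from the search space \Omega_x, f_1(x) is the prediction error, and f_2(x), ..., f_m(x) are additional design criteria such as the number of parameters, the number of floating-point operations, or the inference latency on a target hardware device.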

Benchmark test suite

TABLE I: Definition of the proposed C-10/MOP test suite.

TABLE II: Definition of the proposed IN-1K/MOP test suite.
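To illustrate how these test problems are consumed, the sketch below instantiates a C-10/MOP instance and evaluates a batch of sampled architectures. It is a minimal sketch assuming the interface documented in the EvoXBench repository; the names c10mop, search_space.sample, and evaluate are assumptions drawn from that documentation, so consult the repository README for the authoritative API.

# Minimal sketch, assuming the interface documented in the EvoXBench
# repository; function/method names here are assumptions, not guarantees.
from evoxbench.test_suites import c10mop

benchmark = c10mop(1)                      # instantiate test problem C-10/MOP1
X = benchmark.search_space.sample(10)      # sample 10 encoded architectures
F = benchmark.evaluate(X)                  # query objective values (no GPU required)
print(F)                                   # one row of objective values per architecture

Because objective values are served from precomputed benchmark data or surrogate models rather than live network training, such queries return quickly on a CPU, which is what allows the EMO algorithms to run without GPUs or PyTorch/TensorFlow.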

Results

TABLE III: Statistical results (median and standard deviation) of the HV values on the C-10/MOP test suite. The best results of each instance are in bold.

Fig. 1: Nondominated solutions obtained by each algorithm on (a) C-10/MOP1 and (b) C-10/MOP6. We select the run associated with the median HV value. For each row, the subfigures correspond to NSGA-II, IBEA, MOEA/D, NSGA-III, HypE, and RVEA, respectively.

TABLE IV: Statistical results (median and standard deviation) of the HV values on the IN-1K/MOP test suite. The best results of each instance are in bold.

Fig. 2: Nondominated solutions obtained by each algorithm on (a) IN-1K/MOP1 and (b) IN-1K/MOP8. We select the run associated with the median HV value. For each row, the subfigures correspond to NSGA-II, IBEA, MOEA/D, NSGA-III, HypE, and RVEA, respectively.
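For reference, the HV (hypervolume) indicator reported in Tables III and IV measures the objective-space volume dominated by a nondominated set relative to a reference point; larger is better. A minimal sketch of computing it with pymoo's HV indicator (pymoo is an assumed third-party toolkit here, not a dependency stated in the paper):

# Minimal sketch: hypervolume of a nondominated set (minimization),
# using pymoo's HV indicator; pymoo is an assumption, not a stated dependency.
import numpy as np
from pymoo.indicators.hv import HV

F = np.array([[0.2, 0.8],
              [0.5, 0.5],
              [0.8, 0.2]])                 # objective values of a nondominated set

hv = HV(ref_point=np.array([1.0, 1.0]))   # reference point worse than all solutions
print(hv(F))                               # larger HV = better convergence and spread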

Acknowledgements:

This work was supported by the National Natural Science Foundation of China (No. 62106097, 61906081), the China Postdoctoral Science Foundation (No. 2021M691424), the Shenzhen Peacock Plan (No. KQTD2016112514355531), and the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (No. 2017ZT07X386). Y. Jin is funded by an Alexander von Humboldt Professorship for Artificial Intelligence endowed by the German Federal Ministry of Education and Research.

Citation:


@ARTICLE{lu2023nasbench,
  author={Lu, Zhichao and Cheng, Ran and Jin, Yaochu and Tan, Kay Chen and Deb, Kalyanmoy},
  journal={IEEE Transactions on Evolutionary Computation},
  title={Neural Architecture Search as Multiobjective Optimization Benchmarks: Problem Formulation and Performance Assessment},
  year={2023}}
