Neural Architecture Search on Machine Learning Open Studio (MLOS)

4 min

Oct 23, 2020, by the ML Team


Deep learning algorithms are a family of (deep) neural networks that learn to recognize patterns in data. They have been successfully applied to many challenging tasks in areas such as computer vision, speech recognition, and natural language processing. According to MarketsandMarkets, the neural network software market is predicted to reach $22.55 billion by 2021, at an impressive Compound Annual Growth Rate (CAGR) of 33.2%. However, finding a high-performance neural network architecture for a given type of application can demand years of research and trial-and-error by Artificial Intelligence (AI) experts. To overcome this limitation, researchers proposed the Neural Architecture Search (NAS) approach, which has recently received considerable attention from both the scientific and industrial communities (see references). NAS is a subfield of AutoML that automates the manual process of discovering the optimal network architecture, significantly reducing human labor.

Given a human-designed search space containing a set of possible neural network architectures, NAS uses an optimization method to automatically find the best combination within that space. The NAS approach consists of three main components: the search space, the search strategy algorithm, and the performance estimation strategy:

(Figure: the three components of NAS)
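The three components interact as a loop: the search strategy samples a candidate architecture from the search space, the performance estimation strategy scores it, and the result guides the next sample. The sketch below illustrates this with random search as the strategy and a stubbed evaluator; all names here are illustrative, not MLOS APIs.

```python
import random

# Illustrative search space: each architecture is a (depth, width, activation) triple.
SEARCH_SPACE = {
    "depth": [2, 4, 8],
    "width": [64, 128, 256],
    "activation": ["relu", "tanh"],
}

def sample_architecture(rng):
    """Search strategy (here: plain random search) draws one candidate."""
    return {name: rng.choice(values) for name, values in SEARCH_SPACE.items()}

def estimate_performance(arch):
    """Stub for the performance estimation strategy; a real one would
    train the candidate network and return its validation score."""
    return 1.0 / (arch["depth"] * arch["width"])  # dummy score for illustration

def nas_loop(n_trials=10, seed=0):
    """Run the sample -> evaluate -> keep-best loop and return the winner."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(n_trials):
        arch = sample_architecture(rng)
        score = estimate_performance(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

best, score = nas_loop()
print(best, score)
```

In MLOS the loop itself is handled by the platform, with the candidate evaluations dispatched as parallel, distributed tasks rather than run sequentially as above.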

Why use MLOS for NAS?

NAS is a high-dimensional, time-consuming optimization problem. A solution that optimizes and parallelizes the search for the best architectures for a given application can therefore save both companies and researchers time and money. For this reason, Activeeon recently integrated the NAS technique into the Machine Learning Open Studio (MLOS), which allows engineers and researchers to easily automate and orchestrate AI-based workflows, scaling up with parallel and distributed execution.

How to use MLOS for NAS

(Figure: Machine Learning Open Studio workflow for NAS)
  1. Define some candidate neural network architecture layouts using your favorite programming language or AI framework.
  2. Write the search space as a JSON file describing all possible hyperparameters and layer candidates for the neural architecture you want to find.
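Step 1 can be as simple as a function that maps a hyperparameter assignment to a concrete layer layout. The framework-agnostic sketch below (all names are illustrative) builds a layer list that a Keras- or PyTorch-style model constructor could then instantiate:

```python
def build_layout(num_layers, units, activation):
    """Turn a hyperparameter assignment into a concrete layer layout.
    Each layer is described as a (type, config) pair; a real workflow
    would instantiate these with its AI framework of choice."""
    layers = [("input", {})]
    for _ in range(num_layers):
        layers.append(("dense", {"units": units, "activation": activation}))
    # Fixed output head, assuming a 10-class classification task for illustration.
    layers.append(("dense", {"units": 10, "activation": "softmax"}))
    return layers

layout = build_layout(num_layers=2, units=128, activation="relu")
for layer in layout:
    print(layer)
```

The tuner then only has to propose values for `num_layers`, `units`, and `activation`; the layout function turns each proposal into a trainable candidate.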

How to define the search space:

Common building blocks to define a search space:

  • uniform: Uniform continuous distribution
  • quantized_uniform: Uniform discrete distribution
  • log: Logarithmic uniform continuous distribution
  • quantized_log: Logarithmic uniform discrete distribution
  • choice: Uniform choice distribution between non-numeric samples
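The exact JSON schema expected by MLOS is not shown here, so the snippet below is only a plausible sketch of how each building block above could describe one hyperparameter; the field names (`type`, `low`, `high`, `step`, `options`) are assumptions for illustration:

```python
import json

# Hypothetical search-space definition using the building blocks listed above.
# The actual field names expected by MLOS may differ.
search_space = {
    "learning_rate": {"type": "log", "low": 1e-5, "high": 1e-1},
    "dropout": {"type": "uniform", "low": 0.0, "high": 0.5},
    "num_layers": {"type": "quantized_uniform", "low": 1, "high": 8, "step": 1},
    "batch_size": {"type": "quantized_log", "low": 16, "high": 256, "step": 16},
    "activation": {"type": "choice", "options": ["relu", "tanh", "elu"]},
}

# Write the search space to the JSON file that the workflow will consume.
with open("search_space.json", "w") as f:
    json.dump(search_space, f, indent=2)

print(json.dumps(search_space, indent=2))
```

Note how continuous scale-sensitive parameters (learning rate) use a logarithmic distribution, while structural choices (layer count, batch size) use the quantized variants.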

Finally, use our auto-ml-optimization catalog.

The figure below shows an overview of the auto-ml-optimization catalog.

(Figure: overview of the auto-ml-optimization catalog)

The auto-ml-optimization catalog contains six search strategy algorithms that search for the best parameters/architecture within the search space defined in the JSON file. Below, we briefly describe the tuners proposed by these search algorithms:

(Figure: tuner algorithm categories)

Which tuner algorithm to choose?

The choice of the tuner depends on the following aspects:

  • Time required to evaluate the model
  • Number of hyperparameters to optimize
  • Types of variables
  • Size of the search space
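As a rough rule of thumb, these aspects can be combined into a simple decision heuristic. The sketch below is only illustrative: the category names are generic families of tuners, not the names of MLOS's six algorithms, and the thresholds are arbitrary.

```python
def suggest_tuner(eval_minutes, n_params, has_categorical, space_size):
    """Illustrative heuristic for picking a tuner category based on the
    four aspects above. Names and thresholds are assumptions, not MLOS's."""
    if space_size is not None and space_size <= 50:
        return "grid search"          # tiny space: just enumerate it
    if eval_minutes < 1:
        return "random search"        # cheap evaluations: sampling is enough
    if has_categorical or n_params > 20:
        return "evolutionary search"  # mixed or high-dimensional spaces
    return "bayesian optimization"    # expensive evaluations, few parameters

print(suggest_tuner(eval_minutes=30, n_params=5, has_categorical=False, space_size=None))
```

The ordering matters: exhaustive enumeration wins whenever it is feasible, while model-based tuners pay off only when each evaluation is expensive enough to justify their overhead.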

To help users track the progress and status of the model search under the specified search space, we offer the Job Analytics interface. It enables you to quickly select the best neural architectures and view model-specific metrics.

(Figure: Job Analytics interface)

MLOS also includes a data-visualization catalog. It offers a large set of plots that can be organized programmatically or through the UI. These plots can be used to build dashboards for live, real-time data, to inspect experiment results, or to debug experimental code. The data-visualization catalog provides a fast, easy, and practical way to execute workflows that generate these visualizations, which are automatically cached by TensorBoard and the Visdom server. Other visualization libraries can be integrated as well. Please refer to the data visualization documentation for more details on how to use it.

The figure below shows a TensorBoard graph of the loss obtained by each neural network architecture.


To get started, access the Activeeon online Try Platform and create a free user account.

Check out useful MLOS documentation.

If you have any questions or feedback, feel free to send us an email. Our team will be very pleased to receive your feedback or help you in any way possible.


  1. D. Kobler. Evolutionary algorithms in combinatorial optimization. In Encyclopedia of Optimization, pages 950–959. Springer, 2009.
  2. B. Zoph and Q. Le. Neural architecture search with reinforcement learning. In International Conference on Learning Representations (ICLR), 2017.
  3. H. Liu, K. Simonyan, O. Vinyals, C. Fernando, and K. Kavukcuoglu. Hierarchical representations for efficient architecture search. In International Conference on Learning Representations (ICLR), 2018.
  4. F. Hutter, L. Kotthoff, and J. Vanschoren. Automated Machine Learning: Methods, Systems, Challenges. Springer, 2019.
