Oct 23, 2020, by the ML Team
Deep learning algorithms are a family of (deep) neural networks that learn to recognize patterns in data. They have been successfully applied to many challenging tasks in areas such as computer vision, speech recognition, and natural language processing. According to MarketsandMarkets, the neural network software market is predicted to reach $22.55 billion by 2021, at an impressive Compound Annual Growth Rate (CAGR) of 33.2%. However, finding high-performance neural network architectures for a given type of application can demand years of research and trial-and-error by Artificial Intelligence (AI) experts. To overcome this limitation, researchers proposed the Neural Architecture Search (NAS) approach, which has recently received considerable attention from both the scientific and industrial communities (see references). NAS is a subfield of AutoML that automates the manual process of discovering the optimal network architecture, which significantly reduces human labor.
Given a human-designed search space containing a set of possible neural network architectures, NAS uses an optimization method to automatically find the best combinations within that space. The NAS approach consists of three main components:
- Search space: the set of candidate architectures (and their hyperparameters) that can be explored;
- Search strategy algorithm: the method that decides which candidate to try next;
- Performance estimation strategy: the way each candidate is evaluated, typically by training it and measuring its validation performance.
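To make these three components concrete, here is a minimal, illustrative sketch of a random-search NAS loop in plain Python. The search space, the evaluate_architecture placeholder, and its dummy score are assumptions made for illustration only; they are not part of the MLOS API.

```python
import random

# 1. Search space: the set of candidate architectures (here, simple CNN variants).
search_space = {
    "num_conv_layers": [1, 2, 3],
    "num_filters": [16, 32, 64],
    "kernel_size": [3, 5],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def sample_architecture(space):
    """2. Search strategy (random search): pick one value per dimension."""
    return {name: random.choice(choices) for name, choices in space.items()}

def evaluate_architecture(arch):
    """3. Performance estimation (placeholder): in a real setting this would
    train the candidate and return its validation score; here it is a dummy number."""
    return random.random()

best_arch, best_score = None, float("-inf")
for trial in range(20):  # search budget
    arch = sample_architecture(search_space)
    score = evaluate_architecture(arch)
    if score > best_score:
        best_arch, best_score = arch, score

print("Best architecture found:", best_arch, "with score:", best_score)
```

In practice, the dummy evaluation would be replaced by actual training and validation of each candidate, and the random sampler by one of the tuners described later in this post.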
NAS is a high-dimensional and time-consuming problem. A solution that optimizes and parallelizes the search for the best architectures for a given application can therefore save time and money for both companies and researchers. For this reason, Activeeon recently integrated the NAS technique into Machine Learning Open Studio (MLOS), which allows engineers and researchers to easily automate and orchestrate AI-based workflows, scaling up with parallel and distributed execution.
Common building blocks used to define a search space typically include layer operations (e.g., convolution, pooling, and fully connected layers), connection patterns such as skip connections, activation functions, and architectural hyperparameters such as the number of layers, the number of filters, and the kernel size.
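As an illustration of how such building blocks combine into a candidate network, here is a minimal sketch that samples a few of these choices and assembles a small convolutional model. It assumes TensorFlow/Keras is installed and is not MLOS code; the specific settings (32x32 RGB inputs, 10 output classes) are arbitrary.

```python
import random
import tensorflow as tf

# Sample values for a few common building blocks (illustrative choices only).
num_conv_blocks = random.choice([1, 2, 3])
num_filters = random.choice([16, 32, 64])
kernel_size = random.choice([3, 5])
dense_units = random.choice([64, 128])

blocks = [tf.keras.Input(shape=(32, 32, 3))]
for _ in range(num_conv_blocks):
    # Convolution + pooling block
    blocks.append(tf.keras.layers.Conv2D(num_filters, kernel_size,
                                         padding="same", activation="relu"))
    blocks.append(tf.keras.layers.MaxPooling2D())
blocks.append(tf.keras.layers.Flatten())
blocks.append(tf.keras.layers.Dense(dense_units, activation="relu"))  # fully connected block
blocks.append(tf.keras.layers.Dense(10, activation="softmax"))        # output layer

model = tf.keras.Sequential(blocks)
model.summary()  # inspect the sampled candidate architecture
```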
Finally, use our auto-ml-optimization catalog.
The figure below shows an overview of the auto-ml-optimization catalog.
The auto-ml-optimization catalog contains six search strategy algorithms that search for the best parameters/architecture within a search space defined as a JSON file. Below, we briefly describe the different tuners offered by these search algorithms.
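Before looking at the tuners, here is a minimal, hypothetical example of such a JSON search-space file, written from Python. The parameter names and the sampling keywords (choice, uniform, loguniform) are illustrative assumptions, not the exact schema expected by the MLOS workflows.

```python
import json

# Illustrative search space; the exact format expected by the
# auto-ml-optimization workflows may differ from this sketch.
search_space = {
    "num_conv_layers": {"choice": [1, 2, 3]},
    "num_filters":     {"choice": [16, 32, 64]},
    "kernel_size":     {"choice": [3, 5]},
    "dropout_rate":    {"uniform": [0.0, 0.5]},
    "learning_rate":   {"loguniform": [1e-4, 1e-1]},
}

with open("search_space.json", "w") as f:
    json.dump(search_space, f, indent=2)
```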
The choice of the tuner depends on the following aspects:
To help users track the progress and status of the model search within the specified search space, we offer the job analytics interface. It enables you to quickly identify the best neural architectures and view model-specific metrics.
MLOS also includes a data-visualization catalog. It offers a large set of plots that can be organized programmatically or through the UI. These plots can be used to create dashboards for live, real-time data, to inspect the results of experiments, or to debug experimental code. The data-visualization catalog provides a fast, easy, and practical way to execute different workflows that generate these visualizations, which are automatically cached by TensorBoard and the Visdom server. However, other visualization libraries can be integrated as well. Please refer to the data visualization documentation for more details on how to use it.
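For illustration, here is how per-architecture loss values can be logged so that TensorBoard displays one curve per candidate. This is generic TensorFlow 2 logging code under assumed names (the architectures and loss values are made up), not the MLOS workflow itself.

```python
import tensorflow as tf

# Hypothetical per-architecture losses; in a real run these come from training.
losses_per_architecture = {
    "arch_0": [0.9, 0.6, 0.4],
    "arch_1": [1.0, 0.7, 0.5],
}

for arch_name, losses in losses_per_architecture.items():
    # One log directory per candidate architecture, so TensorBoard shows one curve each.
    writer = tf.summary.create_file_writer(f"logs/{arch_name}")
    with writer.as_default():
        for step, loss in enumerate(losses):
            tf.summary.scalar("loss", loss, step=step)
    writer.flush()

# Then run: tensorboard --logdir logs
```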
The figure below shows a TensorBoard graph of the loss obtained by each neural network architecture.
To try it yourself, access the Activeeon online Try Platform and create a free user account.
Check out the MLOS documentation for more details.
If you have any questions or feedback, feel free to send us an email at support@activeeon.com. Our team will be very pleased to receive your feedback or to help you in any way possible.
References