1. Overview

1.1. What is Machine Learning Open Studio (ML-OS)?

Machine Learning Open Studio (ML-OS) is an interactive graphical interface that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. It provides a rich set of generic machine learning tasks that can be connected to build basic or complex machine learning workflows for various use cases such as fraud detection, text analysis, online offer recommendations, prediction of equipment failures, and facial expression analysis. These tasks are open source and can be easily customized according to your needs. ML-OS can schedule and orchestrate executions while optimising the use of computational resources, and resource usage (e.g. CPU, GPU, local and remote nodes) can be easily monitored.

(Figure: ML-OS overview)

1.2. Glossary

The following terms are used throughout the documentation:

ProActive Workflows & Scheduling

The full distribution of ProActive for Workflows & Scheduling. It contains the ProActive Scheduler server, the REST & Web interfaces, and the command line tools. It is the commercial product name.

ProActive Scheduler

Can refer to any of the following:

  • A complete set of ProActive components.

  • An archive that contains a released version of ProActive components, for example activeeon_enterprise-pca_server-OS-ARCH-VERSION.zip.

  • A set of server-side ProActive components installed and running on a Server Host.

Resource Manager

ProActive component that manages ProActive Nodes running on Compute Hosts.

Scheduler

ProActive component that accepts Jobs from users, orders the constituent Tasks according to priority and resource availability, and eventually executes them on the resources (ProActive Nodes) provided by the Resource Manager.

Please note the difference between Scheduler and ProActive Scheduler.

REST API

ProActive component that provides a RESTful API for the Resource Manager, the Scheduler and the Catalog.

Resource Manager Web Interface

ProActive component that provides a web interface to the Resource Manager.

Scheduler Web Interface

ProActive component that provides a web interface to the Scheduler.

Workflow Studio

ProActive component that provides a web interface for designing Workflows.

Catalog

ProActive component that provides storage and versioning of Workflows and other ProActive Objects through a REST API. It is also possible to query the Catalog for specific Workflows.

Job Planner

A ProActive component providing advanced scheduling options for Workflows.

Bucket

ProActive notion used with the Catalog to refer to a specific collection of ProActive Objects and in particular ProActive Workflows.

Server Host

The machine on which ProActive Scheduler is installed.

SCHEDULER_ADDRESS

The IP address of the Server Host.

ProActive Node

One ProActive Node can execute one Task at a time. This concept is often tied to the number of cores available on a Compute Host: we assume a Task consumes one core (more is possible), so on a 4-core machine you might want to run 4 ProActive Nodes. One (by default) or more ProActive Nodes can be executed in a Java process on a Compute Host and will communicate with the ProActive Scheduler to execute Tasks.

Compute Host

Any machine which is meant to provide computational resources to be managed by the ProActive Scheduler. One or more ProActive Nodes need to be running on the machine for it to be managed by the ProActive Scheduler.

Examples of Compute Hosts: the Server Host itself, a desktop machine, a cluster node, or a virtual machine in the cloud.

PROACTIVE_HOME

The path to the extracted archive of ProActive Scheduler release, either on the Server Host or on a Compute Host.

Workflow

User-defined representation of a distributed computation. Consists of the definitions of one or more Tasks and their dependencies.

Generic Information

Additional information attached to Workflows.

Job

An instance of a Workflow submitted to the ProActive Scheduler. Sometimes also used as a synonym for Workflow.

Task

A unit of computation handled by ProActive Scheduler. Both Workflows and Jobs are made of Tasks.

ProActive Agent

A daemon installed on a Compute Host that starts and stops ProActive Nodes according to a schedule, restarts ProActive Nodes in case of failure and enforces resource limits for the Tasks.

2. Get Started

To submit your first Machine Learning workflow to the ProActive Scheduler, install it in your environment (default credentials: admin/admin) or just use our demo platform, try.activeeon.com.

ProActive Scheduler provides comprehensive interfaces that allow you to:

  • design Workflows in the Workflow Studio,

  • submit Workflows and monitor their execution in the Scheduler Web Interface,

  • manage computational resources in the Resource Manager Web Interface.

We also provide REST and command line interfaces for advanced users: try.activeeon.com/rest

3. Create a First Predictive Solution

Suppose you need to predict house prices based on the following information (features) provided by the estate agency:

  • CRIM per capita crime rate by town

  • ZN proportion of residential land zoned for lots over 25,000 sq. ft.

  • INDUS proportion of non-retail business acres per town

  • CHAS Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)

  • NOX nitric oxides concentration (parts per 10 million)

  • RM average number of rooms per dwelling

  • AGE proportion of owner-occupied units built prior to 1940

  • DIS weighted distances to five Boston Employment centres

  • RAD index of accessibility to radial highways

  • TAX full-value property-tax rate per $10,000

  • PTRATIO pupil-teacher ratio by town

  • B 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town

  • LSTAT % lower status of the population

  • MEDV median value of owner-occupied homes in $1000's

Predicting house prices is a complex problem, but we can simplify it a bit for this step-by-step example. We'll show you how to easily create a predictive analytics solution using Machine Learning Open Studio.

3.1. Manage the Canvas

To use Machine Learning Open Studio, you need to add the Machine Learning bucket as the main catalog in the ProActive Studio. This bucket contains a set of generic tasks that enable you to upload and prepare data, train a model and test it.

  1. Open ProActive Workflow Studio home page.

  2. Create a new workflow.

  3. Fill the Workflow General Parameters.

  4. Click on the Catalog menu, then Set Bucket as Main Catalog Menu, and select the machine-learning bucket. This can also be achieved by adding /templates/machine-learning at the end of the URL of the ProActive Workflow Studio.

  5. Click on the Catalog menu, then Add Bucket as Extra Catalog Menu, and select the data-visualization bucket.

  6. Organize your canvas.

Set Bucket as Main Catalog Menu allows the user to change the bucket used to fetch workflows from the Catalog in the Studio. By selecting a bucket, the user changes the content of the main Catalog menu (named after the current bucket) so that workflows from another bucket can be used as templates.

3.2. Upload Data

To upload data into the Workflow, you need a dataset stored in a CSV file.

  1. Once your dataset has been converted to CSV format, upload it to a cloud storage service, for example Amazon S3. For this tutorial, we will use the Boston house prices dataset available at this link: https://s3.eu-west-2.amazonaws.com/activeeon-public/datasets/boston-houses-prices.csv

  2. Drag and drop the Import_Data task from the machine-learning bucket in the ProActive Workflow Studio.

  3. Click on the task, then click General Parameters on the left to change the default parameters of this task.

  4. Set the FILE_URL variable to the S3 link of your dataset.

  5. Set the other parameters according to your dataset format.

This task uploads the data into the workflow, where we can use it for model training and testing.

If you want to skip these steps, you can directly use the Load_Boston_Dataset Task by a simple drag and drop.
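
Under the hood, an import task of this kind boils down to a few lines of pandas. Here is a minimal, hypothetical sketch (FILE_URL and FILE_DELIMITER mirror the task variables; pandas is assumed to be available in the task's execution environment):

    import pandas as pd

    # Mirror the Import_Data task variables (values from this tutorial).
    FILE_URL = "https://s3.eu-west-2.amazonaws.com/activeeon-public/datasets/boston-houses-prices.csv"
    FILE_DELIMITER = ","

    # Read the remote CSV into a dataframe and inspect the first rows.
    dataframe = pd.read_csv(FILE_URL, sep=FILE_DELIMITER)
    print(dataframe.head())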


3.3. Prepare Data

This step consists of preparing the data for the training and testing of the predictive model. In this example, we will simply split our dataset into two separate datasets: one for training and one for testing.

To do this, we use the Split_Data Task from the machine-learning bucket.

  1. Drag and drop the Split_Data Task into the canvas, and connect it to the Import_Data or Load_Boston_Dataset Task.

  2. By default, the ratio is 0.7, meaning that 70% of the dataset will be used for training the model and 30% for testing it.

  3. Click the Split_Data Task and set the TRAIN_SIZE variable to 0.6.
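
Behind the scenes, the split amounts to the classic train/test split; a minimal sketch, assuming scikit-learn and the dataset URL used above:

    import pandas as pd
    from sklearn.model_selection import train_test_split

    dataframe = pd.read_csv(
        "https://s3.eu-west-2.amazonaws.com/activeeon-public/datasets/boston-houses-prices.csv")

    # TRAIN_SIZE = 0.6: 60% of the rows for training, 40% for testing.
    train_df, test_df = train_test_split(dataframe, train_size=0.6, random_state=42)
    print(len(train_df), len(test_df))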


3.4. Train a Predictive Model

Using Machine Learning Open Studio, you can easily create different machine learning models in a single experiment and compare their results. This type of experimentation helps you find the best solution for your problem. You can also enrich the machine-learning bucket by adding new machine learning algorithms, and publish or customize existing tasks according to your requirements, as the tasks are open source.

To change the code of a task, click on it, then click Task Implementation. You can also add new variables to a specific task.

In this step, we will create two different types of models and then compare their scores to decide which algorithm is most suitable for our problem. The Boston dataset used in this example involves predicting the price of houses (a continuous label), so we are dealing with a regression problem.

To solve this problem, we have to choose a regression algorithm to train the predictive model. To see the regression algorithms available in Machine Learning Open Studio, see the ML Regression section in the machine-learning bucket.

For this example, we will use Linear_Regression Task and Support_Vector_Regression Task.

  1. Find the Linear_Regression Task and Support_Vector_Regression Task and drag them into the canvas.

  2. Find the Train_Model Task and drag it twice into the canvas.

  3. Connect the Split_Data Task to the two Train_Model Tasks to give them access to the training data. Then connect the Linear_Regression Task to the first Train_Model Task and the Support_Vector_Regression Task to the second Train_Model Task.

  4. To be able to download the model learned by each algorithm, drag two Download_Model Tasks into the canvas and connect one to each Train_Model Task.
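
Conceptually, the two Train_Model tasks perform the equivalent of the following scikit-learn calls (a sketch; the label column name MEDV is an assumption about the CSV):

    import pandas as pd
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVR

    dataframe = pd.read_csv(
        "https://s3.eu-west-2.amazonaws.com/activeeon-public/datasets/boston-houses-prices.csv")
    train_df, test_df = train_test_split(dataframe, train_size=0.6, random_state=42)

    # Assumption: the MEDV column is the label; the remaining columns are features.
    X_train, y_train = train_df.drop(columns=["MEDV"]), train_df["MEDV"]

    # Train the two competing regression models on the same training data.
    linear_model = LinearRegression().fit(X_train, y_train)
    svr_model = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(X_train, y_train)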


3.5. Test the Predictive Model

To evaluate the two learned predictive models, we will use the testing data that was separated out by the Split_Data Task to score our trained models. We can then compare the results of the two models to see which generated better results.

  1. Find the Predict_Model Task and drag and drop it twice into the canvas.

  2. Connect the first Predict_Model Task to the Train_Model Task that is connected to Support_Vector_Regression Task.

  3. Connect the second Predict_Model Task to the Train_Model Task that is connected to Linear_Regression Task.

  4. Find the Export_Results Task in the Machine Learning bucket and drag and drop it twice into the canvas.

  5. Connect each Export_Results Task to a Predict_Model Task.
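
Continuing the sketch from Section 3.4, the Predict_Model tasks then apply each trained model to the held-out test set, and the comparison could look like:

    # Evaluate both models on the test set; a higher R^2 means a better fit.
    X_test, y_test = test_df.drop(columns=["MEDV"]), test_df["MEDV"]
    print("Linear Regression R^2:", linear_model.score(X_test, y_test))
    print("Support Vector Regression R^2:", svr_model.score(X_test, y_test))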

If you have a pickled file (.pkl) containing a predictive model learned on another platform and you need to test it in Machine Learning Open Studio, you can load it using the Load_Trained_Model Task.

3.6. Run the Experiment and Preview the Results

Now that the workflow is complete, let's execute it:

  1. Click the Execute button in the menu to run the workflow.

  2. Click the Scheduling & Orchestration button to track the workflow execution progress.

  3. Click the Visualization tab and track the progress of your workflow execution (a green check mark appears on each Task when its execution is finished).

  4. Visualize the output logs by clicking the Output tab and checking the Streaming check box.

  5. Click the Tasks tab, select an Export_Results task and click on the Preview tab, then click either Open in browser to preview the results in your browser or Save as file to download the results locally.


4. Customize the Machine Learning Bucket

4.1. Create or Update a ML Task

The Machine Learning Bucket contains various open source tasks that can be easily used with a simple drag and drop.

It is possible to enrich the Machine Learning Bucket by adding your own tasks (see Section 4.3).

It is also possible to customize the code of the generic Machine Learning tasks. In this case, drag and drop the targeted task and modify its code in the Task Implementation section.

It is also possible to add and/or delete variables of each task, set your own fork environments, etc. More details are available in the ProActive User Guide.

4.2. Set the Fork Environment

A fork execution environment is a new Java Virtual Machine (JVM) started exclusively to execute a task. Starting a new JVM means that the task will run in a fresh environment, which can be set up by the creator of the task: a new classpath, new system properties and more customization.

We use a Docker fork environment for all the Machine Learning tasks; activeeon/dlm3 is used as the Docker container for all of them. If your task needs Machine Learning libraries which are not available in this container, use your own Docker container or an appropriate environment with the needed libraries.

The use of Docker containers is recommended so that other tasks are not affected by the changes: Docker containers provide isolation, and the host machine's software stays the same. More details are available in the ProActive User Guide.
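
A quick way to check whether the container you selected provides the libraries your task needs is to probe for them at the top of the task. A small, hypothetical sketch:

    import importlib

    # Probe the fork environment (e.g. the activeeon/dlm3 container) for the
    # libraries this task relies on, and report their versions.
    for library in ("numpy", "pandas", "sklearn"):
        try:
            module = importlib.import_module(library)
            print(library, getattr(module, "__version__", "version unknown"))
        except ImportError:
            print(library, "is MISSING - switch to a container that provides it")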

4.3. Publish a ML Task

The Catalog menu allows a user to publish newly created or updated tasks to the Machine Learning Bucket: click on the Catalog menu, then Publish current Workflow to the Catalog, and choose the machine-learning bucket to store your newly added workflow. If a Task with the same name already exists in the machine-learning bucket, it will be updated. We recommend submitting Tasks with a commit message for easier differentiation between the submitted versions.

More details are available in the ProActive User Guide.

4.4. Create a ML Workflow

The quickstart tutorial on try.activeeon.com shows you how to build a simple workflow using ProActive Studio.

We show below an example of a workflow created with the Studio:

(Figure: ML Workflow Example)

On the left, the General Parameters of the workflow are displayed with the following information:

  • Name: the name of the workflow.

  • Project: the name of the project to which the workflow belongs.

  • Description: the textual description of the workflow.

  • Documentation: if the workflow has a Generic Information named "Documentation", then its URL value is displayed as a link.

  • Job Priority: the priority assigned to the workflow. It is by default set to NORMAL, but can be increased or decreased once the job is submitted.

The workflow shown above is available in the 'machine-learning-workflows' bucket.

5. Machine Learning Workflows Examples

Machine Learning Open Studio provides a fast, easy and practical way to execute different workflows using the machine-learning bucket. We present useful machine learning workflows for different applications in the following sub-sections.

To test these workflows, add the machine-learning-workflows bucket as an extra catalog in the ProActive Studio:

  1. Open ProActive Workflow Studio home page.

  2. Create a new workflow.

  3. Click on Catalog menu then Add Bucket as Extra Catalog Menu and select machine-learning-workflows bucket.

  4. Open this added extra catalog menu and drag and drop the workflow example of your choice.

  5. Execute the chosen workflow, track its progress and preview its results.

More details about these workflows are available in ActiveEon's Machine Learning Documentation.

5.1. Basic Machine Learning

The following workflows present some basic machine learning examples. They are built using the generic machine learning and data visualization tasks available in the Machine Learning and Data Visualization buckets.

Diabetics_Detection_using_K_means: trains and tests a clustering model using the K-Means algorithm.

House_Price_Prediction_using_Linear_Regression: trains and tests a regression model using the Linear Regression algorithm.

Iris_Flowers_Classification_using_Logistic_Regression: trains and tests a predictive model using the Logistic Regression algorithm.

Movies_Recommendation: creates a movie recommendation engine using a collaborative filtering algorithm.

5.2. Image Analysis

The following workflows present useful computer vision applications using Convolutional Neural Networks based on deep learning for image recognition, object detection, anomaly detection, and image segmentation. Open source libraries such as TensorFlow, Keras, Caffe, OpenCV, PyTorch and scikit-learn are used as backends for the AI workflows.

Keras_Image_Classification: classifies an input image using deep ConvNets (specifically, VGG16) pre-trained on the ImageNet dataset.

Pytorch_Image_Object_Segmentation: returns the horse segments of a given input image.

Pytorch_Train_Image_Object_Segmentation: trains an image segmentation algorithm using PyTorch to segment horses. It segments horses in the input image using a deep neural network (DNN).

Tensorflow_Parallel_Image_Prediction: predicts three different images of flower species in parallel.

Tensorflow_Image_Prediction: returns the flower species of a given input image.

Tensorflow_Train_Image_Classifier: trains a deep ConvNet (specifically, Inception) to recognize flower species.

YOLO_Image_Object_Detection: detects real-world objects in an image with the YOLO library using a pre-trained model.

YOLO_Image_Anomaly_Detection: checks whether an anomaly exists in a certain scene using the YOLO library. As an example, we show a scene where only people are allowed on a pedestrian street; anything detected other than people is considered an anomaly.

YOLO_Demo_Object_Detection: detects real-world objects in an image with the YOLO library using a pre-trained model.

5.3. Log Analysis

The following workflows are designed to detect anomalies in log files. They are constructed using generic tasks which are available on the machine-learning and data-visualization buckets.

Anomaly_Detection_in_Apache_Logs: detects intrusions in Apache logs using a predictive model trained with the Support Vector Machines algorithm.

Anomaly_detection_in_HDFS_Blocks: trains and tests an anomaly detection model for detecting anomalies in HDFS blocks.

Anomaly_detection_in_HDFS_Nodes: trains and tests an anomaly detection model for detecting anomalies in HDFS nodes.

6. References

6.1. Machine Learning Bucket

The machine-learning bucket contains diverse generic machine learning tasks that enable you to easily compose workflows for learning and testing predictive models. This bucket can be customized according to your needs: you can add new tasks or update the existing ones.

6.1.1. Public Datasets

Load_Boston_Dataset

Task Overview: Load and return the Boston House-Prices dataset.

Table 1. Boston Dataset Description

  • Samples total: 506

  • Dimensionality: 13

  • Features: real, positive

  • Targets: real, 5. - 50.

How to use this task:

  • Boston House-Prices is a regression dataset; use it only with a regression algorithm, such as Linear Regression or Support Vector Regression.

  • After this task, you can use the Split_Data task to divide the dataset into training and testing sets.

More information about this dataset can be found here.
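
For reference, this task is essentially a thin wrapper around the scikit-learn loader; a sketch (load_boston exists in the scikit-learn versions this documentation targets, though it has been removed from recent releases):

    from sklearn.datasets import load_boston

    # Load the Boston house-prices data: X holds 506 samples x 13 features,
    # y holds the continuous target (median value in $1000's).
    X, y = load_boston(return_X_y=True)
    print(X.shape, y.min(), y.max())
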
Load_Iris_Dataset

Task Overview: Load and return the iris dataset.

Table 2. Iris Dataset Description

  • Samples total: 150

  • Samples per class: 50

  • Classes: 3

  • Dimensionality: 4

  • Features: real, positive

How to use this task:

  • Iris is a classification dataset; use it only with a classification algorithm, such as Support Vector Machines or Logistic Regression.

  • After this task, you can use the Split_Data task to divide the dataset into training and testing sets.

More information about this dataset can be found here.
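
As with the Boston dataset, a sketch of the equivalent scikit-learn call:

    from sklearn.datasets import load_iris

    # Load the Iris data: 150 samples x 4 features, 3 classes of 50 samples each.
    X, y = load_iris(return_X_y=True)
    print(X.shape, set(y))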

6.1.2. Input and Output Data

Download_Model

Task Overview: Download a trained model to your computer.

How to use this task: Should be used after the Train_Model or Train_Clustering_Model tasks.

Export_Results

Task Overview: Export the results of the predictions generated by a classification, clustering or regression algorithm.

Task Variables:

Table 3. Export_Results_Task variables

  • OUTPUT_FILE (String [HTML or CSV]): Converts the prediction results to an HTML or CSV file.

How to use this task: Should be used after Predict_Model or Predict_Clustering_Model tasks.
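
The OUTPUT_FILE choice maps naturally onto the two pandas writers. A minimal, hypothetical sketch (the small dataframe stands in for the upstream task's predictions):

    import pandas as pd

    # A stand-in for the predictions produced by an upstream Predict task.
    predictions = pd.DataFrame({"prediction": [24.0, 21.6, 34.7]})

    predictions.to_csv("results.csv", index=False)  # OUTPUT_FILE = CSV
    predictions.to_html("results.html")             # OUTPUT_FILE = HTML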

Import_Data

Task Overview: Load data from external sources.

Task Variables:

Table 4. Import_Data_Task variables

  • FILE_URL (String): The URL of the CSV file to import.

  • FILE_DELIMITER (String): The delimiter to use.

  • IS_LABELED_DATA (Boolean [True or False]): True if your data is labeled.

Your CSV file should be in a table format. See the example below.
(Figure: CSV file organization)

Load_Trained_Model

Task Overview: Load a trained model and use it to make predictions on new incoming data.

Task Variables:

Table 5. Load_Trained_Model_Task variables

  • MODEL_URL (String): The URL from which to load your trained model. Default: https://s3.eu-west-2.amazonaws.com/activeeon-public/models/pima-indians-diabetes.model

How to use this task: Should be used before Predict_Model or Predict_Clustering_Model to make predictions.
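
A sketch of what loading a remote pickled model involves, using the default MODEL_URL from the table above (the model must have been pickled with compatible library versions):

    import pickle
    import urllib.request

    MODEL_URL = ("https://s3.eu-west-2.amazonaws.com/activeeon-public/"
                 "models/pima-indians-diabetes.model")

    # Download the serialized model and deserialize it for reuse.
    with urllib.request.urlopen(MODEL_URL) as response:
        model = pickle.loads(response.read())
    print(type(model))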

Log_Parser

Task Overview: Convert an unstructured raw log file into a structured one by matching a group of event patterns.

Task Variables:

Table 6. Log_Parser_Task variables

  • LOG_FILE (String): The URL of the raw log file that you need to parse.

  • PATTERNS_FILE (String): The URL of the CSV file that contains the RegEx expression of each possible pattern and its corresponding variables. The CSV file must contain three columns (see the example below):

    A. id_pattern (Integer): the identifier of each pattern.

    B. Pattern (RegEx expression): the regex expression of each pattern.

    C. Variables (String): the name of each variable included in the pattern. N.B.: use the symbol '*' for variables that you need to neglect (e.g. in the example below the 5th variable is neglected). All variables specified in each regex expression have to be listed in the "Variables" column in the right order (use ',' to separate the variable names).

  • STRUCTURED_LOG (String [CSV or HTML]): The extension of the file where the resulting structured logs will be saved.

(Figure: patterns file example)

How to use this task: Could be connected with the Filter_Data and Feature_Vector_Extractor tasks.
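
To make the matching logic concrete, here is a hypothetical, much-simplified sketch of what a pattern-based parser does (the file names and the single Apache-style pattern are made up for the example):

    import csv
    import re

    # One hypothetical pattern: an Apache-style access log line.
    PATTERNS = {
        1: re.compile(r'(?P<ip>\S+) \S+ \S+ \[(?P<date>[^\]]+)\] "(?P<request>[^"]*)"'),
    }

    structured_rows = []
    with open("raw.log") as log_file:
        for line in log_file:
            for pattern_id, regex in PATTERNS.items():
                match = regex.match(line)
                if match:  # keep the pattern id plus the captured variables
                    structured_rows.append({"id_pattern": pattern_id, **match.groupdict()})
                    break

    with open("structured_logs.csv", "w", newline="") as output:
        writer = csv.DictWriter(output, fieldnames=["id_pattern", "ip", "date", "request"])
        writer.writeheader()
        writer.writerows(structured_rows)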

6.1.3. Data Preprocessing

Add_Data

Task Overview: Concatenate the newly added data to the original input data.

Task Variables:

Table 7. Add_Data_Task variables

  • FILE_URL (String): The URL of the data that you need to add.

  • FILE_DELIMITER (String): The delimiter to use.

  • IS_LABELED_DATA (Boolean [True or False]): True if the data is labeled.

  • IGNORE_INDEX (Boolean [True or False]): False if you need to ignore the index of the input data.

How to use this task: Should be used after Import_Data task.

More details about the source code of this task can be found here.
Add_Label

Task Overview: Add a new label column to the original input data.

Task Variables:

Table 8. Add_Label_Task variables

  • FILE_URL (String): The URL of the file containing the labels that you need to concatenate.

  • FILE_DELIMITER (String): The delimiter to use.

How to use this task: Could be used after Feature_Vector_Extractor and Import_Data tasks.

More details about the source code of this task can be found here.
Filter_Data

Task Overview: Query the columns of your data with a boolean expression.

Task Variables:

Table 9. Filter_Data_Task variables

  • QUERY (String): The query string to evaluate.

  • FILTERED_FILE_OUPUT (String [CSV or HTML]): The extension of the file where the resulting filtered data will be saved.

More details about the source code of this task can be found here.
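
The QUERY variable is a boolean expression over column names, in the spirit of pandas.DataFrame.query. A sketch, assuming the Boston CSV used earlier and its standard column names:

    import pandas as pd

    dataframe = pd.read_csv(
        "https://s3.eu-west-2.amazonaws.com/activeeon-public/datasets/boston-houses-prices.csv")

    # Keep only the rows matching the boolean expression, then export them.
    QUERY = "RM > 6 and AGE < 50"
    filtered = dataframe.query(QUERY)
    filtered.to_csv("filtered_data.csv", index=False)
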
Split_Data

Task Overview: Separate data into train and test subsets.

Task Variables:

Table 10. Split_Data_Task variables

  • TRAIN_SIZE (Float): The proportion of the dataset to use for training. Must be a float strictly between 0.0 and 1.0 (the values 0.0 and 1.0 are excluded). Default: 0.7.

How to use this task: Should be used before Train and Predict tasks.

More details about the source code of this task can be found here.

6.1.4. Feature Extraction

Feature_Vector_Extractor

Task Overview: Encode structured data into numerical feature vectors whereby machine learning models can be applied.

Task Variables:

Table 11. Feature_Vector_Extractor_Task variables

  • SESSION_COLUMN (String): The ID of the entity that you need to represent (to group by).

  • FILE_OUT_FEATURES (String [CSV or HTML]): The extension of the file where the resulting features will be saved.

  • PATTERN_COLUMN (String): The index of the column containing the log patterns [specific to feature extraction from logs].

  • PATTERNS_COUNT_FEATURES (Boolean [True or False]): True if you need to count the number of occurrences of each pattern per session.

  • STATE_VARIABLES (String): The variables to consider when extracting features according to their content. N.B.: separate the variables with a comma ','.

  • COUNT_VARIABLES (String): The variables to consider when counting their distinct content. N.B.: separate the variables with a comma ','.

  • STATE_COUNT_FEATURES_VARIABLES (Boolean [True or False]): True if you need to extract state and count features per session.

How to use this task: Could be connected with Add_Label if you need to train a model using supervised learning algorithms or with Train_Clustering_Model if you need to train a model using unsupervised machine learning techniques.
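
To make PATTERNS_COUNT_FEATURES concrete, here is a tiny, hypothetical sketch of counting pattern occurrences per session:

    import pandas as pd

    # Toy structured logs: each row is one event with its session and pattern id.
    logs = pd.DataFrame({
        "session": ["s1", "s1", "s1", "s2"],
        "pattern": [1, 2, 1, 2],
    })

    # One row per session, one column per pattern, values = occurrence counts.
    features = logs.groupby(["session", "pattern"]).size().unstack(fill_value=0)
    print(features)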

Time_Series_Feature_Extractor

Task Overview: Extract features by considering the temporal distribution of data.

Task Variables:

Table 12. Time_Series_Feature_Extractor_Task variables

  • SESSION_COLUMN (String): The column name of the entity that you need to represent.

  • SLIDING_STEP (Integer, in minutes): The number of minutes to jump when moving from one window to the next.

  • WINDOW_SIZE (Integer, in minutes): The window size in minutes.

  • START_DATE_TIME (Date): The date from which the time series vectors representing each entity [session] will be extracted.

  • PATTERN_COLUMN (String): The column name containing the log patterns [specific to feature extraction from logs].

  • PATTERNS_COUNT_FEATURES (Boolean [True or False]): True if you need to count the number of occurrences of each pattern per session and window.

  • STATE_VARIABLES (String): The names of the columns to use as features by counting the occurrences of their distinct content per session and window. N.B.: separate the variables with a comma ','.

  • STATE_COUNT_FEATURES_VARIABLES (Boolean [True or False]): True if you need to extract the state and count features already specified in COUNT_VARIABLES and STATE_VARIABLES.

How to use this task: Should be connected with specific machine learning algorithms suitable to this representation.
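
As an illustration of windowed counting, the following hypothetical sketch counts pattern occurrences per session over fixed 15-minute windows (for simplicity the window advances by its own size, i.e. SLIDING_STEP equals WINDOW_SIZE):

    import pandas as pd

    # Toy structured logs with timestamps.
    logs = pd.DataFrame({
        "timestamp": pd.to_datetime(
            ["2018-01-01 00:01", "2018-01-01 00:05", "2018-01-01 00:20"]),
        "session": ["s1", "s1", "s1"],
        "pattern": [1, 2, 1],
    })

    # Count occurrences of each pattern per (session, 15-minute window).
    windowed = (logs
                .groupby(["session", pd.Grouper(key="timestamp", freq="15T"), "pattern"])
                .size()
                .unstack(fill_value=0))
    print(windowed)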

6.1.5. ML Classification

Gaussian_Naive_Bayes

Task Overview: The Naive Bayes classifier belongs to a family of simple probabilistic classifiers based on applying Bayes' theorem with strong (naive) independence assumptions between the features.

Task Variables:

Table 13. Gaussian_Naive_Bayes_Task variables

  • PRIORS (array-like, shape (n_classes)): Prior probabilities of the classes. If specified, the priors are not adjusted according to the data.

How to use this task: Should be connected with Train_Model or Train_Clustering_Model and Predict_Model or Predict_Clustering_Model.

More information about this task can be found here.
Logistic_Regression

Task Overview: Logistic Regression is a regression model where the Dependent Variable (DV) is categorical.

Task Variables:

Table 14. Logistic_Regression_Task variables

  • PENALTY (String, default='l2'): The norm used in the penalization. The 'newton-cg', 'sag' and 'lbfgs' solvers support only l2 penalties.

  • SOLVER (String, one of 'newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga'; default='liblinear'): The algorithm to use in the optimization problem.

  • MAX_ITERATIONS (Integer, default=100): Maximum number of iterations taken for the solvers to converge. Useful only for the newton-cg, sag and lbfgs solvers.

  • N_JOBS (Integer, default=1): Number of CPU cores used. If -1, all cores are used.

How to use this task: Should be connected with Train_Model or Train_Clustering_Model and Predict_Model or Predict_Clustering_Model.

More information about the source code of this task can be found here.
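
The task variables map directly onto the scikit-learn constructor; a sketch with the defaults from the table above:

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression

    # PENALTY, SOLVER, MAX_ITERATIONS and N_JOBS map onto constructor arguments.
    model = LogisticRegression(penalty="l2", solver="liblinear", max_iter=100, n_jobs=1)

    X, y = load_iris(return_X_y=True)
    print(model.fit(X, y).score(X, y))
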
Support_Vector_Machines

Task Overview: Support vector machines are supervised learning models with associated learning algorithms that analyze data used for classification.

Task Variables:

Table 15. Support_Vector_Machines_Task variables

  • C (Float, optional, default=1.0): Penalty parameter C of the error term.

  • KERNEL (String, optional, default='rbf'): The kernel type used in the algorithm. Must be one of 'linear', 'poly', 'rbf', 'sigmoid', 'precomputed' or a callable.

How to use this task: Should be connected with Train_Model or Train_Clustering_Model and Predict_Model or Predict_Clustering_Model.

More information about the source of this task can be found here.

6.1.6. ML Regression

Bayesian_Ridge_Regression

Task Overview: Bayesian linear regression is an approach to linear regression in which the statistical analysis is undertaken within the context of Bayesian inference.

Task Variables:

Table 16. Bayesian_Ridge_Regression_Task variables

  • N_ITERATIONS (Integer, optional, default=300): Maximum number of iterations.

  • ALPHA_1 (Float, default=1.e-6): Hyper-parameter: shape parameter for the Gamma distribution prior over the alpha parameter.

  • ALPHA_2 (Float, default=1.e-6): Hyper-parameter: inverse scale parameter (rate parameter) for the Gamma distribution prior over the alpha parameter.

  • LAMBDA_1 (Float, default=1.e-6): Hyper-parameter: shape parameter for the Gamma distribution prior over the lambda parameter.

  • LAMBDA_2 (Float, default=1.e-6): Hyper-parameter: inverse scale parameter (rate parameter) for the Gamma distribution prior over the lambda parameter.

How to use this task: Should be connected with Train_Model or Train_Clustering_Model and Predict_Model or Predict_Clustering_Model.

More information about the source of this task can be found here.
Linear_Regression

Task Overview: Linear regression is a linear approach for modeling the relationship between a scalar dependent variable y and one or more explanatory variables (or independent variables) denoted X.

Task Variables:

Table 17. Linear_Regression_Task variables

  • N_JOBS (Integer, default=1): The number of jobs to use for the computation. If -1, all CPUs are used.

How to use this task: Should be connected with Train_Model or Train_Clustering_Model and Predict_Model or Predict_Clustering_Model.

More information about the source of this task can be found here.
Support_Vector_Regression

Task Overview: Support Vector Regression is a supervised learning model with associated learning algorithms that analyze data used for regression.

Task Variables:

Table 18. Support_Vector_Regression_Task variables

  • C (Float, optional, default=1.0): Penalty parameter C of the error term.

  • KERNEL (String, optional, default='rbf'): The kernel type used in the algorithm. Must be one of 'linear', 'poly', 'rbf', 'sigmoid', 'precomputed' or a callable.

  • EPSILON (Float, optional, default=0.1): The epsilon-tube within which no penalty is associated in the training loss function with points predicted within a distance epsilon from the actual value.

How to use this task: Should be connected with Train_Model or Train_Clustering_Model and Predict_Model or Predict_Clustering_Model.

More information about the source of this task can be found here.

6.1.7. ML Clustering

K_Means

Task Overview: K-means clustering aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean, serving as a prototype of the cluster.

Task Variables:

Table 19. K_Means_Task variables

  • N_CLUSTERS (Integer, optional, default=8): The number of clusters to form as well as the number of centroids to generate.

  • MAX_ITERATIONS (Integer, optional, default=300): Maximum number of iterations of the k-means algorithm for a single run.

  • N_JOBS (Integer, optional, default=1): The number of jobs to use for the computation. This works by computing each of the n_init runs in parallel. If -1, all CPUs are used; if 1, no parallel computing code is used.

How to use this task: Should be connected with Train_Model or Train_Clustering_Model and Predict_Model or Predict_Clustering_Model.

More information about the source of this task can be found here.
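
A sketch of the underlying scikit-learn call (N_CLUSTERS is lowered to 2 to fit the toy data):

    import numpy as np
    from sklearn.cluster import KMeans

    # Two obvious blobs; K-means should separate them cleanly.
    X = np.array([[1.0, 2.0], [1.5, 1.8], [8.0, 8.0], [8.2, 7.9]])
    model = KMeans(n_clusters=2, max_iter=300).fit(X)
    print(model.labels_)  # cluster assignment of each observation
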
Mean_Shift

Task Overview: Mean shift is a non-parametric feature-space analysis technique for locating the maxima of a density function.

Task Variables:

Table 20. Mean_Shift_Task variables

  • CLUSTER_ALL (Boolean [True or False]): If True, all points are clustered, even those orphans that are not within any kernel; orphans are assigned to the nearest kernel. If False, orphans are given cluster label -1.

  • N_JOBS (Integer, default=1): The number of jobs to use for the computation.

How to use this task: Should be connected with Train_Model or Train_Clustering_Model and Predict_Model or Predict_Clustering_Model.

More information about the source of this task can be found here.

6.1.8. Train

Train_Clustering_Model

Task Overview: Train a clustering model.

How to use this task: Should be used after a clustering algorithm, such as K_Means or Mean_Shift.

More information about the source of this task can be found here.
Train_Model

Task Overview: Train a model using a classification or regression algorithm.

How to use this task: Should be used after a classification or regression algorithm, such as Support_Vector_Machines, Gaussian_Naive_Bayes and Linear_Regression.

More information about the source of this task can be found here.
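
Conceptually, the train tasks fit the upstream algorithm on the training data and serialize the result so that downstream tasks (Predict_Model, Download_Model) can reuse it. A minimal sketch:

    import pickle

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression

    # Fit the algorithm received from the upstream task on the training data.
    X, y = load_iris(return_X_y=True)
    model = LogisticRegression(solver="liblinear").fit(X, y)

    # Serialize the fitted model so a Predict or Download task can reuse it.
    with open("trained.model", "wb") as model_file:
        pickle.dump(model, model_file)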

6.1.9. Predict

Predict_Clustering_Model

Task Overview: Generate predictions using a trained model.

How to use this task: Should be used after the Train_Clustering_Model Task.

More information about the source of this task can be found here.
Predict_Model

Task Overview: Generate predictions using a trained model.

How to use this task: Should be used after the Train_Model Task.

More information about the source of this task can be found here.

6.2. Data Visualization Bucket

The Data Visualization bucket integrates generic tasks that can be easily used to broadcast visualizations of the analytic results provided by ML tasks. These visualizations are created, organized, and shared using Visdom, a flexible tool from Facebook Research.

It provides a large set of plots that can be organized programmatically or through the UI. These plots can be used to create dashboards for both live and real-time data, inspect results of experiments, or debug experimental code.

The Visdom bucket provides a fast, easy and practical way to execute different workflows generating these diverse visualizations, which are automatically cached by the Visdom server.

6.2.1. Visdom

Bind_or_Start_Visdom_Service

Task Overview: Bind to and/or start the Visdom server.

Task Variables:

Table 21. Bind_or_Start_Visdom_Service_Task variables

  • service_model (String, default="http://models.activeeon.com/pca/visdom"): The Visdom service to start. The Visdom service available in ProActive Cloud Automation is used by default.

  • instance_name (String, default="visdom-server-1"): The instance name of the server used to broadcast the visualizations.

If two workflows use the same service instance name, their generated plots will be created on the same service instance.
Visdom_Client_Example

Task Overview: Connect to the Visdom server and plot a text message.

Task Variables:

To be adapted according to the plots that you need to visualize.

How to use this task: Customize this task by putting the source code that generates your plots in the Task Implementation section.

This task has to be connected to the Bind_or_Start_Visdom_Service Task. The Visdom server should be up in order to be able to broadcast visualizations.
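
A minimal sketch of such a client, assuming the visdom Python package and a server reachable on the default host and port (in ML-OS these would point at the Bind_or_Start_Visdom_Service instance):

    from visdom import Visdom

    # Connect to a running Visdom server and broadcast a simple text plot.
    viz = Visdom(server="http://localhost", port=8097)
    viz.text("Hello from a Visdom client task!")
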
Terminate_Visdom_Service

Task Overview: Stop and remove the Visdom Server.

Task Variables:

Table 22. Terminate_Visdom_Service_Task variables

  • service_model (String, default="http://models.activeeon.com/pca/visdom"): The Visdom service to terminate. The Visdom service available in ProActive Cloud Automation is used by default.

  • instance_name (String, default="visdom-server-1"): The instance name of the server used to broadcast the visualizations.

How to use this task: This task can be connected to Visdom_Client_Example: visualize, then stop the Visdom service.

This task will immediately stop the service.
Visdom_Terminate_Service_Until_Validation

Task Overview: Wait for the user to validate the job before stopping and removing the Visdom server.

Task Variables:

Table 23. Visdom_Terminate_Service_Until_Validation_Task variables

  • service_model (String, default="http://models.activeeon.com/pca/visdom"): The Visdom service to terminate. The Visdom service available in ProActive Cloud Automation is used by default.

  • instance_name (String, default="visdom-server-1"): The instance name of the server used to broadcast the visualizations.

How to use this task: This task can be connected to Visdom_Client_Example: visualize, then stop the Visdom service.

This task will stop the service once the user has validated this requirement using the ProActive Cloud Automation portal.
Visdom_Visualize_Results

Task Overview: Plot the different results obtained by a predictive model using Visdom.

Task Variables:

  • TARGETED_CLASS (String): The targeted class that you need to track.

How to use this task: This task has to be connected to the Bind_or_Start_Visdom_Service Task. The Visdom server should be up in order to be able to broadcast visualizations.

6.2.2. Visdom Workflows

The following workflows present some examples using the Visdom service to visualize the results obtained while training and testing predictive models.

Visdom_Plot_Example: returns numerous examples of plots covered by Visdom.

Visdom_Realtime_Apache_Intrusion_Detection: broadcasts in real time some visualizations of the analytic results provided by the Apache log intrusion detector.

Visdom_Service_Example: evaluates in real time the performance of a simple Convolutional Neural Network (CNN) for MNIST handwritten digit classification during the training and test processes.

A demo video of these workflows is available on ActiveEon's YouTube channel.