Running parallel Machine Learning algorithms made easy with ProActive AI Orchestration

Automatically optimize the execution of your pipeline on available infrastructure resources

3 min

Sep 3, 2020 from Activeeon


Let’s suppose that you have a large infrastructure containing several machines with different operating systems (e.g., Microsoft Windows, Linux, macOS) and distinct hardware configurations, such as processor (CPU), video card (GPU), memory (RAM) and storage (hard drives). Furthermore, some of these machines are available in AWS, Azure or Google Cloud. Now, let’s suppose you want to run several machine learning algorithms in parallel on this hybrid infrastructure: how could you do it in an optimized way?

As you can imagine, this is not a trivial task: it is time-consuming and requires specific technical knowledge. For this purpose, Activeeon has released the latest version of ProActive AI Orchestration. It provides a flexible solution for users to distribute and parallelize a variety of artificial intelligence (AI) workloads (machine learning, deep learning, computer vision, etc.) on a large infrastructure, and to leverage hybrid and multi-cloud capabilities. ProActive AI Orchestration helps data scientists and IT operations work together in an MLOps approach, making it easy to bring ready-to-use ML models to production. It also simplifies machine learning application lifecycle management by providing end-to-end orchestration, automation and scalability.

Get Started with ProActive AI Orchestration

  1. First, access the Activeeon online Try Platform and create a free user account.
  2. Click on the ProActive AI Orchestration button to open the Studio.
[Figure: log in to ProActive AI Orchestration]

To reduce the complexity of AI with a repeatable and scalable machine learning lifecycle, ProActive AI Orchestration provides powerful catalogs of ready-to-run machine learning and deep learning tasks and workflows. Nevertheless, it is open from end to end: you can modify or adapt everything in it. You can also implement your own machine learning algorithm in different programming languages, such as Python, R, Java, etc. In addition, you can use the AI framework/library you prefer, such as Sklearn, Torch, TensorFlow or Keras.

[Figure: ProActive AI Orchestration interface]
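
For instance, a custom task in a workflow can be an ordinary Python script. The snippet below is a minimal sketch of such a task, assuming scikit-learn is installed on the node that executes it; the dataset and model are purely illustrative, not taken from the catalogs.

```python
# Minimal sketch of a custom Python training task (illustrative only).
# Assumes scikit-learn is available on the node executing the task.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Test accuracy:", model.score(X_test, y_test))
```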

In the figure below, you can see three end-to-end machine learning pipelines, built by drag and drop, that train three algorithms on a large infrastructure. In this example we use three algorithms, Logistic Regression, Support Vector Machine and Random Forest, to predict vehicle type (e.g., Opel, Saab, bus, van) based on silhouette measurements.

[Figure: machine learning workflows]
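
To make the comparison concrete, here is a plain-Python sketch of what these three pipelines compute, run in parallel as local processes rather than on ProActive nodes. The CSV path and column names are hypothetical stand-ins for the vehicle silhouettes data.

```python
# Sketch of the three-model comparison performed by the workflows above.
# Here the trainings run in parallel as local processes; ProActive would
# instead dispatch each one to a node of the hybrid infrastructure.
from concurrent.futures import ProcessPoolExecutor

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Hypothetical path and column names for the vehicle silhouettes dataset.
df = pd.read_csv("vehicle_silhouettes.csv")
X, y = df.drop(columns=["class"]), df["class"]

MODELS = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Support Vector Machine": SVC(),
    "Random Forest": RandomForestClassifier(),
}

def evaluate(name):
    # 5-fold cross-validated accuracy for one model.
    return name, cross_val_score(MODELS[name], X, y, cv=5).mean()

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        for name, score in pool.map(evaluate, MODELS):
            print(f"{name}: {score:.3f}")
```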
Each task can be monitored in real time via the Scheduling web portal. This portal offers a detailed view (e.g., status, parameters, results, logs) of the different experiments launched via ProActive AI Orchestration. The ProActive Resource Manager interface, in turn, allows system engineers to set up infrastructure policies and monitor computing resources. From there, you can manage your heterogeneous infrastructure in a centralized way and control dynamic, policy-based provisioning of resources.

[Figure: Scheduler and Resource Manager portals]
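
The information shown in these portals can also be fetched programmatically through the Scheduler's REST API. The sketch below is a hedged example: the endpoint paths and the try.activeeon.com host are assumptions to adapt to your own deployment and credentials.

```python
# Hedged sketch: query the Scheduler REST API for the job list that the
# Scheduling portal displays. Paths and host are assumptions; check the
# REST documentation of your ProActive deployment.
import requests

BASE = "https://try.activeeon.com:8443/rest"

# Log in with your Try Platform credentials to obtain a session id.
session_id = requests.post(
    f"{BASE}/scheduler/login",
    data={"username": "my_user", "password": "my_password"},
).text

# List the jobs visible to this user, as the portal does.
jobs = requests.get(f"{BASE}/scheduler/jobs", headers={"sessionid": session_id})
print(jobs.json())
```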

If you prefer not to use the drag-and-drop solution, don’t worry. ProActive AI Orchestration also allows you to deploy your machine learning pipeline with your favorite tool through our Python/R/Matlab SDK. You can also build and deploy your ML pipelines from JupyterLab instances using our ProActive Jupyter Kernel. Our platform will automatically optimize the execution of your pipeline on the available infrastructure resources. If you have any questions or feedback, feel free to contact our technical support. Our team would be very pleased to get your feedback or help you in any way possible.
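
As an illustration, here is a minimal sketch of submitting a Python task with the ProActive Python SDK (the proactive package). Method names may vary slightly between SDK versions, so treat this as an outline rather than a reference.

```python
# Minimal sketch of job submission with the ProActive Python SDK.
# Method names may differ across SDK versions; see the SDK documentation.
import proactive

gateway = proactive.ProActiveGateway("https://try.activeeon.com:8443")
gateway.connect(username="my_user", password="my_password")  # Try Platform account

job = gateway.createJob()
job.setJobName("train_logistic_regression")

task = gateway.createPythonTask()
task.setTaskName("train")
task.setTaskImplementation("""print("training code goes here")""")
job.addTask(task)

job_id = gateway.submitJob(job)
print("Submitted job", job_id)

gateway.close()
```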

