Machine Learning

Accelerate the development and deployment of AI models at scale

Simplify, accelerate & industrialize machine learning with an open platform. Seamlessly execute at any scale in production with any data source, on any infrastructure.

ProActive AI Orchestration from Activeeon empowers data engineers and data scientists with a simple, portable and scalable solution for machine learning pipelines. It provides pre-built and customizable tasks that enable automation within the machine learning lifecycle, which helps data scientists and IT Operations work together. The product fully supports Machine Learning Operations (MLOps).

ProActive AI Orchestration includes Machine Learning Open Studio (MLOS): compose ML workflows with drag & drop

For data engineers

Data connector tasks & workflow templates help automate and scale up data ingestion and data preparation pipelines
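The kind of ingestion-and-preparation step a data connector task automates can be sketched in plain Python. This is an illustrative stand-in only: the in-memory CSV plays the role of a real data source, and the function names are not part of any Activeeon API.

```python
# Hedged sketch of an ingestion + preparation pipeline stage:
# pull raw records from a source, clean and normalize them, and
# hand a tidy table to the next stage. The in-memory CSV stands
# in for a real connector (database, S3 bucket, Kafka topic, ...).
import csv
import io

RAW = """name,age,city
Alice, 34 ,Paris
Bob,,Lyon
Carol, 29 ,paris
"""

def ingest(source: str) -> list:
    """'Connector' step: parse raw CSV text into records."""
    return list(csv.DictReader(io.StringIO(source)))

def prepare(records: list) -> list:
    """Preparation step: trim whitespace, drop rows with a missing age,
    normalize city names."""
    out = []
    for r in records:
        age = r["age"].strip()
        if not age:
            continue  # incomplete row: drop it
        out.append({
            "name": r["name"].strip(),
            "age": int(age),
            "city": r["city"].strip().title(),
        })
    return out

rows = prepare(ingest(RAW))
print(rows)  # two clean rows: Alice and Carol, both with city "Paris"
```

In a real pipeline each function would be a separate workflow task, so ingestion and preparation can be scheduled, retried and scaled independently.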

For data scientists

Scale up parallel model training, validation and testing
AutoML to scale up model tuning during experiments
Jupyter Kernel & Python connector to create AI workflows from code
AI tasks & workflow templates to automate AI pipelines
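The "AI workflows from code" idea above can be sketched as a small task graph in plain Python. The `Task` and `Workflow` classes below are illustrative stand-ins under assumed names, not Activeeon's actual Python connector API.

```python
# Minimal sketch of composing an AI workflow as a dependency graph
# of tasks, then running it in dependency order. Illustrative only;
# Task and Workflow are hypothetical names, not an Activeeon API.
from typing import Callable, Dict, List, Optional

class Task:
    def __init__(self, name: str, fn: Callable[..., object],
                 deps: Optional[List["Task"]] = None):
        self.name, self.fn, self.deps = name, fn, deps or []

class Workflow:
    def __init__(self, tasks: List[Task]):
        self.tasks = tasks

    def run(self) -> Dict[str, object]:
        """Execute tasks in dependency order, feeding each task the
        results of its dependencies."""
        results: Dict[str, object] = {}
        pending = list(self.tasks)
        while pending:
            for task in pending:
                if all(d.name in results for d in task.deps):
                    results[task.name] = task.fn(*(results[d.name] for d in task.deps))
                    pending.remove(task)
                    break
            else:
                raise ValueError("cyclic or unsatisfiable dependencies")
        return results

# A three-step pipeline: ingest -> preprocess -> train
ingest = Task("ingest", lambda: [1.0, 2.0, 3.0, 4.0])
prep = Task("prep", lambda xs: [x / max(xs) for x in xs], deps=[ingest])
train = Task("train", lambda xs: sum(xs) / len(xs), deps=[prep])  # toy "model"

print(Workflow([ingest, prep, train]).run()["train"])  # -> 0.625
```

A scheduler-backed implementation would dispatch each ready task to a remote node instead of calling it inline, but the graph-of-tasks shape is the same.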

For AI architects

Model as a Service (MaaS): deploy and expose AI models in production; scale up model deployment; monitoring, alerting, data drift detection
JupyterLab as a Service: deploy a JupyterLab instance on-demand
Job analytics & visualization as a Service: track and visualize metrics of your machine learning workflow
Managed Services (KNIME, …)
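Data drift detection, mentioned above as part of Model as a Service, can be illustrated with one common drift metric, the Population Stability Index (PSI). This is a generic sketch of the technique, not Activeeon's drift-detection implementation; the threshold of 0.2 is a widely used rule of thumb.

```python
# Hedged sketch of a common data-drift check: the Population
# Stability Index (PSI) between a training ("expected") feature
# distribution and live ("actual") traffic hitting a deployed model.
import math
from typing import List

def psi(expected: List[float], actual: List[float], bins: int = 10) -> float:
    """PSI over equal-width bins of the expected distribution's range."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def frac(xs: List[float], b: int) -> float:
        left = lo + b * width
        right = left + width
        n = sum(1 for x in xs
                if left <= x < right or (b == bins - 1 and x == hi))
        return max(n / len(xs), 1e-6)  # floor avoids log(0) on empty bins

    return sum((frac(actual, b) - frac(expected, b))
               * math.log(frac(actual, b) / frac(expected, b))
               for b in range(bins))

train_scores = [i / 100 for i in range(100)]        # uniform training distribution
live_same = [i / 100 for i in range(100)]           # identical live traffic
live_shifted = [0.5 + i / 200 for i in range(100)]  # shifted live traffic

print(psi(train_scores, live_same))     # 0.0: no drift
print(psi(train_scores, live_shifted))  # large: drift alert (common threshold: 0.2)
```

In an MLOps setup this check would run periodically against a window of recent inference inputs, raising an alert (and possibly triggering retraining) when PSI exceeds the threshold.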

Easy and secure access to interactive AI services

In one click, securely open typical interactive AI services such as JupyterLab, TensorBoard, Visdom, COCO_Annotator, H2O GUI, KNIME, etc.

Consistency and reliability

ProActive workflows enable data engineers, data scientists and IT Operations to create and automate pipelines so that results are reliable and consistent across executions. Users can execute entire machine learning pipelines with confidence in any environment, from development to production, and on any infrastructure, from CPU to GPU and FPGA.

Use cases

  • ProActive AI Orchestration helps you automate hyperparameter tuning. Create a complete pipeline that parallelizes multiple ML model trainings and feeds the results to AutoML libraries to generate a new batch of hyperparameter tests.
  • Build standard pipelines to extract, transform and prepare data for training your machine learning models.
  • Review successful workflow runs and understand what has worked.
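The tuning loop described in the first use case, training a batch of configurations in parallel and feeding the results back to propose the next batch, can be sketched as follows. The training function is a toy, and the `next_batch` proposer is a simple local-search rule standing in for a real AutoML library.

```python
# Minimal sketch of batch-parallel hyperparameter tuning: evaluate
# several candidate configurations in parallel, then propose the next
# batch around the best result so far. Toy stand-ins throughout; a
# real setup would dispatch trainings to scheduler nodes and use an
# AutoML library as the proposer.
import random
from concurrent.futures import ThreadPoolExecutor

def train(lr: float) -> float:
    """Toy training run: validation loss is minimized at lr = 0.1."""
    return (lr - 0.1) ** 2

def next_batch(best_lr: float, size: int, spread: float) -> list:
    """Toy AutoML proposer: sample new candidates around the best config."""
    return [max(1e-4, random.gauss(best_lr, spread)) for _ in range(size)]

random.seed(0)
batch = [random.uniform(0.001, 1.0) for _ in range(8)]  # initial random batch
best_lr, best_loss = None, float("inf")
for round_ in range(5):
    with ThreadPoolExecutor(max_workers=8) as pool:  # parallel "trainings"
        losses = list(pool.map(train, batch))
    for lr, loss in zip(batch, losses):
        if loss < best_loss:
            best_lr, best_loss = lr, loss
    # shrink the search radius as rounds progress
    batch = next_batch(best_lr, size=8, spread=0.05 / (round_ + 1))

print(round(best_lr, 3))  # typically lands near 0.1
```

Each round is embarrassingly parallel, which is exactly where a workflow scheduler pays off: the eight trainings can run on eight nodes at once.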

Scaling and portability

Portability is key to avoiding vendor lock-in and promoting collaboration between users. It is also critical for scale. The workflows and algorithms created need to access any infrastructure setup (on-prem, hybrid, multi-cloud, HPC, edge, etc.) and leverage the whole compute capacity, from CPU to GPU / TPU / FPGA.

Activeeon solutions include a resource manager that abstracts away the resources and offers this portability. Smart policies can also be configured to trigger auto-scaling based on the actual scheduler queue.
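A queue-based auto-scaling policy of the kind described above can be sketched as a pure decision function: given the scheduler queue length and the current node pool, return how many nodes to add or release. The thresholds, names and parameters here are assumptions for illustration, not ProActive's actual policy configuration.

```python
# Illustrative sketch of a queue-driven auto-scaling policy.
# All parameter names and defaults are hypothetical.
def scaling_decision(queued_tasks: int, busy_nodes: int, total_nodes: int,
                     tasks_per_node: int = 4, min_nodes: int = 1,
                     max_nodes: int = 100) -> int:
    """Return the node-count delta: positive to scale up, negative to scale down."""
    idle = total_nodes - busy_nodes
    needed = -(-queued_tasks // tasks_per_node)  # ceil division: nodes the queue needs
    if needed > idle:                            # backlog: add capacity, capped at max
        return min(needed - idle, max_nodes - total_nodes)
    if queued_tasks == 0 and idle > 0:           # empty queue: release idle nodes
        return -min(idle, total_nodes - min_nodes)
    return 0

print(scaling_decision(queued_tasks=20, busy_nodes=3, total_nodes=4))  # 4: scale up
print(scaling_decision(queued_tasks=0, busy_nodes=1, total_nodes=4))   # -3: scale down
```

Keeping the policy a pure function of observable state makes it easy to test and to tune per environment (e.g. a higher `tasks_per_node` for lightweight tasks).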

Use cases

  • Build a successful DevOps pipeline across dev, staging, QA and prod environments
  • Run distributed pipelines at any scale to get results faster
  • Run workflows in parallel to test multiple options and validate hypotheses at scale
  • Move your work to infrastructure with GPUs or specific hardware to train your models faster
  • Share pipelines with coworkers, ask for advice, share best practices

Openness and loosely coupled tasks

The machine learning ecosystem is constantly evolving and its open source community is strong. The ability to leverage those contributions is key to keeping techniques up to date and performance at its best. ProActive AI Orchestration is open from end to end and supports those needs.

Moreover, some steps of the machine learning process are quite repetitive and can be made generic. ProActive AI Orchestration includes a catalog solution that enables sharing, versioning and easy reuse of machine learning models and services (MaaS).

Use cases

  • Select the libraries you are most comfortable with
  • Build a catalog of reusable code to help you get started and follow best practices
  • Edit blocks to iterate faster across algorithms, data sources and data transformations

Pipeline & Workflow Orchestration: Edge to Core to Cloud to Edge for AI + HPC



  • Make deployments and scaling of machine learning models on any infrastructure simple, portable and scalable

  • Provide a straightforward way to deploy ML systems to diverse infrastructures (local, hybrid, multi-cloud, edge)

  • Provide a pipeline solution to enable automation within the machine learning lifecycle

Integration and ready-to-use libraries

BigDL
CNTK
Caffe
DLib
G4j
H2O
Keras
MXNet
PyTorch
Spark MLlib
TensorFlow
Cognitive Services
Jupyter
pandas
scikit-learn
More connectors and libraries

More resources
