Features - ProActive AI Orchestration

Automate machine learning model pipelines

Build, deploy & automate machine learning at scale

AutoML with ProActive AI Orchestration
Model as a Service with ProActive AI Orchestration

ProActive AI Orchestration from Activeeon is a complete, open platform for industrializing machine learning. From build to deployment, it automates machine learning in production, offering governance and control through workflows.

Get Started

consistency and repeatability

Consistency and repeatability

Reusable, standardized pipelines
AutoML and incremental AI
Automate deployments and training
Traceability over code and data
Job analytics to ease model evaluation

scalable and portable machine learning pipelines

Scalability and portability

Run on-premise, cloud, edge
Share limited resources: execution over CPU, GPU, FPGA
Scale up complex AI and big data applications
Use ProActive portals directly in Jupyter

loosely coupled tasks and reusable code

Open and loosely coupled

Reusable code to integrate with any solution
ML & DL model management and version control
Python integration with a dedicated API
Execution from Jupyter with ActiveEon Kernel
Traceability over code and data


ProActive AI Orchestration interface

ML-OS Screenshot
  • Visualize workflows and their dependencies
  • Set up custom menus with simple drag-and-drop palettes for machine learning, deep learning, AutoML and more
  • Share workflows or tasks with your colleagues
  • Customize code on imported tasks for better results and performance

Get Started

A single portal for all AI workloads and interactive services

Jupyter Lab Screenshot
  • Easily open interactive AI services such as Jupyter Lab, and securely share access to notebooks with others
  • Control your job execution (stop, suspend, continue)
  • Submit Jobs directly from Jupyter Lab with the ProActive Kernel
  • Get the state of your Jobs directly within Jupyter Lab

Ease data connection

Focus on what is important with prebuilt connectors to data sources
Connect to the most popular data sources with a simple drag & drop

Filesystem, FTP, HTTP, SSH, SFTP
PostgreSQL, MySQL
Analytic SQL (Greenplum, etc.)
NoSQL (MongoDB, Cassandra, Elasticsearch)
Hadoop (HDFS)
Cloud (S3, blob, buckets)
Scality

data connectors

For ML engineers & data scientists
Agility and openness


Develop Once, Deploy Anywhere

ProActive AI Orchestration is resource-agnostic from development to production, which means you can use it on any infrastructure:

  • Benefit from an abstraction layer on the resource thanks to the Resource Manager with ProActive Nodes
  • Run workloads locally, on-premise, in the cloud (Azure, AWS, Google Cloud, OpenStack, VMware, etc.) and in hybrid configurations
  • Move to production in minutes

screenshot of ProActive resource manager


Scripted resource selection

Dynamically select the required resources: GPU, RAM, OS, libraries, etc.

Select the most relevant resource:
- based on hardware requirements (GPU, RAM, etc.)
- based on location (Azure, AWS, OpenStack, VMware, on-premise, in France, in the US, etc.)
- based on dynamic metrics (latency, bandwidth, etc.)
- based on OS configuration (Docker enabled, Python 3 enabled, etc.)
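
The selection criteria above can be sketched as a selection script evaluated on each candidate node. This is a minimal illustration, assuming a ProActive-style script that accepts or rejects a node by setting a `selected` flag; the individual checks (OS, Docker, Python 3) are examples, not an official API.

```python
# Illustrative selection script: the scheduler evaluates such a script on
# each candidate node and keeps only nodes where `selected` ends up True.
import platform
import shutil

def node_matches() -> bool:
    """Example checks: require a Linux node with Docker and Python 3 on PATH."""
    if platform.system() != "Linux":
        return False
    if shutil.which("docker") is None:   # Docker enabled?
        return False
    if shutil.which("python3") is None:  # Python 3 enabled?
        return False
    return True

selected = node_matches()  # flag read by the scheduler to accept/reject the node
```

The same pattern extends to hardware or metric checks (GPU present, free RAM above a threshold, measured latency below a limit) by adding further predicates to `node_matches`.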

screenshot of some ProActive node selectors


Simplified Docker Integration

Share files and variables across containers

  • Variable propagation across containers
  • File sharing across containers via the Dataspace
  • All the libraries available for any environment
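
To make the two mechanisms above concrete, here is a sketch of how variable propagation and file sharing into a container can be expressed as a `docker run` invocation. The image name, variable names, and dataspace path are illustrative, not ProActive's actual implementation.

```python
# Sketch: forward job variables as environment variables (-e) and share
# files by mounting a dataspace directory (-v) into the container.
import shlex

def build_docker_command(image: str, variables: dict, dataspace_dir: str) -> str:
    """Build a docker run command with propagated variables and a shared mount."""
    cmd = ["docker", "run", "--rm"]
    for name, value in variables.items():
        cmd += ["-e", f"{name}={value}"]          # propagate variables
    cmd += ["-v", f"{dataspace_dir}:/dataspace"]  # share files via a bind mount
    cmd.append(image)
    return " ".join(shlex.quote(part) for part in cmd)

command = build_docker_command(
    "python:3.11-slim",
    {"MODEL_NAME": "resnet50", "EPOCHS": "10"},
    "/tmp/my-dataspace",
)
print(command)
```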

screenshot of the ProActive feature for configuring environments within a Docker container


Develop with any library and DevOps tools

Benefit from a fully open system and leverage the best libraries. Set up a complete machine learning orchestration system with ProActive AI Orchestration.

  • Integrate with any machine learning and deep learning libraries
  • Extend Studio with custom packages import
  • Or extend Studio with our community packages available on the Hub

screenshot of Activeeon hub where package, connectors, plugins can be shared

For Production
Orchestration and Control


Error management and alerts

Set up simple recovery rules in case of errors

  • Advanced error management policies (kill job, suspend dependent tasks, ignore, etc.)
  • Set up alerts on error
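
The error-management policies listed above can be modeled as a dispatch table, shown in the sketch below. The policy names mirror the bullet list; the `Job` class and task names are hypothetical stand-ins for illustration, not ProActive classes.

```python
# Illustrative recovery rules: on a task error, apply the configured policy.
from dataclasses import dataclass, field

@dataclass
class Job:
    cancelled: bool = False
    suspended_tasks: list = field(default_factory=list)

def on_task_error(job: Job, failed_task: str, dependents: list, policy: str) -> Job:
    """Apply an error-handling policy when `failed_task` errors out."""
    if policy == "kill_job":
        job.cancelled = True                      # cancel the whole job
    elif policy == "suspend_dependents":
        job.suspended_tasks.extend(dependents)    # pause downstream tasks
    elif policy == "ignore":
        pass                                      # keep running remaining tasks
    else:
        raise ValueError(f"unknown policy: {policy}")
    return job

job = on_task_error(Job(), "train", ["evaluate", "deploy"], "suspend_dependents")
print(job.suspended_tasks)  # ['evaluate', 'deploy']
```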

error management icon


Schedule and monitor workloads

Plan jobs, add execution exceptions and monitor them

  • Set up cron expressions to repeat executions
  • Set up periods of non-execution (e.g. for maintenance)
  • Set up additional executions (e.g. for bank holidays)
  • Monitor all jobs from a single interface
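
For readers unfamiliar with cron expressions, the sketch below shows the shape of a standard 5-field expression used to repeat executions. The example schedule ("02:00, Monday through Friday") is illustrative, not taken from the product documentation.

```python
# A standard cron expression has five space-separated fields:
# minute, hour, day of month, month, day of week.
CRON_FIELDS = ["minute", "hour", "day_of_month", "month", "day_of_week"]

def parse_cron(expression: str) -> dict:
    """Split a 5-field cron expression into named fields."""
    parts = expression.split()
    if len(parts) != len(CRON_FIELDS):
        raise ValueError("expected 5 space-separated fields")
    return dict(zip(CRON_FIELDS, parts))

schedule = parse_cron("0 2 * * 1-5")  # run at 02:00, Monday through Friday
print(schedule)
```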

monitoring icon


Fast time to result with distributed execution and cloud bursting

Improve time to result with integrated control structures

  • Run algorithms in parallel
  • Leverage multi-threading with ease
  • Prioritize important reports

replication icon


Lifecycle management of services and applications

Manage service lifecycles for job requirements or cost control

  • Automatically trigger servers such as Visdom for visualization
  • Monitor service utilization and scalability

lifecycle icon


Comprehensive REST API

Integrate and build with a completely open solution

  • Trigger workflow execution, prioritization, etc. from external applications
  • Monitor execution from third-party services
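
As a sketch of how an external application could drive the scheduler over REST, the snippet below describes two requests: polling a job's status and changing its priority. The base URL, endpoint paths, and `sessionid` header are assumptions for illustration; consult the actual REST API documentation for the exact routes.

```python
# Hypothetical server URL for illustration only.
BASE_URL = "https://scheduler.example.com/rest"

def job_status_request(session_id: str, job_id: int) -> dict:
    """Describe the GET request that polls a job's status."""
    return {
        "method": "GET",
        "url": f"{BASE_URL}/scheduler/jobs/{job_id}",
        "headers": {"sessionid": session_id},
    }

def job_priority_request(session_id: str, job_id: int, priority: str) -> dict:
    """Describe the PUT request that changes a job's priority."""
    return {
        "method": "PUT",
        "url": f"{BASE_URL}/scheduler/jobs/{job_id}/priority/{priority}",
        "headers": {"sessionid": session_id},
    }

req = job_status_request("abc123", 42)
print(req["url"])  # https://scheduler.example.com/rest/scheduler/jobs/42
```

Any HTTP client (curl, `requests`, a monitoring dashboard) can then issue these requests, which is what makes third-party integration straightforward.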

rest api icon