Features - Machine Learning Open Studio

Automate machine learning model pipelines

Build, deploy & automate machine learning at scale

Machine Learning Open Studio (MLOS) from Activeeon is a complete, open platform for machine learning industrialization. From build to deployment, it automates machine learning in production, offering governance and control through workflows.

Get Started

Machine Learning Open Studio Interface

ML-OS Screenshot
  • Visualize workflows and their dependencies
  • Set up custom menus with drag-and-drop palettes for machine learning, deep learning, AutoML, and more
  • Share workflows or tasks with your colleagues
  • Customize code on imported tasks for better results and performance


Easy data connection

Focus on what is important with prebuilt connectors to data sources
Connect to the most popular data sources with a simple drag & drop

Filesystem, FTP, HTTP, SSH, SFTP
PostgreSQL, MySQL
Analytic SQL (Greenplum, etc.)
NoSQL (MongoDB, Cassandra, Elasticsearch)
Hadoop (HDFS)
Cloud (S3, blob, buckets)

data connectors

For ML engineers & data scientists
Agility and openness

Develop Once, Deploy Anywhere

Machine Learning Open Studio is resource-agnostic from development to production, which means you can use it on any infrastructure:

  • Benefit from a resource abstraction layer provided by the Resource Manager and ProActive Nodes
  • Run workloads locally, on-premise, in the cloud (Azure, AWS, Google Cloud, OpenStack, VMware, etc.), or in hybrid configurations
  • Move to production in minutes

screenshot of ProActive resource manager

Scripted resource selection

Dynamically select the required resources: GPU, RAM, OS, libraries, etc.

Select the most relevant resource:
- based on hardware requirements (GPU, RAM, etc.)
- based on location (Azure, AWS, OpenStack, VMware, on-premise, in France, in the US, etc.)
- based on variable information (latency, bandwidth, etc.)
- based on OS configuration (Docker enabled, Python3 enabled, etc.)
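To make the idea concrete, here is a minimal Python sketch of the kind of check a resource selection script performs: each candidate node is kept only if it satisfies the hardware, location, and configuration criteria. The node fields and function names below are illustrative assumptions, not the actual ProActive selection-script API (ProActive selection scripts run per node and return a boolean verdict in a similar spirit).

```python
# Hypothetical sketch of scripted resource selection. Each node record and
# every field name here is illustrative, not the ProActive data model.
def node_matches(node, *, min_ram_gb=0, needs_gpu=False,
                 allowed_regions=None, required_tools=()):
    """Return True if this node satisfies the hardware/location/OS criteria."""
    if needs_gpu and not node.get("gpu", False):
        return False
    if node.get("ram_gb", 0) < min_ram_gb:
        return False
    if allowed_regions and node.get("region") not in allowed_regions:
        return False
    if not set(required_tools) <= set(node.get("tools", [])):
        return False
    return True

nodes = [
    {"name": "azure-1", "gpu": True, "ram_gb": 64,
     "region": "France", "tools": ["docker", "python3"]},
    {"name": "onprem-1", "gpu": False, "ram_gb": 16,
     "region": "US", "tools": ["python3"]},
]

# Keep only GPU nodes in France with >= 32 GB RAM and Docker enabled.
selected = [n["name"] for n in nodes
            if node_matches(n, min_ram_gb=32, needs_gpu=True,
                            allowed_regions={"France"},
                            required_tools=("docker",))]
print(selected)  # ['azure-1']
```

In a real deployment the same predicate would be evaluated on each node at scheduling time, so the filter adapts automatically as resources join or leave.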

screenshot of some ProActive node selectors

Simplified Docker Integration

Share files and variables across containers

  • Variable propagation across containers
  • File sharing across containers via the Dataspace
  • All the libraries available for any environment
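The mechanics can be pictured with a short Python sketch: workflow variables are passed into the container as environment variables, and the shared dataspace is bind-mounted so files survive across tasks. This is an illustrative reconstruction of the pattern, not the actual ProActive implementation; the function name and mount path are assumptions.

```python
# Illustrative sketch (not the actual ProActive implementation) of how a
# task's variables and shared dataspace can be exposed inside a container.
def docker_command(image, variables, dataspace_dir, script):
    cmd = ["docker", "run", "--rm"]
    for key, value in variables.items():
        cmd += ["-e", f"{key}={value}"]            # variable propagation
    cmd += ["-v", f"{dataspace_dir}:/dataspace"]   # file sharing via bind mount
    cmd += [image, "python3", script]
    return cmd

cmd = docker_command("python:3.11", {"MODEL": "resnet", "EPOCHS": "10"},
                     "/tmp/dataspace", "train.py")
print(" ".join(cmd))
```

Because the environment travels with the container image, the same task runs identically on a laptop, an on-premise node, or a cloud VM.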

screenshot of the ProActive feature to configure the environment within a Docker container

Develop with any library and DevOps tools

Benefit from a fully open system and leverage the best libraries. Set up a complete machine learning orchestration system with MLOS.

  • Integrate with any machine learning and deep learning libraries
  • Extend the Studio by importing custom packages
  • Or extend the Studio with community packages available on the Hub

screenshot of the Activeeon Hub where packages, connectors, and plugins can be shared

For Production
Orchestration and Control

Error management and alerts

Set up simple recovery rules in case of errors

  • Advanced error management policies (kill job, suspend dependent tasks, ignore, etc.)
  • Set up alerts on error
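The recovery rules above can be sketched as a small policy wrapper in Python: a failing task triggers an alert, and the configured policy decides whether to ignore the error, retry, or propagate it (the "kill job" case). The policy names and the `alert` callback are illustrative assumptions, not the ProActive API.

```python
# Hedged sketch of error-handling policies in the spirit of those listed
# above (kill job, ignore, retry); names are illustrative, not ProActive's.
def run_with_policy(task, on_error="retry", retries=2, alert=print):
    attempt = 0
    while True:
        try:
            return task()
        except Exception as exc:
            alert(f"task failed: {exc}")           # alert on error
            if on_error == "ignore":
                return None                        # continue the workflow
            if on_error == "retry" and attempt < retries:
                attempt += 1
                continue                           # re-run the task
            raise  # "kill job": propagate and stop the workflow

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(run_with_policy(flaky, on_error="retry", retries=2))  # ok
```

In MLOS the equivalent rules are declared on the workflow rather than coded by hand, but the control flow they produce is the same.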

error management icon

Schedule and monitor workloads

Plan jobs, add execution exceptions and monitor them

  • Set up cron expressions to repeat executions
  • Set up periods of non-execution (e.g. for maintenance)
  • Set up additional executions (e.g. for bank holidays)
  • Monitor all jobs from a single interface
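As a reminder of how cron expressions drive repetition, here is a minimal Python sketch that evaluates a five-field expression (minute, hour, day-of-month, month, day-of-week) against a timestamp. It supports only `*`, single values, and comma lists; real schedulers accept richer syntax (ranges, steps), and note that cron's day-of-week numbering differs from Python's `weekday()`.

```python
from datetime import datetime

# Minimal sketch of matching a 5-field cron expression. Only '*', single
# values, and comma lists are handled; day-of-week uses Python's
# weekday() (0 = Monday), unlike standard cron (0 = Sunday).
def cron_matches(expr, dt):
    fields = expr.split()
    values = [dt.minute, dt.hour, dt.day, dt.month, dt.weekday()]
    for field, value in zip(fields, values):
        if field == "*":
            continue
        if value not in {int(part) for part in field.split(",")}:
            return False
    return True

# "every day at 12:30"
print(cron_matches("30 12 * * *", datetime(2024, 7, 1, 12, 30)))  # True
print(cron_matches("30 12 * * *", datetime(2024, 7, 1, 13, 30)))  # False
```

Exception periods (maintenance windows, bank holidays) then act as an extra filter applied after the cron match.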

monitoring icon

Fast time to result with distributed execution and cloud bursting

Improve time to result with integrated control structures

  • Run algorithms in parallel
  • Leverage multi-threading with ease
  • Prioritize important reports
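The replication idea is easy to sketch in plain Python: fan the same step out over several inputs in parallel, then collect the results. The `simulate` function below is a toy stand-in for a real training run; in MLOS the fan-out would be expressed as a replicated task rather than a thread pool.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy stand-in for a training or scoring step; an assumption for this
# sketch, not part of MLOS.
def simulate(params):
    return params["epochs"] * 2

grids = [{"epochs": 1}, {"epochs": 2}, {"epochs": 3}]

# Run the same step over each parameter set in parallel, keeping order.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(simulate, grids))
print(results)  # [2, 4, 6]
```

With cloud bursting, the same pattern scales past a single machine: extra nodes are provisioned on demand to absorb the parallel branches.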

replication icon

Lifecycle management of services and applications

Manage lifecycles required for jobs or for cost purposes

  • Automatically trigger servers such as Visdom for visualization
  • Monitor service utilization and scalability

lifecycle icon

Comprehensive REST API

Integrate and build with a completely open solution

  • Trigger workflow execution, prioritization, etc. from external applications
  • Monitor execution from third-party services
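For a sense of what triggering an execution from an external application looks like, here is a hedged Python sketch that composes (without sending) a job-submission HTTP request. The base URL, endpoint path, header name, and payload below are assumptions for illustration only; consult the ProActive REST API documentation for the actual routes and authentication scheme.

```python
from urllib.request import Request

# Assumed values for illustration; not verified against the real API.
BASE = "https://try.activeeon.com/rest"   # assumed server URL
session_id = "abc123"                     # would come from a prior login call

# Compose a POST that would submit a workflow definition to the scheduler.
req = Request(
    f"{BASE}/scheduler/submit",           # assumed endpoint path
    data=b"<job/>",                       # placeholder workflow payload
    headers={"sessionid": session_id},    # assumed auth header name
    method="POST",
)
print(req.method, req.full_url)  # POST https://try.activeeon.com/rest/scheduler/submit
```

Monitoring works the same way in reverse: third-party services poll job-status endpoints and react to state changes.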

rest api icon