Machine Learning Open Studio

Accelerate the development of AI models by making the deployment and scaling of ML workflows on any infrastructure simple, portable and scalable.

Simplify, accelerate and industrialize machine learning with an open platform. Seamlessly execute at any scale in production, with any data source, on any infrastructure.

The Machine Learning Open Studio from Activeeon empowers data engineers and data scientists with a simple, portable and scalable solution for machine learning workflows. It provides a pipeline solution that enables automation across the machine learning development lifecycle.

Value Proposition

Consistency / Repeatability

  • Reuse pipelines
  • Standardize
  • Automate deployments, training, etc.
  • Audit previous workflows

Scaling / Portability

  • Run everywhere
  • Share limited resources (GPU, TPU, etc.)
  • Scale

Openness / Loosely coupled

  • Build generic code to reuse
  • Integrate with any solution

Why consistency and reliability?

The consistency and reliability offered by Activeeon workflows enable data engineers and data scientists to create and automate pipelines. Consistency ensures that results are equal across executions. Reliability lets data scientists and data engineers execute machine learning pipelines with confidence.

Use cases

  • Automate hyperparameter identification and tuning. With Activeeon, create a complete pipeline that parallelizes multiple ML model trainings and feeds the results to AutoML libraries to generate the next batch of hyperparameter tests (a minimal sketch of this pattern follows this list).
  • Build standard pipelines to extract data, transform it and prepare it for training your machine learning models.
  • Review successful workflow runs and understand what has worked.
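For illustration only, here is a minimal, generic sketch of that tuning loop using scikit-learn and Python's concurrent.futures as stand-ins for parallel task execution. It is not Activeeon's workflow syntax, and propose_next_batch() is a hypothetical placeholder for a real AutoML library.

```python
# Generic sketch: parallel model trainings feeding an AutoML-style loop.
# NOT Activeeon's workflow API; propose_next_batch() is a hypothetical
# placeholder for an actual AutoML library.
import random
from concurrent.futures import ProcessPoolExecutor

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)


def train_and_score(params):
    """One parallel branch of the pipeline: train a model and return its score."""
    model = RandomForestClassifier(**params, random_state=0)
    return params, cross_val_score(model, X, y, cv=3).mean()


def propose_next_batch(results, size=4):
    """Hypothetical stand-in for an AutoML library: sample new candidates
    around the best configuration seen so far."""
    best, _ = max(results, key=lambda r: r[1])
    return [{"n_estimators": max(10, best["n_estimators"] + random.randint(-50, 50)),
             "max_depth": max(2, best["max_depth"] + random.randint(-2, 2))}
            for _ in range(size)]


if __name__ == "__main__":
    batch = [{"n_estimators": random.randint(10, 200),
              "max_depth": random.randint(2, 10)} for _ in range(4)]
    history = []
    for _ in range(3):                              # a few tuning rounds
        with ProcessPoolExecutor() as pool:         # parallel model trainings
            history.extend(pool.map(train_and_score, batch))
        batch = propose_next_batch(history)         # feed results back
    print("best:", max(history, key=lambda r: r[1]))
```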

Why scaling and portability?

Portability is key to avoiding vendor lock-in and promoting collaboration between users. It is also critical for scaling. The workflows and algorithms created need to access any infrastructure setup (on-prem, hybrid, multi-cloud, HPC, etc.) and leverage the full compute capacity, from CPUs to GPUs / TPUs / FPGAs.

Activeeon includes a resource manager that abstracts away the underlying resources and provides this portability. Smart policies can also be configured to trigger auto-scaling based on the current scheduler queue.
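As a rough illustration of what such a policy computes, the sketch below derives a target node count from the number of pending tasks in a queue. The class, fields and method are hypothetical and do not reflect the actual Resource Manager configuration or API.

```python
# Illustrative sketch of a queue-based auto-scaling rule, assuming a simple
# "pending tasks vs. live nodes" heuristic. Names are hypothetical and do not
# reflect ProActive's actual Resource Manager configuration.
from dataclasses import dataclass


@dataclass
class ScalingPolicy:
    min_nodes: int = 2        # always keep a small baseline
    max_nodes: int = 50       # hard cap on infrastructure spend
    tasks_per_node: int = 4   # rough packing assumption

    def target_nodes(self, pending_tasks: int, busy_nodes: int) -> int:
        """Return the node count the infrastructure should converge to."""
        needed = busy_nodes + -(-pending_tasks // self.tasks_per_node)  # ceil div
        return max(self.min_nodes, min(self.max_nodes, needed))


# Example: 30 queued tasks and 3 busy nodes -> scale out to 11 nodes.
policy = ScalingPolicy()
print(policy.target_nodes(pending_tasks=30, busy_nodes=3))
```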

Use cases

  • Build a successful DevOps pipeline with dev, staging, QA and prod environments.
  • Run distributed pipelines at any scale to get results faster.
  • Run workflows in parallel to test multiple options and validate hypotheses at scale.
  • Move your work to infrastructure with GPUs or other specialized hardware to train your models faster.
  • Share pipelines with coworkers, ask for advice and share best practices.

Why openness and loosely coupled tasks?

The machine learning ecosystem is constantly evolving and its open source community is strong. The ability to leverage those contributions is key to keeping techniques up to date and performance at its best. Activeeon is open from end to end and supports those needs.

Moreover, some steps of the machine learning process are quite repetitive and can be made generic. Activeeon includes a catalog solution that enables sharing, versioning and easy reuse.
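As an illustration of such a generic, reusable step, the sketch below shows a parameterized data-preparation function of the kind that could be stored, versioned and shared. The function name, parameters and file path are hypothetical, not the catalog's actual interface.

```python
# Sketch of a generic, parameterized data-preparation step of the kind that
# can be shared and reused across workflows. The signature is an assumption
# for illustration, not the catalog's actual interface.
import pandas as pd


def prepare_dataset(csv_path: str, target: str, drop_na: bool = True):
    """Reusable step: load a CSV, optionally drop rows with missing values,
    and split features from the target column."""
    df = pd.read_csv(csv_path)
    if drop_na:
        df = df.dropna()
    return df.drop(columns=[target]), df[target]


# Any workflow can reuse the same step by changing only its parameters, e.g.:
# X, y = prepare_dataset("sales.csv", target="churned")   # hypothetical file
```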

Use cases

  • Select the libraries that you are most comfortable with.
  • Build a catalog of reusable code to help you get started and follow best practices.
  • Edit blocks for faster iteration between algorithms, data sources, data transformations, etc.
  • Make the deployment and scaling of machine learning (ML) workflows on any infrastructure simple, portable and scalable

  • Provide a straightforward way to deploy open-source systems for ML to diverse infrastructures (local, hybrid, multi-cloud)

  • Provide a pipeline solution to enable automation within the Machine Learning dev lifecycle

Legal & General
Capgemini
INRA
L'Oréal
Home Office
CNES

ProActive directly integrates and offers ready-to-use libraries:

BigDL
CNTK
Caffe
DLib
G4j
H2O
Keras
MXNet
PyTorch
Spark MLlib
TensorFlow
Cognitive Services
Jupyter
pandas
scikit-learn
More connectors and libraries

More resources
