Accelerate the development and deployment of AI models with scalability
Simplify, accelerate & industrialize machine learning with an open platform. Seamlessly execute at any scale in production with any data source, on any infrastructure.
ProActive Machine Learning (PML) from Activeeon empowers data engineers and data scientists with a simple, portable and scalable solution for machine learning pipelines. It provides pre-built and customizable tasks that enable automation within the machine learning lifecycle, which helps data scientists and IT Operations work together. The product fully supports Machine Learning Operations (MLOps).
PML includes Machine Learning Open Studio (MLOS): compose ML workflows with drag & drop
Reusable, standardized pipelines
AutoML and incremental AI
Automate deployments and training
Traceability over code and data
Job analytics to ease model evaluation
Run on-premise, cloud, edge
Share limited resources: execution over CPU, GPU, FPGA
Scale up complex AI and big data applications
Use ProActive portals directly in Jupyter
Reusable code to integrate with any solution
ML & DL model management and version control
Python integration with a dedicated API
Execution from Jupyter with ActiveEon Kernel
In one click, securely open typical interactive AI services such as JupyterLab, TensorBoard, Visdom, COCO_Annotator, H2O GUI, Knime, etc.
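The Python API and Jupyter execution mentioned above can be sketched as follows. The method names follow the `proactive` Python SDK's commonly documented surface (`ProActiveGateway`, `createPythonTask`, `submitJob`), but treat them as assumptions rather than a verified reference; the server URL and credentials are placeholders.

```python
# Hedged sketch: submitting a Python task to a ProActive scheduler from
# a notebook, assuming the `proactive` SDK (pip install proactive).
# URL, user, and password below are placeholders, not real credentials.
def submit_hello_job(url="https://try.activeeon.com:8443",
                     user="user", password="pwd"):
    import proactive  # imported lazily; requires the proactive package
    gateway = proactive.ProActiveGateway(url)
    gateway.connect(username=user, password=password)
    try:
        job = gateway.createJob()
        job.setJobName("hello_from_jupyter")
        task = gateway.createPythonTask()
        task.setTaskName("hello_task")
        task.setTaskImplementation("print('hello from ProActive')")
        job.addTask(task)
        return gateway.submitJob(job)  # returns the submitted job id
    finally:
        gateway.disconnect()

# submit_hello_job()  # only runs against a reachable ProActive server
```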
ProActive workflows enable data engineers, data scientists and IT Operations to create and automate pipelines so that results are reliable and consistent across executions. Users can execute entire machine learning pipelines with confidence in any environment, from development to production, and on any infrastructure, from CPU to GPU and FPGA.
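The core idea of such a pipeline, tasks executed in a deterministic, dependency-respecting order, can be sketched in plain Python. This is an illustrative toy using only the standard library, not the ProActive workflow engine itself; task names are invented for the example.

```python
# Minimal pipeline sketch: run tasks in dependency order so results are
# reproducible across executions. Uses only the standard library.
from graphlib import TopologicalSorter

def run_pipeline(tasks, deps):
    """Execute callables in dependency order; return the run log."""
    log = []
    for name in TopologicalSorter(deps).static_order():
        tasks[name]()          # each task is a no-arg callable here
        log.append(name)
    return log

tasks = {
    "load_data": lambda: None,
    "train": lambda: None,
    "evaluate": lambda: None,
}
# evaluate depends on train, which depends on load_data
deps = {"train": {"load_data"}, "evaluate": {"train"}}
print(run_pipeline(tasks, deps))  # → ['load_data', 'train', 'evaluate']
```

A real workflow engine adds distribution, retries and resource selection on top of this ordering guarantee.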
Portability is key to avoiding vendor lock-in and promoting collaboration between users. It is also critical to scale: the workflows and algorithms created must be able to run on any infrastructure setup (on-prem, hybrid, multi-cloud, HPC, edge, etc.) and leverage the full compute capacity, from CPU to GPU / TPU / FPGA.
Activeeon solutions include a resource manager that abstracts away the underlying resources and provides this portability. Smart policies can also be configured to trigger auto-scaling based on the actual scheduler queue.
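A queue-based auto-scaling policy of the kind described above can be sketched as a simple sizing rule. The function name, thresholds and jobs-per-node ratio are illustrative assumptions, not the actual ProActive policy API.

```python
# Hedged sketch of a scheduler-queue auto-scaling policy: size the node
# pool from the queue length, clamped to configured bounds.
# All parameters are illustrative assumptions.
def decide_scaling(queued_jobs, jobs_per_node=4, min_nodes=1, max_nodes=32):
    """Return the target node count for the current queue length."""
    needed = -(-queued_jobs // jobs_per_node)   # ceiling division
    return max(min_nodes, min(max_nodes, needed))

print(decide_scaling(queued_jobs=10))    # → 3 (10 jobs / 4 per node)
print(decide_scaling(queued_jobs=1000))  # → 32 (capped at max_nodes)
```

In practice such a policy would also add hysteresis (scale down more slowly than up) to avoid thrashing.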
The machine learning ecosystem is constantly evolving and its open source community is strong. The ability to leverage those contributions is key to staying current with the latest techniques and achieving the best performance. ProActive Machine Learning is open from end to end and supports those needs.
Moreover, some steps of the machine learning process are quite repetitive and can be made generic. ProActive Machine Learning includes a catalog solution that enables sharing, versioning and easy reuse of machine learning models and services (MaaS).
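The catalog idea of sharing and versioning can be sketched as a tiny in-memory registry. The `Catalog` class and its methods are invented for illustration and are not the product's API; it only shows the publish-a-version, fetch-any-revision pattern.

```python
# Illustrative sketch of catalog-style versioning: publish artifacts
# under a name with auto-incremented versions, fetch any revision.
# The Catalog class is an assumption for illustration only.
class Catalog:
    def __init__(self):
        self._store = {}              # name -> list of artifact versions

    def publish(self, name, artifact):
        versions = self._store.setdefault(name, [])
        versions.append(artifact)
        return len(versions)          # version number, starting at 1

    def fetch(self, name, version=None):
        versions = self._store[name]
        return versions[-1] if version is None else versions[version - 1]

catalog = Catalog()
catalog.publish("churn-model", {"accuracy": 0.91})
v = catalog.publish("churn-model", {"accuracy": 0.93})
print(v, catalog.fetch("churn-model")["accuracy"])       # → 2 0.93
print(catalog.fetch("churn-model", version=1)["accuracy"])  # → 0.91
```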
Pipeline & Workflow Orchestration: Edge to Core to Cloud to Edge for AI + HPC
Make deployments and scaling of machine learning models on any infrastructure simple, portable and scalable
Provide a straightforward way to deploy ML systems to diverse infrastructures (local, hybrid, multi-cloud, edge)
Provide a pipeline solution to enable automation within the machine learning lifecycle
HPE and Intel, together with Activeeon and within the framework of HPE GreenLake, allow end users, developers and data scientists to run pure HPC, pure AI and converged HPC/AI workflows, and to extend this environment to seamlessly manage workloads and data across off-premises managed cloud and public cloud systems.