Accelerate the development and deployment of AI models with scalability
Simplify, accelerate & industrialize machine learning with an open platform. Seamlessly execute at any scale in production with any data source, on any infrastructure.
ProActive Machine Learning (PML) from Activeeon empowers data engineers and data scientists with a simple, portable and scalable solution for machine learning pipelines. It provides pre-built and customizable tasks that enable automation within the machine learning lifecycle, which helps data scientists and IT Operations work together. The product fully supports Machine Learning Operations (MLOps).
PML includes Machine Learning Open Studio (MLOS): compose ML workflows with drag & drop
Data connector tasks & workflow templates help automate and scale data ingestion and data preparation pipelines
Scale up parallel model training, validation and testing
AutoML to scale up the model tuning during experiments
Jupyter Kernel & Python connector to create AI workflows from code
AI tasks & workflow templates to automate AI pipelines
Model as a Service (MaaS): deploy and expose AI models in production; scale up model deployment; monitoring, alerting, data drift detection
JupyterLab as a Service: deploy a JupyterLab instance on-demand
Job analytics & visualization as a Service: track and visualize metrics of your machine learning workflow
Managed Services (KNIME, …)
In one click, securely open typical interactive AI services such as JupyterLab, TensorBoard, Visdom, COCO_Annotator, H2O GUI, Knime, etc.
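The data drift detection mentioned under Model as a Service can be illustrated with a minimal sketch. The Population Stability Index approach, the bin count and the 0.2 alert threshold below are illustrative assumptions, not the product's actual algorithm:

```python
import math

def psi(baseline, current, bins=10):
    """Population Stability Index between two numeric samples.

    Values above ~0.2 are commonly treated as significant drift.
    """
    lo = min(min(baseline), min(current))
    hi = max(max(baseline), max(current))
    width = (hi - lo) / bins or 1.0

    def frequencies(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # A small floor avoids log(0) when a bin is empty.
        return [max(c / len(sample), 1e-6) for c in counts]

    b, c = frequencies(baseline), frequencies(current)
    return sum((cb - bb) * math.log(cb / bb) for bb, cb in zip(b, c))

def drift_alert(baseline, current, threshold=0.2):
    """Flag drift when the PSI between training and live data is high."""
    return psi(baseline, current) > threshold
```

In a deployed MaaS endpoint, a check of this kind would run on incoming feature values against the training distribution and feed the monitoring and alerting described above.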
ProActive workflows enable data engineers, data scientists and IT Operations to create and automate pipelines so that results are reliable and consistent across executions. Users can execute entire machine learning pipelines with confidence in any environment, from development to production, and on any infrastructure, from CPU to GPU and FPGA.
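The idea of a workflow as an automated, repeatable pipeline can be sketched as a small dependency graph executed in topological order. The executor and the three task names below are illustrative, not ProActive's actual engine:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

def run_pipeline(tasks, dependencies):
    """Run callables in an order that respects their dependencies.

    tasks: mapping of task name -> callable
    dependencies: mapping of task name -> set of prerequisite names
    """
    order = list(TopologicalSorter(dependencies).static_order())
    results = {}
    for name in order:
        results[name] = tasks[name]()
    return order, results

# Illustrative three-step ML pipeline: ingest -> prepare -> train.
tasks = {
    "ingest": lambda: [3, 1, 2],
    "prepare": lambda: "prepared",
    "train": lambda: "model",
}
deps = {"prepare": {"ingest"}, "train": {"prepare"}}
order, results = run_pipeline(tasks, deps)
```

Because the ordering is derived from declared dependencies rather than hard-coded, the same pipeline definition reruns identically wherever it is executed, which is the consistency property the paragraph above describes.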
Portability is key to avoiding vendor lock-in and promoting collaboration between users. It is also critical to scale: the workflows and algorithms created need to run on any infrastructure setup (on-prem, hybrid, multi-cloud, HPC, edge, etc.) and leverage the full compute capacity, from CPU to GPU / TPU / FPGA.
Activeeon solutions include a resource manager that abstracts away the underlying resources and provides this portability. Smart policies can also be configured to trigger auto-scaling based on the actual scheduler queue.
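A queue-driven auto-scaling policy of the kind described here can be sketched as follows; the tasks-per-node ratio and the node bounds are illustrative parameters, not Activeeon's actual policy:

```python
def desired_nodes(pending_tasks, tasks_per_node=4, min_nodes=1, max_nodes=32):
    """Target node count for a simple queue-driven scaling policy.

    Scales out when the pending queue exceeds current capacity and
    scales in when the queue drains, clamped to [min_nodes, max_nodes].
    """
    needed = -(-pending_tasks // tasks_per_node)  # ceiling division
    return max(min_nodes, min(max_nodes, needed))
```

A policy like this would be evaluated periodically against the scheduler queue, acquiring or releasing nodes from the resource manager to match the target.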
The machine learning ecosystem is constantly evolving and its open-source community is strong. The ability to leverage those contributions is key to staying current with techniques and achieving the best performance. ProActive Machine Learning is open from end to end and supports those needs.
Moreover, some steps of the machine learning process are quite repetitive and can be made generic. ProActive Machine Learning includes a catalog that enables sharing, versioning and easy reuse of machine learning models and services (MaaS).
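Sharing and versioning through a catalog can be illustrated with a minimal in-memory sketch; the class and method names below are hypothetical and are not ProActive's Catalog API:

```python
from collections import defaultdict

class Catalog:
    """Minimal versioned store: each publish appends a new revision."""

    def __init__(self):
        self._objects = defaultdict(list)

    def publish(self, name, payload):
        """Store a new revision and return its 1-based version number."""
        self._objects[name].append(payload)
        return len(self._objects[name])

    def fetch(self, name, version=None):
        """Return a specific revision, or the latest when none is given."""
        revisions = self._objects[name]
        return revisions[-1] if version is None else revisions[version - 1]

catalog = Catalog()
catalog.publish("fraud-model", {"accuracy": 0.91})
v2 = catalog.publish("fraud-model", {"accuracy": 0.94})
```

Keeping every revision addressable by version is what lets teams reproduce an earlier experiment or roll a deployed MaaS model back, rather than overwriting the latest artifact in place.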
Pipeline & Workflow Orchestration: Edge to Core to Cloud to Edge for AI + HPC
Make deployments and scaling of machine learning models on any infrastructure simple, portable and scalable
Provide a straightforward way to deploy ML systems to diverse infrastructures (local, hybrid, multi-cloud, edge)
Provide a pipeline solution to enable automation within the machine learning lifecycle
HPE and Intel, together with Activeeon and within the framework of HPE GreenLake, allow end users, developers and data scientists to run pure HPC, pure AI and converged HPC/AI workflows, and to extend this environment to seamlessly manage workloads and data across off-premises managed cloud and public cloud systems.