A Jupyter Kernel to rule them all
May 16, 2019 from Activeeon
Writing a machine learning algorithm is just one step towards providing value to the business. It is actually the step that requires the most fine-tuning and expertise. Nevertheless, the experts responsible for developing algorithms, data scientists, are burdened with time-consuming and repetitive tasks such as preparing and cleaning the data before developing the algorithm, or deploying the training algorithm on a large-scale infrastructure.
In recent years, there has been a lot of innovation to ease the deployment of their algorithms into various environments. This article describes Activeeon’s approach to deployment.
If you study the most common tools used in the field, you will quickly see that Jupyter Notebook is the standard. It can be run directly on your laptop or on a remote server, enables fast iteration, and more. From this environment, at Activeeon, we wanted to offer data scientists the ability to distribute any algorithm and easily access more powerful machines. The traditional approach is to use APIs or SDKs, which require you to edit your original code to get started.
In a world where agile development is the new standard, SDKs reduce data scientists’ flexibility and force them to write custom code. An algorithm developed with an SDK will only be able to run with that SDK.
At Activeeon, we decided to go one step further and get closer to agile principles. We have, of course, developed our Python SDK, but we went on to develop a Jupyter Kernel as well. It is now possible to use pragmas. In a few words, these are Python comments that are interpreted by our Activeeon kernel to perform the relevant requests to our SDK.
Now, you may wonder what this actually means for data scientists and what the benefits are. Below are a few concepts we applied.
Pragmas can be interpreted by the Jupyter Kernel. As mentioned above, this means that when the Activeeon kernel is selected, the pragmas are read to call the relevant Python SDK functions. It also means that when a standard kernel is selected, the code runs seamlessly on your local computer and the pragmas are ignored (just as ordinary comments are).
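To make this concrete, here is a sketch of what a pragma-annotated notebook cell might look like. The `#%task(...)` syntax and pragma name are illustrative assumptions based on the pragmas mentioned in this article, not a verified transcription of the kernel's exact API; check the "help" pragma for the real syntax. Because a pragma is a Python comment, a standard kernel runs the cell unchanged:

```python
# Hypothetical pragma-annotated cell. Under the Activeeon kernel the
# pragma below would turn this cell into a named task; under a standard
# Python kernel it is an ordinary comment and is simply ignored.
#%task(name="train")

from statistics import mean

data = [1.0, 2.0, 3.0, 4.0]
model_mean = mean(data)  # stand-in for a real training step
print(model_mean)
```

The same file therefore needs no modification to move between a local run and a remote submission.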
In seconds, you can now switch from executing your code on your local machine to executing it on a remote server. Develop the concepts of your training algorithm locally with a sample of the data. Then, when you are ready, scale up to training on the complete dataset with access to an elastic resource pool of more powerful and specialized machines.
Implementing a Jupyter Kernel also brings additional UX challenges. The code that runs on a local machine needs to also run on the remote server with minimal effort.
When you get started with the new Activeeon Kernel, the only actions required are to add a line in the first block to connect to the remote server and a line at the end to submit the code. In seconds, you can then run your algorithm on more powerful machines. If you want to go further, you can add dependencies on named tasks/blocks to create a more relevant structure with controlled parallelism.
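A minimal notebook following this structure might look like the sketch below. The `#%connect(...)` and `#%submit_job()` pragma names and parameters are assumptions for illustration; the body is plain Python, which is why it still runs locally under any kernel:

```python
# Sketch of the minimal structure described above: connect in the first
# block, submit in the last (pragma syntax is illustrative, not the
# kernel's verified API).

#%connect(url="https://try.activeeon.com")   # first block: connect to the server

def preprocess(raw):
    # toy preprocessing step: scale values into [0, 1]
    return [x / max(raw) for x in raw]

features = preprocess([2, 4, 8])

#%submit_job()   # last block: send the accumulated tasks to the scheduler
print(features)
```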
Note: if you want to learn more, we also implemented a “help” pragma that lists all the available pragmas.
Like any developer, data scientists want to debug fast and identify errors quickly. The ability to change kernels is essential here. First, run your code locally to surface any errors in your algorithm. Once it works, change the kernel and execute remotely.
With that principle, you can differentiate algorithm errors from platform ones.
The solution also supports Docker containers, so you can create environments that suit your execution. No need to worry about the underlying server anymore!
We talked above about why we took this approach, but you may be interested in learning more about some of the features provided by this kernel.
By default, each block is what we call a task within ProActive. All tasks will run in parallel if nothing else is specified.
With the dep option of the task pragma, we give data scientists a way to structure their execution. They can create dependencies between named tasks; the remote scheduler server interprets those and optimizes execution time by parallelizing the workload whenever possible.
Before submitting, you can visualize the dependency graph with the draw_job pragma.
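For illustration, two cells chained with the dep option might look like this. The exact `#%task(...)` and `#%draw_job()` syntax is an assumption based on the pragma names above; locally the cells simply run in order, while the scheduler would honour the declared ordering and parallelize independent tasks:

```python
# Two named tasks with a dependency between them (illustrative syntax).
# The remote scheduler would run independent tasks in parallel and
# respect the "dep" ordering; a standard kernel just runs the code.

#%task(name="load")
raw = [3, 1, 2]

#%task(name="sort", dep=["load"])
ordered = sorted(raw)

#%draw_job()   # visualize the dependency graph before submitting
print(ordered)
```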
Since we are in a distributed environment, variable and file transfers can be a challenge. The task options import and export tell the SDK to save those variables in the workflow scope. This ensures variables can be used later within any dependent task.
Watch out: if tasks do not depend on each other, it will not be possible to retrieve the variables previously set.
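A sketch of passing a variable between dependent tasks with these options follows. The `export=`/`import=` spellings inside the task pragma are assumptions drawn from the description above, not a verified API; the dependency is what makes the variable retrievable in the second task:

```python
# Illustrative use of the "export" and "import" task options to move a
# variable through the workflow scope between two dependent tasks
# (option spellings are assumptions, not a verified API).

#%task(name="producer", export=["threshold"])
threshold = 0.8

#%task(name="consumer", dep=["producer"], import=["threshold"])
scores = [0.5, 0.9, 0.7]
selected = [s for s in scores if s >= threshold]
print(selected)
```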
When you launch an execution on a remote server, you naturally want to visualize its progress. Don’t worry, we thought about it: we present a direct link to the scheduler portal that manages the execution, with a progress bar until completion.
Once completed, you’ll receive a summary of the execution with the overall processing time, the total execution time of all the tasks, the number of errors, etc.
As you may know, building machine learning models requires regular visualization. This is particularly useful for understanding and analyzing the incoming data or the actual results of your algorithm.
In addition to all the above features, ProActive Machine Learning from Activeeon includes templates for standard use cases. For instance, one template performs automated machine learning at scale; another can be used to ease the deployment of a model within a container.
Finally, the ProActive Machine Learning solution includes a catalog system to store and reuse code. A pragma is available to consult the catalog, commit changes and more.
In conclusion, data scientists can quickly leverage the Jupyter Kernel offered by Activeeon to run their code locally and in the cloud in no more than a few clicks. They then benefit from quick access to larger, more specialized infrastructure and from large-scale parallelization mechanisms.
Try it on our try.activeeon.com platform; it is an open-source kernel.
If you are interested in how Activeeon handles elasticity, do not hesitate to check the video about CNES, which presents this specific feature.
With the contribution of our team: Andrews Sobral, Mohamed Khalil Labidi