Public Deliverables

D3.3 Final Version of AI-ready MLSysOps Framework (AVAILABLE)

Deliverable D3.3 represents the third and final iteration of the advanced mechanisms developed within the MLSysOps project. It consolidates the enhancements and comprehensive development of mechanisms for efficient resource allocation, adaptive configuration, and autonomous management within the heterogeneous cloud-edge continuum. D3.3 reflects the maturity reached by the MLSysOps AI-ready framework, highlighting the final advancements in telemetry infrastructure, computational resource management, adaptive storage distribution, and networking, along with seamless deployment and orchestration across diverse computing infrastructures.

A first integrated version of the MLSysOps framework, including core components, supporting libraries, and mechanisms described throughout this deliverable, is publicly released via the project’s official open-source repository. This repository includes detailed documentation, tutorials, and usage examples to support adoption and experimentation by the community. Note that slight differences may exist between the implementations described in this deliverable and those published in the repository, as the framework is subject to continuous refinement and enhancement. Therefore, the open-source repository should be considered the primary reference for the most up-to-date and operational version of the MLSysOps framework.

The deliverable can be found here: [D3.3.pdf]

Note: This deliverable has not yet been approved by the EC and may undergo some modifications at a later stage.

D4.3 Final Version of System Simulators (AVAILABLE)

This deliverable describes the different simulators and simulator extensions that have been developed in the MLSysOps project. These simulation tools can be used, independently of the MLSysOps project, for various purposes. On the one hand, they enable the production of realistic system traces for scenarios that are impossible or impractical to reproduce on research or real-world testbeds. On the other hand, they can be used to evaluate different system/application management policies, including ML-based approaches. In addition, some simulation environments allow developers to test real software using virtual instead of physical nodes, which is particularly useful for mobile computing scenarios with smart vehicles.

The deliverable can be found here: [D4.3.pdf]

It is also available on the MLSysOps Zenodo community: [Zenodo]

Short descriptions of the simulators with pointers to the repos (containing the software and documentation) can be found under SW, DATA & MODELS > SIMULATORS.

D4.4 Final Version of AI Architecture and ML Models (AVAILABLE)

The deliverable describes the MLSysOps machine learning (ML) framework. It comprises several key components, including dynamic storage management, trust management, anomaly detection, application placement and management, and integration with 5G. Each component is decoupled from the others and plays a crucial role in enhancing the management and optimization of software systems. This design ensures interoperability and avoids vendor lock-in. At the core of this design are the MLSysOps agents. These specialized agents interact with the various system components and the MLConnector APIs to integrate ML models for decision-making and to support model monitoring and retraining.

The deliverable highlights ML integration across multiple system layers. It introduces trust management through online anomaly detection, a hybrid physical-layer authentication scheme, and ML-driven approaches for monitoring 5G edge networks. Reinforcement learning is applied to optimize traffic routing by balancing latency, bandwidth, and load, while storage redundancy and distribution are dynamically adjusted to reduce costs and meet performance goals. It also covers ML-enabled mechanisms for application placement, management, and reconfiguration, along with processes for model training, monitoring, retraining, and integration through the MLConnector API.

The deliverable can be found here: [D4.4.pdf]

Note: This deliverable has not yet been approved by the EC and may undergo some modifications at a later stage.

D5.3 Final Integration and Evaluation Report (AVAILABLE)

This deliverable documents the integration and evaluation of the MLSysOps framework. It briefly details the framework’s core functionalities, current validation status, and the technical refinements implemented to address integration challenges. A seven-step, use-case-independent onboarding methodology is presented; this systematic approach streamlines the integration of applications into the MLSysOps framework and has been successfully validated across the project’s use cases.

The framework’s evaluation is showcased in two distinct scenarios: Smart City and Smart Agriculture. In the Smart City use case, the framework optimizes the balance between energy efficiency and detection accuracy by using acoustic sensors to trigger image-based traffic incident detection. In the Smart Agriculture scenario, it enhances weed detection by dynamically deploying a drone when its input is expected to significantly improve accuracy. Finally, the deliverable assesses the various functionalities developed in the project against the corresponding Requirement Group (RG) and Project-level (P) KPIs.

The deliverable can be found here: [D5.3.pdf]

D6.4 MLSysOps Open Datasets (AVAILABLE)

This deliverable reports the final status of the datasets and machine learning (ML) models made publicly available within the MLSysOps project, in accordance with FAIR principles and the project’s open science strategy. To ensure long-term accessibility and scientific transparency, Zenodo was selected as the primary platform to host the MLSysOps Community, serving as a persistent repository for sharing datasets, ML models, and technical reports.

The deliverable provides a comprehensive index and technical description of nineteen distinct assets:

  • Eleven Machine Learning Models: These models, produced through collaborative efforts between partners, are primarily provided in the ONNX (Open Neural Network Exchange) format to ensure cross-platform interoperability and ease of deployment.
  • Eight Public Datasets: These resources comprise raw and processed data collected from diverse sources, including real-world IoT testbeds, 5G signal monitoring, and high-fidelity system simulators.

By centralizing these resources, Deliverable D6.4 establishes a foundation for future research in autonomic system management across the cloud-edge continuum. The availability of these assets directly supports the project’s goal of fostering an open research ecosystem for AI-driven infrastructure management.

The deliverable can be found here: [D6.4.pdf]

Short descriptions of the ML models and datasets, with pointers to the corresponding repos on Zenodo, can be found under SW, DATA & MODELS > DATA SETS & ML MODELS.
