Cloud‑ and Edge‑Deployed Machine Learning Network Function (xApp)
This project explored the design, deployment, and evaluation of a machine‑learning‑based virtual network function (VNF) implemented as a lightweight, containerised xApp. The goal was to assess the feasibility and performance of running ML‑driven traffic classification in resource‑constrained cloud and edge environments, rather than relying on heavyweight telecom network functions.
A custom xApp was designed and implemented instead of deploying a full 5G core, allowing the system to operate within strict resource limits (1 CPU, 2 GB RAM) while still addressing realistic telecom‑relevant use cases. The xApp integrates a Random Forest classifier to label network flows as benign or malicious, using NFStream for passive flow monitoring. While intentionally simple, the model enabled meaningful evaluation of inference latency, throughput, and system bottlenecks.
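The per-flow classification step can be sketched as follows. This is a minimal, self-contained illustration, not the project's actual code: the feature names, the `classify_flow` helper, and the threshold stub are hypothetical stand-ins (in the deployed xApp, NFStream supplies the flow records and a trained scikit-learn Random Forest performs the inference).

```python
from dataclasses import dataclass

# Hypothetical per-flow features; the real xApp derives similar values
# from NFStream flow records (packet counts, byte counts, durations, ...).
@dataclass
class FlowFeatures:
    packets: int
    bytes: int
    duration_ms: float
    mean_pkt_size: float

def classify_flow(features: FlowFeatures, model=None) -> str:
    """Label a flow as 'benign' or 'malicious'.

    In the deployed xApp, `model` would be the trained Random Forest;
    the threshold stub below is a stand-in so this sketch runs anywhere.
    """
    if model is not None:
        # A scikit-learn model's predict() expects a 2-D feature array.
        row = [[features.packets, features.bytes,
                features.duration_ms, features.mean_pkt_size]]
        return "malicious" if model.predict(row)[0] else "benign"
    # Stub rule: many unusually small packets at a high rate look suspicious.
    pps = features.packets / max(features.duration_ms / 1000.0, 1e-6)
    return "malicious" if (features.mean_pkt_size < 80 and pps > 1000) else "benign"

# Example flows: a bulk transfer and a scan-like burst of tiny packets.
bulk = FlowFeatures(packets=1200, bytes=1_500_000, duration_ms=2000, mean_pkt_size=1250.0)
scan = FlowFeatures(packets=5000, bytes=300_000, duration_ms=1000, mean_pkt_size=60.0)
print(classify_flow(bulk))  # benign under the stub rule
print(classify_flow(scan))  # malicious under the stub rule
```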
The xApp was packaged as a Docker container and deployed in two environments:
- a cloud‑style Kubernetes setup using Minikube on Ubuntu
- an edge‑oriented deployment using K3s, a lightweight Kubernetes distribution.

Both environments used the same Kubernetes manifests to ensure comparability. System rebuild scripts were created to allow clean redeployment between experiments.
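An illustrative excerpt of such a manifest is shown below. The deployment name and image are placeholders, not the project's actual identifiers; the point is the resource `limits` stanza, which enforces the 1 CPU / 2 GB cap identically in both environments.

```yaml
# Illustrative Deployment excerpt (names and image are placeholders);
# the same manifest applies unchanged under Minikube and K3s.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xapp-classifier
spec:
  replicas: 1
  selector:
    matchLabels:
      app: xapp-classifier
  template:
    metadata:
      labels:
        app: xapp-classifier
    spec:
      containers:
        - name: xapp
          image: example/xapp-classifier:latest   # placeholder image
          resources:
            limits:
              cpu: "1"        # the 1-CPU cap described above
              memory: "2Gi"   # the 2 GB cap described above
```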
To evaluate performance, Prometheus and Grafana were used to monitor:
- per‑flow inference latency
- flow processing rate
- container CPU utilisation
- container memory usage.
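The latency measurement behind the first metric can be sketched with the standard library alone. The `LatencyTracker` class below is a hypothetical stand-in: in the deployed xApp these timings would instead be exported as a Prometheus histogram and visualised in Grafana.

```python
import time
from statistics import quantiles

class LatencyTracker:
    """Collects per-call inference latencies in memory.

    A stdlib stand-in for the Prometheus histogram the xApp exports;
    only the timing logic is the same.
    """

    def __init__(self):
        self.samples_ms = []

    def time_inference(self, infer, *args):
        # Wrap a single inference call and record its wall-clock latency.
        t0 = time.perf_counter()
        result = infer(*args)
        self.samples_ms.append((time.perf_counter() - t0) * 1000.0)
        return result

    def percentile(self, p: int) -> float:
        # quantiles(..., n=100) returns the 1st..99th percentile cut points.
        return quantiles(self.samples_ms, n=100)[p - 1]

tracker = LatencyTracker()
for _ in range(200):
    # Dummy workload standing in for one model inference.
    tracker.time_inference(lambda: sum(i * i for i in range(1000)))
print(f"p50={tracker.percentile(50):.3f} ms, p99={tracker.percentile(99):.3f} ms")
```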
Network traffic was generated by replaying recorded PCAP traces with tcpreplay, enabling controlled load testing at traffic rates ranging from 10 Mbps to 500 Mbps. Each experiment was run under sustained load with a full cluster rebuild between tests to ensure reproducibility.
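A driver for such a sweep could look like the sketch below. The rate list matches the 10-500 Mbps range described above, but the rebuild script name, interface, and PCAP path are placeholders; commands are built as strings here rather than executed.

```python
# Sketch of a load-test driver: one tcpreplay run per target rate,
# with a cluster rebuild in between (paths/names are placeholders).
RATES_MBPS = [10, 50, 100, 250, 500]  # example sweep within the 10-500 Mbps range

def tcpreplay_cmd(rate_mbps: int, iface: str = "eth0",
                  pcap: str = "traffic.pcap") -> list[str]:
    # tcpreplay's --mbps flag caps the replay rate at the given value.
    return ["tcpreplay", f"--mbps={rate_mbps}", "--intf1", iface, pcap]

def experiment_plan():
    for rate in RATES_MBPS:
        yield ["./rebuild_cluster.sh"]  # hypothetical rebuild script
        yield tcpreplay_cmd(rate)

for cmd in experiment_plan():
    print(" ".join(cmd))
```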
Results showed broadly similar performance between cloud and edge deployments when resource limits were equivalent, confirming that lightweight ML‑based VNFs can be deployed effectively at the edge. Inference latency increased with traffic load, with CPU emerging as the primary bottleneck, while memory usage remained low throughout. These findings align with existing research indicating CPU‑bound behaviour in lightweight VNFs.
While the implementation is intentionally limited in scope (binary classification, single‑threaded inference), the project demonstrates:
- practical ML deployment under tight resource constraints
- end‑to‑end containerisation and orchestration
- performance instrumentation and monitoring
- realistic evaluation of ML inference trade‑offs in distributed systems.
The project provides a solid foundation for extending toward more complex models, multi‑threaded inference, and action‑triggering network functions in real‑world edge and telecom scenarios.
Code & artefacts
Source code for the xApp, deployment scripts, and experiments is available on GitHub.
A pre‑built Docker image of the containerised xApp is published on Docker Hub.