The success and business case for NFV largely depend on how it is architected and further matured to strengthen 5G for telecom service providers. 5G VNFs need to host real-time applications with low-latency requirements to fulfil network data, control and signalling needs.
Cloud-native practices have been pioneered by companies such as Netflix, Twitter, Alibaba and Facebook. These practices include containers and microservices, which help achieve scalability, faster introduction of new functionality and automation.
Containers are a form of virtualization that encapsulates an application's dependencies, required libraries and configuration into a package that is isolated from other containers on the same operating system. As a move towards cloud native, VNF microservices can be deployed in containers, enabling the continuous delivery/deployment of large, complex applications.
However, containers are still immature compared to virtual machines, and they carry security risks of their own. All containers on a host share a single OS kernel, so any breach of that kernel compromises every container that depends on it. Isolating a fault is also harder with containers, since a fault can propagate to other containers.
Service providers who want to use containers in an NFV environment may therefore face challenges. Containers can, however, still be used in a Multi-access Edge Computing (MEC) environment, which is expected to co-exist with NFV in 5G. MEC implies deploying a User Plane Function (UPF) at the edge of the network, closer to the user's application, in order to provide very low latency for use cases like V2X (Vehicle-to-Everything), Augmented Reality or Virtual Reality.
Containers can also be used alongside virtual machines in an NFV environment. VNFs can be deployed as virtual machines only, as containers only, in hybrid mode where containers run inside virtual machines to gain their security and isolation features, or in heterogeneous mode where some VNFs run in VMs, some in containers, and some in a mix of both.
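As a rough illustration of the heterogeneous mode described above, here is a minimal Python sketch of per-VNF runtime selection. The class, function and policy rules are illustrative assumptions for this post, not a real orchestrator API: the idea is simply that isolation-sensitive workloads land in VMs, latency-sensitive ones in containers, and workloads needing both get containers inside VMs.

```python
# Hypothetical sketch: choosing a runtime per VNF in a heterogeneous
# NFV deployment. Names and policy rules are illustrative assumptions,
# not a real orchestrator API.
from dataclasses import dataclass


@dataclass
class Vnf:
    name: str
    needs_strong_isolation: bool  # e.g. multi-tenant or untrusted workload
    latency_sensitive: bool       # e.g. a UPF at the MEC edge


def choose_runtime(vnf: Vnf) -> str:
    """Pick a runtime: 'vm', 'container', or 'container-in-vm' (hybrid)."""
    if vnf.needs_strong_isolation and vnf.latency_sensitive:
        # Hybrid: container packaging plus VM-level isolation.
        return "container-in-vm"
    if vnf.needs_strong_isolation:
        return "vm"
    if vnf.latency_sensitive:
        # Containers start quickly and suit edge scaling.
        return "container"
    return "vm"  # conservative default


if __name__ == "__main__":
    vnfs = [
        Vnf("upf-edge", needs_strong_isolation=False, latency_sensitive=True),
        Vnf("amf-core", needs_strong_isolation=True, latency_sensitive=False),
    ]
    for v in vnfs:
        print(v.name, "->", choose_runtime(v))
```

Running the sketch prints `upf-edge -> container` and `amf-core -> vm`; a real orchestrator would of course weigh many more factors (NUMA placement, accelerator access, operator policy) than these two flags.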
Let’s see what happens.
Until next time,
The Apis Team