Welcome back to Apis TechTips, a series of short excerpts from Apis Training’s diverse telecom courses.
This episode is about virtualization from a bird's-eye perspective and comes from the course Network Slicing in an Hour.
If you liked this Apis TechTip, check out the complete course Network Slicing in an Hour. The course provides you with a condensed view of the 5G network slicing feature as defined by 3GPP. It presents operational benefits and explains the technology needed to actualize network slicing in 5G.
Network Slicing in an Hour covers topics such as:
- Network Slicing: Overall Perspective
- Technological Aspects of Network Slicing
- Business Aspects of Network Slicing
Learn more about the full course here: https://apistraining.com/portfolio/network-slicing-in-an-hour/
This TechTip is also part of a whole eBook of tips, all focusing on Cloud technology. We call it an eBook+ since all chapters are both text and video. If you want to read the text, you can do that, and if you want to watch a teacher tell the story, you can choose that.
All the video chapters are excerpts taken directly from our recorded lessons, so if one of them piques your interest, you can easily go to the course and dive deeper into that particular subject.
This particular eBook+ is called “Cloud Chronicles: A Journey into a Virtualized and Software-Defined World”, and you only need to CLICK HERE to request it for immediate download.
Below you can find the transcribed text for this particular TechTip.
Intro to Virtualization
In this chapter, I just want to cover the things that I find most relevant. Virtualization is a topic in its own right, of course. This picture, on the left, presents what is called the traditional approach: one machine that does one thing.
So within the red area, I have dedicated hardware with specialized software running on it, providing certain functionalities and capabilities. And you can see three of these machines over here. This is the way things were for a very, very long time, and it is a good solution in many respects. So what is bad about the traditional approach? Because that is the explanation for why we moved away from it. The answer is: the traditional approach is expensive.
It’s expensive because buying dedicated hardware and software is expensive, but also because it means a long time to market and long deployment times. Establishing a new MME (a 4G signalling node) within the evolved packet core, the 4G core network, is a project of one and a half months if we do it in this traditional way. We also have a lack of scalability and a lack of flexibility. So basically, the traditional approach is fantastic if we can afford it. It’s the best possible solution, because you have dedicated resources and everything is beautiful, but it is expensive.
So in order to save money, and economizing is what the operators really need to do nowadays, we started with the idea of sharing the hardware resources. The first really big method was using virtual machines, which can be seen in the middle part of the picture.
So this application in the middle runs on a virtual machine and has a certain functionality. The red shape is one virtual machine that has its own operating system. And there are a number of them running on the same hardware, with a hypervisor in between. The hypervisor ensures that we have multiplexing and resource allocation that fits all three of these virtual machines, which are, by the way, unaware of the fact that anybody else exists on that hardware. That’s what the hypervisor does: it hides the fact that there are other users.
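To make this concrete, here is a minimal sketch of what running two such isolated virtual machines on one host might look like with QEMU/KVM, one common open-source hypervisor. The image file names and resource sizes are illustrative assumptions, not anything from the course itself.

```shell
# Launch two isolated guests on the same physical host with QEMU/KVM.
# The hypervisor carves out CPU and memory for each guest; neither
# guest is aware that the other exists. (Disk image paths are
# illustrative placeholders.)
qemu-system-x86_64 -enable-kvm -m 4096 -smp 2 \
    -drive file=guest-a.qcow2,format=qcow2 -daemonize

qemu-system-x86_64 -enable-kvm -m 2048 -smp 1 \
    -drive file=guest-b.qcow2,format=qcow2 -daemonize
```

Each guest boots its own full operating system from its own disk image, which is exactly why virtual machines are heavier than the container approach described next.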
And over time, looking at the rightmost part of the image, you can see that the actual functionality gets smaller and smaller. Smaller building blocks are easier to design, and they can be put out onto the market much quicker. These are containers, which share an operating system, so they are smaller than the virtual machine solution. There is a container management tool (e.g. a Docker engine) that basically docks them into the operating system. And then the hardware below is used by all of them.
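As a sketch of the container idea, a Dockerfile packages only the application and its dependencies, while the host kernel is shared by every container on the machine. The image name, base image, and application file below are illustrative assumptions.

```shell
# A minimal container sketch (image and file names are illustrative).
# The container carries the app and its libraries, not a whole OS.
cat > Dockerfile <<'EOF'
FROM alpine:3.19
RUN apk add --no-cache python3
COPY app.py /app.py
CMD ["python3", "/app.py"]
EOF

docker build -t my-function .   # the engine builds the small image
docker run --rm my-function     # and "docks" it onto the host OS kernel
```

Because no guest operating system has to boot, a container like this starts in seconds, which is a big part of the shorter time to market mentioned above.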
There are also microservices, which are even smaller building blocks, but without going any deeper into the details, I simply wanted to say that virtualization as such gives us huge benefits. It’s a much more cost-efficient way of designing the network. It has its drawbacks, of course, stemming from the fact that we are sharing physical resources, but it’s often worth it.
With virtualization, we get scalability, automated management, and orchestration. Lower power consumption is one of the main arguments for virtualization. We get isolation. One thing that also comes with virtualization is the fact that a process that is running on a particular piece of hardware can be moved to another piece of hardware without interrupting the process. This is called live migration and is probably going to be quite useful for us in the mobile networks to follow the mobility of the customers across the virtualized networks. These are also the underlying benefits of virtualization, which we use for network-slicing solutions.
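As an illustration of the live migration mentioned above, this is roughly what moving a running virtual machine between hosts looks like with libvirt, one common VM management tool. The domain name and destination host are illustrative assumptions.

```shell
# Live-migrate the running VM "vm1" from this host to "host2" via
# libvirt (names are illustrative). The guest keeps executing while
# its memory pages are copied across; execution then switches to the
# destination with only a brief pause, so the process is not interrupted.
virsh migrate --live vm1 qemu+ssh://host2/system
```

It is this kind of uninterrupted hand-over between physical machines that could let a virtualized mobile network follow its customers as they move.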