While virtualization and containerization are different approaches to improving the utilization and flexibility of cloud computing resources, we include network virtualization in this topic because it is the key enabler that allows hybrid clouds to deliver the workload mobility every modern CIO expects.
Simply put, virtualization is the process of creating a virtual, rather than physical, version of something. It can apply to computers, operating systems, storage devices, applications, or networks; however, server virtualization is at the heart of it.
IT organizations are challenged by the limitations of today’s x86 servers, which are designed to run just one operating system and application at a time. As a result, even small data centers have to deploy many servers, each operating at just 5 to 15 percent of capacity—highly inefficient by any standard.
Virtualization uses software to simulate hardware and create a virtual computer system. This allows businesses to run more than one virtual system, with multiple operating systems and applications, on a single server, providing economies of scale and greater efficiency.
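As a back-of-the-envelope illustration of the consolidation figures above, the sketch below estimates how many underutilized physical servers could share one virtualized host. The 70 percent target utilization is an assumption chosen for the example, not a recommendation:

```python
# Illustrative consolidation arithmetic based on the 5-15% utilization
# figure cited above. All numbers are hypothetical assumptions.

def consolidation_ratio(avg_pct: int, target_pct: int = 70) -> int:
    """How many servers averaging avg_pct CPU utilization could be
    consolidated onto one host driven to target_pct utilization
    (ignoring hypervisor overhead and peak-load headroom)."""
    return target_pct // avg_pct

# A server averaging 10% CPU use leaves ~90% of its capacity idle;
# a virtualized host run at a 70% target can absorb several such workloads.
print(consolidation_ratio(10))  # 7 legacy servers -> 1 virtualized host
print(consolidation_ratio(5))   # 14 legacy servers -> 1 virtualized host
```

In practice a capacity planner would also account for memory, I/O, and peak concurrency, but the CPU figure alone shows why 5 to 15 percent utilization is considered wasteful.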
Containerization allows virtual instances to share a single host operating system and its relevant binaries, libraries, or drivers. This approach reduces wasted resources because each container holds only the application and its related binaries or libraries. Containers reuse the same host operating system (OS), instead of installing (and paying to license) an OS for each guest VM. This is often referred to as operating system-level virtualization. The role of a hypervisor is instead handled by a containerization engine, which installs atop the host operating system.
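To make this concrete, a container image only needs to carry the application and its user-space dependencies; the kernel comes from the shared host. A minimal, hypothetical Dockerfile for a small Python service might look like this (image name and file names are assumptions for the example):

```dockerfile
# The base image supplies only user-space libraries;
# the kernel is the host's.
FROM python:3.12-slim

# Only the application and its direct dependencies go into the image --
# no bootloader, no kernel, no per-guest OS license.
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .

# The containerization engine starts this process directly on the
# shared host kernel; there is no hypervisor in the path.
CMD ["python", "app.py"]
```

The resulting image is typically tens of megabytes rather than the gigabytes a full guest OS would require, which is the overhead saving described above.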
Network virtualization completely decouples network resources from the underlying hardware. All networking components and functions are faithfully replicated in software. Virtualization principles are applied to the physical network infrastructure to create a flexible pool of transport capacity that can be allocated, used, and repurposed on demand. This means that once the initial physical network underlay is in operation, you no longer need to touch any physical networking device.
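To make "replicated in software" concrete: overlay protocols such as VXLAN (RFC 7348) carry virtual Layer 2 segments over the existing physical IP underlay, which is why the physical devices need no per-workload changes. The sketch below builds the 8-byte VXLAN header in Python purely to illustrate the encapsulation format; the VNI value is arbitrary:

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header defined in RFC 7348.

    Layout: flags (1 byte, 0x08 = 'VNI present'), 3 reserved bytes,
    24-bit VXLAN Network Identifier (VNI), 1 reserved byte.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit value")
    return struct.pack("!B3s3sB", 0x08, b"\x00" * 3, vni.to_bytes(3, "big"), 0)

# Each tenant segment gets its own VNI; the inner Ethernet frame is
# appended after this header, and the whole thing travels in an
# ordinary UDP datagram across the unchanged physical underlay.
hdr = vxlan_header(100)
print(hdr.hex())  # 0800000000006400
```

Production deployments delegate this encapsulation to the hypervisor's virtual switch or to NIC hardware offload; the point here is only that the tenant network exists entirely in software on top of a stable physical fabric.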
This set of technologies is established at the core to move forward into the implementation of private and hybrid cloud environments.
Once an organization has successfully moved into a virtualized environment, the company is well prepared to embark on projects such as:
The main benefits of each concept are:
Since each application’s container is free of OS overhead:
Our experience will be very valuable in various aspects, from design to implementation:
Among other tools and technologies, our team uses the following toolset for each implementation type:
Cloud computing is not the same thing as virtualization; rather, it’s something you can do using virtualization. Cloud computing describes the delivery of shared computing resources (software and/or data) on demand through the Internet. Whether or not you are in the cloud, you can start by virtualizing your servers and then move to cloud computing for even more agility and increased self-service.
Definitely not. We have certified Red Hat Enterprise Virtualization engineers using Linux KVM (RHEV-H) as the underlying hypervisor of RHEV. We’ve had outstanding results for customers in a variety of industries.
No, the concept of cloud is geared toward a model of consumption of compute, storage, and networking resources. This consumption is generally self-service and on-demand. By contrast, to implement a DR/BC strategy, two virtualized data centers are usually established and interconnected, generally called the “Main Site” and the “Disaster Recovery Site”.