Virtualization is a technique that creates virtual versions of IT datacenter components such as compute, storage, and network. The hypervisor is one of the most commonly used virtualization technologies for building virtualized IT infrastructures. It is a key technology and the main driver of cloud-based infrastructure. Hypervisors are widely used by cloud service providers to virtualize traditional hardware IT infrastructure and offer it as one or more cloud services.
In the previous post we learned the fundamentals of Cloud Computing, its models, and its services. In this post we will cover the following topics about the Hypervisor – the core concept of the Cloud:
- What is a Hypervisor
- Physical components that create a Datacenter Infrastructure
- Virtual components that create a Cloud Infrastructure
- How to create a Virtualized Cloud Infrastructure
- Benefits of virtualization in a cloud environment
- How to connect to a Virtualized Infrastructure
Physical components that create a Datacenter Infrastructure
The following IT infrastructure components form a traditional on-premises IT datacenter:
- Compute components
- Network Components
- Storage Components
Below are the important components of Compute; the combination of these components enables a server or computer to perform its duties.
- The BIOS is a small program stored on a memory chip on every motherboard. It provides basic configuration information to the CPU when the system first boots, before the operating system is loaded.
- The BIOS plays a critical role when the CPU first starts up: it provides the low-level functions needed to supply basic preboot and boot configuration information to the system.
- The BIOS program is stored on a flash or ROM chip and contains initial configuration information such as the clock, the connected hard drives, basic driver software for the keyboard, display, and USB, security settings, the boot order of drives, virtualization settings, and many other options the server may need when it is powered up but before the operating system is loaded.
- The BIOS is the first stage in the boot process. When a server is running hypervisor software for virtualization, the BIOS can be configured to optimize CPU performance by enabling certain features on the CPU.
- Server and motherboard manufacturers will from time to time update the BIOS firmware to enable new features and capabilities as well as correct issues found in earlier versions of the code.
- A cloud service provider offering compute services may not allow a customer to access the BIOS, because many other VMs, possibly belonging to other customers, may be running on the same server hardware.
- The CPU is the component of Compute that executes instructions. Each CPU can have multiple cores, and multiple CPUs can be installed in a single server. Enough CPU sockets should be available on the motherboard to increase the physical CPU power.
- A single-core CPU has one processing thread and can perform only one task at a time. A core is an individual processing unit within a CPU chip, and each CPU can have from one to many cores on a single silicon die. With multicore processing, multiple threads can run simultaneously, dramatically increasing the processing power of a single server.
- Each CPU core will access its own cache memory, known as Level 1 (L1) cache. The L1 cache is a small but very fast memory pool that can be used to reduce the time it takes to access the main memory on the motherboard.
- Initially the CPU runs a self-test and then reads its initial configuration information from the BIOS. This configuration information tells the CPU which storage devices are connected and activates the monitor and keyboard.
- Based on the drive and boot information, the CPU then accesses the storage hardware and boots the operating system or hypervisor.
- CPUs used in servers have a feature called hardware-assisted virtualization (Intel VT-x or AMD-V), which is used to optimize processing in a hypervisor environment.
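On Linux, hardware-assisted virtualization support can be checked from the CPU flags. Below is a minimal sketch, assuming a Linux host where /proc/cpuinfo is available; the helper parses the file's text, so it can also be exercised on a sample string.

```python
# Sketch (Linux-specific): hardware-assisted virtualization appears in
# /proc/cpuinfo as the "vmx" flag (Intel VT-x) or the "svm" flag (AMD-V).

def has_hw_virtualization(cpuinfo_text):
    """Return True if any flags line advertises vmx (Intel) or svm (AMD)."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags or "svm" in flags:
                return True
    return False

# On a real Linux host you would feed it the actual file:
#   with open("/proc/cpuinfo") as f:
#       print(has_hw_virtualization(f.read()))
print(has_hw_virtualization("flags\t\t: fpu vme vmx sse"))  # True
```

Note that the BIOS may still have the feature disabled even when the CPU advertises it, which is why hypervisor installers often ask you to enable virtualization in the BIOS setup.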
- Memory is another important component of Compute; it is used to store information for immediate use.
- When configuring a server to be used in a virtualized environment it is important to take into consideration the memory to be installed in the bare-metal server.
- The memory will be consolidated into a resource pool and shared between the VMs running on the hypervisor.
- Memory requirements for the host and its VMs are determined based on the following factors:
- Memory required to run the hypervisor itself
- Available memory slots on the motherboard
- Memory requirements of the applications that will be hosted
- Additional memory to support peak workloads
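Putting the factors above together, a rough sizing calculation can be sketched as follows. The 8 GB hypervisor overhead and 25% peak headroom are illustrative assumptions, not vendor figures, and the result must still fit within what the motherboard's memory slots can physically hold.

```python
# Rough host-memory sizing sketch. The default overhead and headroom
# values are illustrative assumptions only.

def host_memory_gb(vm_app_gb, hypervisor_gb=8, peak_headroom=0.25):
    """Total physical RAM needed: hypervisor overhead plus each VM's
    application requirement, with extra headroom for peak workloads."""
    vm_total = sum(gb * (1 + peak_headroom) for gb in vm_app_gb)
    return hypervisor_gb + vm_total

# Three VMs whose applications need 16, 8, and 4 GB:
print(host_memory_gb([16, 8, 4]))  # 43.0
```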
The following is the important component of the Network, which allows servers to communicate and transfer data from one compute node to another within a network:
Network Interface Card (NIC)
- The network interface card (NIC) provides the Ethernet physical connection from the server to the data network. NICs vary in speed from 100 Mbps and 1 Gbps to 10, 25, and 40 Gbps.
- Often multiple Ethernet ports are installed in a single server for redundancy and capacity, and ports can be assigned specifically to a group of virtual servers.
- Ethernet interfaces support a large number of features commonly used in a cloud datacenter, such as link aggregation, VLAN tagging, jumbo frame support, and checksum and TCP offload. We will learn more about these concepts in future posts.
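As a quick illustration, on a Linux host each NIC's negotiated link speed is exposed through sysfs. This is a minimal sketch assuming that environment; on other systems, or for interfaces with no link, it simply returns an empty or partial result.

```python
# Sketch (Linux-specific): each NIC's negotiated link speed, in Mbps,
# is exposed at /sys/class/net/<iface>/speed.
import glob

def nic_speeds_mbps():
    """Return {interface: speed_in_mbps} for NICs that report a speed."""
    speeds = {}
    for path in glob.glob("/sys/class/net/*/speed"):
        iface = path.split("/")[-2]
        try:
            with open(path) as f:
                speeds[iface] = int(f.read().strip())
        except (OSError, ValueError):
            pass  # link down, or a virtual device that reports no speed
    return speeds

print(nic_speeds_mbps())  # e.g. {'eth0': 10000} on a 10 Gbps link
```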
The following are the important components of Storage, which allow servers to store data and transfer it from one compute node to another within a LAN or SAN:
Host Bus Adapter (HBA)
- A host bus adapter (HBA) is a network interface installed in a server to provide a connection to remote storage.
- HBAs are installed in a server’s expansion slot, much like NIC cards are installed to access the LAN.
- To the server’s operating system, the storage appears to be locally attached, because the OS talks to the HBA.
- The HBA hardware and driver software take the SCSI storage commands and encapsulate them into the Fibre Channel networking protocol. Fibre Channel is a high-speed SAN technology, typically running over optical fiber, with speeds from 2 Gbps to 16 Gbps and higher.
Various types of storage devices can be installed directly inside the server, locally attached, or accessed over a network
- Tape – Tape storage uses magnetic material to store information and is commonly found in the slower storage tiers due to its long read and write access times. For example, tape storage is frequently found in offline backup systems, as tapes are easy to store in a safe facility offsite from the datacenter for disaster recovery purposes.
- SSD – A solid-state drive (SSD) replaces the mechanical spinning platters of a traditional hard drive with silicon storage chips. Because data read and write times are significantly faster than with mechanical drives, SSDs are a natural choice for I/O-intensive applications such as databases or other processes that benefit from fast storage access.
- USB – The universal serial bus (USB) drive is a removable storage option that is useful when performing server maintenance or installing drivers or updates locally. USB interfaces can support thumb drives, external hard drives, and DVD drives, as well as many other externally attached devices.
- Disk – Traditional disk drives consist of spinning magnetic platters with read-write heads floating above them. Hard disks are the backbone of storage systems and are usually installed in RAID arrays for redundancy. Hard drives can be installed directly inside a server, with SCSI and SATA as the most common interface types. Large storage systems that centralize storage connect to the servers using a storage area network (SAN).
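The storage types above form rough performance tiers. The following toy sketch, with illustrative characteristics of my own choosing, shows how a workload might be matched to a tier:

```python
# Toy storage-tier chooser based on the device types described above.
# The tier descriptions are rough, illustrative orders of magnitude.
STORAGE_TIERS = {
    "ssd":  "microsecond access - I/O-intensive apps such as databases",
    "disk": "millisecond access - general-purpose storage, RAID arrays",
    "tape": "very slow access - offline/offsite backup and archival",
}

def pick_storage(io_intensive, archival):
    """Naive tier choice from two workload traits."""
    if io_intensive:
        return "ssd"
    if archival:
        return "tape"
    return "disk"

print(pick_storage(io_intensive=False, archival=True))  # tape
```

Real storage arrays automate this kind of decision with tiering policies that move hot and cold data between media.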
Do you know what Intelligent Storage Systems are? We will learn about them in a future post.