Confronted by the sheer number of new announcements from virtualization vendors, the avalanche of acquisitions and the constant launch of new virtualization products, you could be forgiven for thinking that King Solomon of the Old Testament had easier judgments to make. This daunting situation simply didn’t exist a few years ago, when virtualization was still in its infancy. Recent surveys show that today more than 60% of organizations have deployed some form of server virtualization.
This document will give you a better idea of the selection criteria you need to apply and who the key players in the virtualization market are.
Objectives of this document
Who are the key players in the hypervisor market today?
To understand the hypervisor market, the first thing to confirm is who the main players are and roughly what share of the overall market each holds. The top three server virtualization solutions are XenServer from Citrix, vSphere from VMware and Microsoft Hyper-V, which comes as an integral part of Windows Server 2008 R2. According to a TechTarget survey, these three leading players account for over 80% of currently installed hypervisor systems. Despite Microsoft’s best efforts, VMware is still the most widely deployed hypervisor, with numerous surveys reporting that as much as 70% of installed hypervisors are VMware. In short, VMware remains the undisputed leader today.
As you evaluate the virtualization options and vendors available to you, don’t forget to check out alternative hypervisors from SUSE, Sun Microsystems, IBM and HP; Oracle and Red Hat will also want to draw your attention to their hypervisor offerings. Data center managers often explain that they selected one of these hypervisors because they use a particular operating system supplied by the hardware vendor, or conversely that the hypervisor was selected because of their choice of operating system. Always keep server hardware and operating system compatibility in mind to ensure your hypervisor choice is correct.
What are Type 1 and Type 2 hypervisors?
You will discover that there are different ways to achieve the same objectives: Type 1 and Type 2 hypervisors offer you two design choices, and it is important to understand them in order to make the right buying decision. In fact, the top three most popular hypervisors are all Type 1.
Type 1 hypervisors
Type 1 hypervisors are often referred to as “bare metal” hypervisors, meaning that the hypervisor accesses the hardware resources of the server directly. The advantage of this type of hypervisor is that a greater number of virtual machines can be loaded onto the server, while at the same time performance is improved or optimized for all of the VMs. The guest operating systems in this case run above the hypervisor itself.
Type 2 hypervisors
Type 2 hypervisors need an operating system to be loaded onto the server before the hypervisor can be installed and started. The advantage of this type of hypervisor is that configuration is easier, because the operating system can assist the installer. However, the downside is that the performance of the VMs is more limited and the capacity of the server to host virtual machines is reduced.
Regardless of which type of hypervisor you decide to use, the basic design is essentially identical. First, every hypervisor needs to allocate a unique, segregated environment for each virtual machine. Server resources must be allocated and management procedures established for each VM to ensure they are correctly supported by the operations management resources. Keep in mind at all times that the methods used to complete these core functions can differ greatly from one hypervisor to another, as can each hypervisor’s feature set.
6 key selection criteria to choose the right hypervisor
The first piece of good advice is not to make a hasty decision when choosing your hypervisor. Just because a hypervisor is recommended by a friend or colleague, or because you have read a positive review, don’t automatically assume it is the right choice for you. Do your own analysis and keep these six key selection points in mind.
1. How important is performance?
You need to be sure that the applications and virtual machines will run well and make the most of the server resources you have available. Getting the best performance from your hardware platform argues in favor of choosing a Type 1 hypervisor.
2. Is hardware compatibility important?
Naturally the answer to that question is yes, particularly if you already have the hardware platform on which you are going to run your hypervisor. A server that supports Windows or Linux will normally allow you to run a Type 2 hypervisor. If you are preparing to run a Type 1 hypervisor, it is essential that you choose a suitable, compatible server hardware platform so the hypervisor runs without the risk of problems and conflicts.
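One practical compatibility check for a Type 1 hypervisor is whether the CPU exposes hardware virtualization extensions. A minimal Python sketch is shown below; it assumes a Linux host, and the function name is illustrative rather than part of any vendor tool. It simply scans the CPU flag list for Intel VT-x (`vmx`) or AMD-V (`svm`), which bare metal hypervisors generally rely on:

```python
import re

def supports_hw_virtualization(cpuinfo_text: str) -> bool:
    """Return True if the CPU flags list Intel VT-x (vmx) or AMD-V (svm).

    cpuinfo_text is expected to be the contents of /proc/cpuinfo on a
    Linux host; the function itself just scans the flag names.
    """
    return re.search(r"\b(vmx|svm)\b", cpuinfo_text) is not None

# On a live Linux server you would feed it the real file:
# with open("/proc/cpuinfo") as f:
#     print(supports_hw_virtualization(f.read()))
```

A negative result does not always mean the hardware lacks the feature: virtualization extensions are often disabled in the BIOS/UEFI firmware and must be switched on there first.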
3. Ease of management and use
Consider the skills of your IT department: do you have a team that will be able to install, configure and support your chosen hypervisor platform? Be extremely cautious in your assessment of your team’s capacity to complete a successful deployment. One aspect to keep in mind is that major vendors have big user communities, and the information you can find on blogs and user forums can be extremely useful should you need advice and assistance. Each hypervisor you evaluate will provide management resources and a degree of automation to assist you.
4. Reliability is essential
A single software bug on its own can cause disruption across many virtual machines. It is therefore essential that you satisfy yourself that your chosen hypervisor has been extensively tested and is well supported, so that you can feel sure it will work well in your operational environment. The best Type 1 bare metal hypervisor platforms come with certificates of compliance from the major hypervisor producers; look out for these.
5. Will your hypervisor scale to your needs?
Forecast what you will need from the hypervisor you plan to acquire over the whole economic life of the server platform. If, for example, you need a working life of three years from your hypervisor, you need to calculate how many VMs it must support over that period. Consider memory as well: will you require up to 1 TB per VM? If so, your choice of hypervisor needs to support everything from a single VM up to the maximum number of VMs you plan to deploy, each needing this amount of memory. This ability to increase VMs over time and support large memory requirements defines the scalability of your server platform and hypervisor.
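The sizing arithmetic above can be sketched in a few lines of Python. This is an illustrative back-of-the-envelope estimate only; the function name and the 10% hypervisor overhead margin are assumptions, not figures from any vendor:

```python
def required_host_memory_gb(vm_count: int, gb_per_vm: int,
                            overhead_fraction: float = 0.10) -> int:
    """Rough host RAM estimate: total VM memory plus a hypervisor
    overhead margin (the 10% default is an illustrative assumption)."""
    total = vm_count * gb_per_vm
    return total + round(total * overhead_fraction)

# e.g. 20 VMs at 64 GB each, plus 10% overhead
print(required_host_memory_gb(20, 64))  # 1408
```

Running the forecast for year one and for the end of the platform’s economic life gives you the memory range your hypervisor and server hardware must be able to scale across.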
6. Calculating the costs
It is important to understand the actual costs of provisioning a hypervisor. You will notice immediately that basic hypervisors are available free of charge; however, as your requirements demand more features, the cost of platform management and the advanced features you need rises until you could be looking at a substantial investment. It is important to understand how each vendor’s hypervisor licensing works, to avoid the mistake of assuming that one vendor prices licenses the same way as another.
Comparing VMware vSphere with Microsoft Hyper-V you will find that vSphere includes features like vStorage, vMotion and the Distributed Resource Scheduler. If you are planning to use Hyper-V you will need to be familiar with features like thin provisioning and Live Migration.
Peter Melerud, CTO of KEMP Technologies, the company that brought the Virtual Load Master product range to market, responds to the question “How can you select the right hypervisor to respond to your company’s needs?” Melerud’s reply was clear and concise: “Well, it all boils down to the three principles that are essential.
If organizations use less well known hypervisors they run the risk of finding that their virtualization platforms are not supported by major application vendors and worse by the hardware server vendors themselves.”
Chris Heyn, responsible for KEMP Technologies’ business in Italy and Turkey, added: “When the Virtual Load Master from KEMP Technologies became available in our market we took this problem extremely seriously, because if KEMP Technologies developed a virtual load balancer that the market looked at and said ‘Wow, this is great! But is it recognized by VMware, and by Microsoft for the Hyper-V version?’ and the answer was no, it is not certified, the interest of the potential buyer would wane dramatically. Having a recognized solution that is supported by the major hypervisor manufacturers will easily offset the attraction of a cheaper but unsupported product.” Heyn continues: the costs of design, testing, evaluation and support for the data center must always be taken into consideration, and by not using a top vendor’s solution the long-term costs are almost certainly going to be higher than with a top-brand hypervisor.
Research and evaluation is very important
Your main design objective for virtualization should be consolidation and concentration, or, you could say, “doing more with less”. The most obvious benefit of virtualizing your servers is the ability to consolidate them, running multiple virtualized server workloads at the same time on fewer physical machines and thereby reducing the physical server count needed to serve your user community.
Drive down costs
While this is good news for the future, as the capital cost of server replacement is reduced, there is an additional benefit for each physical server that you are able to switch off and disconnect from your network. Experts don’t exactly agree, but the common stance of both Gartner Group and IDC is that each physical server taken out of your network can save you between $4,000 and $5,000 per year in running costs and energy consumption. For this reason it is no surprise that recent TechTarget survey results show that 58% of IT managers have already decided that virtualized, consolidated servers are for them.
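Those figures make the consolidation payback easy to estimate. A quick Python sketch, using the $4,000–$5,000 per-server range attributed above to Gartner Group and IDC (the function name and the example server count are illustrative):

```python
def annual_savings_usd(servers_decommissioned: int,
                       low_per_server: int = 4000,
                       high_per_server: int = 5000) -> tuple:
    """Estimated yearly savings range from decommissioning physical
    servers, using the $4,000-$5,000 per-server figure cited above."""
    return (servers_decommissioned * low_per_server,
            servers_decommissioned * high_per_server)

low, high = annual_savings_usd(12)
print(f"Decommissioning 12 servers: ${low:,}-${high:,} per year")
```

Even a modest consolidation project of a dozen servers, on these figures, returns tens of thousands of dollars a year in running costs and energy alone.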
Advantages of virtualization for storage
Another important reason for adopting virtualization is the ability to consolidate your storage needs. Virtualization allows you to strip storage tasks away from isolated, partly used physical servers and from disparate arrays that are minimally managed. Step by step you can bring your storage requirements into a virtual environment that has no limitations based on physical storage locations.
As you aggregate your storage requirements, you can partition the servers dedicated to the storage task, creating an array of physical LAN servers partitioned into multiple logical networks that together form a single logical LAN. Once you complete this task, you will be able to apply traffic control and security procedures to your storage resources, secure in the knowledge that you have concentrated your storage assets in one virtual environment.
Managing the user endpoints
Application virtualization and virtual desktop infrastructure, also known as VDI, can also benefit from a hypervisor virtualization platform. In this scenario, IT administrators place desktop resources on centralized servers that can be managed more easily and certainly more securely; control of the desktops is naturally more complete. This endpoint-control virtualization has been adopted by as many as 20% of organizations today, according to recent surveys, and the percentage looks set to increase.
Better systems management and agility
Adopting a plan to consolidate workloads should lead to better and easier systems management. Multiple application workloads can be taken off servers running at a 1:1 workload/server ratio and concentrated onto a few physical servers managing multiple virtualized servers. Software tools aid the management and monitoring of the performance of these virtual servers. A good example of a management product is SCCM (System Center Configuration Manager) 2012 from Microsoft, which is specifically intended to enhance, and at the same time simplify, the management of infrastructure processes.
Improvements in the management of virtualized servers make the data center ever more responsive and agile. By virtualizing server roles, new application workloads can be deployed, configured and made operational in a fraction of the time, and at a fraction of the cost, needed to request and purchase a dedicated server hardware platform for each application workload. Recent surveys show that virtualization is becoming increasingly popular for resource allocation, and this popularity is matched by those who use virtualization to maintain “golden images” of their virtual machines.
Disaster recovery and virtualization
Thanks to its flexible nature, a virtualized environment lends itself perfectly to hosting mission-critical applications and activities, a good example being disaster recovery. Recent survey data from a series of interviews around the world shows that as many as 43% of IT professionals and network managers have deployed a hypervisor-virtualized workload server environment to be used should disaster recovery need to be activated.
Virtual machines are easy to replicate and, at the same time, better protected than their physical counterparts. It is quite straightforward to manage these VMs from remote locations. In disaster recovery mode, once the affected servers or locations have been made safe and secure and are ready to take on the original server functionality, the virtual machines in the disaster recovery center can return the protected content to the original locations.
Why would you not want to virtualize?
Virtualization of workload servers is gaining popularity among a growing majority of IT professionals, and a number of organizations, at the time of writing, are working on deployment or expansion projects. However, as with cloud computing, there can be good, valid reasons why some IT professionals avoid virtualization, and perhaps cloud computing at the same time. In most cases the decision not to adopt a virtualized platform is driven by practical rather than psychological reasons.
Recent surveys reported the following reasons for not adopting virtualization.
It could be a valid response to insist that your network is too small, with too few applications and endpoints to warrant moving away from a traditional physical server environment. However, if this is the situation you face today, it would be well worth investigating whether a managed hosted solution wouldn’t be better value for your needs. Some IT professionals responding to recent surveys on this question point out that they have more than enough physical server resources, so the need to virtualize is not a problem they currently face. These respondents fail to take into account that by decommissioning a single physical server, the IT professional could save the company between $4,000 and $5,000 a year, according to Gartner and IDC.
Money plays its part too: a number of IT professionals who have not transitioned to virtualized application servers do not have access to the necessary funds or budget from their CFO. In certain quarters, virtualized production environments are considered impractical because the organization views them as unstable. This point of view could be justified if an unsuitable, poorly implemented and poorly supported hypervisor is chosen; on the other hand, to say that VMware, Microsoft Hyper-V, Xen or KVM are unstable environments is clearly nonsense.
Chris Heyn, KEMP Technologies Business Development Manager for Italy, comments: “We are able to offer network load balancers either as physical appliances, called Load Master, or as virtual machines, the Virtual Load Master (VLM); in fact we were one of the first load balancer vendors able to offer both versions. From our point of view, you could say that we are indifferent as to whether our clients virtualize the load balancer or not; the important thing is that we work with our clients to size the right platform for the application. The applications could be Microsoft Exchange, Windows Terminal Server, web content, e-commerce, etcetera.
For us, two things are important today. Firstly, we have extended the VLM platforms we support beyond VMware and Hyper-V: we now also support KVM and Xen, which is particularly important for data center managers. Secondly, we can confirm the growth in the percentage of sales of VLMs compared with our physical devices. This change can be attributed to:
Despite all of this, there are still legacy applications, often in mainframe environments, that will never lend themselves to virtualization. However, IT professionals can deploy the other applications they run around these legacy workloads in a virtualized environment.
Virtualization has come a long way in the last six years, and servers have grown in power. Microsoft, which said in 2009 that OCS, the forerunner to its unified communications platform Lync, should never be run in a virtualized environment, changed tack when Lync was first released in 2010. By then Microsoft had a stable hypervisor platform in Hyper-V. With Microsoft throwing its hat into the virtualization ring too, the arguments against considering virtualization continue to weaken and increasingly lack credence.
Application vendors today, like KEMP, have made their applications virtualization-ready and follow virtualization best practices. Changes of policy by companies like Microsoft have come from the growing stability and maturity of virtual platforms, which can now be run on increasingly powerful servers.