Trends in Server Virtualization

  • A recent survey of IT professionals by Spiceworks found 80% of small- and mid-sized businesses have already adopted some form of server virtualization.
  • Why are IT organizations so enamored with this technology? The simplest answer is that it saves money.
  • While there are cost savings to virtualization, the technology can also lead to increased expenditure.

Server virtualization has come a long way in a very short time. It has moved from its early days in IT test and development pilot projects, when VMware’s vSphere was the only game in town, to mass adoption, virtualization-first IT policies, and a range of hypervisors available from companies like Citrix, Microsoft, and Red Hat.

Server virtualization has now found its way into IT infrastructures of just about every size. A recent survey of IT professionals by Spiceworks found 80% of small- and mid-sized businesses have already adopted some form of server virtualization.

Trends in Data Center Virtualization

Many companies have now adopted virtualization-first policies that dictate all new applications must run in virtual environments. Moving legacy applications to virtualized environments, however, is another matter entirely.

Despite an increasing focus on technology and the benefits of IT to an organization’s bottom line, a recent survey by Spiceworks found that IT budgets are at a standstill. IT hiring is also not keeping pace with demand for new technology. As a consequence of these trends, IT staffs are continually expected to do more with less.

Most organizations, regardless of size, are now dealing with hybrid IT environments containing both virtual and physical servers. This is not an ideal situation, by any means. Hybrid environments complicate just about every aspect of server administration. They reduce administrator productivity, increase the cost of management tools, spread the knowledge of management tools too thinly among administrators, and can lead to compromised application availability and data integrity.

Why Virtualize?

So, what is all the fuss about virtualization? Why are IT organizations so enamored with this technology? The simplest answer is that it saves money.

When business applications are deployed on physical servers, administrators and capacity planners put their heads together to figure out how big a server is needed for the application. They make estimates for how much processing power, how much memory, how much storage, and how much network bandwidth the application needs.

  • Reduced Capital Expenditure

Virtualization enables a physical server to host more than one virtual server. It also provides the capability to easily move virtual servers between different physical servers to balance demand for resources. The physical servers running virtualization software frequently run at above 80 percent of their rated capacity. The consolidation of business applications on a single physical server, each with its own discrete operating environment, can dramatically reduce the number of servers in the data center. With fewer physical servers, IT organizations are able to reduce their capital expenditure, freeing these funds for use elsewhere in the organization to increase revenue growth. Of course, there are other benefits, too.
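
As a first-order illustration of the consolidation math, the sketch below estimates how many physical hosts a set of workloads needs when hosts are kept near a target utilization. All numbers here are hypothetical, and the estimate deliberately ignores placement constraints and failover headroom.

```python
import math

def hosts_needed(app_demands, host_capacity, target_utilization=0.8):
    """Minimum host count so total demand stays within target_utilization
    of aggregate capacity. A first-order sizing estimate only: it ignores
    VM placement constraints, peak overlap, and failover headroom."""
    total_demand = sum(app_demands)
    usable_per_host = host_capacity * target_utilization
    # Round up: a fractional host still means one more physical machine.
    return math.ceil(total_demand / usable_per_host)

# Ten apps averaging ~4 vCPU-equivalents each, on 32-core hosts run at 80%:
demands = [4] * 10
print(hosts_needed(demands, host_capacity=32))  # -> 2 hosts instead of 10 servers
```

Even this rough model shows where the capital savings come from: ten lightly loaded physical servers collapse onto two well-utilized hosts.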

  • Reduced Operating Expenses

Reducing the number of physical servers in the data center also saves energy, an important consideration when carbon footprint is a metric tracked by investors and shareholders. And it enables a data center to host more applications, a critical factor when data center real estate is becoming more valuable. From an administrative perspective, virtual machines are much easier to set up and tear down. If an application needs a new server, an administrator can provision the resources much faster than they could when setting up a physical server. This often reduces provisioning from weeks to hours, or less, benefiting rapid application development.

Virtualization’s Downside

There are, of course, downsides to virtualization, too. Not all business applications are appropriate for running on a virtual server. While there are cost savings to virtualization, the technology can also lead to increased expenditure. As with many technologies, uninformed use can exacerbate the very problems it is intended to solve.

  • Not Everything Can Be Virtualized

Not all applications are great candidates for virtualization. Applications that are very sensitive to performance may not be a good fit. These apps are unlikely to tolerate sharing physical resources with others and the overhead of running a hypervisor on the same hardware may be unwelcome.

There is a wide variety of applications that require specialized hardware attached to their servers, often with unique driver software. Because hypervisor software has to appeal to the majority of application use cases, these unusual configurations are often not supported.

  • Increased Cost

There is a cost component that can impact the adoption of virtualization. While virtualization can reduce operating costs in the long-term, there are up-front expenses associated with implementing the technology. The host servers used to run each virtualization hypervisor must be capable of supporting the performance needs of all virtual servers. These servers are likely to be more costly than the physical servers they replace.

  • Server Sprawl

Ironically, server sprawl, the very condition that virtualization promises to solve, can in fact be exacerbated by the ease of spinning up virtual machines. Server sprawl became a significant issue in the data center when servers were being deployed without a sufficient understanding of their impact. This often resulted in data centers full of under-utilized server hardware that consumed precious energy and floor space.

  • Single Point of Failure

Finally, a glaringly obvious downside to server virtualization is the fact that hosting multiple virtual servers on one piece of hardware introduces the potential for a single point of failure. If the physical server running the hypervisor fails, all applications running on virtual machines hosted by the hypervisor will become unavailable.

Backup Issues Unique to Virtualization

As applications like ERP, CRM, and email have moved to virtual machines, data protection in the virtualized environment has undoubtedly become more important. Unlike less critical applications, these new workloads often have no tolerance for data loss, and leave very little room for downtime. Unfortunately, data protection software vendors have frequently found themselves playing catch-up to rapid changes in the virtualization operating environments.

One of the goals of server virtualization has been to make more efficient use of physical server hardware. Previous architectures would use perhaps ten to thirty percent of a server’s CPU, on average, leaving plenty of capacity for periodic workloads like backup. Virtualized server environments now typically see utilization greater than 80 percent. This leaves very little excess capacity for other workloads.

  • Backup Issues For Small & Mid-Sized Companies

If you’re a small- to mid-sized enterprise (SME), chances are you are at a crossroads when it comes to choosing the right data protection solution for your data. Moving to virtualized servers is not going to happen overnight. It is highly likely that your environment has a mix of physical and virtual servers running a variety of business applications, and will for some time. This brings up several issues.

  • Agentless Backup

When thinking about backup, we need to distinguish between the application and the machine. Traditional backup software requires an agent to be installed on the host OS to communicate with the backup server that catalogs and stores backup data. Agent software helps make the backup application-aware.
Virtualization environments are increasingly relying on agentless backup. This approach backs up the entire virtual machine but has less insight into the applications running inside it.

New Backup & Recovery Capabilities

The unique nature of virtual servers has generated several opportunities for extending the use of data protection techniques. VMware, for example, stores each virtual machine’s disk in a core VMDK file, with several smaller supporting files for logs, configuration, and the like. Being able to capture an entire virtual machine by performing an image-based backup of a handful of files has its advantages.

  • In-Place Recovery

The VMDK file contains an encapsulated guest operating system and the applications running within it. Restoring this file to a suitable device can allow the virtual machine to be restarted straight from the backup. This is known as an in-place recovery.

  • VM Migration

One of the benefits of server virtualization is that virtual machines are easy to migrate between servers. There are any number of reasons why you might want to do this: moving workloads between physical and virtual servers, or vice versa; host server maintenance and upgrades; and balancing workloads.

  • Changed Block Tracking

Hypervisor vendors have added APIs to their platforms specifically for backup software. For VMware, the vStorage APIs for Data Protection (VADP) serve this purpose, and they include changed block tracking, which lets backup software identify and copy only the disk blocks that have changed since the last backup. VMware provides its own integrated backup facility, VMware Data Protection (VDP), but third-party vendors have also incorporated the backup APIs into their solutions, improving the overall stability of backup tools and giving virtual server administrators a greater selection of backup solutions to choose from.
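
The idea behind changed block tracking can be sketched in a few lines. This is a simplified, hypothetical model, not VMware’s actual API: the platform records which disk blocks were written since the last backup, so an incremental backup copies only those blocks.

```python
# Illustrative model of changed block tracking (not a real hypervisor API):
# writes are recorded in a "dirty" set, and an incremental backup copies
# only the blocks in that set before clearing it.

class TrackedDisk:
    def __init__(self, num_blocks, block_size=4096):
        self.blocks = [bytes(block_size) for _ in range(num_blocks)]
        self.changed = set()            # block indices dirtied since last backup

    def write(self, index, data):
        self.blocks[index] = data
        self.changed.add(index)         # hypervisor-side bookkeeping on every write

    def incremental_backup(self):
        """Copy only changed blocks, then reset the tracking set."""
        delta = {i: self.blocks[i] for i in sorted(self.changed)}
        self.changed.clear()
        return delta

disk = TrackedDisk(num_blocks=1024)
disk.write(7, b"x" * 4096)
disk.write(300, b"y" * 4096)
print(sorted(disk.incremental_backup()))   # [7, 300]: 2 of 1024 blocks copied
```

The payoff is the same one the backup APIs deliver in practice: instead of reading an entire virtual disk, the backup tool touches only the small fraction of blocks that actually changed.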

The Business of Data Recovery

It is critically important to understand how sensitive each area of your organization is to data loss when evaluating data protection options.

This information informs technology selection, provides the foundation for your backup and recovery and business continuity planning, and lets IT know the consequences of a failure to recover each business application. This is even more important in a virtualized setting where an outage to a physical server can affect many different applications.

There are two industry-standard metrics used to record a business application’s tolerance of downtime and data loss: recovery point objective (RPO) and recovery time objective (RTO). Both are expressed in units of time and indicate how much data application users can tolerate losing (RPO) and how quickly an application has to be back online before the organization begins to suffer significant losses (RTO). RPO extends back from the time of an outage and RTO extends forward.
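
These two metrics translate directly into simple planning checks. The sketch below, with illustrative numbers, tests a backup schedule against an RPO and a restore time against an RTO: in the worst case, a failure strikes just before the next backup, so data loss equals one full backup interval.

```python
# Hypothetical RPO/RTO checks for backup planning; all values in hours.

def meets_rpo(backup_interval_hours, rpo_hours):
    """Worst-case data loss is one full backup interval (failure strikes
    just before the next backup), so the interval must fit inside the RPO."""
    return backup_interval_hours <= rpo_hours

def meets_rto(restore_hours, rto_hours):
    """The restore (and application restart) must finish within the
    downtime the business can tolerate."""
    return restore_hours <= rto_hours

# An ERP system tolerating 1 hour of data loss and 4 hours of downtime:
print(meets_rpo(backup_interval_hours=24, rpo_hours=1))   # False: nightly backup is not enough
print(meets_rto(restore_hours=2, rto_hours=4))            # True
```

The failing RPO check is exactly the kind of result that drives virtualized environments toward techniques like changed block tracking, which make frequent incremental backups cheap enough to run every hour.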
