Top 10 Things System Administrators
Need to Know About Virtualization
Jeffrey W. Hall, VCI, VCP4, CCSI, CCNP Voice, CCNP Security, Datacenter Support Specialist, CCIP, CCDP, CCNP, MCT, MCITP, MCSE
The only constant thing about technology is that it is never constant. The advances that we’ve witnessed in computing over the last couple of decades are staggering. To suggest even a decade ago that a single hard drive could be as large as 2 terabytes (TB), or that a single server could have 1TB+ of RAM, would have gotten you strange looks and possibly even laughs.
Today, we are experiencing one of the greatest paradigm shifts that computing has seen yet: virtualization. We are quickly moving away from a “one operating system per server” model toward one that asks “how many operating systems can we place on a single server?” This single but monumental change in datacenter design can be very difficult for system administrators to grasp. For many years, our practice was to buy the biggest servers we could, install as much CPU and memory as possible, and then install our operating system of choice on that server. It didn’t matter that we saw incredibly inefficient use of those expensive resources (typically only 5-10% utilization)…this was the only design we knew. Virtualization technology changes nearly everything we currently know about datacenter management. By implementing a robust solution, such as the VMware vSphere 4.1 suite, we are able to realize benefits and cost savings that were once thought impossible.
The purpose of this white paper is to offer a high-level description of the top 10 things about virtualization every system administrator (sysadmin) needs to know, regardless of whether you’re just dipping your toes in the virtualization pool or are already in the deep end. And, in the best David Letterman tradition, we’ll start with number 10. DISCLAIMER: This list is not in a strict order of importance. It is OK for you to disagree with my ordering, or even with the completeness of the list.
Number 10: It is much easier to test new solutions in your infrastructure
One of the challenges we face as sysadmins is validating changes to our network without disrupting the status quo of the production environment. To properly test new software, updates, patches, etc., we usually have to beg and plead with our management to purchase a test and validation lab. The logic is simple: change isn’t always good. We need an external environment to verify that the new patch or Service Pack we are about to unleash on our company is not going to wreak havoc.
Copyright ©2011 Global Knowledge Training LLC. All rights reserved.
Let’s consider this example: You have a web server farm deployed on the Windows Server 2003 platform. A new FastCGI for IIS (Internet Information Services) update has been released, and you’re anxious to implement it in your production environment. Of course, you’re not going to go in guns blazin’ and blindly install the upgrade on your web servers. You’re going to test the upgrade first to make sure there are no negative consequences. So, in a traditional datacenter, you would have to maintain an identical physical environment for your web server farm in your test lab so you could deploy the update there. But what if you don’t have an extensive test lab, or even a lab at all? Many small- to medium-sized businesses find themselves in exactly this situation. Without a test lab, your only recourse is to schedule a maintenance window for your production servers and test the update during the planned outage.
How can virtualization eliminate this required downtime for the production servers? With VMware vSphere 4.x, you can run the Converter plug-in and do a “Physical to Virtual” conversion, also called...