Green Data Center with Virtualization and Cloud Computing Technique
Nowadays, reducing the power consumed by large-scale data centers is a major concern for many large enterprises. Most of them operate servers across all of their branches to provide online services, which consumes a large amount of power for operation and cooling. Virtualization is a promising approach to consolidating multiple online services onto a smaller number of computing resources. A virtualized server environment allows computing resources to be shared among multiple performance-isolated platforms called virtual machines. This allows data center operators to maintain the desired performance while achieving higher server utilization and energy efficiency. For the server part, we implement a dynamic management algorithm (DMA), which orders virtual machines by their migration costs within each physical machine. For the application part, we adopt the cloud computing concept to implement applications that provide services to both internal and external stakeholders. To test the system, we implement a workload generator that produces workload for the system and measures its performance. With this approach, the enterprise can reduce the cost of maintaining servers and provide better service to staff and customers.

1. INTRODUCTION
Green computing, or green IT, refers to environmentally sustainable computing. It is the study and practice of designing, manufacturing, using, and disposing of computers, servers, and associated subsystems—such as monitors, printers, storage devices, and networking and communications systems—efficiently and effectively with minimal or no impact on the environment. Green IT also strives to achieve economic viability and improved system performance and use, while abiding by our social and ethical responsibilities. Thus, green IT spans environmental sustainability, the economics of energy efficiency, and the total cost of ownership, which includes the cost of disposal and recycling.

Traditionally, most data centers were designed according to capacity models and technology limitations that forced system architects to expand capacity by attaching new assets: in essence, one server per workload, with every asset requiring dedicated floor space, management, power, and cooling. These siloed infrastructures are inherently inefficient, leading to asset underutilization, greater hardware expenditure, and higher total energy consumption. As a result, data center power and cooling costs have risen sharply, and over the next five years most data centers are expected to spend as much on energy as on hardware, and twice as much as they currently do on server management and administration. Consequently, green computing shapes the design and development of new data centers, and its benefits are substantial: it can lower overall energy expenses, including general energy consumption as well as power and cooling costs, while optimizing server capacity and performance. Moreover, it can make systems and solutions easier to manage.
In short, green computing enables companies to meet business demands for cost-effective, energy-efficient, flexible, secure, and stable solutions while being environmentally responsible. Service-oriented computing (SOC) is a computing paradigm that uses services as the fundamental elements for developing applications. SOC organizes service layers, functionality, and roles as described by the extended service-oriented architecture (SOA).
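The dynamic management algorithm mentioned in the abstract orders virtual machines by migration cost within each physical machine, so that consolidation moves the cheapest VMs off under-utilized hosts first, allowing those hosts to be powered down. The following is a minimal sketch of that idea, not the system's actual implementation: the VM fields, the migration-cost formula, and the utilization threshold are all assumptions made for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    memory_mb: int   # resident memory to copy during a live migration
    cpu_load: float  # fraction of one core currently used

@dataclass
class PhysicalMachine:
    name: str
    capacity: float                       # total CPU capacity (cores)
    vms: list = field(default_factory=list)

    def utilization(self) -> float:
        return sum(vm.cpu_load for vm in self.vms) / self.capacity

def migration_cost(vm: VM) -> float:
    # Assumed cost model: dominated by the memory that must be copied,
    # weighted by CPU load (a busy VM dirties pages faster, which
    # lengthens the copy phase of a live migration).
    return vm.memory_mb * (1.0 + vm.cpu_load)

def plan_consolidation(pms, low_threshold=0.3):
    """Order VMs by migration cost within each under-utilized PM and
    propose moving the cheapest ones to busier hosts, so the drained
    hosts can be switched off. Returns (vm, source, destination) moves."""
    plan = []
    targets = [pm for pm in pms if pm.utilization() > low_threshold]
    donors = [pm for pm in pms if pm.utilization() <= low_threshold]
    for pm in donors:
        moved = []
        for vm in sorted(pm.vms, key=migration_cost):  # cheapest first
            dest = next((t for t in targets
                         if t.utilization() + vm.cpu_load / t.capacity <= 1.0),
                        None)
            if dest is not None:
                plan.append((vm.name, pm.name, dest.name))
                dest.vms.append(vm)
                moved.append(vm)
        for vm in moved:
            pm.vms.remove(vm)
    return plan
```

For example, a lightly loaded host holding two VMs would have both migrated, smallest memory footprint first, onto a busier host with spare capacity, leaving the donor empty and eligible for shutdown.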