7th NRENs and Grids Workshop, Trinity College, Dublin, September 2, 2008

Cloud Computing for on-Demand Resource Provisioning

Distributed Systems Architecture Research Group, Universidad Complutense de Madrid

Objectives

•  Show the benefits of separating resource provisioning from job execution management for HPC, cluster and grid computing
•  Introduce OpenNEbula as the engine for on-demand resource provisioning
•  Present Cloud Computing as a paradigm for the on-demand provision of virtualized resources as a service
•  Describe Grid as the interoperability technology for the federation of clouds
•  Introduce the RESERVOIR project as the infrastructure technology to support the setup and deployment of services and resources on-demand across administrative domains

Contents

1. Local On-demand Resource Provisioning
   1.1. The Engine for the Virtual Infrastructure
   1.2. Virtualization of Cluster and HPC Systems
   1.3. Benefits
   1.4. Related Work

2. Remote On-demand Resource Provisioning
   2.1. Access to Cloud Systems
   2.2. Federation of Cloud Systems
   2.3. The RESERVOIR Project

3. Conclusions

1. Local on-Demand Resource Provisioning
1.1. The Engine for the Virtual Infrastructure

The OpenNEbula Virtual Infrastructure Engine

•  OpenNEbula creates a distributed virtualization layer
•  Extends the benefits of VM monitors from one resource to multiple resources
•  Decouples the VM (service) from the physical location
•  Transforms a distributed physical infrastructure into a flexible and elastic virtual infrastructure that adapts to the changing demands of the VM (service) workloads
•  Manages any service, not only cluster worker nodes
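As a rough illustration of what "decoupling the VM from the physical location" means, the following Python sketch models a toy virtual infrastructure engine. It is hypothetical code, not OpenNEbula's actual API: VMs are described by their capacity needs, placed on whichever host has room, and can later be migrated without the service changing.

    # Toy model (hypothetical, not OpenNEbula's real interface) of a virtual
    # infrastructure engine: a VM is placed on whichever host has capacity and
    # can be migrated later, so the service never depends on one physical node.
    from dataclasses import dataclass, field

    @dataclass
    class Host:
        name: str
        total_cpus: int
        vms: dict = field(default_factory=dict)   # vm name -> CPUs used

        def free_cpus(self):
            return self.total_cpus - sum(self.vms.values())

    class VirtualInfrastructureEngine:
        def __init__(self, hosts):
            self.hosts = {h.name: h for h in hosts}

        def place(self, vm_name, cpus):
            """Start a VM on any host with enough free capacity."""
            for host in self.hosts.values():
                if host.free_cpus() >= cpus:
                    host.vms[vm_name] = cpus
                    return host.name
            raise RuntimeError("no capacity for " + vm_name)

        def migrate(self, vm_name, target):
            """Move a VM to another host; the service it runs is unchanged."""
            for host in self.hosts.values():
                if vm_name in host.vms:
                    self.hosts[target].vms[vm_name] = host.vms.pop(vm_name)
                    return
            raise KeyError(vm_name)

    engine = VirtualInfrastructureEngine([Host("node01", 4), Host("node02", 4)])
    print(engine.place("sge-worker-1", 2))    # e.g. node01
    engine.migrate("sge-worker-1", "node02")  # worker keeps its identity elsewhere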


1. Local on-Demand Resource Provisioning
1.2. Virtualization of Cluster and HPC Systems

Separation of Resource Provisioning from Job Management
•  New virtualization layer between the service and the infrastructure layers
•  Seamless integration with the existing middleware stacks
•  Completely transparent to the computing service and, therefore, to end users (see the sketch below)

[Figure: the SGE frontend dispatches jobs to virtualized SGE nodes and to dedicated physical SGE worker nodes, with OpenNebula managing the VMMs on the cluster nodes]
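The transparency claim can be pictured with a small hypothetical sketch: the job manager (standing in for the SGE frontend) only ever sees worker hostnames, so a worker booted as a VM by the provisioning layer registers exactly like a physical node and the existing middleware stack needs no changes. All class and host names below are made up for illustration.

    # Hypothetical sketch: the job manager never knows (or cares) whether a
    # worker hostname belongs to a physical node or to a VM started on demand.
    class JobManager:                       # stands in for the unmodified SGE frontend
        def __init__(self):
            self.workers = []

        def add_worker(self, hostname):     # same call for physical and virtual nodes
            self.workers.append(hostname)

    class ProvisioningLayer:                # stands in for the virtualization engine
        def __init__(self, job_manager):
            self.job_manager = job_manager
            self.count = 0

        def start_virtual_worker(self):
            self.count += 1
            hostname = "vm-sge-%02d" % self.count   # boot a VM, obtain its hostname
            self.job_manager.add_worker(hostname)   # register it like any other node
            return hostname

    sge = JobManager()
    sge.add_worker("physical-node-01")      # dedicated physical worker
    ProvisioningLayer(sge).start_virtual_worker()
    print(sge.workers)                      # ['physical-node-01', 'vm-sge-01']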


1. Local on-Demand Resource Provisioning
1.3. Benefits

User Requests

•  Users keep the standard SGE interface for job submission
•  The only added cost is the virtualization overhead (see the sketch below)

[Figure: SGE frontend, virtualized SGE nodes and dedicated SGE nodes; OpenNebula manages the VMMs on the cluster nodes]
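A quick, back-of-the-envelope way to read the "virtualization overhead" bullet: jobs submitted through the unchanged SGE interface simply run a little longer on a virtualized worker. The 5% figure below is an assumed value for illustration, not a number from the talk.

    # Assumed overhead factor; the real penalty depends on the workload and the VMM.
    def virtualized_runtime(physical_runtime_s, overhead=0.05):
        """Estimated runtime of the same job on a virtualized worker node."""
        return physical_runtime_s * (1.0 + overhead)

    print(virtualized_runtime(3600))   # a one-hour job -> 3780.0 seconds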

1. Local on-Demand Resource Provisioning
1.3. Benefits

Cluster Consolidation

•  Heuristics for dynamic capacity provisioning that leverage VMM functionality (e.g. live migration); see the sketch below
•  Reduce space, administration effort, power and cooling requirements, or support the shutdown of systems without interfering with the workload

[Figure: SGE frontend, virtualized SGE nodes and dedicated SGE nodes; OpenNebula consolidates VMs across the VMM-enabled cluster nodes]
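The slide only names the idea of consolidation heuristics; the sketch below shows one plausible, made-up greedy variant: evacuate the least loaded hosts via live migration onto busier hosts that still have room, then report which hosts can be powered off. A production scheduler would also weigh migration cost, affinity and service constraints.

    # Hypothetical greedy consolidation heuristic (illustration only).
    def consolidate(hosts):
        """hosts: dict host -> {"capacity": int, "vms": {vm_name: load}}.
        Returns (migrations, hosts_that_can_be_powered_off)."""
        def used(h):
            return sum(hosts[h]["vms"].values())

        migrations = []
        for src in sorted(hosts, key=used):          # try to empty the emptiest hosts first
            plan, tentative = [], {h: used(h) for h in hosts}
            for vm, load in hosts[src]["vms"].items():
                # Candidate destinations: busy hosts that still fit this VM.
                targets = [h for h in hosts
                           if h != src and hosts[h]["vms"]
                           and tentative[h] + load <= hosts[h]["capacity"]]
                if not targets:
                    plan = None                      # cannot fully evacuate src
                    break
                dst = max(targets, key=lambda h: tentative[h])
                tentative[dst] += load
                plan.append((vm, dst))
            if plan:                                 # apply only complete evacuations
                for vm, dst in plan:
                    hosts[dst]["vms"][vm] = hosts[src]["vms"].pop(vm)
                    migrations.append((vm, src, dst))
        return migrations, [h for h in hosts if not hosts[h]["vms"]]

    cluster = {
        "node01": {"capacity": 4, "vms": {"sge-1": 1}},
        "node02": {"capacity": 4, "vms": {"sge-2": 1, "sge-3": 2}},
    }
    print(consolidate(cluster))   # sge-1 moves to node02; node01 can be shut down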

1. Local on-Demand Resource Provisioning
1.3. Benefits

Cluster Partitioning

•  Dynamic partitioning of the infrastructure (see the sketch below)
•  Isolation of workloads (several computing clusters)
•  Dedicated HA partitions

[Figure: SGE frontend, virtualized SGE nodes and dedicated SGE nodes; OpenNebula manages the VMMs on the cluster nodes]
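A minimal sketch of the partitioning idea, with made-up names: the same physical hosts are split into named partitions (one per computing cluster, plus a dedicated HA partition), and each partition only places VMs on its own hosts, so workloads stay isolated from each other.

    # Hypothetical partitioning sketch: each partition schedules only onto its own hosts.
    class Partition:
        def __init__(self, name, hosts):
            self.name, self.hosts, self.placements = name, list(hosts), {}

        def place(self, vm):
            # Simple round-robin inside the partition; never touches other partitions.
            host = self.hosts[len(self.placements) % len(self.hosts)]
            self.placements[vm] = host
            return host

    all_hosts = ["node01", "node02", "node03", "node04"]
    partitions = {
        "cluster-A": Partition("cluster-A", all_hosts[:2]),
        "ha":        Partition("ha",        all_hosts[2:]),   # dedicated HA partition
    }
    print(partitions["cluster-A"].place("sge-worker-1"))   # node01 or node02 only
    print(partitions["ha"].place("ha-service-1"))          # isolated on node03/node04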

1. Local on-Demand Resource Provisioning
1.3. Benefits

Support of Heterogeneous Workloads

•  Custom worker-node configurations (queues); see the sketch below
•  Dynamic provisioning of cluster configurations
•  Example: on-demand VO worker nodes in Grids

[Figure: SGE frontend, virtualized SGE nodes and dedicated SGE nodes; OpenNebula manages the VMMs on the cluster nodes]
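One way to picture heterogeneous workload support, with made-up template names and sizes: each queue (or grid VO) maps to its own worker-node template, and the number of workers to start is derived from the pending work in that queue.

    # Hypothetical worker-node templates per queue/VO (contents invented for illustration).
    TEMPLATES = {
        "short":    {"image": "sge-worker-short.img", "memory_mb": 1024},
        "mpi":      {"image": "sge-worker-mpi.img",   "memory_mb": 4096},
        "atlas-vo": {"image": "wn-atlas.img",         "memory_mb": 2048},  # on-demand VO worker node
    }

    def provision_workers(pending_jobs_per_queue, jobs_per_worker=10):
        """Decide how many workers of each configuration to start."""
        plan = {}
        for queue, pending in pending_jobs_per_queue.items():
            workers = -(-pending // jobs_per_worker)          # ceiling division
            plan[queue] = dict(count=workers, **TEMPLATES[queue])
        return plan

    print(provision_workers({"short": 25, "atlas-vo": 7}))
    # {'short': {'count': 3, ...}, 'atlas-vo': {'count': 1, ...}}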

1. Local on-Demand Resource Provisioning
1.3. Benefits

On-Demand Resource Provisioning

[Figure: the VIRTUAL INFRASTRUCTURE hosts virtualized SGE nodes and a virtualized web server alongside dedicated SGE nodes; the SGE frontend submits to the virtualized nodes and OpenNebula manages the VMMs on the cluster nodes]
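As a rough, made-up sketch of what on-demand provisioning across services could look like in this setup: the elastic (non-dedicated) slots of the cluster are split between extra SGE workers and web-server replicas according to current demand. The thresholds and names are assumptions for illustration only.

    # Hypothetical rebalancing rule between batch workers and a virtualized web server.
    def rebalance(total_slots, dedicated_slots, queued_jobs, web_req_per_s):
        """Split the elastic (non-dedicated) slots between SGE workers and web replicas."""
        elastic = total_slots - dedicated_slots
        web_replicas = min(elastic, max(1, web_req_per_s // 100))  # ~1 replica per 100 req/s (assumed)
        sge_workers = min(elastic - web_replicas, queued_jobs)     # the rest absorbs queued jobs
        return {"sge_workers": sge_workers, "web_replicas": web_replicas}

    print(rebalance(total_slots=8, dedicated_slots=2, queued_jobs=20, web_req_per_s=250))
    # {'sge_workers': 4, 'web_replicas': 2}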

1. Local on-Demand Resource Provisioning
1.3. Benefits

Benefits for Existing Grid Infrastructures (EGEE, TeraGrid…)

•  The virtualization of the local infrastructure provides a virtualized alternative for contributing resources to a Grid infrastructure
•  Simpler deployment and operation of new...