Maxwell – a 64 FPGA Supercomputer

Rob Baxter1, Stephen Booth,
Mark Bull, Geoff Cawood,
James Perry, Mark Parsons,
Alan Simpson, Arthur Trew
EPCC and FHPCA
Andrew McCormick,
Graham Smart,
Ronnie Smart
Alpha Data Ltd and FHPCA
Allan Cantle,
Richard Chamberlain,
Gildas Genest
Nallatech Ltd and FHPCA
1 communicating author: r.baxter@epcc.ed.ac.uk; 0131 651 3579; University of Edinburgh, James Clerk Maxwell Building, King’s Buildings, Edinburgh EH9 3JZ

Abstract
We present the initial results from the FHPCA
Supercomputer project at the University of Edinburgh.
The project has successfully built a general-purpose 64
FPGA computer and ported to it three demonstration
applications from the oil, medical and finance sectors.
This paper describes the machine itself – Maxwell – its
hardware and software environment, and presents very
early benchmark results from runs of the demonstrators.
1. Introduction
Against the background of possibilities in the emerging
area of high-performance reconfigurable computing [1],
the FPGA High Performance Computing Alliance
(FHPCA [2]) was founded in early 2005 to take forward
the ideas of an FPGA-based supercomputer. The alliance
partners are Algotronix, Alpha Data, EPCC at the
University of Edinburgh, the Institute for System Level
Integration, Nallatech and Xilinx. The project was
facilitated and part funded by the Scottish Enterprise
Industries team and had two main goals:
· design and build a 64-FPGA supercomputer from
commodity parts and “plug-in” FPGA cards;
· demonstrate its effectiveness (or otherwise) for
real-world high-performance computing (HPC)
applications.
We describe here the results of the first of these goals
and report some early results of the second.
The machine itself – Maxwell – was completed in the
first part of this year. We describe its architecture in
Section 3; interestingly it shares a number of similarities
with the proposed petascale Reconfigurable Computing
Cluster described by Sass et al. in [3].
This paper is structured as follows. Section 2 describes
the motivation behind Maxwell. Section 3 delves into the
details of the machine’s hardware while Section 4
describes the software environment and programming
methodology used in porting a number of demonstration
applications. Section 5 discusses three key demonstration
applications from the fields of financial services, medical
imaging and oil and gas exploration, and Section 6
presents early performance results from these applications
on Maxwell. Finally Section 7 offers thoughts for the
future.
2. Motivation
Maxwell is designed as a proof-of-concept general-purpose
FPGA supercomputer. Given the specialized
nature of hardware acceleration the very concept of
‘general-purpose’ for high-performance reconfigurable
computing (HPRC) is worth investigating in its own right.
Can a machine built to be as broadly applicable as
possible deliver enough FPGA performance to be worth
the cost?
Our real interest in building Maxwell was not to test
whether FPGA hardware can be used to accelerate
segments of a standard HPC application, but to explore
whether standard HPC applications can be run almost
entirely on FPGA hardware, parallel communications and
all. We take the same view as Bennett et al. [4] in
regarding the FPGAs as the primary compute platform
rather than co-processors to a CPU. To this end we
constructed a machine capable in principle of parallel
operation across a network of large FPGAs linked directly
together.
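To make this "FPGAs as the primary compute platform" model concrete, the listing below sketches what a host-side process on one node might look like under such a scheme. It is an illustration only: the fpga_* functions are hypothetical stand-ins (stubbed out so the sketch compiles), not part of any Maxwell or vendor API, and MPI is used here merely to launch one host process per blade. The essential point is that the CPU only stages input data and collects results, while the computation and the parallel communication happen in the FPGA fabric over the direct FPGA-to-FPGA links.

/* Hedged sketch of one node's host process in the "FPGAs as primary
 * compute platform" model.  The fpga_* calls are hypothetical
 * placeholders, stubbed for illustration; they do not name any real
 * Maxwell library function. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define FPGAS_PER_BLADE 2   /* one Xeon and two Virtex-4 FPGAs per blade */

/* --- hypothetical FPGA control interface, stubbed so this compiles --- */
static void fpga_load_bitstream(int fpga, const char *bitfile) {
    printf("FPGA %d: configure with %s\n", fpga, bitfile);
}
static void fpga_write_region(int fpga, const void *src, size_t n) {
    (void)src; printf("FPGA %d: stage %zu input bytes over PCI-X\n", fpga, n);
}
static void fpga_run(int fpga) {
    /* In the real machine the kernels would now compute and exchange
     * boundary data directly with neighbouring FPGAs, not via MPI. */
    printf("FPGA %d: run kernel (halo exchange over direct FPGA links)\n", fpga);
}
static void fpga_read_region(int fpga, void *dst, size_t n) {
    memset(dst, 0, n); printf("FPGA %d: read back %zu result bytes\n", fpga, n);
}

int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    size_t bytes = 1u << 20;            /* this rank's sub-domain (example size) */
    char *buf = calloc(bytes, 1);

    /* The host CPU only stages data and collects results. */
    for (int f = 0; f < FPGAS_PER_BLADE; f++) {
        fpga_load_bitstream(f, "application_kernel.bit");
        fpga_write_region(f, buf, bytes);
    }
    for (int f = 0; f < FPGAS_PER_BLADE; f++)
        fpga_run(f);
    for (int f = 0; f < FPGAS_PER_BLADE; f++)
        fpga_read_region(f, buf, bytes);

    printf("rank %d: host work done; the compute was on the FPGAs\n", rank);
    free(buf);
    MPI_Finalize();
    return 0;
}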
3. Hardware
Maxwell is essentially an IBM BladeCentre cluster
with FPGA acceleration. Altogether it comprises 32 blade
servers, each with one Intel Xeon CPU and two Xilinx
Virtex-4 FPGAs. The CPUs are connected to the FPGAs
with a standard IBM PCI-X Expansion Module.
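As a quick check on these figures, the short listing below derives the aggregate device counts and a simple global FPGA index from the per-blade composition just described. It is illustrative only; the numbering scheme is an assumption made for this sketch, not Maxwell's actual device naming.

/* Illustrative only: aggregate counts and one possible global FPGA
 * numbering derived from the per-blade composition described above. */
#include <stdio.h>

enum { NUM_BLADES = 32, CPUS_PER_BLADE = 1, FPGAS_PER_BLADE = 2 };

int main(void) {
    printf("CPUs : %d\n", NUM_BLADES * CPUS_PER_BLADE);   /* 32 Xeons     */
    printf("FPGAs: %d\n", NUM_BLADES * FPGAS_PER_BLADE);  /* 64 Virtex-4s */

    /* Assumed numbering: global FPGA g lives on blade g/2, local slot g%2.
     * Print only the first and last few mappings to keep output short. */
    for (int g = 0; g < NUM_BLADES * FPGAS_PER_BLADE; g++) {
        if (g < 4 || g >= NUM_BLADES * FPGAS_PER_BLADE - 2)
            printf("FPGA %2d -> blade %2d, slot %d\n",
                   g, g / FPGAS_PER_BLADE, g % FPGAS_PER_BLADE);
    }
    return 0;
}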
3.1 BladeCentre chassis
Physically Maxwell comprises two 19-inch racks and
five IBM BladeCentres. Four of the BladeCentres have
seven IBM Intel Xeon blades and the fifth has four. Each
blade is a...