HPN Group / Home
High-performance Computer Networks and Services for
Parallel and Distributed Computing
The high-performance networking (HPN) group has a long and successful history of research on
high-performance computer networks (e.g., system-area networks, local-area networks) for
high-performance computing and high-performance embedded computing. For the past decade, the group has
worked with a variety of cutting-edge HPN technologies, including 10 Gigabit and Gigabit Ethernet,
InfiniBand, RapidIO, Scalable Coherent Interface (SCI), Myrinet, Fibre Channel, ATM, SuperHIPPI, Giganet
cLAN, and Synfinity. A broad range of testbed experiments, coupled with the development of a number of
simulative and analytical models for HPNs, has led to new and better insight into the inherent
performance characteristics and tradeoffs of HPN protocols and technologies for application in
general-purpose HPC systems as well as embedded and real-time systems.
Sponsors: Department of
Defense, National Science Foundation
Principal Investigator: Dr. Alan D.
Spring 2006 meetings: 3pm (8th period) Mondays and 12:50pm (6th
period) Thursdays, HCS conference room (LAR335)
Casey Reardon, PhD student, UF Presidential Fellow, group leader
Himanshu Anand, MS student
Trevor Finn, BS student
Jack Profumo, MS student
Karthik Veeramani, MS student
JunBok You, PhD student
Related Links
UPC@Florida web page maintained by Brian Letzen
project web page
Progress Report on HPN/UPC research in HCS Lab (10/15/04) -- PPT
Annual Progress Report on HPN/UPC research in HCS Lab (05/12/04)
-- PPT'02 format, PPT'95 format
Interim Report for Rockwell (05/03/04) -- PDF
Progress Report on HPN/UPC research in HCS Lab (02/05/04) -- PPT'02 format, PPT'95 format
Progress Report on HPN/UPC research in HCS Lab (11/05/03) -- PPT'02 format, PPT'95 format
Progress Report on HPN/UPC research in HCS Lab (7/30/03) -- PPT'02 format, PPT'95 format, PDF format
Research activities with Scalable
Coherent Interface (SCI) in the HCS Lab
We gratefully acknowledge the following vendors for their support of this research:
Dolphin Interconnects for their donation of SCI equipment and software tools.
Intel for their donation of Xeon processors and motherboards for the Kappa cluster.
AMD for their donation of Opteron processors for the Lambda cluster.
for their donation of ScaMPI and other software tools for SCI.
their loan of QsNet equipment and software tools.
for their driver support for 4X InfiniBand equipment previously donated
by (now defunct) Fabric Networks.