Antares & FireAnt

38-node, ~200 GFLOP Beowulf supercomputing cluster of the Planetary System Formation group
of Stockholm Observatory, Stockholm University. Inaugurated: 6 Sept. 2002

Hardware Summary (totals):

ANTARES (the original system from 2002):

FireAnt (the 2005 upgrade, which more than tripled the computational power): Our interconnect transfers up to 0.5 Gbit/s from one CPU to another, only 2-2.5 times slower than the Dolphin Wulfkit3 network on Seth, the #2 Swedish supercomputer. (See the details of the comparison calculations.)
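For a feel of what the quoted 0.5 Gbit/s means in practice, here is a rough back-of-the-envelope estimate of the time to move data between two nodes. Latency and protocol overhead are ignored, and the 100 MB payload is just an illustrative figure, not a measurement from our cluster:

```python
# Rough transfer-time estimate over the cluster interconnect.
# Latency and protocol overhead are ignored (an assumption); the
# 0.5 Gbit/s figure is the quoted peak point-to-point rate.
link_gbit_per_s = 0.5
payload_mb = 100                      # e.g. a slab of a CFD grid (illustrative)
payload_bits = payload_mb * 8e6       # 1 MB = 8e6 bits
seconds = payload_bits / (link_gbit_per_s * 1e9)
print(f"{payload_mb} MB in ~{seconds:.1f} s at {link_gbit_per_s} Gbit/s")
# → 100 MB in ~1.6 s at 0.5 Gbit/s
```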

ANTARES (2002): Maximum estimated performance = 61 GFlop, a factor of ~2.2 away from qualifying for the 2002 list of the world's Top 500 supercomputers. While this in itself isn't bad, the really neat thing is the price/performance ratio, which shows you how CHEAP we are :-)
Total cost of the system = 207 kSEK + VAT (~$20k + tax at the time of purchase; we don't pay VAT at the University). Can YOU beat our $330/GFlop ratio (in rack mounting, which involves some extra cost)? Please let us know if you do, but frankly we're not expecting your email before, say, late 2003. [We didn't get any....]
FIREANT (2005): Maximum estimated performance = 144 GFlop (scaled from the 61 GFlop figure above), significantly beating the Athlon system thanks to its ~2.5 times faster CPUs. While the Opterons are 64-bit machines, we will benefit mostly not from that but from the on-chip memory controller (once again, memory bandwidth is tied to the CPU speed!).
The price/performance ratio is quite good. Total cost of the system = 202 kSEK + VAT (~$30k + tax, of which we don't pay the tax. Notice how the exchange rate changed...). Can YOU beat the $206/GFlop ratio of the new Sun Fire system? Please let us know what ratios you were able to achieve.
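The price/performance figures above follow directly from the costs and peak estimates. A quick sanity check (the dollar costs are the page's own rough SEK conversions, so small rounding differences against the quoted $330 and $206 are expected):

```python
# Sanity check of the price/performance ratios quoted above.
# The dollar costs (~$20k in 2002, ~$30k in 2005) are rough
# conversions from SEK; exact exchange rates are assumptions.
systems = {
    "Antares (2002)": {"cost_usd": 20_000, "peak_gflop": 61},
    "FireAnt (2005)": {"cost_usd": 30_000, "peak_gflop": 144},
}

for name, s in systems.items():
    ratio = s["cost_usd"] / s["peak_gflop"]
    print(f"{name}: ${ratio:.0f}/GFlop")
# → Antares (2002): $328/GFlop
# → FireAnt (2005): $208/GFlop
```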

Main Software in Use: Debian Linux, MPICH, PVM, Absoft Fortran

Main Applications: extrasolar planet formation (CFD of protoplanetary disks), circumstellar dust disks

Slide show on original Antares (2002):


What is Antares?

What is it made of?

Applications of parallel computing

Typical ways to make your program parallel

What are PVM (Parallel Virtual Machine) and MPI (Message Passing Interface)?

Clocks and speed

File Storage

Price/performance issues

Caveats, Conclusions, Invitation to use


To open an account on Antares, contact:
Pawel Artymowicz, Adam Peplinski

Home Page:



 Antares (front side & the people behind Antares)

 Antares (back side)

 FireAnt (front side) and the back side.


Cluster supercomputing, MPI, PVM, etc.


Historical Footnotes: Acknowledgments: Funding for Antares (an earmarked 200 kSEK research grant) and the FireAnt extension (202 kSEK from a general-purpose research grant) was generously provided by Vetenskapsrådet (the Swedish Research Council). We also acknowledge Iouri Belokopytov, Sergio Gelato, and Uno Wänn from the Stockholm Center for Physics, Astronomy and Biotechnology for their interest and help in building this system: the racks provided by Iouri, system work by Sergio, and help with the rails on Antares by Uno. Adam Peplinski spent much time and effort brilliantly running Antares as its sysadmin. Many thanks to all.

last update: Jan 2005, P.Art.