Facilities for CSIRO research staff

NCI National Facility - vayu and xe

The National Computational Infrastructure (NCI) is a federally funded Australian partnership committed to providing peak advanced computing facilities for the Australian research community.

The National Facility is a peak computing facility funded by NCI and is managed by the Australian National University Supercomputing Facility (ANU-SF). The National Facility also has an impressive range of software available and can be a useful resource for software porting and development for high performance scientific computing platforms.

For more information about the NCI National Facility visit http://nf.nci.org.au/

CSIRO has a significant partner share in the NCI-NF, managed by ASC, and CSIRO researchers also have access to the NCI-NF Merit Allocation Scheme (MAS).

For guidance on getting access to the NCI-NF please read our local Guide to the Sun Constellation at NCI. For further information please contact the ASC Help Desk or call (03) 8601 3800.

[Image: SGI Altix 4700]
[Image: CSIRO ASC's Sun/StorageTek SL8500 tape library]

CSIRO ASC SGI Altix 4700 NUMA - cherax

This machine is a large shared-memory multiprocessor with 128 1.67 GHz Itanium (ia64) processor cores and 512 Gbytes of memory as of August 2008. For full specifications see the SGI web site, where it is also called the Infinite Storage Data Lifecycle Management Server (DLM Server).

The Altix is tightly coupled with the CSIRO Data Store and provides significant processing capacity for working with the Data Store holdings.

CSIRO ASC Data Store

The Altix machine has 38 terabytes of disk and hosts a hierarchical data store: data held on 6 terabytes of high-performance disk is staged to or from cache disk and magnetic tape cartridges in automatic tape libraries (Sun/StorageTek SL8500) as required. The tape libraries have capacities in excess of 5 petabytes.

The Data Store is used as a central data repository for users of the ASC systems, providing live rather than archive access and virtually 'infinite' storage capacity. The data holdings reached 1 petabyte in October 2009. Two or more copies of all files are kept, with copies of small files stored off-site.
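As a rough sketch of how a hierarchical store of this kind is typically used from the command line, the commands below assume SGI DMF-style tools (dmls, dmget, dmput) on cherax; the file name is hypothetical and the exact tools and options available on the system may differ.

    # Check whether a file is online (disk-resident) or has been migrated
    # to tape; the long listing includes the DMF state of each file.
    dmls -l results/run42.nc

    # Recall a migrated file from tape to disk before processing it.
    dmget results/run42.nc

    # Migrate a file to tape and release its disk space once copies exist.
    dmput -r results/run42.nc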

CSIRO ASC Compute Cluster - burnet

CSIRO has an IBM System x iDataplex dx360 M3 system of 96 nodes.

The cluster is available for general purpose use by all CSIRO ASC registered users. Specific research groups that have co-invested with ASC have priority access to portions of the cluster.

Each compute node comprises dual hex-core Intel Xeon CPUs, for a total of 1152 cores across the cluster.

There are two main hardware configurations of node: 48 nodes have 48 GB of memory and 48 nodes have 96 GB of memory. The node interconnect is Quad Data Rate (QDR) InfiniBand, and there is approximately 50 TB of globally accessible storage.

CSIRO GPU Cluster - linuxgpu and wingpu

The new CSIRO high performance computing cluster will deliver more than 256 teraflops of computing performance and consists of the following components:

  • 128 Dual Xeon E5462 Compute Nodes (i.e. a total of 1024 2.8GHz compute cores) with 32 GB of RAM, 500 GB SATA storage and DDR InfiniBand interconnect
  • 64 Tesla S2050 (256 GPUs with a total of 114,688 streaming processor cores)
  • 144 port DDR InfiniBand Switch
  • 80 Terabyte Hitachi NAS file system.
The cluster is supplied by Xenon Systems of Melbourne and is located in Canberra, Australia.

Bureau of Meteorology SUN Constellation - solar

The SUN Constellation system is supported by the High Performance Computing and Communications Centre (HPCCC), a partnership between BoM and CSIRO. The HPCCC partnership has been operational since 1997 and is staffed by specialists in High Performance Computing. For more on the HPCCC see the Partnerships page.

The SUN Constellation consists of 576 nodes, each with two quad-core Intel 64-bit Xeon Nehalem processors, totalling 4608 CPU cores. Each node has 24 Gbytes of main memory and 24 Gbytes of flash memory instead of local disk. All of the nodes are connected by a dual-rail InfiniBand network, with data rates of 40 Gbit/s per connection.

The system runs the CentOS distribution of Linux, uses Sun Grid Engine for job management, and provides a Lustre global file system comprising 115 TB of disk space.

In addition, there are 4 user login nodes, and 6 data-mover nodes.
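As an illustration only, a Sun Grid Engine batch job on a system like this might be described by a script along the following lines; the parallel environment name, resource limits and program name are hypothetical, and the actual queue configuration on solar may differ.

    #!/bin/bash
    # Hypothetical SGE job script: request 16 slots in an MPI parallel
    # environment, a 2-hour wall-time limit, run from the submit directory,
    # and merge standard error into standard output.
    #$ -N my_model
    #$ -pe mpi 16
    #$ -l h_rt=02:00:00
    #$ -cwd
    #$ -j y

    mpirun -np $NSLOTS ./my_model input.nml

The script would then be submitted with qsub and the job monitored with qstat.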

NCI Specialised Facility for Bioinformatics - barrine

The National Computational Infrastructure Specialised Facility (NCI-SF) in Bioinformatics provides a compute cluster and a variety of bioinformatics tools and databases for all interested CSIRO scientists. For more information about the facility please visit the NCI-SF in Bioinformatics Cluster portal.

CSIRO has a partner share in the NCI-SF in Bioinformatics and CSIRO researchers also have access to the NCI Merit Allocation Scheme (MAS).

MASSIVE - NCI Specialised Facility in Imaging and Visualisation

MASSIVE (Multi-modal Australian ScienceS Imaging and Visualisation Environment) is a specialised high performance computing facility for imaging and visualisation available to CSIRO researchers. The MASSIVE facility provides the hardware, software and expertise to help scientists apply advanced imaging and visualisation techniques across a wide range of scientific fields.

CSIRO has a partner share in MASSIVE and CSIRO researchers also have access to the NCI Merit Allocation Scheme (MAS).

For more information about the facility please visit http://www.massive.org.au.

iVEC Facility

CSIRO is a partner in iVEC. Please contact ASC if you would like to get access to the iVEC Altix or other resources.

For more information about iVEC visit http://www.ivec.org

TPAC Facility

CSIRO is a partner in TPAC (Tasmanian Partnership for Advanced Computing). The TPAC facility consists of a 128-processor SGI Altix 4700 (~0.8 Tflops) with a 350 terabyte SAN/robotic tape silo system. Please contact ASC if you would like to get access to the TPAC Altix or other resources.

For more information about TPAC visit http://www.tpac.org.au

Condor Cycle Harvesting

Condor has been set up to take advantage of the large number of Windows desktop PCs across CSIRO, the majority of which are idle for many hours each day. To utilise this spare computing capacity, state-based central manager computers have been configured to manage a pool of nominated PCs within each state. Jobs submitted in one state will preferentially run in that pool but can migrate (or "flock") to other pools if necessary. Jobs are generally only allowed to run overnight, i.e. between 6:00pm and 8:00am, although a small class of "shortjobs" can run at any time. Individual desktop owners always have priority use of their machines: if a job is running on a desktop and any owner activity is detected (keyboard, mouse, CPU), the job is terminated and sent elsewhere to run.

More detailed information about Condor in CSIRO can be found in the ASC User Documentation, and about Condor generally at the University of Wisconsin Condor website, where Condor was developed.
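As a minimal sketch only, a Condor job is described in a submit file and handed to the scheduler with condor_submit. The executable, file names and the requirement targeting Windows machines below are illustrative, and any CSIRO-specific settings (such as how "shortjobs" are flagged) are not shown because they are site-specific.

    # Hypothetical Condor submit description file, e.g. myjob.sub
    universe     = vanilla
    executable   = myprog.exe
    arguments    = input.dat
    # Target the Windows desktop pool described above; the exact OpSys
    # value can vary between Condor versions.
    requirements = (OpSys == "WINDOWS")
    output       = myjob.out
    error        = myjob.err
    log          = myjob.log
    # Transfer input and output files, since the desktops do not share
    # a file system with the submit machine.
    should_transfer_files   = YES
    when_to_transfer_output = ON_EXIT
    queue

The job would be submitted with condor_submit myjob.sub and monitored with condor_q.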

