
Supercomputers around the world

Matter Modeling Asked on August 19, 2021

For a matter modelling person, the most valuable resource is computing power. For many of us, computing power at hand limits the scale of problems we can solve. There are many national supercomputing facilities for academics. What are the resources available in each country?

14 Answers

Canada: Compute Canada


Before Compute Canada (Antiquity)

Supercomputing in Canada began with several disparate groups:

  • 1997: MACI (Multimedia Advanced Comp. Infrastructure) is formed from Alberta universities.
    • 2001: WestGrid is formed by 7 Alberta and BC institutions (facilities launched in 2003).
  • 1998: HPCVL is formed by institutions in Toronto and eastern Ontario.
  • 2000: RQCHP is formed in Quebec. Little information is available about it (see the URL link).
  • 2001: SHARCNET is formed by 7 Ontario universities; by 2019 it has grown to 19 institutions.
  • 2001: CLUMEQ is formed with some institutions in Quebec (facilities launched in 2002).
  • 2003: ACENET is formed with institutions in Atlantic (Far-Eastern) Canada.
  • 2004: SciNet is formed with University of Toronto & some Ontario hospitals (facilities in 2009).

Also in 2003: a high-speed optical link was established between WestGrid & SHARCNET (West and East).

Amalgamation into Compute Canada (CC)

  • 2006: WestGrid, HPCVL, SciNet, SHARCNET, CLUMEQ, RQCHP & ACEnet formed CC.
    • $60 million from CFI's National Platforms Fund and $2 million/year from NSERC.
  • 2011: Everything is split into just 4 regions: West, Ontario, Quebec, East:
    • HPCVL, SciNet & SHARCNET form Compute Ontario (just a name, they're autonomous).
    • CLUMEQ & RQCHP form Calcul Quebec.
  • 2012: CC is incorporated as a non-profit and receives another $30 million from CFI.
  • 2016: CC starts deploying supercomputers (HPC existed before but through the sub-groups).

Throughout this time SHARCNET and the others continue expanding to include more universities, colleges, and research institutions. Compute Ontario adds HPC4Health. The sub-groups of CC keep growing.


HPC facilities offered

Because CC is an amalgamation of several pre-existing consortia, a full account of all the systems it offers would need a separate question. The following systems were made available after the formation of CC:

  • 2017: Cedar (Vancouver, BC)
    • Entered the Top500 ranked #87.
    • 94,528 CPU cores, 2502 nodes.
    • 1532 GPUs.
    • 504T of RAM (for CPUs alone).
    • 2.2P of node-local SSD space.
    • /home 250TB total.
    • /scratch 3.7PB total (LUSTRE).
    • /project 10PB total.
    • Double-precision: 936 TFLOPS for CPUs, 2.744 PFLOPS for GPUs, 3.6 PFLOPS total (a sketch of how such theoretical peaks are estimated follows this list).
    • CPU: Broadwell E5-2683/E7-4809/E5-2650, some SkyLake & CascadeLake added later.
    • GPU: Originally Pascal P100 w/ 12G or 16G HBM2 RAM but 32G Volta V100 added later.
    • Interconnect: OmniPath (v1) at 100Gbit/s bandwidth.
  • 2017: Graham (Waterloo, Ontario)
    • Entered the Top500 ranked #96.
    • 41,548 CPU cores, 1185 nodes.
    • 520 GPUs
    • 128T of RAM (for CPUs alone).
    • 1.5P of node-local SSD space.
    • /home 64 TB total.
    • /scratch 3.6PB total (LUSTRE).
    • /project 16 PB total.
    • Double-precision: Unknown.
    • CPU: Broadwell {E5-2683, E7-4850} v4, some SkyLake & CascadeLake added later.
    • GPU: Pascal P100 w/ 12G HBM2 RAM but 16G Volta V100 & Turing T4 added later.
    • Interconnect: Mellanox FDR 56Gb/s (GPU & cloud), EDR 100Gb/s InfiniBand (rest).
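A note on the "Double-precision" figures above: such numbers are theoretical peaks, usually estimated as nodes × cores per node × clock frequency × double-precision FLOPs per core per cycle. Below is a minimal Python sketch of that arithmetic using an entirely hypothetical CPU partition (the node count, core count, clock and FLOPs/cycle are illustrative assumptions, not Cedar's or Graham's exact values):

    # Back-of-the-envelope theoretical peak (double precision) for a CPU partition.
    # All inputs are illustrative; real systems mix several CPU generations and clocks.
    nodes           = 2500   # hypothetical number of CPU nodes
    cores_per_node  = 32     # e.g. 2 sockets x 16 cores
    clock_ghz       = 2.4    # base clock in GHz
    flops_per_cycle = 16     # assumed DP FLOPs/cycle/core for an AVX2 FMA-capable core

    peak_gflops = nodes * cores_per_node * clock_ghz * flops_per_cycle
    print(f"Theoretical peak: {peak_gflops / 1e3:.0f} TFLOPS")  # ~3072 TFLOPS for these inputs

Plugging in a machine's actual node mix (and the FLOPs/cycle of each CPU generation) reproduces the kind of peak figures quoted above; GPU peaks are computed analogously from the per-card peak multiplied by the card count.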

Correct answer by Nike Dattani on August 19, 2021

India: National Supercomputing Mission

Supercomputing and the C-DAC Missions:

Supercomputing in India started in the 1980s. After difficulties in obtaining supercomputers from abroad for weather forecasting and academic work (due to their potential for dual use), it was decided to build indigenous supercomputing facilities.

Supercomputers were made by C-DAC (Centre for Development of Advanced Computing, est. 1987), Pune, in several 'Missions', leading to the production of the PARAM (PARAllel Machine; also 'supreme' in Sanskrit) series.

Examples include the PARAM 8000 (1990; several models, including exports to Germany, the UK and Russia), PARAM 9000 (1994), PARAM Padma (2002), PARAM ISHAN (2016, IIT Guwahati campus) and PARAM Brahma (2020, IISER Pune campus). These supercomputers are accessed via PARAMNet. (IITs (Indian Institutes of Technology) and IISERs (Indian Institutes of Science Education and Research) are families of premier Indian research and technical institutes.)

12th plan:

There also exists a project under the 12th five-year plan handled by the Indian Institute of Science (IISc), Bangalore.

National Supercomputing Mission (NSM):

The National Supercomputing Mission, jointly implemented by the Department of Science and Technology (DST), the Ministry of Electronics and Information Technology (MeitY), IISc and C-DAC, is creating 70 supercomputers in various academic and research institutes, linked by a high-speed network.

Three supercomputers were built during 2015-2019, and 17 more are being built in 2020.


List of supercomputing facilities:

As per C-DAC's website:

C-DAC has commissioned and operates three national supercomputing facilities for the HPC user community.

These are:

  • National PARAM Supercomputing Facility at C-DAC, Pune (runs Param Yuva II)
  • C-DAC's Terascale Supercomputing Facility at C-DAC, Bengaluru
  • Bioinformatics Resources and Applications Facility (BRAF) at C-DAC, Pune

C-DAC also provides high-performance computing facilities in the form of PARAM SHAVAK.

Other than the facilities directly hosted by C-DAC, most premier academic institutions have supercomputing facilities. Examples:

  • Param Brahma (IISER Pune, since late-2019/ early 2020)
  • Param Shivay (IIT BHU, since 2019)
  • Param Ishan (IIT Guwahati, since 2016)
  • HPC (IIT Delhi)
  • HP Apollo 6000 (IIT Delhi)
  • Virgo (IIT Madras)
  • SahasraT (IISc Bangalore)
  • Colour Boson (Tata Institute of Fundamental Research)
  • PARAM Kanchenjunga (NIT Sikkim, since 2016)
  • Pratyush (Indian Institute of Tropical Meteorology, this may not be generally available for materials work, but it is India's most powerful supercomputer ranking #67 in the TOP-500 list for 2020)
  • Aaditya (Indian Institute of Tropical Meteorology)
  • Mihir (National Centre for Medium Range Weather Forecasting)

Top of the line supercomputers are also available with other organizations. Examples:

  • SAGA-220 (Supercomputer for Aerospace with GPU Architecture-220 teraflops) by ISRO
  • Eka by Computational Research Laboratories, Tata Sons
  • Virgo Supernova by Wipro Infotech

The above list is not complete; many other institutions also operate supercomputing facilities (for instance, IIT Roorkee has a PARAM 10000), and those that don't often have lower-powered server clusters offering computing power to researchers (IIT Indore operates an IBMx Intel HPC cluster).

Answered by Devashish on August 19, 2021

Switzerland

In Switzerland, the Swiss National Supercomputing Centre (CSCS) provides most of the computing power. Refer to the Wikipedia article for a list of all its computing resources; it started with a 2-processor computer in 1992. Most notably, since December 2012 it has been the provider of Piz Daint, which in 2016, after an upgrade, became the third most powerful supercomputer in the world at 25 petaflops. Piz Daint is a Cray XC50/XC40 system featuring Nvidia Tesla P100 GPUs. The title of "third most powerful supercomputer in the world" is unfortunately no longer current. At the time of writing, CSCS also operates four other active clusters. The CSCS computers are used by universities and research facilities, including meteorology/weather services, and by private stakeholders.

Of course, many universities and sub-departments have their own smaller clusters for high-performance and specialized applications. For example, when studying at ETH Zürich, I had access to a cluster for students of D-CHAB (the chemistry department) called Realbeaver, to the ETH computer cluster Euler, which is currently in stage VII of its expansion, and to Piz Daint, mentioned above. For the latter two, the computing resources are limited according to shareholder agreements. For students, the available resources generally depend on the course they are taking or the group they do their project in.

Answered by BernhardWebstudio on August 19, 2021

Finland

Finland has a long history in supercomputing; CSC - the Finnish IT Center for Science, administered by the Finnish Ministry of Education and Culture, has provided computing services since 1971, starting with a Univac computer.

The strategy in Finland has been to pool national resources from the start, and this has enabled Finnish researchers to have access to up-to-date computing resources for many decades. The policy of CSC has been to update their supercomputers regularly, and they have been a semi-permanent presence on the Top500 list of the world's supercomputers.

Although many universities and departments in Finland also operate their own computer clusters, anyone with an academic affiliation in Finland can get a CSC user account, and apply for their computational resources with a relatively easy procedure. This has greatly aided computational work (especially in matter modeling!) for a long time.

CSC is currently installing new supercomputers. In addition to the recently installed Puhti supercomputer (an Atos BullSequana X400, 1.8 petaflops, 682 nodes with 2×20-core Xeon Gold 6230 processors, i.e. 27,280 cores in total, a mix of memory sizes per node, and a 4+ PB Lustre storage system), the upcoming Mahti and Lumi supercomputers will bring a huge increase in computing power.

Mahti is an Atos BullSequana XH2000 supercomputer with 1,404 nodes and a peak performance of 7.5 petaflops. Each node has two 64-core AMD EPYC 7H12 (Rome) processors with a 2.6 GHz base frequency (3.3 GHz max boost) and 256 GB of memory. Mahti will also have an 8.7 PB Lustre parallel storage system. Mahti should become generally available to Finnish users in August 2020.

Lumi is a EuroHPC supercomputer, with a computing power of over 200 petaflops, and over 60 PB of storage, and will become available in early 2021. Although this is a European joint project, since the supercomputer is based in Finland, it will have a quota for Finnish users.

Answered by Susi Lehtola on August 19, 2021

Taiwan: National Center for High-performance Computing (NCHC)

Users can apply for time on nationally shared computing resources (e.g. TAIWANIA 1). Unfortunately, only a limited amount of the documentation and support is available in English (most is in traditional Chinese).

https://www.nchc.org.tw/

Answered by taciteloquence on August 19, 2021

USA: Facilities for High Throughput Computing (HTC)


Several other answers mention USA centers at national labs and NSF XSEDE. There's another NSF-funded project for high throughput computing (HTC)*, versus traditional high performance computing (HPC):

OSG (Open Science Grid)

The OSG is a distributed, international network of computing facilities aimed at providing high throughput computing. Rather than having a large central system, they utilize the unused cycles of computers in their network (some of which are traditional HPC systems, whereas others are closer to commodity resources).

Because OSG focuses on HTC across a distributed network, they have particular criteria about what sorts of jobs they can support. For example, parallelized parameter sweeps or image processing on discrete datasets would benefit from HTC/OSG, whereas jobs that share a large dataset or are otherwise tightly coupled wouldn't benefit much.

Nonetheless, a lot of analyses can be broken into small, independent jobs to run opportunistically on the network, so they have a lot of usage in the science communities.
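To make the distinction concrete, below is a minimal, purely illustrative Python sketch of an "embarrassingly parallel" parameter sweep, the kind of workload that suits HTC: every parameter point is an independent task that shares no data with the others. The function and parameters are hypothetical, and on OSG each task would be packaged as its own job (e.g. via HTCondor) rather than run in a local process pool:

    # Hypothetical parameter sweep: each parameter point is an independent task.
    # Locally we map it over a process pool; on an HTC system each task would
    # become a separate job scheduled opportunistically across the network.
    from multiprocessing import Pool

    def run_single_point(params):
        temperature, pressure = params
        # ... here one would run the actual simulation for this parameter point ...
        return {"T": temperature, "P": pressure, "result": temperature * pressure}  # placeholder

    if __name__ == "__main__":
        sweep = [(T, P) for T in range(100, 500, 100) for P in (1.0, 2.0, 5.0)]
        with Pool() as pool:
            results = pool.map(run_single_point, sweep)  # no shared data between tasks
        print(len(results), "independent parameter points completed")

A tightly coupled job (e.g. a single large simulation whose processes exchange data every step) cannot be split up this way, which is why it belongs on an HPC system rather than on OSG.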


*Briefly, HTC differs from HPC in that HTC focuses on the sustained execution of many discrete "jobs" over long periods of time (months/years), compared to the shorter time scales (seconds/days) typical of HPC-centric systems. For HTC, metrics like FLOPS or peak performance are not very relevant; instead, the number of operations completed over weeks/months/years is of interest. HTCondor has more about HTC, and is used in the OSG.

Answered by ascendants on August 19, 2021

China: National supercomputers


China has 6 national supercomputer centers and several university / Chinese Academy of Sciences supercomputer centers. Here is a brief overview of China's national supercomputer centers.

NATIONAL SUPERCOMPUTING CENTER IN CHANGSHA

  • Hunan University is responsible for operation and management; the National University of Defense Technology is responsible for technical support.

  • The peak performance of the whole system is 1,372 TFLOPS, of which the CPUs contribute 317.3 TFLOPS and the GPUs 1,054.7 TFLOPS.

  • The system is configured with 2,048 blade nodes forming a computing array. Each node has two 6-core Intel Xeon (Westmere-EP) processors with a base frequency of 2.93 GHz and 48 GB of memory, and is also equipped with an Nvidia M2050 GPU. A single computing node has a peak CPU performance of 140.64 GFLOPS and a peak GPU performance of 515 GFLOPS.

NATIONAL SUPERCOMPUTER CENTER IN TIANJIN

  • The National Supercomputing Tianjin Center, approved for establishment in May 2009, was the first national supercomputing center in China. It is equipped with the "Tianhe-1" ("Tianhe No. 1") supercomputer, which ranked No. 1 on the world's Top500 list in November 2010, as well as a "Tianhe-3" prototype.

NATIONAL SUPERCOMPUTER CENTER IN JINAN

  • The National Supercomputing Jinan Center was founded in 2011. An E-class (exascale) computing prototype was built in 2018, and the full-scale supercomputing platform is under construction during 2019-2022.

NATIONAL SUPERCOMPUTER CENTER IN GUANGZHOU

  • The National Supercomputing Center in Guangzhou hosts the "Tianhe-2" ("Tianhe No. 2") supercomputer system, built by the National University of Defense Technology. The first phase of the Tianhe-2 system has a peak speed of 54.9 petaflops, a sustained (Linpack) speed of about 33.9 petaflops, and an energy efficiency of roughly 1.9 billion double-precision floating-point operations per watt (1.9 GFLOPS/W).

NATIONAL SUPERCOMPUTING CENTER IN SHENZHEN

  • The Dawning 6000 "Nebulae" supercomputer at the National Supercomputing Center in Shenzhen achieved the second-highest measured computing speed in the world at the time: a measured (Linpack) speed of 1,271 TFLOPS, ranking second in the world, with peak double-precision floating-point performance exceeding 2,400 TFLOPS. The system has a total memory capacity of 232 TB and 17.2 PB of storage.
  • There are a total of 2,560 computing nodes in the main blade area. Each computing node is configured with two Intel Xeon X5650 CPUs (6 cores, 2.66 GHz), 24 GB of memory, one Nvidia C2050 GPU and one QDR InfiniBand card; the system's measured performance is 1,271 TFLOPS. The fat-node partition has a total of 128 rack-mounted server nodes, each configured with 4 AMD Opteron 6136 processors (8 cores, 2.4 GHz), 128 GB of memory and 1 QDR InfiniBand card.

NATIONAL SUPERCOMPUTING CENTER IN WUXI

  • The Sunway TaihuLight (Shenwei Taihuguang) supercomputer is composed of 40 computing cabinets and 8 network cabinets. Each computing cabinet contains 4 supernodes, each made up of 32 compute blades; every blade holds 4 compute node boards, and each board carries 2 "Shenwei 26010" (SW26010) high-performance processors. One cabinet therefore holds 1,024 processors, and the whole machine has 40,960 processors. Each processor has 260 cores, the boards use a dual-node design, and each CPU has 32 GB of DDR3-2133 memory soldered on board (the multiplication is sketched just below).
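The processor and core totals above follow directly from that cabinet hierarchy; here is a minimal Python sketch of the multiplication, using only the figures quoted in the bullet above:

    # Processor/core totals for Sunway TaihuLight from the hierarchy described above.
    cabinets             = 40
    supernodes_per_cab   = 4
    blades_per_supernode = 32   # the compute "plug-ins"
    boards_per_blade     = 4
    cpus_per_board       = 2    # SW26010 processors
    cores_per_cpu        = 260

    cpus_per_cabinet = supernodes_per_cab * blades_per_supernode * boards_per_blade * cpus_per_board
    total_cpus  = cabinets * cpus_per_cabinet
    total_cores = total_cpus * cores_per_cpu
    print(cpus_per_cabinet, total_cpus, total_cores)  # 1024, 40960, 10649600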

Answered by Franksays on August 19, 2021

Universities also have supercomputers of smaller magnitude that serve the same function. A supercomputer is not necessarily a fancy state-of-the-art setup; its processing power is determined by the number of independent processors it is equipped with. A real supercomputer may even use obsolete, years-old processors (whose acquisition value is insignificant); using state-of-the-art processors would make supercomputers even more ridiculously expensive than they already are. A state-of-the-art Intel Xeon processor, for example, costs thousands of dollars, and acquiring the chips needed to build a supercomputer would cost over 2 billion dollars for the chips alone, while obsolete chips from disposed computers cost virtually nothing. With the advent of mainframe computing, companies that specialized in supercomputer architectures either went out of business or were bought up (Cray, Wang, etc.).

Common mainframes can be built: a simple motherboard is equipped with several processors, the motherboards are inserted into shelves, the shelves are mounted vertically in a rack, and the rack chassis are then linked together. A supercomputer does what your computer at home does, just with tens of thousands of processors, some of them dedicated exclusively to graphics/physics engines.

With distributed computing and cloud setups, processing without the need for large mainframes is becoming more common. Google rents out supercomputer time, and one company, Cycle Computing, has assembled a makeshift supercomputer by linking older machines, at a cost of about 1,300 dollars per hour.

The biggest detriment to supercomputing now is energy consumption. The proliferation of more and more computing power has led to an exponential rise in energy demand. Processors get hot, and for every watt of energy dedicated to actual processing, roughly 3 watts are needed to mechanically move the waste heat away from the system. As more and more systems are added, more and more heat must be removed. Air-based heat exchangers in cold climates can help with this (the Thor Data Center in Reykjavik, Iceland, runs its air cooling units outside). In the mid-1990s a top-10 supercomputer required on the order of 100 kilowatts; in 2010 the top-10 supercomputers required between 1 and 2 megawatts each. Larger-scale supercomputing means vaster energy requirements, much of it dedicated solely to heat dissipation. (A back-of-the-envelope sketch of the cooling overhead is given below.)
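As a back-of-the-envelope sketch of that cooling overhead, taking the quoted 3 W of cooling per 1 W of compute at face value (the ratio is illustrative; actual facilities vary widely):

    # Total facility power if each watt of compute needs 3 extra watts of cooling (ratio quoted above).
    cooling_watts_per_compute_watt = 3.0  # illustrative figure from the text

    def facility_power_mw(compute_mw):
        return compute_mw * (1.0 + cooling_watts_per_compute_watt)

    # mid-1990s top-10 machine (~100 kW) vs 2010 top-10 machines (1-2 MW)
    for compute_mw in (0.1, 1.0, 2.0):
        print(f"{compute_mw:.1f} MW of compute -> {facility_power_mw(compute_mw):.1f} MW total")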

Answered by LazyReader on August 19, 2021

United Kingdom (Great Britain)


ARCHER (Advanced Research Computing High End Resource)

ARCHER is, as of today, the UK's national supercomputing service, run by EPCC (the Edinburgh Parallel Computing Centre). It has been operating since late 2013 and is based around a Cray XC30 supercomputer. Note, however, that ARCHER is right at the end of its life cycle: it was due to shut down in February of this year, but things are slightly behind schedule. (In fact, ARCHER2 is currently being set up and is due to be operational shortly; see below.)

Here is a brief overview of its capabilities from the hardware & software informational page.

ARCHER compute nodes contain two 2.7 GHz, 12-core E5-2697 v2 (Ivy Bridge) series processors. Each of the cores in these processors can support 2 hardware threads (Hyperthreads). Within the node, the two processors are connected by two QuickPath Interconnect (QPI) links.

Standard compute nodes on ARCHER have 64 GB of memory shared between the two processors. There are a smaller number of high-memory nodes with 128 GB of memory shared between the two processors. The memory is arranged in a non-uniform memory access (NUMA) form: each 12-core processor is a single NUMA region with local memory of 32 GB (or 64 GB for high-memory nodes). Access to the local memory by cores within a NUMA region has a lower latency than accessing memory on the other NUMA region.

There are 4544 standard memory nodes (12 groups, 109,056 cores) and 376 high memory nodes (1 group, 9,024 cores) on ARCHER giving a total of 4920 compute nodes (13 groups, 118,080 cores). (See the "Aries Interconnect" section below for the definition of a group.)

  • There is a general overview of the ARCHER service on the official website, and another on the EPCC website.
  • Academic access is straightforward if you have an EPSRC or NERC research grant, though other avenues may also allow access, in particular limited access for new users. Access is open to all domains within the remit of the EPSRC and NERC.
  • Commercial and industrial access is also available, though obviously paid. In particular, an on-demand (pay-per-core-hour) service is available.

ARCHER2

The successor to ARCHER is currently being installed at the EPCC. See the news section on the website.

Again, here is a brief overview from the hardware & software informational page.

ARCHER2 will be a Cray Shasta system with an estimated peak performance of 28 PFLOP/s. The machine will have 5,848 compute nodes, each with dual AMD EPYC Zen2 (Rome) 64 core CPUs at 2.2GHz, giving 748,544 cores in total and 1.57 PBytes of total system memory.

ARCHER2 should be capable on average of over eleven times the science throughput of ARCHER, based on benchmarks which use five of the most heavily used codes on the current service. As with all new systems, the relative speedups over ARCHER vary by benchmark. The ARCHER2 science throughput codes used for the benchmarking evaluation are estimated to reach 8.7x for CP2K, 9.5x for OpenSBLI, 11.3x for CASTEP, 12.9x for GROMACS, and 18.0x for HadGEM3.
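As a quick sanity check of the "over eleven times" average, the sketch below simply averages the five quoted per-code speedups (assuming an unweighted mean; the actual weighting used in the ARCHER2 benchmarking evaluation is not specified here):

    # Average of the quoted ARCHER2-vs-ARCHER speedups for the five benchmark codes.
    from math import prod

    speedups = {"CP2K": 8.7, "OpenSBLI": 9.5, "CASTEP": 11.3, "GROMACS": 12.9, "HadGEM3": 18.0}

    geometric_mean = prod(speedups.values()) ** (1 / len(speedups))
    arithmetic_mean = sum(speedups.values()) / len(speedups)
    print(f"geometric mean  ~ {geometric_mean:.1f}x")   # ~11.7x
    print(f"arithmetic mean ~ {arithmetic_mean:.1f}x")  # ~12.1x

Either way the average comes out above eleven, consistent with the quoted claim.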

  • The criteria for academic access are apparently very similar to those for ARCHER. Right now there is an open call for EPSRC access with a deadline of 4 September 2020.
  • I cannot at present find any information on the ARCHER2 website about commercial/industrial access. Perhaps that will come later, after launch.

MMM Hub (Materials and Molecular Modelling Hub)

This one couldn't be more suited to the concerns of this SE, as is evident from the name!

The Hub hosts a high performance computing facility known as Thomas. Thomas is a 17,000 core machine based around Lenovo 24 core Intel x86-64 nodes. It is designed to support small to medium sized capacity computing focusing on materials and molecular modelling. 75% of Thomas is reserved for Tier-2 use by MMM Hub partners who are contributing towards the running costs of the facility. The other 25% of the machine is available free of charge to materials and molecular modelling researchers from anywhere in the UK.

The Hub is operated through the partnership of eight of the UK’s leading universities (UCL, Imperial College London, Queen Mary University of London, Queen’s University Belfast, the University of Kent, King’s College London, the University of Southampton and the University of Oxford) and OCF Plc.

See the page for the Thomas supercomputer for points of contact at each institution.

Other HPC Services

See the above link for other (Tier 2) services. Note that some like DiRAC are domain-specific (targeted at particle physics and astronomy research), though paid access is available for users outside of these fields.

Answered by Noldorin on August 19, 2021

USA: Facilities funded by DOD


Other answers have addressed National Science Foundation (NSF) resources via XSEDE here and Department of Energy (DOE) resources here within the United States. Another set of computing resources in the US are those via the Department of Defense (DoD).

HPCMP (High Performance Computing Modernization Program)

The DoD High Performance Computing Modernization Program (HPCMP) handles the computing centers administered by the DoD. As might be expected, the DoD HPCMP resources are meant for research that aligns with DoD mission statements. For those interested, the Army Research Laboratory (ARL), Air Force Research Laboratory (AFRL), and Navy Research Laboratory (NRL) all put out broad agency announcements (BAAs) that describe the current areas of research. An example for the Army Research Office can be found here.

Access to DoD HPCMP resources is generally restricted to those that already receive research funding from the DoD, so they are not as easy to get access to as NSF's XSEDE or DOE's NERSC. However, it is a major source of research computing in the US all the same. The DoD HPCMP has several machines that are meant for unclassified research that academics can get access to, provided they are supported by the DoD. These machines are outlined here and include many of the top computing machines in the world. As an example, the US Air Force's Mustang is currently #80 on the TOP500 list.

Answered by Andrew Rosen on August 19, 2021

MÉXICO


Mexico has several supercomputing centers that provide service to the nation's academic researchers. These are the main ones (not in any particular order).

Kan Balam (2007): Universidad Nacional Autónoma de México (UNAM)

  • by Hewlett-Packard
  • 1,368 AMD Opteron 2.6 GHz processors
  • 3,016 GB of RAM and 160 TB of storage
  • Processing capability of 7 TFlops (teraflops)
  • Runs on GNU/Linux

Aitzaloa (2008): Universidad Autónoma Metropolitana (UAM)

  • 2,160 Intel Xeon E5272 QuadCore
  • 100TB storage
  • Processing capability of 18 TFlops
  • #226 of Top500 in 2008

Atócatl (2011): Universidad Nacional Autónoma de México (UNAM)

  • Used primarily by and for the National Institute of Astrophysics.

Abacus (2014): Centro de Investigación y Estudios Avanzados (CINVESTAV)

  • 8,904 cores, Intel Xeon E5
  • 100 Nvidia K40 GPUs
  • 1.2 petabytes of storage and 40 TB of RAM
  • Can reach 400 Teraflops.

Miztli (2013): Universidad Nacional Autónoma de México (UNAM)

  • HP Cluster Platform 3000SL
  • 5,312 Intel Xeon E5-2670
  • 16 GPU NVIDIA m2090
  • 15,000 GB RAM
  • 750 TB for storage
  • Runs on Red Hat Linux

Yoltla (2014): Universidad Autónoma Metropolitana (UAM)

  • 4,920 CPU cores
  • 6,912 GB RAM
  • 70 TB storage
  • Reaches 45 TFlops

Xiuhcoatl (2012): Centro de Investigación y Estudios Avanzados (CINVESTAV)

  • 4,724 CPU cores and 374,144 GPU cores
  • Up to 250 TFlops by itself,

Connected via optical fiber to Kan Balam and Aitzaloa; combined, more than 7,000 CPUs and 300 TFlops.


The supercomputers mentioned so far are owned by universities or university research centers. Additionally, Mexico has a National Supercomputing Laboratory, which serves users nationwide as well. It is hosted by the Benemérita Universidad Autónoma de Puebla (BUAP) and is called "Laboratorio Nacional de Supercómputo" (LNS). Their full infrastructure page is here, and below is a summary of Cuetlaxcoapan, the main system.

Cuetlaxcoapan: LNS

  • 6796 CPU cores
  • 2048 GB RAM
  • 1520 CUDA cores
  • Nvidia k40
  • Storage of 1.2 Petabytes (PB)
  • Capability up to 153.408 Tflops.

Answered by Etienne Palos on August 19, 2021

USA: Facilities funded by DOE


For the USA, XSEDE was mentioned in another answer (https://mattermodeling.stackexchange.com/a/1517/671). It is funded by the US National Science Foundation. There are also some facilities under the Department of Energy (DOE) that might be more or less approachable depending on your research profile and funding sources.

NERSC (National Energy Research Scientific Computing Center)

NERSC, located at Lawrence Berkeley National Laboratory, is the primary computing facility for the DOE. Currently its main HPC system is Cori, a Cray XC40 at #16 on the Top500 list, but a new Cray system named Perlmutter is supposed to be installed from late 2020 through mid-2021. Both systems have (or will have) GPU-accelerated as well as pure CPU nodes. NERSC also provides a good number of training opportunities for its users, some in cooperation with the leadership facilities mentioned below.

From their mission statement:

The mission of the National Energy Research Scientific Computing Center (NERSC) is to accelerate scientific discovery at the DOE Office of Science through high performance computing and data analysis.

From their website:

More than 7,000 scientists use NERSC to perform basic scientific research across a wide range of disciplines, including climate modeling, research into new materials, simulations of the early universe, analysis of data from high energy physics experiments, investigations of protein structure, and a host of other scientific endeavors.

All research projects that are funded by the DOE Office of Science and require high performance computing support are eligible to apply to use NERSC resources. Projects that are not funded by the DOE Office of Science, but that conduct research that supports the Office of Science mission may also apply.


DOE also has two so-called leadership computing facilities. The point of these is not to support typical, small-scale computational research. Instead, they deliberately target a limited number of large-scale projects in need of large allocations, projects that may not be possible elsewhere. From experience with OLCF there is often also a need to demonstrate that your code can take advantage of the hardware offered.

OLCF (Oak Ridge Leadership Computing Facility)

The Oak Ridge Leadership Computing Facility (formerly known as the National Leadership Computing Facility), located at Oak Ridge National Laboratory, is home to the Summit supercomputer, which debuted at #1 on the Top500 list but was recently dethroned to #2. Its next supercomputer, Frontier, is supposed to reach exascale performance and open to users in 2022.

ALCF (Argonne Leadership Computing Facility)

The Argonne Leadership Computing Facility (at Argonne National Laboratory) has a similar role. Currently, its main supercomputer is Theta (#34 on the Top500 list). Their planned exascale supercomputer Aurora is coming in 2021.

Answered by Anyon on August 19, 2021

Brazil: CENAPADs


CENAPAD stands for Centro Nacional de Processamento de Alto Desempenho (National High Performance Processing Center). They form a supercomputing network instituted by the Ministry of Science, Technology and Innovation (MCTI) and coordinated by the National High Performance Processing System (SINAPAD).

Some of them are:

  • CENAPAD-SP: located in the city of Campinas in São Paulo state, linked to the Research Office of UNICAMP (State University of Campinas, the third-best Latin American university). Hardware: the IBM Power 750 Express system is made up of 40 IBM P750 SMP computational nodes using 3.55 GHz Power7 processors. Each computational node has 32 processing cores, 128 GB of RAM and 908.8 GFLOPS of theoretical performance, totaling 1,280 cores, 5 TB of RAM, a theoretical processing capacity of approximately 37 TFLOPS and a sustained (Linpack) capacity of 27 TFLOPS. An IBM iDataPlex/GPU device with twelve Nvidia Tesla M2050 GPU cards, installed on six x86 servers with Intel Xeon X5570 (2.93 GHz) processors, is also available.
  • CENAPAD-MG: located in the city of Belo Horizonte in Minas Gerais state, linked to the Scientific Computing Division of the Scientific Computing Laboratory (LCC-UFMG) at the Federal University of Minas Gerais (UFMG). Hardware: a cluster of 53 computational nodes physically distributed across 4 racks, each node with 2 quad-core processors and 16 GB of main memory, for a total of 848 processing cores and approximately 1.7 TB of memory distributed across the machine. The theoretical peak performance of the cluster is 9 TFLOPS (nine trillion elementary operations per second). The nodes are interconnected by an InfiniBand network specially dedicated to parallel computation, and the system has 45 TB of storage for data and backup.
  • CENAPAD-RJ: located in the city of Petrópolis (the Imperial City) in Rio de Janeiro state, linked to the National Laboratory for Scientific Computing (LNCC). Hardware: SunHPC: 72 nodes with 2 Intel Xeon E5440 CPUs (8 cores) and 16 GB RAM each. Altix-XE: 30 nodes with 2 Intel Xeon E5520 (8 cores) and 24 GB RAM each; 6 nodes with 2 hexa-core Xeon X6560 (12 cores), 98 GB RAM and 2 NVIDIA Tesla S2050 (896 cores) each; 2 Tyan FT72-B7015 nodes with 2 hexa-core Xeon X6560 (12 cores), 24 GB RAM and 4 NVIDIA Tesla C2050 (1,792 cores) each; 4 SGI C1104G nodes with 2 Intel Xeon 8-core E5-2660, 64 GB RAM and 1 NVIDIA Tesla K20 each. Altix ICE8400: 25 nodes with 2 hexa-core Xeon X6560 (12 cores) and 48 GB RAM each.
  • CENAPAD-UFC: located in the city of Fortaleza in Ceará state, linked to the Research Office of the Federal University of Ceará (UFC). Hardware: 48 Bullx B500 nodes with 2 hexa-core Intel X5650 at 2.67 GHz and 24 GB RAM each (576 processing cores and 1,152 GB of RAM, reaching approximately 6.2 TFLOPS), and 3 Bullx B515 GPU nodes with 2 NVIDIA K20m, 2 octa-core Intel E5-2470 at 2.3 GHz and 96 GB RAM each (48 processing cores, 288 GB of RAM and 15 GB of GPU memory, reaching approximately 8 TFLOPS).
  • CESUP: located in the city of Porto Alegre in Rio Grande do Sul state, linked to the Federal University of Rio Grande do Sul (UFRGS). Hardware: 64 nodes with 2 twelve-core AMD Opteron (6176 SE and 6238) and 64 GB RAM, 6 nodes with 8 quad-core AMD Opteron 8356 and 128 GB RAM, and 24 nodes with twelve-core Intel Xeon Silver 4116, 96 GB RAM and 32 GPUs.

Below is the distribution of SINAPAD-related centers.

[Image: map of SINAPAD-related centers across Brazil]

Just as a curiosity, the image below shows the CPU use by Brazilian states between 1995 and 2015.

[Image: CPU usage by Brazilian state, 1995-2015]

Answered by Camps on August 19, 2021

USA: Facilities funded by NSF


XSEDE (Extreme Science and Engineering Discovery Environment)

XSEDE (pronounced like "exceed") provides access to both computational resources and training on HPC. These may be especially useful if your institution does not provide good support for scientific computing.

From their website:

  • XSEDE provides live and recorded training on a wide range of research computing topics.
  • XSEDE programs offer our users in-depth collaborations and on-campus facilitators.
  • Most US-based researchers are eligible for no-cost XSEDE allocations. Get started in two weeks or less!

Answered by taciteloquence on August 19, 2021
