
Lincoln Laboratory's new artificial intelligence supercomputer is the most powerful at a university

November 5, 2019

TX-GAIA is tailor-made for crunching through deep neural network operations.
Read this story at MIT News
The new TX-GAIA (Green AI Accelerator) computing system at the Lincoln Laboratory Supercomputing Center (LLSC) has been ranked as the most powerful artificial intelligence supercomputer at any university in the world. The ranking comes from TOP500, which publishes a list of the top supercomputers in various categories biannually. The system, which was built by Hewlett Packard Enterprise, combines traditional high-performance computing hardware — nearly 900 Intel processors — with hardware optimized for AI applications — 900 Nvidia graphics processing unit (GPU) accelerators.
"We are thrilled by the opportunity to enable researchers across Lincoln and MIT to achieve incredible scientific and engineering breakthroughs," says Jeremy Kepner, a Lincoln Laboratory fellow who heads the LLSC. "TX-GAIA will play a large role in supporting AI, physical simulation, and data analysis across all laboratory missions."
TOP500 rankings are based on a LINPACK Benchmark, which is a measure of a system's floating-point computing power, or how fast a computer solves a dense system of linear equations. TX-GAIA's TOP500 benchmark performance is 3.9 quadrillion floating-point operations per second, or petaflops (though since the ranking was announced in June 2019, Hewlett Packard Enterprise has updated the system's benchmark to 4.725 petaflops). The June TOP500 benchmark performance places the system No. 1 in the Northeast, No. 20 in the United States, and No. 51 in the world for supercomputing power. The system's peak performance is more than 6 petaflops.
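As a rough illustration of what the LINPACK benchmark measures (this is a minimal sketch, not the official HPL code used for TOP500 submissions), the Python snippet below times a dense linear solve and converts the standard ~(2/3)n³ operation count of LU factorization into a flops estimate; the matrix size n is an arbitrary assumption chosen for illustration.

```python
# Illustrative sketch (not the official HPL benchmark): time a dense
# linear solve and convert the standard LU flop count, ~(2/3)*n^3,
# into a rough flops figure -- the same quantity LINPACK reports.
import time
import numpy as np

n = 4096                              # matrix dimension (assumed for illustration)
A = np.random.rand(n, n)              # dense coefficient matrix
b = np.random.rand(n)                 # right-hand side

start = time.perf_counter()
x = np.linalg.solve(A, b)             # dense solve via LU factorization
elapsed = time.perf_counter() - start

flops = (2.0 / 3.0) * n**3 / elapsed  # floating-point operations per second
print(f"~{flops / 1e9:.1f} gigaflops on this machine")
print(f"residual: {np.linalg.norm(A @ x - b):.2e}")
```

On a laptop this yields a few to a few hundred gigaflops; TX-GAIA's 3.9-plus petaflops is roughly a million times that rate.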
But more notably, TX-GAIA has a peak performance of 100 AI petaflops, which makes it No. 1 for AI flops at any university in the world. AI flops measure how fast a computer can perform deep neural network (DNN) operations. DNNs are a class of AI algorithms that learn to recognize patterns in huge amounts of data. This ability has given rise to "AI miracles," as Kepner puts it, in speech recognition and computer vision; the technology is what allows Amazon's Alexa to understand questions and self-driving cars to recognize objects in their surroundings. The more complex these DNNs grow, the longer it takes for them to process the massive datasets they learn from. TX-GAIA's Nvidia GPU accelerators are specially designed for performing these DNN operations quickly.
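To make that scaling concrete, the short sketch below counts the multiply-add operations in one forward pass through a few fully connected layers; the layer sizes and batch size are hypothetical, chosen only to show how the operation count grows with model complexity and data volume, not to represent TX-GAIA's actual workloads.

```python
# Minimal sketch of why DNN complexity drives compute demand: count the
# multiply-add operations in a forward pass through fully connected layers.
# Layer and batch sizes here are hypothetical, chosen only for illustration.

layer_sizes = [224 * 224 * 3, 4096, 4096, 1000]  # input -> hidden -> hidden -> output

def forward_pass_ops(sizes):
    """Return total multiply-adds for one input passing through dense layers."""
    return sum(n_in * n_out for n_in, n_out in zip(sizes, sizes[1:]))

ops_per_sample = forward_pass_ops(layer_sizes)
batch = 100_000                                   # e.g., hundreds of thousands of images
total_ops = ops_per_sample * batch

print(f"{ops_per_sample:,} multiply-adds per sample")
print(f"{total_ops / 1e12:.1f} tera-operations for {batch:,} samples (one forward pass)")
```

Even this toy network needs tens of trillions of operations for a single pass over a modest image set, and training repeats such passes many times, which is the workload the GPU accelerators are built to speed up.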
TX-GAIA is housed in a new modular data center, called an EcoPOD, at the LLSC’s green, hydroelectrically powered site in Holyoke, Massachusetts. It joins the ranks of other powerful systems at the LLSC, such as the TX-E1, which supports collaborations with the MIT campus and other institutions, and TX-Green, which is currently ranked 490th on the TOP500 list.
Kepner says that the system's integration into the LLSC will be completely transparent to users when it comes online this fall. "The only thing users should see is that many of their computations will be dramatically faster," he says.
Among its AI applications, TX-GAIA will be tapped for training machine learning algorithms, including those that use DNNs. It will more quickly crunch through terabytes of data — for example, hundreds of thousands of images or years' worth of speech samples — to teach these algorithms to figure out solutions on their own. The system's compute power will also expedite simulations and data analysis. These capabilities will support projects across the laboratory's R&D areas, such as improving weather forecasting, accelerating medical data analysis, building autonomous systems, designing synthetic DNA, and developing new materials and devices.
TX-GAIA is also ranked the No. 1 system in the U.S. Department of Defense, and it will support the recently announced MIT-Air Force AI Accelerator. The partnership will combine the expertise and resources of MIT, including those at the LLSC, and the U.S. Air Force to conduct fundamental research directed at enabling rapid prototyping, scaling, and application of AI algorithms and systems.
Story image: TX-GAIA is housed inside of a new EcoPOD, manufactured by Hewlett Packard Enterprise, at the site of the Lincoln Laboratory Supercomputing Center in Holyoke, Massachusetts. Photo: Glen Cooper
 
 
