A parallel visualization pipeline for terascale earthquake simulations

Hongfeng Yu, Kwan-Liu Ma, Joel Welling

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

17 Citations (Scopus)

Abstract

This paper presents a parallel visualization pipeline implemented at the Pittsburgh Supercomputing Center (PSC) for studying the largest earthquake simulation ever performed. The simulation employs 100 million hexahedral cells to model 3D seismic wave propagation of the 1994 Northridge earthquake. The time-varying dataset produced by the simulation requires terabytes of storage space. Our solution for visualizing such terascale simulations is based on a parallel adaptive rendering algorithm coupled with a new parallel I/O strategy which effectively reduces interframe delay by dedicating some processors to I/O and preprocessing tasks. In addition, a 2D vector field visualization method and a 3D enhancement technique are incorporated into the parallel visualization framework to help scientists better understand the wave propagation both on and under the ground surface. Our test results on the HP/Compaq AlphaServer operated at the PSC show that we can completely remove the I/O bottlenecks commonly present in time-varying data visualization. The high-performance visualization solution we provide to the scientists allows them to explore their data in the temporal, spatial, and variable domains at high resolution. The new high-resolution explorability, likely not available to most computational science groups, will help lead to many new insights.
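
The paper itself details the rendering and I/O algorithms; as a purely illustrative aid, the following C/MPI sketch shows one way the dedicated-I/O idea from the abstract can be arranged. It is a minimal sketch under stated assumptions, not the authors' code: the rank pairing, buffer size, and the functions read_timestep() and render_timestep() are hypothetical placeholders.

```c
/*
 * Minimal sketch only -- NOT the authors' implementation. It illustrates the
 * general idea of dedicating a few MPI ranks to I/O and preprocessing while
 * the remaining ranks render, so rendering rarely blocks on disk reads.
 * NUM_IO_RANKS, NUM_TIMESTEPS, read_timestep(), and render_timestep() are
 * invented placeholders, not names from the paper.
 */
#include <mpi.h>
#include <stdlib.h>

#define NUM_IO_RANKS  4     /* assumed: a small pool of I/O/preprocessing ranks */
#define NUM_TIMESTEPS 100   /* assumed: number of time steps to visualize       */

/* Placeholder stages: read/preprocess one time step, render one time step. */
static void read_timestep(int t, float *buf, int n)         { (void)t; (void)buf; (void)n; }
static void render_timestep(int t, const float *buf, int n) { (void)t; (void)buf; (void)n; }

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Ranks [0, NUM_IO_RANKS) handle I/O + preprocessing; the rest render.
       The split communicator lets each group run its own collectives. */
    int is_io = (rank < NUM_IO_RANKS);
    MPI_Comm group;
    MPI_Comm_split(MPI_COMM_WORLD, is_io, rank, &group);

    const int n = 1 << 20;                        /* example block size per rank */
    float *buf = malloc((size_t)n * sizeof *buf);

    for (int t = 0; t < NUM_TIMESTEPS; ++t) {
        if (is_io) {
            /* Read and preprocess time step t, then hand the prepared block
               to a partner rendering rank (simplistic 1:1 pairing). */
            read_timestep(t, buf, n);
            int partner = rank + NUM_IO_RANKS;
            if (partner < size)
                MPI_Send(buf, n, MPI_FLOAT, partner, t, MPI_COMM_WORLD);
        } else {
            /* Receive the staged block and render it; meanwhile the I/O
               ranks can already move on to time step t+1. */
            int partner = rank - NUM_IO_RANKS;
            if (partner < NUM_IO_RANKS) {
                MPI_Recv(buf, n, MPI_FLOAT, partner, t, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                render_timestep(t, buf, n);
            }
        }
    }

    free(buf);
    MPI_Comm_free(&group);
    MPI_Finalize();
    return 0;
}
```

A production pipeline would distribute each time step across many rendering ranks and overlap transfers more aggressively (for example with non-blocking or double-buffered communication); the sketch only shows the rank-splitting idea.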

Original language: English (US)
Title of host publication: IEEE/ACM SC2004 Conference - Bridging Communities, Proceedings
Pages: 649-660
Number of pages: 12
ISBN (Print): 0769521533, 9780769521534
State: Published - Dec 1 2004
Event: IEEE/ACM SC2004 Conference - Bridging Communities - Pittsburgh, PA, United States
Duration: Nov 6 2004 - Nov 12 2004

Other

Other: IEEE/ACM SC2004 Conference - Bridging Communities
Country: United States
City: Pittsburgh, PA
Period: 11/6/04 - 11/12/04

Fingerprint

  • Earthquakes
  • Visualization
  • Pipelines
  • Wave propagation
  • Seismic waves
  • Data visualization

Keywords

  • High-performance computing
  • Massively parallel supercomputing
  • MPI
  • Parallel I/O
  • Parallel rendering
  • Scientific visualization
  • Time-varying data
  • Vector field visualization
  • Volume rendering

ASJC Scopus subject areas

  • Engineering (all)

Cite this

Yu, H., Ma, K-L., & Welling, J. (2004). A parallel visualization pipeline for terascale earthquake simulations. In IEEE/ACM SC2004 Conference - Bridging Communities, Proceedings (pp. 649-660)
