Distance visualization of ultrascale data with explorable images

Kwan-Liu Ma, Anna Tikhonova, Carlos D. Correa

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This talk presents a new approach to distance visualization of very large data sets output by scientific supercomputing. The processing power of massively parallel supercomputers grows at a rapid rate, roughly an order of magnitude every three years, enabling scientists to model complex physical phenomena and chemical processes at unprecedented fidelity. Several petascale computers are already in operation (http://www.top500.org), and exascale computing is around the corner. Each run of a petascale simulation typically outputs several hundred terabytes of data to disk. Transferring data at this scale over wide-area networks to the scientist's laboratory for post-processing analysis is not an option. Even if the data files could be transferred, existing desktop data analysis and visualization tools cannot effectively handle data of this scale. If scientists can use the same supercomputing facility for data analysis and visualization, there are three viable solutions: in situ visualization, where visualization is computed during the simulation on the same supercomputer; co-processing visualization, where visualization is computed during the simulation on a separate computer; and post-processing visualization, where visualization is computed after the simulation is over.
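
As a rough illustration of the transfer bottleneck described above, the following back-of-the-envelope sketch (Python) estimates how long a wide-area transfer would take. The 300 TB dataset size, the link speeds, and the 50% sustained-throughput figure are illustrative assumptions, not values from the talk.

# wan_transfer_estimate.py -- back-of-the-envelope WAN transfer time
# All figures below are illustrative assumptions, not numbers from the talk.

def transfer_days(dataset_tb, link_gbps, efficiency=0.5):
    """Days needed to move dataset_tb terabytes over a link_gbps link,
    assuming only `efficiency` of the nominal rate is sustained end to end."""
    dataset_bits = dataset_tb * 1e12 * 8            # terabytes -> bits
    effective_bps = link_gbps * 1e9 * efficiency    # sustained bits per second
    return dataset_bits / effective_bps / 86400     # seconds -> days

if __name__ == "__main__":
    for gbps in (1, 10, 100):
        # assume a single 300 TB run ("several hundred terabytes" per run)
        print(f"300 TB over {gbps:>3} Gbps: ~{transfer_days(300, gbps):.1f} days")

Under these assumptions, a single run takes on the order of weeks over a 1 Gbps link and days even at 10 Gbps, which is why the approaches below keep the data at the supercomputing facility.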

Original language: English (US)
Title of host publication: ACM SIGGRAPH 2010 Talks, SIGGRAPH '10
DOI: 10.1145/1837026.1837038
State: Published - Sep 10 2010
Event: ACM SIGGRAPH 2010 Talks, SIGGRAPH '10 - Los Angeles, CA, United States
Duration: Jul 26 2010 - Jul 30 2010

ASJC Scopus subject areas

  • Computer Graphics and Computer-Aided Design
  • Computer Vision and Pattern Recognition
  • Software

Cite this

Ma, K-L., Tikhonova, A., & Correa, C. D. (2010). Distance visualization of ultrascale data with explorable images. In ACM SIGGRAPH 2010 Talks, SIGGRAPH '10 [9]. https://doi.org/10.1145/1837026.1837038

@inproceedings{b2577ad78a3746e5b6e41b1a3dbfd60c,
title = "Distance visualization of ultrascale data with explorable images",
abstract = "This talk presents a new approach to distance visualization of very large data sets output from scientific supercomputing. The processing power of massively parallel supercomputers increases at a rather fast rate, about an order of magnitude faster every three years, enabling scientists to model complex physical phenomena and chemical processes at unprecedented fidelity. Several petascale computers are already in operation (http://www.top500.org) and exascale computing is around the corner. Each run of a petascale simulation typically outputs several hundred terabytes of data to disk. Transferring data at this scale over wide-area networks to the scientist's laboratory for post-processing analysis is not an option. Even the data files may be transferred, existing desktop data analysis and visualization tools cannot effectively handle such large-scale data. If the scientists may use the same supercomputing facility for data analysis and visualization, there are three viable solutions: in situ visualization, where visualization is computed during the simulation on the same supercomputer co-processing visualization, where visualization is computed during the simulation on a separate computer, and post-processing visualization, where visualization is computed after simulation is over.",
author = "Kwan-Liu Ma and Anna Tikhonova and Correa, {Carlos D.}",
year = "2010",
month = "9",
day = "10",
doi = "10.1145/1837026.1837038",
language = "English (US)",
isbn = "9781450303941",
booktitle = "ACM SIGGRAPH 2010 Talks, SIGGRAPH '10",

}
