End-to-end study of parallel volume rendering on the IBM Blue Gene/P

Tom Peterka, Hongfeng Yu, Robert Ross, Kwan-Liu Ma, Rob Latham

Research output: Chapter in Book/Report/Conference proceeding (Conference contribution)

25 Citations (Scopus)

Abstract

In addition to their role as simulation engines, modern supercomputers can be harnessed for scientific visualization. Their extensive concurrency, parallel storage systems, and high-performance interconnects can mitigate the expanding size and complexity of scientific datasets and prepare for in situ visualization of these data. In ongoing research into testing parallel volume rendering on the IBM Blue Gene/P (BG/P), we measure performance of disk I/O, rendering, and compositing on large datasets, and evaluate bottlenecks with respect to system-specific I/O and communication patterns. To extend the scalability of the direct-send image compositing stage of the volume rendering algorithm, we limit the number of compositing cores when many small messages are exchanged. To improve the data-loading stage of the volume renderer, we study the I/O signatures of the algorithm in detail. The results of this research affirm that a distributed-memory computing architecture such as BG/P is a scalable platform for large visualization problems.
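The abstract's key scaling idea is direct-send image compositing with a deliberately limited number of compositing cores: each renderer sends its pixels only to the compositor that owns that image region, so capping the compositor count trades many small messages for fewer, larger ones. The following is a minimal serial sketch of that pattern; the function names, the premultiplied-RGBA pixel model, and the contiguous partitioning are illustrative assumptions, not the paper's implementation.

```python
# Serial sketch of direct-send compositing with a capped number of
# compositors. Assumptions (not from the paper): premultiplied RGBA
# pixels, front-to-back layer order, contiguous pixel partitioning.

def over(front, back):
    """Porter-Duff 'over' for premultiplied (r, g, b, a) tuples."""
    fr, fg, fb, fa = front
    br, bg, bb, ba = back
    t = 1.0 - fa
    return (fr + t * br, fg + t * bg, fb + t * bb, fa + t * ba)

def direct_send(layers, num_compositors):
    """layers: one image per renderer, ordered front to back; each is
    a list of premultiplied RGBA pixels of equal length. Each
    compositor owns a contiguous pixel span, and every renderer
    'sends' only the pixels falling in that span."""
    width = len(layers[0])
    # Partition the image among the (possibly reduced) compositors.
    base, extra = divmod(width, num_compositors)
    spans, start = [], 0
    for c in range(num_compositors):
        stop = start + base + (1 if c < extra else 0)
        spans.append((start, stop))
        start = stop
    # Each compositor blends the fragments it receives front-to-back.
    result = [None] * width
    for start, stop in spans:
        for x in range(start, stop):
            pixel = (0.0, 0.0, 0.0, 0.0)
            for layer in layers:  # one message per renderer
                pixel = over(pixel, layer[x])
            result[x] = pixel
    return result
```

In a real MPI run each span would be one receive per renderer, so halving `num_compositors` halves the message count per compositor while doubling the pixels per message — the balance the paper tunes empirically on BG/P.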

Original language: English (US)
Title of host publication: ICPP-2009 - The 38th International Conference on Parallel Processing
Pages: 566-573
Number of pages: 8
DOIs: 10.1109/ICPP.2009.27
State: Published - Dec 1 2009
Event: 38th International Conference on Parallel Processing, ICPP-2009 - Vienna, Austria
Duration: Sep 22 2009 - Sep 25 2009



Keywords

  • Distributed scientific visualization
  • Image compositing
  • Parallel I/O
  • Parallel volume rendering

ASJC Scopus subject areas

  • Software
  • Mathematics (all)
  • Hardware and Architecture

Cite this

Peterka, T., Yu, H., Ross, R., Ma, K-L., & Latham, R. (2009). End-to-end study of parallel volume rendering on the IBM Blue Gene/P. In ICPP-2009 - The 38th International Conference on Parallel Processing (pp. 566-573). [5362481] https://doi.org/10.1109/ICPP.2009.27

@inproceedings{714a8725d57c40cc9c8aa1e155efcd8f,
title = "End-to-end study of parallel volume rendering on the IBM Blue Gene/P",
abstract = "In addition to their role as simulation engines, modern supercomputers can be harnessed for scientific visualization. Their extensive concurrency, parallel storage systems, and high-performance interconnects can mitigate the expanding size and complexity of scientific datasets and prepare for in situ visualization of these data. In ongoing research into testing parallel volume rendering on the IBM Blue Gene/P (BG/P), we measure performance of disk I/O, rendering, and compositing on large datasets, and evaluate bottlenecks with respect to system-specific I/O and communication patterns. To extend the scalability of the direct-send image compositing stage of the volume rendering algorithm, we limit the number of compositing cores when many small messages are exchanged. To improve the data-loading stage of the volume renderer, we study the I/O signatures of the algorithm in detail. The results of this research affirm that a distributed-memory computing architecture such as BG/P is a scalable platform for large visualization problems.",
keywords = "Distributed scientific visualization, Image compositing, Parallel I/O, Parallel volume rendering",
author = "Tom Peterka and Hongfeng Yu and Robert Ross and Kwan-Liu Ma and Rob Latham",
year = "2009",
month = "12",
day = "1",
doi = "10.1109/ICPP.2009.27",
language = "English (US)",
isbn = "9780769538020",
pages = "566--573",
booktitle = "ICPP-2009 - The 38th International Conference on Parallel Processing",

}
