Visibility guided multimodal volume visualization

Lin Zheng, Carlos Correa, Kwan-Liu Ma

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

With the advances in dual-modality medical imaging, requirements for multimodal and multifield volume visualization have begun to emerge. One of the challenges in multimodal visualization is how to simplify the process of generating informative pictures from complementary data. In this paper we present an automatic technique that makes use of dual-modality information, such as CT and PET, to produce effective focus+context volume visualization. With volume ray casting, per-ray visibility histograms summarize the contribution of samples along each ray to the final image. By quantifying visibility for the region of interest, indicated by the PET data, occluding tissues can be made just transparent enough to give a clear view of the features in that region while preserving some context. Unlike most previous methods, which rely on costly preprocessing and tedious manual tuning, our technique achieves comparable or better results with on-the-fly processing that still enables interactive visualization. Our work thus offers a powerful visualization technique for examining multimodal volume data. We demonstrate the technique with scenarios for the detection and diagnosis of cancer and other pathologies.
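The per-ray visibility histogram described in the abstract can be sketched roughly as follows: during front-to-back compositing, each sample's contribution to the final pixel is its opacity times the transmittance accumulated in front of it, and that contribution is binned by the sample's scalar value. The function name, the sample values, and the assumption that scalar values lie in [0, 1) are illustrative choices, not details from the paper.

```python
def per_ray_visibility_histogram(values, opacities, n_bins=8):
    """Summarize one ray's samples by visibility (illustrative sketch).

    values    -- scalar sample values along the ray, assumed in [0, 1)
    opacities -- per-sample opacity (alpha) after transfer-function lookup
    Returns a histogram whose bin b holds the total visibility of
    samples whose value falls in bin b.
    """
    hist = [0.0] * n_bins
    transmittance = 1.0  # fraction of light still reaching the eye
    for v, a in zip(values, opacities):
        visibility = a * transmittance        # this sample's share of the pixel
        b = min(int(v * n_bins), n_bins - 1)  # bin index for the sample value
        hist[b] += visibility
        transmittance *= (1.0 - a)            # attenuate for samples behind
        if transmittance < 1e-4:              # early ray termination
            break
    return hist

# Example: a highly opaque sample early on the ray dominates the histogram,
# leaving little visibility for the features behind it.
h = per_ray_visibility_histogram([0.2, 0.5, 0.8], [0.1, 0.9, 0.9], n_bins=4)
```

Quantifying, per ray, how much visibility falls into the bins covering the PET-indicated region of interest is what lets occluders be made "just transparent enough": their opacity can be reduced until the region's visibility share crosses a threshold.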

Original language: English (US)
Title of host publication: Proceedings - 2013 IEEE International Conference on Bioinformatics and Biomedicine, IEEE BIBM 2013
Pages: 297-304
Number of pages: 8
DOI: 10.1109/BIBM.2013.6732506
State: Published - Dec 1 2013
Event: 2013 IEEE International Conference on Bioinformatics and Biomedicine, IEEE BIBM 2013 - Shanghai, China
Duration: Dec 18 2013 - Dec 21 2013


Fingerprint

  • Visibility
  • Visualization
  • Medical imaging
  • Pathology
  • Casting
  • Tuning
  • Tissue
  • Processing

ASJC Scopus subject areas

  • Biomedical Engineering

Cite this

Zheng, L., Correa, C., & Ma, K-L. (2013). Visibility guided multimodal volume visualization. In Proceedings - 2013 IEEE International Conference on Bioinformatics and Biomedicine, IEEE BIBM 2013 (pp. 297-304). [6732506] https://doi.org/10.1109/BIBM.2013.6732506

@inproceedings{38403ab0335d474194fae1663b7b9869,
title = "Visibility guided multimodal volume visualization",
author = "Lin Zheng and Carlos Correa and Kwan-Liu Ma",
year = "2013",
month = "12",
day = "1",
doi = "10.1109/BIBM.2013.6732506",
language = "English (US)",
isbn = "9781479913091",
pages = "297--304",
booktitle = "Proceedings - 2013 IEEE International Conference on Bioinformatics and Biomedicine, IEEE BIBM 2013",

}
