A rendering framework for multiscale views of 3D models

Wei-Hsien Hsu, Kwan-Liu Ma, Carlos Correa

Research output: Contribution to journal › Article


Abstract

Images that seamlessly combine views at different levels of detail are appealing. However, creating such multiscale images is not a trivial task, and most such illustrations are handcrafted by skilled artists. This paper presents a framework for direct multiscale rendering of geometric and volumetric models. The basis of our approach is a set of non-linearly bent camera rays that smoothly cast through multiple scales. We show that by properly setting up a sequence of conventional pinhole cameras to capture features of interest at different scales, along with image masks specifying the regions of interest for each scale on the projection plane, our rendering framework can generate non-linear sampling rays that smoothly project objects in a scene at multiple levels of detail onto a single image. We address two important issues with non-linear camera projection. First, our streamline-based ray generation algorithm avoids undesired camera ray intersections, which often result in unexpected images. Second, in order to maintain camera ray coherence and preserve aesthetic quality, we create an interpolated 3D field that defines the contribution of each pinhole camera for determining ray orientations. The resulting multiscale camera has three main applications: (1) presenting hierarchical structure in a compact and continuous manner, (2) achieving focus+context visualization, and (3) creating fascinating and artistic images.
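As a rough illustration of the ray-bending idea summarized above, the sketch below is not the authors' implementation; the camera setup, weight field, and function names are invented for illustration. It blends the ray directions of two conventional pinhole cameras with a position-dependent weight field and integrates each bent ray as a short streamline, which is the general flavor of multiscale ray generation the abstract describes.

```python
# Minimal sketch (assumed, not the paper's code): per-point camera weights blend
# the ray directions of several pinhole cameras, and each bent ray is traced as
# a streamline through the resulting interpolated direction field.

import numpy as np

def pinhole_ray(eye, look_at, up, fov_deg, u, v):
    """Direction of the ray through normalized image coords (u, v) in [-1, 1]."""
    forward = look_at - eye
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    half = np.tan(np.radians(fov_deg) / 2.0)
    d = forward + u * half * right + v * half * true_up
    return d / np.linalg.norm(d)

def blended_direction(dirs, weights):
    """Blend unit directions from several cameras with scalar weights."""
    d = sum(w * di for w, di in zip(weights, dirs))
    return d / np.linalg.norm(d)

def trace_bent_ray(start, cameras, weight_field, u, v, step=0.05, n_steps=400):
    """Integrate one camera ray as a streamline: at every 3D point, the local
    direction is a weight-field blend of the pinhole cameras' ray directions."""
    dirs = [pinhole_ray(c["eye"], c["look_at"], c["up"], c["fov"], u, v)
            for c in cameras]
    p = np.asarray(start, dtype=float)
    samples = [p.copy()]
    for _ in range(n_steps):
        w = weight_field(p)                # per-camera weights at this point
        p = p + step * blended_direction(dirs, w)
        samples.append(p.copy())
    return np.array(samples)               # positions to sample the scene along

# Hypothetical two-camera setup: an overview camera and a zoomed-in camera.
cameras = [
    {"eye": np.array([0.0, 0.0, 5.0]), "look_at": np.zeros(3),
     "up": np.array([0.0, 1.0, 0.0]), "fov": 60.0},
    {"eye": np.array([0.5, 0.5, 1.5]), "look_at": np.array([0.5, 0.5, 0.0]),
     "up": np.array([0.0, 1.0, 0.0]), "fov": 20.0},
]

def weight_field(p):
    # Toy weight field: the close-up camera dominates near its region of interest.
    t = np.clip(1.0 - np.linalg.norm(p[:2] - 0.5), 0.0, 1.0)
    return [1.0 - t, t]

ray = trace_bent_ray(cameras[0]["eye"], cameras, weight_field, u=0.2, v=0.1)
```

In the paper's framework the weights come from an interpolated 3D field built from the per-scale image masks, and the streamline integration is constructed to keep neighboring rays coherent and non-intersecting; the toy radial weight above is only a stand-in for that field.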

Original language: English (US)
Article number: 131
Journal: ACM Transactions on Graphics
Volume: 30
Issue number: 6
Publication status: Published - Dec 1 2011


Keywords

  • Camera model
  • Levels of detail
  • Multiscale views
  • Visualization

ASJC Scopus subject areas

  • Computer Graphics and Computer-Aided Design
