Dynamic video narratives

Carlos D. Correa, Kwan-Liu Ma

Research output: Contribution to journal › Article › peer-review



This paper presents a system for generating dynamic narratives from videos. These narratives are characterized by being compact, coherent, and interactive, inspired by principles of sequential art. Narratives depict the motion of one or several actors over time. Creating compact narratives is challenging, as video frames must be combined in a way that reuses redundant backgrounds and depicts the stages of a motion. In addition, previous approaches focus on the generation of static summaries and can therefore afford expensive image composition techniques. A dynamic narrative, on the other hand, must be played and skimmed in real time, which imposes cost limitations on the video processing. In this paper, we define a novel process to compose foreground and background regions of video frames into a single interactive image using a series of spatio-temporal masks. These masks are created to improve the output of automatic video processing techniques such as image stitching and foreground segmentation. Unlike hand-drawn narratives, which are often limited to static representations, the proposed system allows users to explore the narrative dynamically and produce different representations of motion. We have built an authoring system that incorporates these methods and demonstrated successful results on a number of video clips. The authoring system can be used to create interactive posters of video clips, browse video in a compact manner, or highlight a motion sequence in a movie.
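The core compositing idea in the abstract, reusing one stitched background and overlaying per-frame foreground regions selected by masks, can be sketched in a few lines. This is a minimal illustration, not the paper's actual pipeline: it assumes the frames are already aligned to the background and that binary foreground masks are given (the paper derives its spatio-temporal masks via graph-cut optimization and foreground segmentation); all names are illustrative.

```python
import numpy as np

def composite_narrative(background, frames, masks):
    """Composite foreground regions of several aligned video frames
    onto a shared (stitched) background plate.

    background : (H, W, 3) array, the reused background
    frames     : list of (H, W, 3) arrays, frames aligned to the background
    masks      : list of (H, W) boolean arrays, True where the actor
                 (foreground) appears in that frame
    """
    out = background.copy()
    # Paste each frame's foreground pixels onto the output; later
    # frames overwrite earlier ones where masks overlap, so the single
    # image depicts successive stages of the motion.
    for frame, mask in zip(frames, masks):
        out[mask] = frame[mask]
    return out
```

Because the per-frame work is only a masked copy, the composite can be recomputed as the user skims the narrative, which is the real-time constraint the abstract contrasts with offline static-summary methods.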

Original language: English (US)
Article number: 88
Journal: ACM Transactions on Graphics
Issue number: 4
State: Published - Sep 13 2010


Keywords

  • Graph-cut optimization
  • Image compositing
  • Interactive editing
  • Motion extraction
  • Video exploration

ASJC Scopus subject areas

  • Computer Graphics and Computer-Aided Design


