Dynamic video narratives

Carlos D. Correa, Kwan-Liu Ma

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

10 Citations (Scopus)

Abstract

This paper presents a system for generating dynamic narratives from videos. These narratives are compact, coherent, and interactive, inspired by principles of sequential art. Narratives depict the motion of one or several actors over time. Creating compact narratives is challenging because the video frames must be combined in a way that reuses redundant backgrounds and depicts the stages of a motion. In addition, previous approaches focus on generating static summaries and can therefore afford expensive image composition techniques. A dynamic narrative, on the other hand, must be played and skimmed in real time, which imposes cost limitations on the video processing. In this paper, we define a novel process to compose foreground and background regions of video frames into a single interactive image using a series of spatio-temporal masks. These masks are created to improve the output of automatic video processing techniques such as image stitching and foreground segmentation. Unlike hand-drawn narratives, which are often limited to static representations, the proposed system allows users to explore the narrative dynamically and produce different representations of motion. We have built an authoring system that incorporates these methods and demonstrated successful results on a number of video clips. The authoring system can be used to create interactive posters of video clips, browse video in a compact manner, or highlight a motion sequence in a movie.
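
The abstract describes the core compositing idea: foreground regions from several registered video frames are pasted onto a shared, stitched background using per-frame spatio-temporal masks, and the composite is rebuilt on the fly as the user plays or skims the narrative. The following is a minimal sketch of that idea, not the authors' implementation: the function composite_narrative, the NumPy representation of frames and masks, and the synthetic moving-square example are assumptions introduced here for illustration, and the sketch presumes frames are already warped into the panorama's coordinate frame with foreground masks precomputed (e.g., by segmentation refined with graph-cut optimization, as the keywords suggest).

import numpy as np

def composite_narrative(background, frames, masks, selected_indices):
    """Paste the foreground of each selected frame over the stitched background.

    background       : (H, W, 3) float array, the stitched panorama
    frames           : list of (H, W, 3) float arrays, frames registered to the panorama
    masks            : list of (H, W) boolean arrays, per-frame foreground masks
    selected_indices : which motion stages to show (user-controlled)
    """
    out = background.copy()
    for i in selected_indices:
        m = masks[i]
        # Later frames overwrite earlier ones where foreground regions overlap.
        out[m] = frames[i][m]
    return out

# Synthetic example: a bright square "actor" moving left to right over a gray background.
H, W = 120, 320
background = np.full((H, W, 3), 0.5)
frames, masks = [], []
for t in range(8):
    frame = background.copy()
    mask = np.zeros((H, W), dtype=bool)
    x = 10 + 35 * t
    mask[40:80, x:x + 30] = True
    frame[mask] = (1.0, 0.8, 0.2)
    frames.append(frame)
    masks.append(mask)

# Skimming the narrative amounts to re-compositing with a different frame subset,
# a cheap masked copy per selected frame, so it can run per interaction event.
poster = composite_narrative(background, frames, masks, selected_indices=[0, 3, 6])

In this sketch the expensive steps (stitching, segmentation, mask refinement) are assumed to happen once during authoring, so only the lightweight compositing runs at interaction time, which is consistent with the real-time constraint the abstract emphasizes.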

Original language: English (US)
Title of host publication: ACM SIGGRAPH 2010 Papers, SIGGRAPH 2010
Editors: Hugues Hoppe
Publisher: Association for Computing Machinery, Inc
ISBN (Electronic): 9781450302104
DOI: https://doi.org/10.1145/1778765.1778825
State: Published - Jul 26 2010
Event: 37th International Conference and Exhibition on Computer Graphics and Interactive Techniques, SIGGRAPH 2010 - Los Angeles, United States
Duration: Jul 26 2010 - Jul 30 2010

Other

Other: 37th International Conference and Exhibition on Computer Graphics and Interactive Techniques, SIGGRAPH 2010
Country: United States
City: Los Angeles
Period: 7/26/10 - 7/30/10

Keywords

  • Graph-cut optimization
  • Image compositing
  • Interactive editing
  • Motion extraction
  • Video exploration

ASJC Scopus subject areas

  • Computer Graphics and Computer-Aided Design
  • Computer Vision and Pattern Recognition
  • Software

Cite this

Correa, C. D., & Ma, K.-L. (2010). Dynamic video narratives. In H. Hoppe (Ed.), ACM SIGGRAPH 2010 Papers, SIGGRAPH 2010 (Article 88). Association for Computing Machinery, Inc. https://doi.org/10.1145/1778765.1778825
