A Framework for Content-Adaptive Photo Manipulation Macros: Application to Face, Landscape, and Global Manipulations
Floraine Berthouzoz, Wilmot Li, Mira Dontcheva, Maneesh Agrawala
We present a framework for generating content-adaptive macros that can transfer complex photo manipulations to new target images. We demonstrate applications of our framework to face, landscape, and global manipulations. To create a content-adaptive macro, we make use of multiple training demonstrations. Specifically, we use automated image labeling and machine learning techniques to learn the dependencies between image features and the parameters of each selection, brush stroke, and image processing operation in the macro. Although our approach is limited to learning manipulations where there is a direct dependency between image features and operation parameters, we show that our framework is able to learn a large class of the most commonly used manipulations from as few as 20 training demonstrations. Our framework also provides interactive controls to help macro authors and users generate training demonstrations and correct errors due to incorrect labeling or poor parameter estimation. We ask viewers to compare images generated using our content-adaptive macros, with and without corrections, to manually generated ground-truth images, and find that they consistently rate both our automatic and corrected results as close in appearance to the ground truth. We also evaluate the utility of our proposed macro generation workflow via a small informal lab study with professional photographers. The study suggests that our workflow is effective and practical in the context of real-world photo editing.
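To give a flavor of the learning step described above, the following is a minimal, hypothetical sketch (not the authors' actual implementation) of fitting a direct dependency between per-image features and a single operation parameter from a small set of demonstrations, here using ordinary least squares on synthetic data:

```python
import numpy as np

# Hypothetical illustration: learn a direct mapping from image features
# to one macro operation parameter from 20 training demonstrations.
rng = np.random.default_rng(0)

# 20 demonstrations, each with 5 image features (e.g., mean luminance,
# contrast, ...) and one recorded parameter value (e.g., the brush
# opacity the author chose for that image). The data here is synthetic.
features = rng.random((20, 5))
true_weights = np.array([0.3, -0.1, 0.5, 0.0, 0.2])
params = features @ true_weights + 0.4  # synthetic "demonstrated" values

# Fit weights and a bias term with least squares.
X = np.hstack([features, np.ones((20, 1))])
w, *_ = np.linalg.lstsq(X, params, rcond=None)

# Apply the learned macro step to a new target image's features.
new_features = rng.random(5)
predicted_param = np.append(new_features, 1.0) @ w
print(round(float(predicted_param), 3))
```

In the actual system, the feature extraction is driven by automated image labeling (e.g., locating faces or sky regions), and the learned models cover selections and brush strokes as well as scalar parameters; this sketch only shows the simplest regression case.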
Video: available on Vimeo (Floraine Berthouzoz).
Supplemental PDF (7.6 MB)