In computer graphics, non-photorealistic rendering (NPR) describes the set of depiction techniques that do not aim to convey photographic realism. The term is defined by "what it is not" because, over the last decades, the computer graphics community has focused most of its efforts on creating images indistinguishable from reality (i.e., photorealistic, like a photograph), thus assimilating the subfield of photorealistic rendering into the more general term "computer graphics."
Some of the most well-known NPR techniques are 3D rendering with cel shading to simulate cartoon style, simulation of traditional art media (ink, pencil, watercolor, etc.) and artistic styles (abstract painting, impressionism, etc.), and technical depictions of objects that favor clarity over realism: architectural blueprints, engineering cutaways and exploded views, or anatomical dissections.
Origin of the Term
The term was coined by Salesin and Winkenbach in 1994, and despite its widespread use by researchers around the world, it is not free from controversy. Since the first International Symposium on Non-Photorealistic Animation and Rendering (the best-known research conference on this subject) in 2000, there has been an ongoing discussion about the looseness of the term, and several alternatives have been proposed, such as "computer depiction." One reason is that "non-photorealistic" only describes a style of generating images, while the range of topics in NPR research encompasses both rendering aspects (multiple styles of picture generation) and interaction issues (e.g., artistic tools such as a simulated brush).
Related Fields: Art and Perception
Given this range of applications, the NPR field has naturally embraced interdisciplinary collaboration with both the artistic community and human perception researchers. In the last decade, as applications such as augmented reality have moved away from photorealism in order to convey additional or specific information (e.g., overlaid pathology data during a surgical intervention), a strong need has arisen to verify that the depictions they provide faithfully communicate the desired information.
This interaction between art, perception, and NPR is widespread in the literature. Researchers have studied how humans paint or draw in order to derive models that predict the success of NPR algorithms. For instance, by combining several depictions of the same object made by different people, it has been shown that most strokes can be predicted from three-dimensional features such as contours or curvature, and can thus be automated by line drawing methods like suggestive contours. However, these studies also make it evident that even the most recent and sophisticated methods fail to predict all the elements artists use to draw an object.
From 3D to 2D: Given a 3D geometric description of the objects, their material properties, and the light sources, a 2D image is generated according to a non-photorealistic style. For instance, the cel shading technique mimics hand-drawn cartoon style by adding a black contour line (the occluding contour), obtained by culling the front faces of the geometric model, and by illuminating the object with flat shading that limits the range of colors to two or three tones. The continuous value yielded by traditional shading formulae (based on the dot product of the vertex normals and the direction of the light source) is thresholded to a limited set of values to produce this effect (Fig. 3). Nowadays, most of these rendering effects are generated as a postprocessing step in image space. Surface normals, depth, and reflectance values are computed per pixel and stored as images in different layers (a technique known as deferred shading). In this context, image processing is used to the same effect. For instance, contour lines can be generated by detecting discontinuities in the normal image with an edge filter such as Canny's detector.
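The thresholding step described above can be sketched in a few lines of Python. This is a minimal illustration only; the function name, the tone values, and the three-band quantization are assumptions for the example, not details from the text.

```python
def toon_shade(normal, light_dir, tones=(0.2, 0.6, 1.0)):
    """Quantize Lambertian shading into a few discrete tones (cel shading).

    `normal` and `light_dir` are unit 3D vectors given as tuples.
    The tone values are illustrative, not prescribed by any standard.
    """
    # Lambertian intensity: the clamped dot product of the surface
    # normal and the light direction, a continuous value in [0, 1].
    intensity = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    # Threshold the continuous value into len(tones) flat bands.
    band = min(int(intensity * len(tones)), len(tones) - 1)
    return tones[band]

# A surface facing the light falls in the brightest band:
print(toon_shade((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))  # 1.0
# A surface at a grazing angle falls in the darkest band:
print(toon_shade((0.0, 0.0, 1.0), (1.0, 0.0, 0.0)))  # 0.2
```

In a real renderer this quantization would run per fragment in a shader; the banding it produces is the flat, stepped look characteristic of cartoon shading.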
From 2D to 2D: This is usually based on image processing techniques and requires no user intervention. Most commercially available photo cameras provide at least a few of these transformations. Given a photograph as input, a purely 2D cel shading effect can be obtained by combining histogram thresholding (posterize) for the colors with a contour detector (Canny) for the contour lines. The simulation of brush and pencil strokes using line integral convolution (LIC) is another well-known example: it integrates random samples along the luminance vector field of the original image to generate lines emulating the most likely strokes an artist would make (see bottom left image in Fig. 2). Salience and feature detection are also taken into account to emphasize the strokes with greater visual significance.
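The posterize-plus-edges combination can be illustrated on a flat list of grayscale values. This is a toy sketch under stated assumptions: the simple one-directional gradient test stands in for a real Canny detector, and the function names and threshold are invented for the example.

```python
def posterize(gray, levels=3):
    """Histogram thresholding: snap each [0, 1] gray value to the
    nearest of `levels` evenly spaced tones."""
    return [round(g * (levels - 1)) / (levels - 1) for g in gray]

def edge_mask(gray, width, threshold=0.3):
    """Crude stand-in for a Canny detector: mark a pixel when the
    luminance jump to its right-hand neighbor exceeds `threshold`.
    `gray` is a row-major flat list of an image `width` pixels wide."""
    mask = []
    for i, g in enumerate(gray):
        # Stay inside the row: the last pixel of a row has no right neighbor.
        right = gray[i + 1] if (i + 1) % width else g
        mask.append(abs(right - g) > threshold)
    return mask

# A single 4-pixel row with a dark-to-bright step in the middle:
row = [0.1, 0.1, 0.9, 0.9]
print(posterize(row))            # flattened to two tones: [0.0, 0.0, 1.0, 1.0]
print(edge_mask(row, width=4))   # edge flagged at the step: [False, True, False, False]
```

A full 2D pipeline would then paint the edge pixels black on top of the posterized colors, giving the cel-shaded look from a photograph alone.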
From 2.5D to 2D: That is, 2D input plus additional data such as incomplete 3D information (a depth range or Z-buffer) and/or user annotations: semantic tags ("this part of the image is a cloud"), strokes (which encode information in their shape), marks ("this part of the object is on top of the other"), etc. Some recent techniques include perception assumptions (such as global convexity and salience) in order to infer depth from 2D images. This depth can then be edited (sculpted) with brush strokes or used directly in combination with traditional 3D-to-2D algorithms to render NPR images like those in Fig. 4.
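One common way to exploit such partial depth data is to draw contour lines wherever the Z-buffer has a discontinuity. The sketch below is a minimal illustration of that idea; the function name, neighbor scheme, and threshold are assumptions for the example, not from the text.

```python
def depth_contours(depth, width, threshold=0.1):
    """Mark silhouette pixels where neighboring depth values jump.

    `depth` is a row-major flat list of Z-buffer values for an image
    `width` pixels wide; a pixel is flagged when its right or bottom
    neighbor differs by more than `threshold`.
    """
    height = len(depth) // width
    mask = [False] * len(depth)
    for y in range(height):
        for x in range(width):
            i = y * width + x
            for dx, dy in ((1, 0), (0, 1)):  # right and bottom neighbors
                nx, ny = x + dx, y + dy
                if nx < width and ny < height:
                    j = ny * width + nx
                    if abs(depth[i] - depth[j]) > threshold:
                        mask[i] = True
    return mask

# A 2x2 image where the left column is near (0.2) and the right far (0.9):
print(depth_contours([0.2, 0.9, 0.2, 0.9], width=2))
# the near pixels bordering the depth jump are flagged: [True, False, True, False]
```

The resulting mask can be painted black over any shaded or annotated rendering, which is how incomplete 3D data alone already supports NPR contour effects.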
From 2D to 3D: A fourth possibility; from a set of user strokes or images (or a combination of both) of a given scene, a 3D model is derived which allows for novel views. This constitutes a field in itself, called image-based modeling and rendering (IBMR), which lies between the areas of computer graphics and computer vision.
Since the early adoption of cel shading in the title Jet Set Radio (released in 2000), hundreds of examples of NPR techniques have emerged in video games. Likewise, NPR is ubiquitous in animation and films. The South Park series is an unexpected example of 2D cartoon style derived from 3D models. A case of 2D real-life footage converted to 2D hand-drawn animation style is the film A Scanner Darkly, where key frames were rotoscoped and vectorized, and the frames in between were interpolated automatically. However, although some automation was introduced, this production still required a significant amount of user input.
- 1. Winkenbach, G., Salesin, D.H.: Computer-generated pen-and-ink illustration. In: ACM Transactions on Graphics (TOG) – Proceedings of ACM SIGGRAPH, pp. 91–100. Orlando, Florida, USA (1994)
- 2. Fekete, J.-D., Salesin, D. (eds.): Proceedings of the 1st International Symposium on Non-photorealistic Animation and Rendering. Annecy, France. ACM (2000)
- 3. Durand, F.: An invitation to discuss computer depiction. In: Proceedings of the 2nd International Symposium on Non-photorealistic Animation and Rendering, Annecy, France, pp. 111–124. ACM (2002)
- 7. Zhao, M., Zhu, S.: Sisley the abstract painter. In: Proceedings of the 8th International Symposium on Non-photorealistic Animation and Rendering, Annecy, France, pp. 111–124. ACM (2010)
- 10.Cabral, B., Leedom, L.: Imaging vector fields using line integral convolution. In: Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques. SIGGRAPH ’93, pp. 263–270. Anaheim (1993)Google Scholar