This project investigates automatic non-photorealistic image processing techniques for the creation of simplified stylistic illustrations from color images, videos and 3D renderings based on generalizations of the Kuwahara filter.
Photorealistic depictions often contain more detail than is necessary to communicate the intended information. Artists therefore typically remove detail and use abstraction for effective visual communication. A common approach to automatically creating stylized abstractions from images or videos is the use of an edge-preserving filter. Popular examples of edge-preserving filters used for image abstraction are the bilateral filter and mean shift. Both smooth low-contrast regions while preserving high-contrast edges, and may therefore fail for high-contrast images, where either no abstraction is performed or relevant information is removed because of the thresholds used. They also often fail for low-contrast images, where typically too much information is removed.
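To make the edge-preserving behavior concrete, the bilateral filter weights each neighbor by both spatial distance and intensity difference, so averaging rarely crosses strong edges. The following is a minimal, naive sketch for a grayscale image; the function name and parameter values are illustrative and not taken from any particular implementation:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Naive bilateral filter for a 2D grayscale image with values in [0, 1].

    Each output pixel is a normalized average of its neighborhood, weighted
    by a spatial Gaussian (sigma_s) and a tonal Gaussian on intensity
    difference (sigma_r). Neighbors across a high-contrast edge receive
    near-zero tonal weight, so the edge is preserved.
    """
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    out = np.zeros_like(img, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    for y in range(h):
        for x in range(w):
            patch = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            tonal = np.exp(-(patch - img[y, x])**2 / (2.0 * sigma_r**2))
            weights = spatial * tonal
            out[y, x] = np.sum(weights * patch) / np.sum(weights)
    return out
```

On a step-edge image, flat regions are averaged while the step itself survives almost unchanged, which is exactly the threshold-dependent behavior the text describes: with a small `sigma_r`, high-contrast content passes through nearly untouched.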
An edge-preserving filter that overcomes this limitation is the Kuwahara filter. Based on local area flattening, the Kuwahara filter properly removes detail even in high-contrast regions and protects shape boundaries even in low-contrast regions. Hence, it helps to maintain a roughly uniform level of abstraction across the image while providing an overall painting-style look. Unfortunately, the Kuwahara filter is unstable in the presence of noise and suffers from block artifacts. Several extensions and modifications have been proposed to improve the original Kuwahara filter. The most recent work by Papari et al. introduces new weighting windows and a new combination rule. Even though this improves the output quality significantly, clustering artifacts are still noticeable.
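The local area flattening underlying the classical Kuwahara filter can be sketched as follows: for each pixel, the filter evaluates the four overlapping quadrants of its neighborhood and outputs the mean of the quadrant with the lowest variance. This is a simplified grayscale variant for illustration only; names and parameters are our own:

```python
import numpy as np

def kuwahara(img, radius=2):
    """Classical Kuwahara filter for a 2D grayscale image.

    For each pixel, consider the four (radius+1) x (radius+1) subregions of
    its neighborhood that share the pixel as a corner (upper-left,
    upper-right, lower-left, lower-right). The output is the mean of the
    subregion with the smallest variance: flat regions are averaged, while
    quadrants straddling an edge have high variance and are rejected, so
    the edge is preserved.
    """
    pad = np.pad(img, radius, mode='edge')
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    offsets = [(0, 0), (0, radius), (radius, 0), (radius, radius)]
    for y in range(h):
        for x in range(w):
            best_var, best_mean = np.inf, 0.0
            for dy, dx in offsets:
                region = pad[y + dy:y + dy + radius + 1,
                             x + dx:x + dx + radius + 1]
                v = region.var()
                if v < best_var:
                    best_var, best_mean = v, region.mean()
            out[y, x] = best_mean
    return out
```

Because the winning quadrant switches abruptly between neighboring pixels, this selection rule is what produces the block artifacts and noise instability mentioned above.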
In our work [Kyprianidis et al. 2009] we present a generalization of the Kuwahara filter that avoids clustering artifacts by adapting the shape, scale and orientation of the filter to the local structure of the input. Due to this adaptation of the filter to the local structure, directional image features are better preserved and emphasized. This results in overall sharper edges and a more feature-abiding painterly effect. Local orientation and a measure of anisotropy are derived from the eigenvalues and eigenvectors of the smoothed structure tensor [Brox et al. 2006]. Then structure-aware smoothing is performed using a novel nonlinear filter. This nonlinear filter uses weighting functions defined over sectors of an ellipse whose shape is based on the local orientation and anisotropy. The filter response is defined as a weighted sum of the local averages, where more weight is given to those averages with low standard deviation. Our approach shows excellent temporal coherence without requiring expensive video motion estimation. Implemented on the GPU, the algorithm processes video in real time.
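The structure tensor analysis step can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the smoothing kernel, the eigenvalue formulas for a symmetric 2x2 matrix, and the common anisotropy measure (l1 - l2) / (l1 + l2) are standard choices, and all names are our own:

```python
import numpy as np

def orientation_anisotropy(img, sigma=2.0):
    """Per-pixel orientation and anisotropy from the smoothed structure tensor.

    Build the structure tensor from image gradients, smooth its three
    entries with a separable Gaussian, then derive from the eigenvalues:
    - phi: angle of the minor eigenvector, i.e. the direction along edges;
    - aniso: (l1 - l2) / (l1 + l2), 0 in isotropic regions, near 1 at
      strongly oriented structures.
    """
    gy, gx = np.gradient(img.astype(float))   # gradients along rows, cols
    exx, exy, eyy = gx * gx, gx * gy, gy * gy  # structure tensor entries

    def gauss_smooth(a):
        r = int(3 * sigma)
        t = np.arange(-r, r + 1)
        k = np.exp(-t**2 / (2.0 * sigma**2))
        k /= k.sum()
        a = np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 0, a)
        return np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 1, a)

    exx, exy, eyy = gauss_smooth(exx), gauss_smooth(exy), gauss_smooth(eyy)
    # Eigenvalues of the symmetric tensor [[exx, exy], [exy, eyy]]
    root = np.sqrt((exx - eyy)**2 + 4.0 * exy**2)
    l1 = 0.5 * (exx + eyy + root)  # major eigenvalue
    l2 = 0.5 * (exx + eyy - root)  # minor eigenvalue
    # Major eigenvector angle, rotated 90 degrees to point along edges
    phi = 0.5 * np.arctan2(2.0 * exy, exx - eyy) + np.pi / 2.0
    aniso = (l1 - l2) / (l1 + l2 + 1e-8)
    return phi, aniso
```

In the generalized filter, `phi` and `aniso` would then steer the orientation and eccentricity of the elliptical filter sectors, while the final response combines the sector averages with weights favoring low standard deviation.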