Abstract: We present Graphite, an iOS mobile app that enables users to transform photos into drawings and illustrations with ease. Graphite implements a novel flow-aligned rendering approach that is based on the analysis of local image-feature directions. A stroke-based image stylization pipeline is parameterized to compute realistic directional hatching and contouring effects in real-time. Its art-direction capabilities enable users to selectively and locally fine-tune design mechanisms and variables—such as the level of detail, stroke granularity, degree of smudging, and sketchiness—using the Apple Pencil or touch gestures. In this way, the looks of manifold artistic media can be simulated, including pencil, pen-and-ink, pastel, and blueprint illustrations. Graphite is based on Apple's CoreML, Metal, and PhotoKit APIs for optimized on-device processing. Thus, interactive editing can be performed in real-time by utilizing the dedicated Neural Engine and GPU. Providing an in-app printing service, Graphite serves as a unique tool for creating personalized prints of the user's own digital artworks.
Consistent Filtering of Videos and Dense Light-Fields Without Optic-Flow. Shekhar, Sumit; Semmo, Amir; Trapp, Matthias; Tursun, Okan Tarhan; Pasewaldt, Sebastian; Myszkowski, Karol; Döllner, Jürgen. H.-J. Schulz, M. Teschner, M. Wimmer (eds.) (2019).
Abstract: This paper presents an approach to, and a performance evaluation of, service-based image processing using software rendering implemented with Mesa3D. Due to recent advances in cloud computing technology (with respect to both hardware and software), as well as the increased demands of image processing and analysis techniques, often within an ecosystem of devices, it is feasible to research and quantify the impact of service-based approaches in this domain with respect to the cost-performance relation. To this end, we provide a performance comparison of service-based processing using GPU-accelerated and software rendering.
Techniques for GPU-based Color Quantization. Trapp, Matthias; Pasewaldt, Sebastian; Döllner, Jürgen (2019).
Abstract: This paper presents a GPU-based approach to color quantization that maps arbitrary color palettes to input images using look-up tables (LUTs). To this end, different types of LUTs, their GPU-based generation and representation, and the respective mapping implementations are described, and their run-time performance is evaluated and compared.
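The LUT-based palette mapping described in this abstract can be sketched as follows. This is a CPU-side NumPy illustration, not the paper's GPU implementation; the function names and the 4-bit-per-channel LUT resolution are chosen for the example:

```python
import numpy as np

def build_palette_lut(palette, bits=4):
    """Precompute a 3D LUT mapping every quantized RGB cell to the
    nearest palette color (Euclidean distance in RGB space)."""
    size = 1 << bits                                  # LUT entries per channel
    # RGB value at the center of each LUT cell
    centers = (np.arange(size) + 0.5) * (256.0 / size)
    r, g, b = np.meshgrid(centers, centers, centers, indexing="ij")
    cells = np.stack([r, g, b], axis=-1).reshape(-1, 3)
    # Nearest palette entry for every cell center
    dist = np.linalg.norm(cells[:, None, :] - palette[None, :, :], axis=2)
    nearest = np.argmin(dist, axis=1)
    return palette[nearest].reshape(size, size, size, 3).astype(np.uint8)

def apply_lut(image, lut):
    """Quantize an HxWx3 uint8 image by indexing into the LUT."""
    bits = int(np.log2(lut.shape[0]))
    q = image >> (8 - bits)                           # map 0..255 to LUT indices
    return lut[q[..., 0], q[..., 1], q[..., 2]]
```

On the GPU, such a LUT would typically be uploaded as a 3D texture and sampled per fragment; the sketch keeps everything on the CPU for clarity.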
Rendering Procedural Textures for Visualization of Thematic Data in 3D Geovirtual Environments. Trapp, Matthias; Schlegel, Frank; Pasewaldt, Sebastian; Döllner, Jürgen (2019).
Abstract: Mobile expressive rendering has gained increasing popularity among users seeking casual creativity through image stylization, and supports the development of mobile artists as a new user group. In particular, neural style transfer has advanced as a core technology to emulate characteristics of manifold artistic styles and media without deep prior knowledge of photo processing or editing. However, when it comes to creative expression, the technology still faces inherent limitations in providing low-level controls for localized image stylization, e.g., with respect to image-feature semantics or the user's ideas and interests. The goal of this work is to implement and enhance state-of-the-art neural style transfer techniques, providing a generalized user interface with interactive tools for local control that facilitate a creative editing process on mobile devices. To this end, we first propose a problem characterization consisting of three goals that represent a trade-off between visual quality, run-time performance, and ease of control. We then present MaeSTrO, a mobile app for orchestration of three neural style transfer techniques using iterative, multi-style generative, and adaptive neural networks that can be locally controlled by on-screen painting metaphors to direct a semantics-based composition and perform location-based filtering. Based on first user tests, we conclude with insights showing different levels of satisfaction for the implemented techniques and user interaction design, and point out directions for future research.
Approaches for Local Artistic Control of Mobile Neural Style Transfer. Reimann, Max; Klingbeil, Mandy; Pasewaldt, Sebastian; Semmo, Amir; Döllner, Jürgen; Trapp, Matthias (2018).
Abstract: This work presents enhancements to state-of-the-art adaptive neural style transfer techniques, thereby providing a generalized user interface with creativity-tool support for low-level local control to facilitate demanding interactive editing on mobile devices. The approaches are implemented in a mobile app that is designed for orchestration of three neural style transfer techniques using iterative, multi-style generative, and adaptive neural networks that can be locally controlled by on-screen painting metaphors to perform location-based filtering and direct the composition. Based on first user tests, we conclude with insights showing different levels of satisfaction for the implemented techniques and user interaction design, and point out directions for future research.
MaeSTrO: Mobile-Style Transfer Orchestration Using Adaptive Neural Networks. Reimann, Max; Semmo, Amir; Pasewaldt, Sebastian; Klingbeil, Mandy; Döllner, Jürgen (2018).
Abstract: We present MaeSTrO, a mobile app for image stylization that empowers users to direct, edit, and perform a neural style transfer with creative control. The app uses iterative style transfer, multi-style generative, and adaptive networks to compute and apply flexible yet comprehensive style models of arbitrary images at run-time. Compared to other mobile applications, MaeSTrO introduces an interactive user interface that enables users to orchestrate style transfers in a two-stage process for an individual visual expression: first, an initial semantic segmentation of a style image can be complemented by on-screen painting to direct sub-styles in a spatially-aware manner. Second, semantic masks can be virtually drawn on top of a content image to adjust neural activations within local image regions, and thus direct the transfer of learned sub-styles. This way, the general feed-forward neural style transfer is evolved towards an interactive tool that is able to consider composition variables and mechanisms of general artwork production, such as color, size, and location-based filtering. MaeSTrO additionally enables users to define new styles directly on a device and synthesize high-quality images based on prior segmentations via a service-based implementation of compute-intensive iterative style transfer techniques.
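The spatially-aware composition of sub-styles can be illustrated by a minimal sketch: assuming each sub-style has already produced a fully stylized image, soft masks weight their per-pixel contributions. This is a loose analogy only; the app described above steers neural activations via the masks rather than blending finished images:

```python
import numpy as np

def composite_styles(stylized, masks):
    """Blend several stylized renditions of the same content image using
    soft per-pixel masks; weights are normalized to sum to one per pixel."""
    m = np.stack([np.asarray(mk, dtype=float) for mk in masks])
    m = m / (m.sum(axis=0, keepdims=True) + 1e-8)   # avoid division by zero
    s = np.stack([np.asarray(st, dtype=float) for st in stylized])
    return (m[..., None] * s).sum(axis=0)           # per-pixel weighted sum
```

Because the masks are soft, painted regions can overlap and transition smoothly between sub-styles.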
Teaching Image-Processing Programming for Mobile Devices: A Software Development Perspective. Trapp, Matthias; Pasewaldt, Sebastian; Dürschmid, Tobias; Semmo, Amir; Döllner, Jürgen. F. Post, J. Žára (eds.) (2018).
Abstract: In this paper, we present the concept of a research course that teaches students image processing as a building block of mobile applications. Our goal with this course is to teach theoretical foundations, practical skills in software development, as well as scientific working principles, in order to qualify graduates to start as fully-fledged software developers or researchers. The course includes teaching and learning focused on the nature of small-team research and development as encountered in the creative industries dealing with computer graphics, computer animation, and game development. We discuss our curriculum design and issues in conducting undergraduate and graduate research that we have identified through four iterations of the course. Joint scientific demonstrations and publications of the students and their supervisors, as well as quantitative and qualitative evaluation by students, underline the success of the proposed concept. In particular, we observed that developing with a common software framework helps the students jump-start their course projects, while industry software processes such as branching, coupled with a three-tier breakdown of project features, help them structure and assess their progress.
Demo: Pictory - Neural Style Transfer and Editing with CoreML. Pasewaldt, Sebastian; Semmo, Amir; Klingbeil, Mandy; Döllner, Jürgen (2017).
Abstract: This work presents advances in the design and implementation of Pictory, an iOS app for artistic neural style transfer and interactive image editing using the CoreML and Metal APIs. Pictory combines the benefits of neural style transfer, e.g., a high degree of abstraction on a global scale, with the interactivity of GPU-accelerated state-of-the-art image-based artistic rendering on a local scale. Thereby, the user is empowered to create high-resolution, abstracted renditions in a two-stage approach. First, a photo is transformed using a pre-trained convolutional neural network to obtain an intermediate stylized representation. Second, image-based artistic rendering techniques (e.g., watercolor, oil paint, or toon filtering) are used to further stylize the image. Thereby, fine-scale texture noise—introduced by the style transfer—is filtered, and interactive means are provided to individually adjust the stylization effects at run-time. Based on qualitative and quantitative user studies, Pictory has been redesigned and optimized to support casual users as well as mobile artists by providing effective, yet easy to understand, tools to facilitate image editing at multiple levels of control.
Challenges in User Experience Design of Image Filtering Apps. Klingbeil, Mandy; Pasewaldt, Sebastian; Semmo, Amir; Döllner, Jürgen (2017).
Abstract: Photo filtering apps successfully deliver image-based stylization techniques to a broad audience, in particular in the ubiquitous domain (e.g., smartphones, tablet computers). Interacting with these inherently complex techniques has so far mostly been approached in two different ways: (1) by exposing many (technical) parameters to the user, resulting in a professional application that typically requires expert domain knowledge, or (2) by hiding the complexity via presets that only allow the application of filters but prevent creative expression thereon. In this work, we outline challenges of and present approaches for providing interactive image filtering on mobile devices, thereby focusing on how to make them usable for people in their daily life. This is discussed by the example of BeCasso, a user-centric app for assisted image stylization that targets two user groups: mobile artists and users seeking casual creativity. Through user research and qualitative and quantitative user studies, we identify and outline usability issues that were shown to prevent both user groups from reaching their objectives when using the app. On the one hand, user-group targeting has been improved by an optimized user experience design. On the other hand, multiple levels of control have been implemented to ease the interaction and hide the underlying complex technical parameters. Evaluations underline that the presented approach can increase the usability of complex image stylization techniques for mobile apps.
Interactive Oil Paint Filtering On Mobile Devices. Semmo, Amir; Trapp, Matthias; Pasewaldt, Sebastian; Döllner, Jürgen (2016).
Abstract: Image stylization enjoys a growing popularity on mobile devices to foster casual creativity. However, the implementation and provision of high-quality image filters for artistic rendering is still faced with the inherent limitations of mobile graphics hardware, such as computing power and memory resources. This work presents a mobile implementation of a filter that transforms images into an oil paint look, thereby highlighting concepts and techniques on how to perform multi-stage nonlinear image filtering on mobile devices. The proposed implementation is based on OpenGL ES and the OpenGL ES shading language, and supports on-screen painting to interactively adjust the appearance in local image regions, e.g., to vary the level of abstraction, brush, and stroke direction. Evaluations of the implementation indicate interactive performance and results of aesthetic quality similar to that of the original desktop variant.
BeCasso: Artistic Image Processing and Editing on Mobile Devices. Pasewaldt, Sebastian; Semmo, Amir; Döllner, Jürgen; Schlegel, Frank (2016).
Abstract: BeCasso is a mobile app that enables users to transform photos into high-quality, high-resolution non-photorealistic renditions, such as oil and watercolor paintings, cartoons, and colored pencil drawings, which are inspired by real-world painting or drawing techniques. In contrast to neural-network and physically-based approaches, the app employs state-of-the-art nonlinear image filtering. For example, oil paint and cartoon effects are based on smoothed structure information to interactively synthesize renderings with soft color transitions. BeCasso empowers users to easily create aesthetic renderings by implementing a two-fold strategy: First, it provides parameter presets that may serve as a starting point for a custom stylization based on global parameter adjustments. Thereby, users can obtain initial renditions that may be fine-tuned afterwards. Second, it enables local style adjustments: using on-screen painting metaphors, users are able to locally adjust different stylization features, e.g., to vary the level of abstraction, pen, brush, and stroke direction, or the contour lines. In this way, the app provides tools for both higher-level interaction and low-level control [Isenberg 2016] to serve the different needs of non-experts and digital artists. References: Isenberg, T. 2016. Interactive NPAR: What Type of Tools Should We Create? In Proc. NPAR, The Eurographics Association, Goslar, Germany, 89-96.
Interactive Image Filtering with Multiple Levels-of-Control on Mobile Devices. Semmo, Amir; Dürschmid, Tobias; Trapp, Matthias; Klingbeil, Mandy; Döllner, Jürgen; Pasewaldt, Sebastian (2016).
Abstract: With the continuous development of mobile graphics hardware, interactive high-quality image stylization based on nonlinear filtering is becoming feasible and is increasingly used in casual creativity apps. However, these apps often only offer high-level controls to parameterize image filters and generally lack support for low-level (artistic) control, thus automating art creation rather than assisting it. This work presents a GPU-based framework that enables parameterizing image filters at three levels of control: (1) presets, followed by (2) global parameter adjustments, can be interactively refined by (3) complementary on-screen painting that operates within the filters' parameter spaces for local adjustments. The framework provides a modular XML-based effect scheme to effectively build complex image processing chains, using these interactive filters as building blocks, that can be efficiently processed on mobile devices. Thereby, global and local parameterizations are directed with higher-level algorithmic support to ease the interactive editing process, which is demonstrated by state-of-the-art stylization effects, such as oil paint filtering and watercolor rendering.
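The idea of an XML-described effect chain can be sketched as follows. The schema, filter names, and toy filters operating on scalar luminance values are all hypothetical; the framework above targets GPU image processing:

```python
import xml.etree.ElementTree as ET

# Hypothetical effect description: an ordered chain of filter passes.
EFFECT_XML = """
<effect name="posterize-bright">
  <pass filter="gain" factor="1.2"/>
  <pass filter="quantize" levels="4"/>
</effect>
"""

def gain(values, factor):
    """Scale luminance values and clamp to [0, 1]."""
    return [min(v * factor, 1.0) for v in values]

def quantize(values, levels):
    """Snap each value to one of `levels` evenly spaced steps."""
    return [round(v * (levels - 1)) / (levels - 1) for v in values]

FILTERS = {"gain": gain, "quantize": quantize}

def run_effect(xml_text, values):
    """Execute each <pass> in document order, piping the image through."""
    root = ET.fromstring(xml_text)
    for p in root.iter("pass"):
        name = p.get("filter")
        # All remaining attributes are treated as numeric filter parameters
        params = {k: float(v) for k, v in p.attrib.items() if k != "filter"}
        if "levels" in params:
            params["levels"] = int(params["levels"])
        values = FILTERS[name](values, **params)
    return values
```

Treating each filter as an interchangeable building block keyed by name is what makes such a chain modular: new passes only require a new entry in the filter registry.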
Interactive Multi-scale Oil Paint Filtering on Mobile Devices. Semmo, Amir; Trapp, Matthias; Dürschmid, Tobias; Döllner, Jürgen; Pasewaldt, Sebastian (2016).
Abstract: This work presents an interactive mobile implementation of a filter that transforms images into an oil paint look. To this end, a multi-scale approach that processes image pyramids is introduced, which uses flow-based joint bilateral upsampling to achieve deliberate levels of abstraction at multiple scales and interactive frame rates. The approach facilitates the implementation of interactive tools that adjust the appearance of filtering effects at run-time, which is demonstrated by an on-screen painting interface for per-pixel parameterization that fosters the casual creativity of non-artists.
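The core of joint bilateral upsampling can be sketched for grayscale images: each high-resolution output pixel gathers nearby low-resolution samples, weighted by spatial distance and by range similarity in a guide image, so edges in the guide are preserved. This is a simplified, non-flow-based variant, not the paper's implementation:

```python
import numpy as np

def joint_bilateral_upsample(low, guide_low, guide_high,
                             sigma_s=1.0, sigma_r=0.1, radius=1):
    """Upsample `low` (h x w) to the resolution of `guide_high` (H x W),
    using `guide_low` (h x w) for range weights."""
    h, w = low.shape
    H, W = guide_high.shape
    out = np.zeros((H, W))
    for Y in range(H):
        for X in range(W):
            # Fractional low-res position corresponding to (Y, X)
            y = Y * (h - 1) / max(H - 1, 1)
            x = X * (w - 1) / max(W - 1, 1)
            y0, x0 = int(round(y)), int(round(x))
            wsum = vsum = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y0 + dy, x0 + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        # Spatial Gaussian on low-res distance
                        ws = np.exp(-((yy - y) ** 2 + (xx - x) ** 2)
                                    / (2 * sigma_s ** 2))
                        # Range Gaussian on guide similarity
                        wr = np.exp(-(guide_low[yy, xx] - guide_high[Y, X]) ** 2
                                    / (2 * sigma_r ** 2))
                        wsum += ws * wr
                        vsum += ws * wr * low[yy, xx]
            out[Y, X] = vsum / wsum
    return out
```

In a real pipeline the expensive filter runs on the coarse pyramid level, and this upsampling restores full resolution without blurring across guide-image edges; a flow-based variant additionally orients the spatial kernel along local feature directions.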
Multi-Perspective 3D Panoramas. Pasewaldt, Sebastian; Semmo, Amir; Trapp, Matthias; Döllner, Jürgen in International Journal of Geographical Information Science (IJGIS) (2014). 28(10) 2030-2051.
Abstract: This article presents multi-perspective 3D panoramas that focus on visualizing 3D geovirtual environments (3D GeoVEs) for navigation and exploration tasks. Their key element, a multi-perspective view, seamlessly combines what is seen from multiple viewpoints into a single image. This approach facilitates the presentation of information for virtual 3D city and landscape models, particularly by reducing occlusions, increasing screen-space utilization, and providing additional context within a single image. We complement multi-perspective views with cartographic visualization techniques to stylize features according to their semantics and highlight important or prioritized information. When combined, both techniques constitute the core implementation of interactive, multi-perspective 3D panoramas. They offer a large number of effective means for visual communication of 3D spatial information, a high degree of customization with respect to cartographic design, and manifold applications in different domains. We discuss design decisions of 3D panoramas for the exploration of and navigation in 3D GeoVEs. We also discuss a preliminary user study that indicates that 3D panoramas are a promising approach for navigation systems using 3D GeoVEs.
Multi-Perspective Detail+Overview Visualization for 3D Building Exploration. Pasewaldt, Sebastian; Trapp, Matthias; Döllner, Jürgen. S. Czanner, W. Tang (eds.) (2013). 57-64.
Abstract: This paper presents a multi-perspective rendering technique that enables detail+overview visualization and interactive exploration of virtual 3D building models. Virtual 3D building models, as main elements of virtual 3D city models, are used in a growing number of application domains, such as geoanalysis, disaster management, and architectural planning. Visualization systems for such building models often rely on perspective or orthographic projections using a single viewpoint. Therefore, the exploration of a complete model requires a user to change the viewpoint multiple times and to memorize the content of each view to obtain a comprehensive mental model. Since this is usually a time-consuming task that implies context switching, current visualization systems use multiple viewports to simultaneously depict an object from different perspectives. Our approach extends the idea of multiple viewports by combining two linked views for the interactive exploration of virtual 3D building models and their facades. In contrast to traditional approaches, we automatically generate a multi-perspective view that simultaneously depicts all facades of the building in one overview image. This facilitates the process of obtaining overviews and supports fast and direct navigation to various points of interest. We describe the concept and implementation of our Multiple-Center-of-Projection camera model for real-time multi-perspective image synthesis. Further, we provide insights into different interaction techniques for linked multi-perspective views and outline approaches for future work.
Towards Comprehensible Digital 3D Maps. Pasewaldt, Sebastian; Semmo, Amir; Trapp, Matthias; Döllner, Jürgen. M. Jobst (ed.) (2012). 261-276.
Abstract: Digital mapping services have become fundamental tools in economy and society to provide domain experts and non-experts with customized, multi-layered map contents. In particular, because of the continuous advancements in the acquisition, provision, and visualization of virtual 3D city and landscape models, 3D mapping services today represent key components of a growing number of applications, like car navigation, education, or disaster management. However, current systems and applications providing digital 3D maps are faced with drawbacks and limitations, such as occlusion, visual clutter, or insufficient use of screen space, that impact an effective comprehension of geoinformation. To address these issues, cartographers and computer graphics engineers have developed design guidelines as well as rendering and visualization techniques that aim to increase the effectiveness and expressiveness of digital 3D maps, but whose seamless combination has yet to be achieved. This work discusses potentials of digital 3D maps that are based on combining cartography-oriented rendering techniques and multi-perspective views. For this purpose, a classification of cartographic design principles and visualization techniques, as well as suitable combinations thereof, is identified that aids comprehension of digital 3D maps. According to this classification, a prototypical implementation demonstrates the benefits of multi-perspective and non-photorealistic rendering techniques for the visualization of 3D map contents. In particular, it enables (1) a seamless combination of cartography-oriented and photorealistic graphic styles, while (2) increasing screen-space utilization and (3) simultaneously directing a viewer's gaze to important or prioritized information.
An Immersive Visualization System for Virtual 3D City Models. Engel, Juri; Pasewaldt, Sebastian; Trapp, Matthias; Döllner, Jürgen (2012).
Abstract: Virtual 3D city models are essential visualization tools for effective communication of complex urban spatial information. Immersive visualization of virtual 3D city models offers intuitive access to, and an effective way of realizing, urban spatial information, enabling new collaborative applications and decision-support systems. This paper discusses techniques for, and the usage of, fully immersive environments for visualizing virtual 3D city models with advanced 3D rendering techniques. Fully immersive environments imply a number of specific requirements for both hardware and software, which are discussed in detail. Further, we identify and outline conceptual and technical challenges, as well as possible solution approaches, by presenting visualization system prototypes for large-scale, fully immersive environments. We evaluate the presented concepts using two application examples and discuss the results.
Multiscale Visualization of 3D Geovirtual Environments Using View-Dependent Multi-Perspective Views. Pasewaldt, Sebastian; Trapp, Matthias; Döllner, Jürgen in Journal of WSCG (V. Skala, ed.) (2011). 19(3) 111-118.
Abstract: 3D geovirtual environments (GeoVEs), such as virtual 3D city models or landscape models, are essential visualization tools for effectively communicating complex spatial information. In this paper, we discuss how these environments can be visualized using multi-perspective projections based on view-dependent global deformations. Multi-perspective projections enable 3D visualization similar to panoramic maps, increasing overview and information density in depictions of 3D GeoVEs. To make multi-perspective views an effective medium, they must adjust to the orientation of the virtual camera, which is controlled by the user and constrained by the environment. Thus, changing multi-perspective camera configurations typically requires the user to manually adapt the global deformation — an error-prone, non-intuitive, and often time-consuming task. Our main contribution comprises a concept for the automatic and view-dependent interpolation of different global deformation preset configurations. Applications and systems that implement such view-dependent global deformations allow users to smoothly and steadily interact with and navigate through multi-perspective 3D GeoVEs.
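The view-dependent interpolation of deformation presets can be illustrated by a small sketch. The preset names, parameters, and the pitch-based blend are hypothetical stand-ins for the paper's configurations:

```python
# Two hypothetical global-deformation presets keyed by camera pitch:
# a flat configuration near the horizon and a strongly bent
# "panorama" configuration when looking straight down.
PRESETS = {
    0.0: {"bend": 0.0, "horizon_lift": 0.0},
    90.0: {"bend": 1.0, "horizon_lift": 0.4},
}

def interpolate_preset(pitch_deg):
    """Blend deformation parameters from the camera pitch so the global
    deformation follows the view without manual adjustment."""
    lo, hi = 0.0, 90.0
    t = min(max((pitch_deg - lo) / (hi - lo), 0.0), 1.0)
    t = t * t * (3.0 - 2.0 * t)  # smoothstep easing avoids visible pops
    a, b = PRESETS[lo], PRESETS[hi]
    return {k: (1.0 - t) * a[k] + t * b[k] for k in a}
```

Evaluating this per frame yields smooth deformation changes as the user tilts the camera, which is the behavior the abstract describes as steady navigation through the multi-perspective view.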
Interactive Rendering Techniques for Highlighting in 3D Geovirtual Environments. Trapp, Matthias; Beesk, Christian; Pasewaldt, Sebastian; Döllner, Jürgen in Lecture Notes in Geoinformation & Cartography (2010).
Abstract: 3D geovirtual environments (GeoVEs), such as virtual 3D city and landscape models, have become an important tool for the visualization of geospatial information. Highlighting is an important component within a visualization framework and is essential for user interaction within many applications. It enables the user to easily perceive active or selected objects in the context of the current interaction task. With respect to 3D GeoVEs, it has a number of applications, such as the visualization of user selections and database queries, as well as navigation aids that highlight waypoints or routes, or guide the user's attention. The geometrical complexity of 3D GeoVEs often requires specialized rendering techniques for real-time image synthesis. This paper presents a framework that unifies various highlighting techniques and is especially suitable for the interactive rendering of 3D GeoVEs of high geometrical complexity.
"Multiscale Visualization of 3D Geovirtual Environments Using View-Dependent Multi-Perspective Views" at the WSCG 2011, Plzen - Czech Republic (02/2011)
"Multiscale, Multi-Perspective Visualization of 3D City Models in Immersive Environments" at the opening ceremony of the 3D Lab Golm, Potsdam - Germany (06/2011)
"Multiperspective Visualizations of 3D Geovirtual Environments" - Joint Workshop of the HPI Research School for "Service-Oriented Systems Engineering", Cape Town - South Africa (04/2012)
"Towards Comprehensible Digital 3D Maps" - Symposium on Service-Oriented Mapping 2012 (SOMAP 2012), Vienna - Austria (11/2012)
"A Service-Oriented Platform for Interactive 3D Web Mapping" - Symposium on Service-Oriented Mapping 2012 (SOMAP 2012), Vienna - Austria (11/2012)