Wagner, L., Limberger, D., Scheibel, W., Trapp, M. and Döllner, J. 2020. A Framework for Interactive Exploration of Clusters in Massive Data Using 3D Scatter Plots and WebGL. ACM.
This paper presents a rendering framework for the visualization of massive point datasets on the web. It includes highly interactive point rendering, cluster visualization, basic interaction methods, and importance-based labeling, and is available for both mobile and desktop browsers. The rendering style is customizable. Our evaluation indicates that the framework facilitates interactive visualization of tens of millions of raw data points even without dynamic filtering or aggregation.
Semmo, A. and Pasewaldt, S. 2020. Graphite: Interactive Photo-to-Drawing Stylization on Mobile Devices. Proceedings SIGGRAPH Appy Hour (New York, 2020), 3:1--3:2.
We present Graphite, an iOS mobile app that enables users to transform photos into drawings and illustrations with ease. Graphite implements a novel flow-aligned rendering approach that is based on the analysis of local image-feature directions. A stroke-based image stylization pipeline is parameterized to compute realistic directional hatching and contouring effects in real-time. Its art-direction enables users to selectively and locally fine-tune design mechanisms and variables—such as the level of detail, stroke granularity, degree of smudging, and sketchiness—using the Apple Pencil or touch gestures. In this respect, the looks of manifold artistic media can be simulated, including pencil, pen-and-ink, pastel, and blueprint illustrations. Graphite is based on Apple's CoreML, Metal and PhotoKit APIs for optimized on-device processing. Thus, interactive editing can be performed in real-time by utilizing the dedicated Neural Engine and GPU. Providing an in-app printing service, Graphite serves as a unique tool for creating personalized prints of the user's own digital artworks.
Ehrig, L., Atzberger, D., Hagedorn, B., Klimke, J. and Döllner, J. 2020. Customizable Asymmetric Loss Functions for Machine Learning-based Predictive Maintenance. 8th International Conference on Condition Monitoring and Diagnosis (2020).
In many predictive maintenance scenarios, the costs of not accurately detecting or anticipating faults can be considerably higher than the cumulative costs of inspections or premature maintenance. However, the conventional symmetric loss functions widely used in machine learning cannot reflect such different costs. In this paper, we propose a method to construct asymmetric loss functions for regression tasks that better reflect this cost imbalance during the training of machine learning models and that allow the loss function to be shaped to a) precisely match the cost relation for both kinds of errors where it can be estimated, b) control the impact of outliers, and c) manage the risk of over- or underestimating the target variable, even when exact costs for (at least) one side are not known. We demonstrate on a realistic data set that the customized asymmetric loss functions can significantly reduce the impact of overestimations of the remaining useful life and can help make more informed decisions on maintenance planning, leading to more cost-efficient production processes.
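The paper's construction is not reproduced here, but the effect of an asymmetric loss is easy to illustrate. The following Python sketch uses the classic linear-exponential (LinEx) loss, which penalizes one error direction exponentially and the other roughly linearly; it merely stands in for the paper's customizable losses, and the parameter a is illustrative.

```python
import numpy as np

def linex_loss(y_true, y_pred, a=0.5):
    """Linear-exponential (LinEx) loss: exponential on one side of the
    error, roughly linear on the other. With a > 0, overestimation is
    penalized far more heavily than underestimation (and vice versa)."""
    e = a * (y_pred - y_true)
    return np.mean(np.exp(e) - e - 1.0)

# Overestimating remaining useful life by 10 units costs far more than
# underestimating by the same amount (for a > 0):
print(linex_loss(np.array([100.0]), np.array([110.0])))  # ~142.4
print(linex_loss(np.array([100.0]), np.array([ 90.0])))  # ~4.0
```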
Stojanovic, V., Hagedorn, B., Trapp, M. and Döllner, J. 2020. Ontology-Driven Analytics for Indoor Point Clouds. RE: Anthropocene, Proceedings of the 25th International Conference of the Association for Computer-Aided Architectural Design Research in Asia (CAADRIA) (2020).
Automated processing, semantic-enrichment, and visual analytics methods for point clouds are often use-case specific for a given domain (e.g., Facility Management (FM) applications). Currently, this means that applicable processing techniques, semantics, and visual analytics methods need to be selected, generated, or implemented by human domain experts, which is an error-prone, subjective, and non-interoperable process. An ontology-driven analytics approach can solve this problem by creating and maintaining a Knowledge Base and utilizing an ontology to automatically suggest an optimal selection of processing and analytics techniques for point clouds. We present an ontology-driven analytics concept and system design that supports smart representation, exploration, and processing of indoor point clouds. We provide an overview of the high-level concept and architecture for such a system, along with related key technologies and approaches based on previously published case studies. We also describe key requirements for system components and discuss the feasibility of their implementation within a Service-Oriented Architecture (SOA).
Wagner, F.T., Döllner, J. and Trapp, M. 2020. Real-time Service-based Stream-processing of High-resolution Videos. 28th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision 2020 (2020).
Scheibel, W., Trapp, M., Limberger, D. and Döllner, J. 2020. A Taxonomy of Treemap Visualization Techniques. IVAPP 2020 - 11th International Conference on Information Visualization Theory and Applications (2020).
A treemap is a visualization that has been specifically designed to facilitate the exploration of tree-structured data and hierarchically structured data. The family of visualization techniques that use a visual metaphor for parent-child relationships based "on the property of containment" (Johnson, 1993) is commonly referred to as treemaps. However, as the number of variations of treemaps increases, it becomes increasingly important to distinguish clearly between techniques and their specific characteristics. This paper proposes to discern between Space-filling Treemap T_S, Containment Treemap T_C, Implicit Edge Representation Tree T_IE, and Mapped Tree T_MT for the classification of hierarchy visualization techniques and highlights their respective properties. This taxonomy is created as a hyponymy, i.e., its classes have an is-a relationship to each other: T_S ⊂ T_C ⊂ T_IE ⊂ T_MT. With this proposal, we intend to stimulate a discussion on a more unambiguous classification of treemaps and, furthermore, broaden what is understood by the concept of treemap itself.
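Because the taxonomy is a hyponymy, its four classes map directly onto an is-a hierarchy. A minimal Python sketch (the class names are illustrative renderings of the paper's terms):

```python
class MappedTree:                      # T_MT: tree data mapped to visual variables
    pass

class ImplicitEdgeTree(MappedTree):    # T_IE: parent-child edges shown implicitly
    pass

class ContainmentTreemap(ImplicitEdgeTree):     # T_C: containment as the edge metaphor
    pass

class SpaceFillingTreemap(ContainmentTreemap):  # T_S: containment plus space-filling layout
    pass

# The is-a chain mirrors T_S ⊂ T_C ⊂ T_IE ⊂ T_MT:
assert issubclass(SpaceFillingTreemap, MappedTree)
```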
Söchting, M. and Trapp, M. 2020. Controlling Image-Stylization Techniques using Eye Tracking. HUCAPP 2020 - 4th International Conference on Human Computer Interaction Theory and Applications (2020).
With the spread of smartphones capable of taking high-resolution photos and the development of high-speed mobile data infrastructure, digital visual media is becoming one of the most important forms of modern communication. With this development, however, also comes a devaluation of images as a media form, with the focus becoming the frequency at which visual content is generated instead of the quality of the content. In this work, an interactive system using image-abstraction techniques and an eye tracking sensor is presented, which allows users to experience diverting and dynamic artworks that react to their eye movement. The underlying modular architecture enables a variety of different interaction techniques that share common design principles, making the interface as intuitive as possible. The result is a game-like interaction in which users aim for a reward, the artwork, while operating under constraints, e.g., not blinking. The conscious eye movements required by some interaction techniques hint at an interesting possible future extension of this work into the field of relaxation exercises and concentration training.
Besançon, L., Semmo, A., Biau, D., Frachet, B., Pineau, V., Sariali, E.H., Soubeyrand, M., Taouachi, R., Isenberg, T. and Dragicevic, P. 2020. Reducing Affective Responses to Surgical Images and Videos through Stylization. Computer Graphics Forum. 39, 1 (2020), 462--483. DOI:https://doi.org/10.1111/cgf.13886.
Glöckner, D.-A.J., Ihde, L., Döllner, J. and Trapp, M. 2020. Intermediate Representations for Vectorization of Stylized Images. 28th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision 2020 (2020).
Döllner, J. 2019. Geospatial Artificial Intelligence: Potentials of Machine Learning for 3D Point Clouds and Geospatial Digital Twins. Hasso Plattner Institute.
Artificial Intelligence (AI) is fundamentally changing how IT solutions are built and operated across all application domains, including the geospatial domain. In this article, we briefly reflect on the term "AI" and outline factors such as Machine Learning (ML) and Deep Learning (DL) that contribute to applying AI successfully in IT solutions. In the main part, we discuss AI for the geospatial domain (GeoAI), focusing on 3D point clouds as a key category of geodata, describe their properties, and discuss their suitability for ML and DL. In particular, we conclude that 3D point clouds constitute a corpus with properties similar to those of natural-language corpora and formulate a naturalness hypothesis for 3D point clouds. We then outline concepts and examples of ML-based interpretation approaches that compute domain-specific and application-specific semantics for 3D point clouds without having to create explicit spatial models or explicit rule sets. Finally, we show how ML enables us to efficiently build and maintain base data for digital twins of our environment, such as virtual 3D city models, indoor models, or building information models.
Klimke, J. 2019. Web-Based Provisioning and Application of Large-Scale Virtual 3D City Models.
Trapp, M. and Döllner, J. 2019. Interactive Close-Up Rendering for Detail+Overview Visualization of 3D Digital Terrain Models. 23rd International Conference Information Visualisation (IV 2019) (2019).
This paper presents an interactive rendering technique for detail+overview visualization of 3D digital terrain models using interactive close-ups. A close-up is an alternative presentation of input data that varies with respect to geometrical scale, mapping, appearance, as well as the level-of-detail and level-of-abstraction used. The presented 3D close-up approach enables in-situ comparison of multiple regions-of-interest simultaneously. We present a GPU-based rendering technique for the image synthesis of multiple close-ups in real-time.
Scheibel, W., Hartmann, J. and Döllner, J. 2019. Design and Implementation of Web-Based Hierarchy Visualization Services. 10th International Conference on Information Visualization Theory and Applications (IVAPP) (2019).
There is a rapidly growing, cross-domain demand for interactive, high-quality visualization techniques as components of web-based applications and systems. In this context, a key question is how visualization services can be designed, implemented, and operated based on Software-as-a-Service as a software delivery model. In this paper, we present the concepts and design of a SaaS framework and API of visualization techniques for tree-structured data, called HiViSer. Using representational state transfer (REST), the API supports different data formats, data manipulations, visualization techniques, and output formats. In particular, the API defines base resource types for all components required to create an image or a virtual scene of a hierarchy visualization. We provide a treemap visualization service as a prototypical implementation, for which subtypes of the proposed API resources have been created. The approach generally serves as a blueprint for fully web-based, high-end visualization services running on thin clients in a standard browser environment.
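To make the resource-oriented design concrete, here is a hypothetical client session against such a service. The host, routes, payload fields, and response shape are invented for illustration and do not reproduce the actual HiViSer API:

```python
import requests

BASE = "https://example.org/hiviser/api"  # hypothetical host and routes

# Upload tree-structured data, configure a treemap, request a rendering.
dataset = requests.post(f"{BASE}/datasets", json={
    "format": "csv",
    "content": "path,size\n/src,120\n/src/a.ts,80\n/src/b.ts,40\n",
}).json()

visualization = requests.post(f"{BASE}/visualizations", json={
    "technique": "treemap",
    "dataset": dataset["id"],           # assumed response field
    "mapping": {"weight": "size"},
}).json()

# Thin clients only fetch the finished image (or a scene description).
image = requests.get(f"{BASE}/visualizations/{visualization['id']}/image",
                     params={"width": 800, "height": 600, "format": "png"})
with open("treemap.png", "wb") as f:
    f.write(image.content)
```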
Stojanovic, V., Trapp, M., Hagedorn, B., Klimke, J., Richter, R. and Döllner, J. 2019. Sensor Data Visualization for Indoor Point Clouds. 15th International Conference on Location Based Services (LBS 2019) (2019).
Integration and analysis of real-time and historic sensor data provide important insights into the operational status of buildings. There is a need for the integration of sensor data and digital representations of the built environment for furthering stakeholder engagement within the realms of Real Estate 4.0 and Facility Management (FM), especially in a spatial representation context. In this paper, we propose a general system architecture that integrates point cloud data and sensor data for visualization and analysis. We further present a prototypical web-based implementation of that architecture and demonstrate its application for the integration and visualization of sensor data from a typical office building, with the aim to communicate and analyze occupant comfort. The empirical results obtained from our prototypical implementation demonstrate the feasibility of our approach for the provisioning of lightweight software components for the service-oriented integration of Building Information Modeling (BIM), Building Automation Systems (BASs), Integrated Workplace Management Systems (IWMSs), and future Digital Twin (DT) platforms.
Shekhar, S., Semmo, A., Trapp, M., Tursun, O.T., Pasewaldt, S., Myszkowski, K. and Döllner, J. 2019. Consistent Filtering of Videos and Dense Light-Fields Without Optic-Flow (2019).
Editors: Schulz, H.-J., Teschner, M. and Wimmer, M.
Florio, A., Trapp, M. and Döllner, J. 2019. Semantic-driven Visualization Techniques for Interactive Exploration of 3D Indoor Models. 23rd International Conference Information Visualisation (IV 2019) (2019).
The availability of detailed virtual 3D building models, including representations of indoor elements, enables a wide range of applications that require effective exploration and navigation functionality. Depending on the application context, users should be enabled to focus on specific objects-of-interest or important building elements. This requires approaches to filtering building parts as well as techniques to visualize important building objects and their relations. To this end, this paper explores the application and combination of interactive rendering techniques as well as their semantically-driven configuration in the context of 3D indoor models.
Trapp, M. and Döllner, J. 2019. Real-time Screen-space Geometry Draping for Digital Terrain Models. 23rd International Conference Information Visualisation (IV 2019) (2019).
A fundamental task in 3D geovisualization and GIS applications is the visualization of vector data that can represent features such as transportation networks or land-use coverage. Mapping or draping vector data represented by geometric primitives (e.g., polylines or polygons) onto 3D digital elevation or terrain models is a challenging task. We present an interactive GPU-based approach that performs geometry-based draping of vector data on a per-frame basis, using only an image-based representation of a 3D digital elevation or terrain model.
Listemann, M., Trapp, M. and Döllner, J. 2019. Lens-based Focus+Context Visualization Techniques for Interactive Exploration of Web-based Reachability Maps. 27th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG 2019) (2019).
Reachability maps are a powerful means to help make location-based decisions, such as choosing convenient sites for subsidiaries or planning vacation trips. Existing visualization approaches, however, have drawbacks concerning the effective acquisition of relevant information and efficient data processing. In this paper, we introduce the first approach we are aware of that applies focus+context techniques to web-based reachability maps. We propose a real-time-capable combination of an isochrone-based and a network-based representation of travel times obtained from multi-modal routing analysis, using interactive lenses. We furthermore present a GPU-accelerated, image-based method to compute isochrones from a travel-time-attributed network on the client side, and thus achieve reduced data-transmission efforts while yielding isochrones of higher precision compared to generalized geometry-based approaches.
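The GPU image-based method is not reproduced here, but the computation it accelerates is, at its core, a bounded shortest-path search over the travel-time-attributed network. A minimal CPU sketch in Python (the graph encoding is illustrative):

```python
import heapq

def reachable_within(graph, source, budget_min):
    """Nodes reachable from `source` within `budget_min` minutes.
    `graph` maps node -> list of (neighbor, travel_time_min) edges."""
    best = {source: 0.0}
    queue = [(0.0, source)]
    while queue:
        t, node = heapq.heappop(queue)
        if t > best.get(node, float("inf")):
            continue  # stale queue entry
        for neighbor, dt in graph[node]:
            nt = t + dt
            if nt <= budget_min and nt < best.get(neighbor, float("inf")):
                best[neighbor] = nt
                heapq.heappush(queue, (nt, neighbor))
    return best  # node -> minutes; the isochrone is this set's boundary

graph = {"A": [("B", 5), ("C", 12)], "B": [("C", 4)], "C": []}
print(reachable_within(graph, "A", 10))  # {'A': 0.0, 'B': 5.0, 'C': 9.0}
```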
Wegen, O., Trapp, M., Döllner, J. and Pasewaldt, S. 2019. Performance Evaluation and Comparison of Service-based Image Processing based on Software Rendering. 27th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG 2019) (2019).
This paper presents an approach to, and a performance evaluation of, service-based image processing using software rendering implemented with Mesa3D. Due to recent advances in cloud-computing technology (with respect to both hardware and software), as well as the increased demands of image-processing and analysis techniques, often within an ecosystem of devices, it is feasible to research and quantify the impact of service-based approaches in this domain with respect to the cost-performance ratio. To this end, we provide a performance comparison of service-based processing using GPU-accelerated and software rendering.
Stojanovic, V., Trapp, M., Richter, R. and Döllner, J. 2019. Generation of Approximate 2D and 3D Floor Plans from 3D Point Clouds. 14th International Conference on Computer Graphics Theory and Applications (GRAPP 2019) (2019).
Limberger, D., Trapp, M. and Döllner, J. 2019. In-Situ Comparison for 2.5D Treemaps. 10th International Conference on Information Visualization Theory and Applications (IVAPP 2019) (2019).
Trapp, M., Schlegel, F., Pasewaldt, S. and Döllner, J. 2019. Rendering Procedural Textures for Visualization of Thematic Data in 3D Geovirtual Environments. 10th International Conference on Information Visualization Theory and Applications (IVAPP 2019) (2019).
Semmo, A., Reimann, M., Klingbeil, M., Shekhar, S., Trapp, M. and Döllner, J. 2019. ViVid: Depicting Dynamics in Stylized Live Photos. Proceedings SIGGRAPH Appy Hour (New York, 2019), 8:1--8:2.
We present ViVid, a mobile app for iOS that empowers users to express dynamics in stylized Live Photos. This app uses state-of-the-art computer-vision techniques based on convolutional neural networks to estimate motion in the video footage that is captured together with a photo. Based on these analytics and best practices of contemporary art, photos can be stylized as a pencil drawing or cartoon look that includes design elements to visually suggest motion, such as ghosts, motion lines and halos. Its interactive parameterizations enable users to filter and art-direct composition variables, such as color, size and opacity, of the stylization process. ViVid is based on Apple's CoreML, Metal and PhotoKit APIs for optimized on-device processing. Thus, the motion estimation is scheduled to utilize the dedicated neural engine and GPU in parallel, while shading-based image stylization is able to process the video footage in real-time. This way, the app provides a unique tool for creating lively photo stylizations with ease.
Trapp, M., Pasewaldt, S. and Döllner, J. 2019. Techniques for GPU-based Color Quantization. 27th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG 2019) (2019).
This paper presents a GPU-based approach to color quantization that maps arbitrary color palettes to input images using look-up tables (LUTs). To this end, different types of LUTs, their GPU-based generation and representation, and the respective mapping implementations are described, and their run-time performance is evaluated and compared.
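As a rough illustration of the mapping step, the following numpy sketch precomputes a 3D LUT over the RGB cube and applies it the way a fragment shader would sample a 3D texture. It is a CPU analogue for intuition, not the paper's GPU implementation:

```python
import numpy as np

def build_lut(palette, bins=32):
    """Precompute a bins^3 lookup table mapping each quantized RGB cell
    to the index of its nearest palette color (Euclidean in RGB)."""
    axis = (np.arange(bins) + 0.5) / bins * 255.0                  # cell centers
    grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
    cells = grid.reshape(-1, 1, 3)                                 # (bins^3, 1, 3)
    dist = np.linalg.norm(cells - palette[None, :, :], axis=-1)    # (bins^3, P)
    return dist.argmin(axis=1).reshape(bins, bins, bins)

def quantize(image, palette, lut):
    """Map an (H, W, 3) uint8 image through the LUT, analogous to a
    per-fragment 3D texture fetch on the GPU."""
    bins = lut.shape[0]
    idx = (image.astype(np.int64) * bins) // 256                   # per-channel bin
    return palette[lut[idx[..., 0], idx[..., 1], idx[..., 2]]]

palette = np.array([[0, 0, 0], [255, 255, 255], [200, 30, 30]], dtype=np.uint8)
lut = build_lut(palette.astype(np.float64))
image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
result = quantize(image, palette, lut)
```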
Stojanovic, V., Trapp, M., Richter, R., Hagedorn, B. and Döllner, J. 2019. Semantic Enrichment of Indoor Point Clouds: An Overview of Progress towards Digital Twinning. 37th eCAADe Conference (2019).
This paper presents an approach towards the development of a service-oriented platform for semantic enrichment of indoor point clouds. It mainly focuses on integrated methods for the capture of as-is 3D point clouds using commodity mobile hardware, classification of point cloud clusters using a multiview-based method, geometric reconstruction of room boundaries, interactive 3D visualization, sensor data visualization, and tracking of spatial changes and user annotations via a secure ledger. Implementing the methods in a prototypical web-based application, we demonstrate our approach for the semantic enrichment of indoor point clouds and the generation of base data for Digital Twin representation.
Discher, S., Richter, R., Trapp, M. and Döllner, J. 2019. Service-Oriented Processing and Analysis of Massive Point Clouds in Geoinformation Management. Service-Oriented Mapping: Changing Paradigm in Map Production and Geoinformation Management. J. Döllner, Jobst, M., and Schmitz, P., eds. Springer International Publishing. 43--61.
Today, landscapes, cities, and infrastructure networks are commonly captured at regular intervals using LiDAR or image-based remote sensing technologies. The resulting point clouds, representing digital snapshots of reality, are used for a growing number of applications, such as urban development, environmental monitoring, and disaster management. Multi-temporal point clouds, i.e., 4D point clouds, result from scanning the same site at different points in time and open up new ways to automate common geoinformation management workflows, e.g., updating and maintaining existing geodata such as models of terrain, infrastructure, buildings, and vegetation. However, existing GIS are often limited by processing strategies and storage capabilities that generally do not scale for massive point clouds containing several terabytes of data. We demonstrate and discuss techniques to manage, process, analyze, and provide large-scale, distributed 4D point clouds. All techniques have been implemented in a system that follows service-oriented design principles, thus maximizing its interoperability and allowing for seamless integration into existing workflows and systems. A modular service-oriented processing pipeline is presented that uses out-of-core and GPU-based processing approaches to efficiently handle massive 4D point clouds and to reduce processing times significantly. With respect to the provision of analysis results, we present web-based visualization techniques that apply real-time rendering algorithms and suitable interaction metaphors. Hence, users can explore, inspect, and analyze arbitrarily large and dense point clouds. The approach is evaluated based on several real-world applications and datasets featuring different densities and characteristics. Results show that it enables the management, processing, analysis, and distribution of massive 4D point clouds as required by a growing number of applications and systems.
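One building block such a pipeline depends on is out-of-core processing: streaming points in fixed-size chunks so memory use is independent of total dataset size. A minimal sketch, assuming a raw binary file of float32 x/y/z triples (the file layout is illustrative):

```python
import numpy as np

def process_chunks(path, chunk_points=2_000_000, dtype=np.float32):
    """Stream an x/y/z point file in fixed-size chunks so memory use
    stays constant regardless of how many terabytes the file holds."""
    floats_per_chunk = chunk_points * 3
    with open(path, "rb") as f:
        while True:
            chunk = np.fromfile(f, dtype=dtype, count=floats_per_chunk)
            if chunk.size == 0:
                break
            yield chunk.reshape(-1, 3)

# Example: a streaming bounding-box computation over a massive dataset.
lo = np.full(3, np.inf)
hi = np.full(3, -np.inf)
for pts in process_chunks("points.xyz.bin"):
    lo = np.minimum(lo, pts.min(axis=0))
    hi = np.maximum(hi, pts.max(axis=0))
```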
Limberger, D., Scheibel, W., Döllner, J. and Trapp, M. 2019. Advanced Visual Metaphors and Techniques for Software Maps. 12th International Symposium on Visual Information Communication and Interaction (VINCI 2019) (2019).
Trapp, M., Dumke, F. and Döllner, J. 2019. Occlusion Management Techniques for the Visualization of Transportation Networks in Virtual 3D City Models. 12th International Symposium on Visual Information Communication and Interaction (VINCI 2019) (2019).
Schoedon, A., Trapp, M., Hollburg, H., Gerber, D. and Döllner, J. 2019. Web-based Visualization of Transportation Networks for Mobility Analytics. 12th International Symposium on Visual Information Communication and Interaction (VINCI 2019) (2019).
Stojanovic, V., Trapp, M., Richter, R. and Döllner, J. 2019. Classification of Indoor Point Clouds Using Multiview. 24th International ACM Conference on 3D Web Technology (Web3D 2019) (2019).
Discher, S., Richter, R. and Döllner, J. 2019. Concepts and techniques for web-based visualization and processing of massive 3D point clouds with semantics. Graphical Models. 104, (2019), 101036. DOI:https://doi.org/10.1016/j.gmod.2019.101036.
3D point cloud technology facilitates the automated and highly detailed acquisition of real-world environments such as assets, sites, and countries. We present a web-based system for the interactive exploration and inspection of arbitrarily large 3D point clouds. Our approach is able to render 3D point clouds with billions of points using spatial data structures and level-of-detail representations. Point-based rendering techniques and post-processing effects are provided to enable task-specific and data-specific filtering, e.g., based on semantics. A set of interaction techniques allows users to collaboratively work with the data (e.g., measuring distances and annotating). Additional value is provided by the system's ability to display additional, context-providing geodata alongside 3D point clouds and to integrate processing and analysis operations. We have evaluated the presented techniques in case studies with different data sets from aerial, mobile, and terrestrial acquisition, with up to 120 billion points, to show their practicality and feasibility.
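Level-of-detail rendering of this kind typically traverses a spatial data structure coarse-to-fine under a per-frame point budget. A schematic Python sketch of that idea (the octree node layout and the distance-based priority are illustrative, not the system's actual traversal):

```python
import heapq
import math
from dataclasses import dataclass, field

@dataclass
class Node:                       # illustrative octree node
    center: tuple                 # (x, y, z) of the node's bounding box
    points: list                  # the points stored at this LOD level
    children: list = field(default_factory=list)

def select_nodes(root, camera, budget):
    """Coarse-to-fine traversal: pop the node nearest the camera, spend
    its points against the frame budget, then enqueue its children
    (finer levels). Stops once the budget is exhausted."""
    selected, used, order = [], 0, 0
    heap = [(math.dist(root.center, camera), order, root)]
    while heap and used < budget:
        _, _, node = heapq.heappop(heap)
        selected.append(node)
        used += len(node.points)
        for child in node.children:
            order += 1  # tie-breaker so heapq never compares Node objects
            heapq.heappush(heap, (math.dist(child.center, camera), order, child))
    return selected
```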
Scheibel, W., Hartmann, J., Limberger, D. and Döllner, J. 2019. Visualization of Tree-structured Data using Web Service Composition. VISIGRAPP 2019: Computer Vision, Imaging and Computer Graphics Theory and Applications (2019), 227--252. DOI:https://doi.org/10.1007/978-3-030-41590-7_10.
Reimann, M., Klingbeil, M., Pasewaldt, S., Semmo, A., Trapp, M. and Döllner, J. 2019. Locally Controllable Neural Style Transfer on Mobile Devices. The Visual Computer. (2019). DOI:https://doi.org/10.1007/s00371-019-01654-1.
Stojanovic, V., Trapp, M., Richter, R. and Döllner, J. 2019. Service-Oriented Semantic Enrichment of Indoor Point Clouds using Octree-Based Multiview Classification. Graphical Models. (2019). DOI:https://doi.org/10.1016/j.gmod.2019.101039.
The use of Building Information Modeling (BIM) for Facility Management (FM) in the Operation and Maintenance (O&M) stages of the building life-cycle is intended to bridge the gap between operations and digital data, but lacks the functionality of assessing the state of the built environment due to the non-automated generation of associated semantics. 3D point clouds can be used to capture the physical state of the built environment, but also lack these associated semantics. A prototypical implementation of a service-oriented architecture for the classification of indoor point cloud scenes of office environments is presented, using multiview classification. The multiview classification approach is tested using a retrained Convolutional Neural Network (CNN) model, Inception V3. The presented approach for classifying common office furniture objects (chairs, sofas, and desks) contained in 3D point cloud scans is tested and evaluated. The results show that the presented approach can classify common office furniture up to an acceptable degree of accuracy and is suitable for quick and robust semantics approximation, based on RGB (red, green, and blue color channel) cubemap images of the octree-partitioned areas of the 3D point cloud scan. Additional methods for web-based 3D visualization, editing, and annotation of point clouds are also discussed. Using the described approach, captured scans of indoor environments can be semantically enriched using object annotations derived from multiview classification results. Furthermore, the presented approach is suited for the semantic enrichment of lower-resolution indoor point clouds acquired using commodity mobile devices.
Editor: Peters, Jörg
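The classification step lends itself to a short sketch: each octree cell is rendered into cubemap views, each view is classified by a CNN, and the per-view predictions are combined by majority vote. Below, torchvision's stock Inception V3 stands in for the paper's retrained model, and the rendering of views is assumed to happen elsewhere:

```python
import torch
from collections import Counter
from torchvision import models, transforms

# Stock pretrained Inception V3 as a stand-in for the retrained model.
model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)
model.eval()
preprocess = transforms.Compose([
    transforms.Resize(342), transforms.CenterCrop(299),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classify_cell(view_images):
    """Majority vote over per-view predictions for one octree cell.
    `view_images` are PIL images, e.g. the six cubemap faces."""
    votes = []
    with torch.no_grad():
        for img in view_images:
            logits = model(preprocess(img).unsqueeze(0))
            votes.append(int(logits.argmax(dim=1)))
    return Counter(votes).most_common(1)[0][0]
```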
Scheibel, W., Weyand, C. and Döllner, J. 2018. EvoCells – A Treemap Layout Algorithm for Evolving Tree Data. 9th International Conference on Information Visualization Theory and Applications (IVAPP) (2018).
Trapp, M., Pasewaldt, S., Dürschmid, T., Semmo, A. and Döllner, J. 2018. Teaching Image-Processing Programming for Mobile Devices: A Software Development Perspective. Proceedings Eurographics Education Papers (Delft, Netherlands, 2018).
In this paper we present a concept for a research course that teaches students image processing as a building block of mobile applications. Our goal with this course is to teach theoretical foundations, practical skills in software development, as well as scientific working principles, to qualify graduates to start as fully-fledged software developers or researchers. The course includes teaching and learning focused on the nature of small-team research and development as encountered in the creative industries dealing with computer graphics, computer animation, and game development. We discuss our curriculum design and issues in conducting undergraduate and graduate research that we have identified through four iterations of the course. Joint scientific demonstrations and publications of the students and their supervisors, as well as quantitative and qualitative evaluation by students, underline the success of the proposed concept. In particular, we observed that developing with a common software framework helps the students to jump-start their course projects, while industry software processes such as branching, coupled with a three-tier breakdown of project features, help them to structure and assess their progress.
Editors: Post, Frits and Žára, Jirí
Richter, M., Söchting, M., Semmo, A., Döllner, J. and Trapp, M. 2018. Service-based Processing and Provisioning of Image-Abstraction Techniques. Proceedings International Conference on Computer Graphics, Visualization and Computer Vision (WSCG) (Plzen, Czech Republic, 2018).
Digital images and image streams represent two major categories of media captured, delivered, and shared on the Web. Techniques for their analysis, classification, and processing are fundamental building blocks in today's digital media applications, ranging from mobile image transformation apps to professional digital production suites. Efficiently processing such digital media (1) independently of hardware requirements, (2) at different data-complexity scales, and (3) with high-quality results poses several challenges for software frameworks and hardware systems, in particular for mobile devices. With respect to these aspects, service-based architectures are a common approach to strive for. However, unlike for geodata, there is currently no standard approach for service definition, implementation, and orchestration in the domain of digital images and videos. This paper presents an approach for service-based image processing and the provisioning of processing techniques, using the example of image-abstraction techniques. The generality and feasibility of the proposed system is demonstrated by different client applications that have been implemented for the Android operating system, for Google's G-Suite Software-as-a-Service infrastructure, as well as for desktop systems. The performance of the system is discussed using the example of complex, resource-intensive image-abstraction techniques, such as watercolor rendering.
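The essence of such a service is a stateless image-in/image-out endpoint. A minimal Flask sketch, with a Gaussian blur standing in for the far more complex abstraction techniques the paper provisions (route and parameter names are invented):

```python
import io
from flask import Flask, request, send_file
from PIL import Image, ImageFilter

app = Flask(__name__)

@app.route("/abstraction/smooth", methods=["POST"])  # illustrative route
def smooth():
    """Stateless processing endpoint: image in, stylized image out.
    A Gaussian blur stands in for a real abstraction technique."""
    image = Image.open(request.files["image"].stream).convert("RGB")
    radius = float(request.form.get("radius", 3.0))
    result = image.filter(ImageFilter.GaussianBlur(radius))
    buffer = io.BytesIO()
    result.save(buffer, format="PNG")
    buffer.seek(0)
    return send_file(buffer, mimetype="image/png")

if __name__ == "__main__":
    app.run(port=8080)
```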
Besançon, L., Semmo, A., Biau, D., Frachet, B., Pineau, V., Sariali, E.H., Taouachi, R., Isenberg, T. and Dragicevic, P. 2018. Reducing Affective Responses to Surgical Images through Color Manipulation and Stylization. Proceedings of the Joint Symposium on Computational Aesthetics, Sketch-Based Interfaces and Modeling, and Non-Photorealistic Animation and Rendering (Expressive) (Victoria, BC, Canada, 2018).
We present the first empirical study on using color manipulation and stylization to make surgery images more palatable. While aversion to such images is natural, it limits many people's ability to satisfy their curiosity, educate themselves, and make informed decisions. We selected a diverse set of image processing techniques, and tested them both on surgeons and lay people. While many artistic methods were found unusable by surgeons, edge-preserving image smoothing gave good results both in terms of preserving information (as judged by surgeons) and reducing repulsiveness (as judged by lay people). Color manipulation turned out to be not as effective.
Editors: Aydın, Tunc and Sýkora, Daniel
Montesdeoca, S., Seah, H.S., Semmo, A., Bénard, P., Vergne, R., Thollot, J. and Benvenuti, D. 2018. MNPR: A Framework for Real-Time Expressive Non-Photorealistic Rendering of 3D Computer Graphics. Proceedings of the Joint Symposium on Computational Aesthetics, Sketch-Based Interfaces and Modeling, and Non-Photorealistic Animation and Rendering (Expressive) (Victoria, BC, Canada, 2018).
We propose a framework for expressive non-photorealistic rendering of 3D computer graphics. Our work focuses on enabling stylization pipelines with a wide range of control, thereby covering the interaction spectrum with real-time feedback. In addition, we introduce control semantics that allow cross-stylistic art-direction, which is demonstrated through our implemented watercolor, oil and charcoal stylizations. Our generalized control semantics and their style-specific mappings are designed to be extrapolated to other styles, by adhering to the same control scheme. We then share our implementation details by breaking down the framework and elaborating on its inner workings. Finally, we evaluate the usefulness of each level of control through a user study involving 20 experienced artists and engineers in the industry, who have collectively spent over 245 hours using our system. Our framework is implemented in Autodesk Maya and open-sourced through this publication, to facilitate adoption by artists and further development by the expressive research and development community.
Editors: Aydın, Tunc and Sýkora, Daniel
Stojanovic, V., Trapp, M., Richter, R., Hagedorn, B. and Döllner, J. 2018. Towards the Generation of Digital Twins for Facility Management Based on 3D Point Clouds. ARCOM 2018 (2018).
Advances in and adaptation of Industry 4.0 practices in Facility Management (FM) have created demand for up-to-date digitized building assets. The use of Building Information Modelling (BIM) for FM in the Operation and Maintenance (O&M) stages of the building lifecycle is intended to bridge the gap between operations and digital data, but lacks the functionality of assessing and forecasting the state of the built environment in real-time. To accommodate this, BIM data needs to be constantly updated with the current state of the built environment. However, generation of as-is BIM data for a digital representation of a building is a labor-intensive process. While some software applications offer a degree of automation for the generation of as-is BIM data, they can be impractical to use for routinely updating digital FM documentation. Current approaches for capturing the built environment using remote sensing and photogrammetry-based methods allow for the creation of 3D point clouds that can be used as basis data for a Digital Twin (DT), along with existing BIM and FM documentation. 3D point clouds themselves do not contain any semantics or specific information about the building components they represent physically, but using machine learning methods they can be enhanced with semantics that would allow for reconstruction of as-is BIM and basis DT data. This paper presents current research and development progress of a service-oriented platform for the generation of semantically rich 3D point cloud representations of indoor environments. A specific focus is placed on the reconstruction and visualization of the captured state of the built environment for increasing FM stakeholder engagement and facilitating collaboration. The preliminary results of a prototypical web-based application demonstrate the feasibility of such a platform for FM using a service-oriented paradigm.
Reimann, M., Klingbeil, M., Pasewaldt, S., Semmo, A., Trapp, M. and Döllner, J. 2018. MaeSTrO: A Mobile App for Style Transfer Orchestration using Neural Networks. Proceedings International Conference on Cyberworlds (2018), 9-16.
Mobile expressive rendering has gained increasing popularity amongst users seeking casual creativity through image stylization, and supports the development of mobile artists as a new user group. In particular, neural style transfer has advanced as a core technology to emulate characteristics of manifold artistic styles and media without deep prior knowledge of photo processing or editing. However, when it comes to creative expression, the technology still faces inherent limitations in providing low-level controls for localized image stylization, e.g., with respect to image feature semantics or the user's ideas and interests. The goal of this work is to implement and enhance state-of-the-art neural style transfer techniques, providing a generalized user interface with interactive tools for local control that facilitate a creative editing process on mobile devices. To this end, we first propose a problem characterization consisting of three goals that represent a trade-off between visual quality, run-time performance, and ease of control. We then present MaeSTrO, a mobile app for orchestration of three neural style transfer techniques using iterative, multi-style generative, and adaptive neural networks that can be locally controlled by on-screen painting metaphors to direct a semantics-based composition and perform location-based filtering. Based on first user tests, we conclude with insights showing different levels of satisfaction for the implemented techniques and user interaction design, and point out directions for future research.
Reimann, M., Semmo, A., Pasewaldt, S., Klingbeil, M. and Döllner, J. 2018. MaeSTrO: Mobile-Style Transfer Orchestration Using Adaptive Neural Networks. Proceedings SIGGRAPH Appy Hour (New York, 2018).
We present MaeSTrO, a mobile app for image stylization that empowers users to direct, edit and perform a neural style transfer with creative control. The app uses iterative style transfer, multi-style generative and adaptive networks to compute and apply flexible yet comprehensive style models of arbitrary images at run-time. Compared to other mobile applications, MaeSTrO introduces an interactive user interface that empowers users to orchestrate style transfers in a two-stage process for an individual visual expression: first, initial semantic segmentation of a style image can be complemented by on-screen painting to direct sub-styles in a spatially-aware manner. Second, semantic masks can be virtually drawn on top of a content image to adjust neural activations within local image regions, and thus direct the transfer of learned sub-styles. This way, the general feed-forward neural style transfer is evolved towards an interactive tool that is able to consider composition variables and mechanisms of general artwork production, such as color, size and location-based filtering. MaeSTrO additionally enables users to define new styles directly on a device and synthesize high-quality images based on prior segmentations via a service-based implementation of compute-intensive iterative style transfer techniques.
Reimann, M., Klingbeil, M., Pasewaldt, S., Semmo, A., Döllner, J. and Trapp, M. 2018. Approaches for Local Artistic Control of Mobile Neural Style Transfer. Proceedings of the Joint Symposium on Computational Aesthetics, Sketch-Based Interfaces and Modeling, and Non-Photorealistic Animation and Rendering (Expressive) (New York, 2018).
This work presents enhancements to state-of-the-art adaptive neural style transfer techniques, thereby providing a generalized user interface with creativity tool support for lower-level local control to facilitate demanding interactive editing on mobile devices. The approaches are implemented in a mobile app that is designed for orchestration of three neural style transfer techniques using iterative, multi-style generative, and adaptive neural networks that can be locally controlled by on-screen painting metaphors to perform location-based filtering and direct the composition. Based on first user tests, we conclude with insights showing different levels of satisfaction for the implemented techniques and user interaction design, and point out directions for future research.
Discher, S., Richter, R., Trapp, M. and Döllner, J. 2018. Service-Oriented Processing and Analysis of Massive Point Clouds in Geoinformation Management. Service Oriented Mapping: Changing Paradigm in Map Production and Geoinformation Management. J. Döllner, Jobst, M., and Schmitz, P., eds. Springer.
Editors: Döllner, Jürgen and Jobst, Markus and Schmitz, Peter
Stojanovic, V., Trapp, M., Richter, R. and Döllner, J. 2018. A Service-oriented Approach for Classifying 3D Point Clouds by Example of Office Furniture Classification. 23rd International ACM Conference on 3D Web Technology (Web3D 2018) (2018).
Thiel, F., Discher, S., Richter, R. and Döllner, J. 2018. Interaction and Locomotion Techniques for the Exploration of Massive 3D Point Clouds in VR Environments. Proceedings of ISPRS Technical Commission IV Symposium 2018. (2018).
Emerging virtual reality (VR) technology allows immersively exploring digital 3D content on standard consumer hardware. Using in-situ or remote sensing technology, such content can be automatically derived from real-world sites. External memory algorithms allow for the non-immersive exploration of the resulting 3D point clouds on a diverse set of devices with vastly different rendering capabilities. Applications for VR environments raise additional challenges for those algorithms as they are highly sensitive towards visual artifacts that are typical for point cloud depictions (i.e., overdraw and underdraw), while simultaneously requiring higher frame rates (i.e., around 90 fps instead of 30 - 60 fps). We present a rendering system for the immersive exploration and inspection of massive 3D point clouds on state-of-the-art VR devices. Based on a multi-pass rendering pipeline, we combine point-based and image-based rendering techniques to simultaneously improve the rendering performance and the visual quality. A set of interaction and locomotion techniques allows users to inspect a 3D point cloud in detail, for example by measuring distances and areas or by scaling and rotating visualized data sets. All rendering, interaction and locomotion techniques can be selected and configured dynamically, allowing to adapt the rendering system to different use cases. Tests on data sets with up to 2.6 billion points show the feasibility and scalability of our approach.
Wolf, J., Discher, S., Masopust, L., Schulz, S., Richter, R. and Döllner, J. 2018. Combined Visual Exploration of 2D Ground Radar and 3D Point Cloud Data for Road Environments. Proceedings of 3D GeoInfo 2018. (2018).
Ground-penetrating 2D radar scans are captured in road environments for examination of pavement condition and below-ground variations such as lowerings and developing pot-holes. 3D point clouds captured above ground provide a precise digital representation of the road's surface and the surrounding environment. If both data sources are captured for the same area, a combined visualization is a valuable tool for infrastructure maintenance tasks. This paper presents visualization techniques developed for the combined visual exploration of the data captured in road environments. Main challenges are the positioning of the ground radar data within the 3D environment and the reduction of occlusion for individual data sets. By projecting the measured ground radar data onto the precise trajectory of the scan, it can be displayed within the context of the 3D point cloud representation of the road environment. We show that customizable overlay, filtering, and cropping techniques enable insightful data exploration. A 3D renderer combines both data sources. To enable an inspection of areas of interest, ground radar data can be elevated above ground level for better visibility. An interactive lens approach makes it possible to visualize data sources that are currently occluded by others. The visualization techniques prove to be a valuable tool for ground-layer anomaly inspection and were evaluated on a real-world data set. The combination of 2D ground radar scans with 3D point cloud data improves data interpretation by giving context information (e.g., about manholes in the street) that can be directly accessed during evaluation.
Discher, S., Richter, R. and Döllner, J. 2018. A Scalable WebGL-based Approach for Visualizing Massive 3D Point Clouds using Semantics-Dependent Rendering Techniques. Proceedings of Web3D ’18. (2018).
3D point cloud technology facilitates the automated and highly detailed digital acquisition of real-world environments such as assets, sites, cities, and countries; the acquired 3D point clouds represent an essential category of geodata used in a variety of geoinformation applications and systems. In this paper, we present a web-based system for the interactive and collaborative exploration and inspection of arbitrarily large 3D point clouds. Our approach is based on standard WebGL on the client side and is able to render 3D point clouds with billions of points. It uses spatial data structures and level-of-detail representations to manage the 3D point cloud data and to deploy out-of-core and web-based rendering concepts. By providing functionality for both thin-client and thick-client applications, the system scales for client devices that are vastly different in computing capabilities. Different 3D point-based rendering techniques and post-processing effects are provided to enable task-specific and data-specific filtering and highlighting, e.g., based on per-point surface categories or temporal information. A set of interaction techniques allows users to collaboratively work with the data, e.g., by measuring distances and areas, by annotating, or by selecting and extracting data subsets. Additional value is provided by the system's ability to display additional, context-providing geodata alongside 3D point clouds and to integrate task-specific processing and analysis operations. We have evaluated the presented techniques and the prototype system with different data sets from aerial, mobile, and terrestrial acquisition campaigns with up to 120 billion points to show their practicality and feasibility.
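A technical aside on the out-of-core rendering concept described above: such systems commonly refine a spatial data structure only while a node's projected point spacing exceeds a screen-space error budget. The sketch below shows this traversal pattern under assumed types and thresholds; it illustrates the general approach, not the paper's actual implementation.

```typescript
// Level-of-detail selection sketch: refine an octree node only while the
// projected spacing of its points exceeds the allowed pixel error.
interface Camera {
  fovY: number;         // vertical field of view in radians
  screenHeight: number; // viewport height in pixels
}

interface OctreeNode {
  spacing: number; // world-space distance between points in this node
  children: OctreeNode[];
  distanceTo(camera: Camera): number; // world-space distance to the camera
}

function selectNodes(root: OctreeNode, camera: Camera, maxPixelError = 1.5): OctreeNode[] {
  const visible: OctreeNode[] = [];
  const stack: OctreeNode[] = [root];
  while (stack.length > 0) {
    const node = stack.pop()!;
    // Projected size (in pixels) of the node's point spacing.
    const projected =
      (node.spacing / node.distanceTo(camera)) *
      (camera.screenHeight / (2 * Math.tan(camera.fovY / 2)));
    visible.push(node);
    if (projected > maxPixelError) stack.push(...node.children); // refine further
  }
  return visible;
}
```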
Discher, S., Masopust, L., Schulz, S., Richter, R. and Döllner, J. 2018. A Point-Based and Image-Based Multi-Pass Rendering Technique for Visualizing Massive 3D Point Clouds in VR Environments. Proceedings of WSCG 2018. (2018).
Real-time rendering for 3D point clouds allows for interactively exploring and inspecting real-world assets, sites, or regions on a broad range of devices but has to cope with their vastly different computing capabilities. Virtual reality (VR) applications rely on high frame rates (i.e., around 90 fps as opposed to 30-60 fps) and show high sensitivity to any kind of visual artifacts, which are typical for 3D point cloud depictions (e.g., holey surfaces or visual clutter due to inappropriate point sizes). We present a novel rendering system that allows for an immersive, nausea-free exploration of arbitrarily large 3D point clouds on state-of-the-art VR devices such as the HTC Vive and Oculus Rift. Our approach applies several point-based and image-based rendering techniques that are combined using a multi-pass rendering pipeline. The approach does not require deriving generalized, mesh-based representations in a preprocessing step and preserves the precision and density of the raw 3D point cloud data. The presented techniques have been implemented and evaluated with massive real-world data sets from aerial, mobile, and terrestrial acquisition campaigns containing up to 2.6 billion points to show the practicability and scalability of our approach.
Vollmer, J.O., Trapp, M., Schumann, H. and Döllner, J. 2018. Hierarchical Spatial Aggregation for Level-of-Detail Visualization of 3D Thematic Data. ACM Transactions on Spatial Algorithms and Systems. 4, 3 (2018), 9:1--9:23. DOI:https://doi.org/10.1145/3234506.
Thematic maps are a common tool to visualize semantic data with a spatial reference. Combining thematic data with a geometric representation of their natural reference frame aids the viewer in gaining an overview, as well as in perceiving patterns with respect to location; however, as the amount of data for visualization continues to increase, problems such as information overload and visual clutter impede perception, requiring data aggregation and level-of-detail visualization techniques. While existing aggregation techniques for thematic data operate in a 2D reference frame (i.e., map), we present two aggregation techniques for 3D spatial and spatiotemporal data mapped onto virtual city models that hierarchically aggregate thematic data in real time during rendering to support on-the-fly and on-demand level-of-detail generation. An object-based technique performs aggregation based on scene-specific objects and their hierarchy to facilitate per-object analysis, while the scene-based technique aggregates data solely based on spatial locations, thus supporting visual analysis of data with arbitrary reference geometry. Both techniques can apply different aggregation functions (mean, minimum, and maximum) for ordinal, interval, and ratio-scaled data and can be easily extended with additional functions. Our implementation utilizes the programmable graphics pipeline and requires suitably encoded data, i.e., textures or vertex attributes. We demonstrate the application of both techniques using real-world datasets, including solar potential analyses and the propagation of pressure waves in a virtual city model.
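To make the named aggregation functions (mean, minimum, maximum) concrete, the following sketch aggregates thematic values bottom-up over an object hierarchy on the CPU; the paper's GPU-side encoding via textures or vertex attributes is omitted here, and the types are assumptions for illustration.

```typescript
type Aggregation = "mean" | "min" | "max";

interface SceneNode {
  value?: number;       // thematic value at a leaf
  children: SceneNode[];
}

// Recursively aggregates child values; note that this per-level mean is
// unweighted, whereas a leaf-weighted mean would carry leaf counts upward.
function aggregate(node: SceneNode, fn: Aggregation): number {
  const values =
    node.children.length > 0
      ? node.children.map((child) => aggregate(child, fn))
      : [node.value ?? 0];
  switch (fn) {
    case "mean":
      return values.reduce((a, b) => a + b, 0) / values.length;
    case "min":
      return Math.min(...values);
    case "max":
      return Math.max(...values);
  }
}
```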
Pasewaldt, S., Semmo, A., Klingbeil, M. and Döllner, J. 2017. Demo: Pictory - Neural Style Transfer and Editing with CoreML. Proceedings SIGGRAPH ASIA Mobile Graphics and Interactive Applications (MGIA) (Bangkok, Thailand, 2017).
This work presents advances in the design and implementation of Pictory, an iOS app for artistic neural style transfer and interactive image editing using the CoreML and Metal APIs. Pictory combines the benefits of neural style transfer, e.g., a high degree of abstraction on a global scale, with the interactivity of GPU-accelerated state-of-the-art image-based artistic rendering on a local scale. Thereby, the user is empowered to create high-resolution, abstracted renditions in a two-stage approach. First, a photo is transformed using a pre-trained convolutional neural network to obtain an intermediate stylized representation. Second, image-based artistic rendering techniques (e.g., watercolor, oil paint, or toon filtering) are used to further stylize the image. Thereby, fine-scale texture noise—introduced by the style transfer—is filtered and interactive means are provided to individually adjust the stylization effects at run-time. Based on qualitative and quantitative user studies, Pictory has been redesigned and optimized to support casual users as well as mobile artists by providing effective, yet easy-to-understand, tools to facilitate image editing at multiple levels of control.
Semmo, A., Isenberg, T. and Döllner, J. 2017. Neural Style Transfer: A Paradigm Shift for Image-based Artistic Rendering? Proceedings International Symposium on Non-Photorealistic Animation and Rendering (Los Angeles, California, 2017), 5:1--5:13.
In this meta paper we discuss image-based artistic rendering (IB-AR) based on neural style transfer (NST) and argue that, while NST may represent a paradigm shift for IB-AR, it also has to evolve as an interactive tool that considers the design aspects and mechanisms of artwork production. IB-AR received significant attention in the past decades for visual communication, covering a plethora of techniques to mimic the appeal of artistic media. Example-based rendering represents one of the most promising paradigms in IB-AR to (semi-)automatically simulate artistic media with high fidelity, but so far has been limited because it relies on pre-defined image pairs for training or informs only low-level image features for texture transfers. Advancements in deep learning have been shown to alleviate these limitations by matching content and style statistics via activations of neural network layers, thus making a generalized style transfer practicable. We categorize style transfers within the taxonomy of IB-AR, then propose a semiotic structure to derive a technical research agenda for NSTs with respect to the grand challenges of NPAR. We finally discuss the potentials of NSTs, thereby identifying applications such as casual creativity and art production.
Editors: Winnemöller, Holger and Bartram, Lyn
Hahn, S., Bethge, J. and Döllner, J. 2017. Relative Direction Change: A Topology-based Metric for Layout Stability in Treemaps. Proceedings of the 8th International Conference of Information Visualization Theory and Applications (IVAPP 2017) (2017).
This paper presents a topology-based metric for layout stability in treemaps—the Relative Direction Change (RDC). The presented metric considers the adjacency and arrangement of single shapes in a treemap, and allows for a rotation-invariant description of layout changes between two snapshots of a dataset depicted with treemaps. A user study was conducted that shows the applicability of the Relative Direction Change in comparison and addition to established layout metrics, such as Average Distance Change (ADC) and Average Aspect Ratio (AAR), with respect to human perception of treemaps. This work contributes to the establishment of a more precise model for the replicable and reliable comparison of treemap layout algorithms.
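For a concrete picture of a direction-change measure, the sketch below compares the angle between each pair of item centers before and after a layout change and averages the angular differences. This is one plausible reading of such a topology-oriented stability metric, not the paper's exact definition of RDC; both layouts are assumed to list the same items in the same order.

```typescript
interface Rect { x: number; y: number; w: number; h: number; }

const center = (r: Rect) => ({ cx: r.x + r.w / 2, cy: r.y + r.h / 2 });

// Average change of pairwise directions between two layout snapshots.
function directionChange(before: Rect[], after: Rect[]): number {
  let total = 0;
  let pairs = 0;
  for (let i = 0; i < before.length; i++) {
    for (let j = i + 1; j < before.length; j++) {
      const a0 = center(before[i]), b0 = center(before[j]);
      const a1 = center(after[i]), b1 = center(after[j]);
      const angle0 = Math.atan2(b0.cy - a0.cy, b0.cx - a0.cx);
      const angle1 = Math.atan2(b1.cy - a1.cy, b1.cx - a1.cx);
      // Wrap the difference into [0, PI] so opposite rotations compare equally.
      let d = Math.abs(angle1 - angle0) % (2 * Math.PI);
      if (d > Math.PI) d = 2 * Math.PI - d;
      total += d;
      pairs++;
    }
  }
  return pairs > 0 ? total / pairs : 0;
}
```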
Limberger, D., Pursche, M., Klimke, J. and Döllner, J. 2017. Progressive High-Quality Rendering for Interactive Information Cartography using WebGL. Proceedings of the 22nd International Conference on 3D Web Technology (2017), 4.
Information cartography services provided via web-based clients using real-time rendering do not always necessitate a continuous stream of updates in the visual display. This paper shows how progressive rendering by means of multi-frame sampling and frame accumulation can introduce high-quality visual effects using robust and straightforward implementations. To this end, (1) a suitable rendering loop is described, (2) WebGL limitations are discussed, and (3) an adaptation of THREE.js featuring progressive anti-aliasing, screen-space ambient occlusion, and depth of field is detailed. Furthermore, sampling strategies are discussed and rendering performance is evaluated, emphasizing the low per-frame costs of this approach.
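The frame-accumulation step at the core of multi-frame sampling reduces to a running average: each new frame, rendered with a slightly jittered sample offset, is blended into the accumulation buffer with weight 1/n. The sketch below shows the blend on the CPU for clarity; in WebGL it would typically run in a fragment shader, and the buffer handling here is an assumption.

```typescript
// Blend frame number `frameIndex` (0-based) into the accumulation buffer so
// that after n frames the buffer holds the average of all n frames.
function accumulate(accum: Float32Array, frame: Float32Array, frameIndex: number): void {
  const weight = 1 / (frameIndex + 1);
  for (let i = 0; i < accum.length; i++) {
    accum[i] = accum[i] * (1 - weight) + frame[i] * weight;
  }
}
```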
Hahn, S. and Döllner, J. 2017. Hybrid-Treemap Layouting. Proceedings of EuroVis 2017 - Short Papers (2017).
This paper presents an approach for hybrid treemaps, which applies and combines several different layout principles within a single treemap, in contrast to traditional treemap variants based on a single layout concept. To this end, we analyze shortcomings of state-of-the-art treemap algorithms such as Moore, Voronoi, and Strip layouts. Based on a number of identified edge cases, we propose a combination of these different layout algorithms, individually selected for and applied on each sub-hierarchy of the given treemap data. The selection decision is based on the number of items to be laid out as well as the aspect ratio of the containing visual elements. Furthermore, a layout quality score based on existing treemap layout metrics (e.g., average distance change, relative direction change, average aspect ratio) has been used to evaluate the results of the proposed hybrid layout algorithm and to demonstrate its usefulness applied on representative hierarchical data sets.
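The per-sub-hierarchy selection described above can be pictured as a small decision function over item count and container aspect ratio. The sketch below is purely illustrative; the thresholds and the precise decision criteria are assumptions, not the paper's tuned values.

```typescript
type Layout = "strip" | "moore" | "voronoi";

// Pick a layout per sub-hierarchy; containerAspect = width / height.
function chooseLayout(itemCount: number, containerAspect: number): Layout {
  if (itemCount < 8) return "strip"; // few items: a simple strip layout suffices
  if (containerAspect > 0.5 && containerAspect < 2) return "moore"; // compact container
  return "voronoi"; // elongated containers or many items
}
```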
Limberger, D., Scheibel, W., Trapp, M. and Döllner, J. 2017. Mixed-Projection Treemaps: A Novel Approach Mixing 2D and 2.5D Treemaps. Proceedings of the International Conference on Information Visualization 2017 (2017).
2D treemaps are a space-filling visualization technique that facilitate exploration of non-spatial, attributed, tree-structured data using the visual variables size and color. In extension thereto, 2.5D treemaps introduce height for additional information display. This extension entails challenges such as increased rendering effort, occlusion, or the need for navigation techniques that counterbalance the advantages of 2D treemaps to a certain degree. This paper presents a novel technique for combining 2D and 2.5D treemaps using multi-perspective views to leverage the advantages of both treemap types. It enables a new form of overview+detail visualization for complex treemaps and contributes new concepts for real-time rendering of and interaction with mixed-projection treemaps. The technique operates by tilting up inner nodes using affine transformations and animated state transitions. The mixed use of orthogonal and perspective projections is discussed and application examples that facilitate exploration of multi-variate data and benefit from the reduced interaction overhead are demonstrated.
Semmo, A., Trapp, M., Döllner, J. and Klingbeil, M. 2017. Pictory: Combining Neural Style Transfer and Image Filtering. Proceedings SIGGRAPH Appy Hour (Los Angeles, California, 2017), 5:1--5:2.
This work presents Pictory, a mobile app that empowers users to transform photos into artistic renditions by using a combination of neural style transfer with user-controlled state-of-the-art nonlinear image filtering. The combined approach features merits of both artistic rendering paradigms: deep convolutional neural networks can be used to transfer style characteristics at a global scale, while image filtering is able to simulate phenomena of artistic media at a local scale. Thereby, the proposed app implements an interactive two-stage process: first, style presets based on pre-trained feed-forward neural networks are applied using GPU-accelerated compute shaders to obtain initial results. Second, the intermediate output is stylized via oil paint, watercolor, or toon filtering to inject characteristics of traditional painting media such as pigment dispersion (watercolor) as well as soft color blendings (oil paint), and to filter artifacts such as fine-scale noise. Finally, on-screen painting facilitates pixel-precise creative control over the filtering stage, e.g., to vary the brush and color transfer, while joint bilateral upsampling enables outputs at full image resolution suited for printing on real canvas.
Dürschmid, T., Söchting, M., Semmo, A., Trapp, M. and Döllner, J. 2017. ProsumerFX: Mobile Design of Image Stylization Components. Proceedings SIGGRAPH ASIA Mobile Graphics and Interactive Applications (MGIA) (Bangkok, Thailand, 2017).
With the continuous advances of mobile graphics hardware, high-quality image stylization—e.g., based on image filtering, stroke-based rendering, and neural style transfer—is becoming feasible and increasingly used in casual creativity apps. The creative expression facilitated by these mobile apps, however, is typically limited with respect to the usage and application of pre-defined visual styles, which ultimately do not include their design and composition—an inherent requirement of prosumers. We present ProsumerFX, a GPU-based app that enables users to interactively design parameterizable image stylization components on-device by reusing building blocks of image processing effects and pipelines. Furthermore, the presentation of the effects can be customized by modifying the icons, names, and order of parameters and presets. Thereby, the customized visual styles are defined as platform-independent effects and can be shared with other users via a web-based platform and database. Together with the presented mobile app, this system approach supports collaborative work on designing visual styles, including their rapid prototyping, A/B testing, publishing, and distribution. Thus, it satisfies the needs for creative expression of both professionals and the general public.
Bethge, J., Hahn, S. and Döllner, J. 2017. Improving Layout Quality by Mixing Treemap-Layouts Based on Data-Change Characteristics. Vision, Modeling & Visualization (2017).
This paper presents a hybrid treemap layout approach that optimizes layout-quality metrics by combining state-of-the-art treemap layout algorithms. It utilizes machine learning to predict those metrics based on data metrics describing the characteristics and changes of the dataset. For this, the proposed approach uses a neural network which is trained on artificially generated datasets containing a total of 15.8 million samples. The resulting model is integrated into an approach called Smart-Layouting. This approach is evaluated on real-world data from 100 publicly available software repositories. Compared to other state-of-the-art treemap algorithms it reaches an overall better result. Additionally, this approach can be customized to an end user's needs. The customization allows for specifying weights for the importance of each layout-quality metric. The results indicate that the algorithm is able to adapt successfully towards a given set of weights.
Editors: Hullin, Matthias and Klein, Reinhard and Schultz, Thomas and Yao, Angela
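The described customization reduces to a weighted combination of predicted layout-quality metrics. A minimal sketch follows, assuming three of the metrics named in related entries and that lower values are better; the metric names, normalization, and weights are assumptions for illustration.

```typescript
interface LayoutMetrics {
  distanceChange: number;     // e.g., average distance change (normalized)
  directionChange: number;    // e.g., relative direction change (normalized)
  aspectRatioPenalty: number; // deviation from ideal aspect ratios
}

// Weighted score: the best candidate layout minimizes this value.
function layoutScore(predicted: LayoutMetrics, weights: LayoutMetrics): number {
  return (
    predicted.distanceChange * weights.distanceChange +
    predicted.directionChange * weights.directionChange +
    predicted.aspectRatioPenalty * weights.aspectRatioPenalty
  );
}

// Example: a user who values layout stability over aspect ratio.
const score = layoutScore(
  { distanceChange: 0.2, directionChange: 0.1, aspectRatioPenalty: 0.4 },
  { distanceChange: 0.5, directionChange: 0.3, aspectRatioPenalty: 0.2 },
);
```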
Klingbeil, M., Pasewaldt, S., Semmo, A. and Döllner, J. 2017. Challenges in User Experience Design of Image Filtering Apps. Proceedings SIGGRAPH ASIA Mobile Graphics and Interactive Applications (MGIA) (Bangkok, Thailand, 2017).
Photo filtering apps successfully deliver image-based stylization techniques to a broad audience, in particular in the ubiquitous domain (e.g., smartphones, tablet computers). Interacting with these inherently complex techniques has so far mostly been approached in two different ways: (1) by exposing many (technical) parameters to the user, resulting in a professional application that typically requires expert domain knowledge, or (2) by hiding the complexity via presets that only allow the application of filters but prevent creative expression thereon. In this work, we outline challenges of and present approaches for providing interactive image filtering on mobile devices, thereby focusing on how to make them usable for people in their daily life. This is discussed by the example of BeCasso, a user-centric app for assisted image stylization that targets two user groups: mobile artists and users seeking casual creativity. Through user research and qualitative and quantitative user studies, we identify and outline usability issues that were shown to prevent both user groups from reaching their objectives when using the app. On the one hand, user-group targeting has been improved by an optimized user experience design. On the other hand, multiple levels of control have been implemented to ease the interaction and hide the underlying complex technical parameters. Evaluations underline that the presented approach can increase the usability of complex image stylization techniques for mobile apps.
Limberger, D., Scheibel, W., Hahn, S. and Döllner, J. 2017. Reducing Visual Complexity in Software Maps using Importance-based Aggregation of Nodes. Proceedings of the 8th International Conference on Information Visualization Theory and Applications.
Depicting massive software system data using software maps can result in visual clutter and increased cognitive load. This paper introduces an adaptive level-of-detail (LoD) technique that uses scoring for interactive aggregation on a per-node basis. The scoring approximates importance by degree-of-interest measures as well as screen and user-interaction scores. The technique adheres to established aggregation guidelines and was evaluated by means of two user studies. The first user study investigates task completion time in visual search. The second evaluates the readability of the presented nesting-level contouring for aggregates. With the adaptive LoD technique, software maps allow for multi-resolution depictions of software system information. It facilitates efficient identification of important nodes and allows for additional annotation.
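To make the scoring-driven aggregation concrete, the following sketch collapses any subtree whose importance falls below a threshold; the weights, score names, and linear combination are illustrative assumptions, not the paper's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    doi: float          # degree-of-interest measure in [0, 1] (assumed)
    screen: float       # screen-space score in [0, 1] (assumed)
    interaction: float  # user-interaction score in [0, 1] (assumed)
    children: list = field(default_factory=list)

def importance(n, w=(0.5, 0.3, 0.2)):
    # Hypothetical weighted combination of the three scores.
    return w[0] * n.doi + w[1] * n.screen + w[2] * n.interaction

def aggregate(n, threshold):
    """Return a copy of the hierarchy in which unimportant subtrees
    are collapsed into single aggregate nodes (adaptive LoD)."""
    if n.children and importance(n) < threshold:
        return Node(n.name + " (aggregate)", n.doi, n.screen, n.interaction)
    return Node(n.name, n.doi, n.screen, n.interaction,
                [aggregate(c, threshold) for c in n.children])
```

Raising the threshold yields coarser depictions; an interactive client would recompute the screen and interaction scores per frame or per user action.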
Döllner, J. 2017. Vom Fotorealismus zum Nichtfotorealismus. Hands on: Kunstgeschichte: Methodik und Unterrichtsbeispiele der gestaltungspraktischen Kunstrezeption. J. Penzel, ed. copaed Verlag.
Döllner, J. 2017. Hinter den Bildern von Andreas Schiller. Andreas Schiller: global backup II. J. Penzel, ed. Prestel Verlag. 75-79.
Scheibel, W., Buschmann, S., Trapp, M. and Döllner, J. 2017. Attributed Vertex Clouds. GPU Zen. W. Engel and Oat, C., eds.
In today's computer graphics applications, large 3D scenes are rendered which consist of polygonal geometries such as triangle meshes. Using state-of-the-art techniques, this geometry is often represented on the GPU using vertex and index buffers, as well as additional auxiliary data such as textures or uniform buffers. For polygonal meshes of arbitrary complexity, the described approach is indispensable. However, there are several types of simpler geometries (e.g., cuboids, spheres, tubes, or splats) that can be generated procedurally. We present an efficient data representation and rendering concept for such geometries, denoted as attributed vertex clouds (AVCs). Using this approach, geometry is generated on the GPU during execution of the programmable rendering pipeline. Each vertex is used as the argument for a function that procedurally generates the target geometry. This function is called a transfer function, and it is implemented using shader programs and therefore executed as part of the rendering process. This approach allows for compact geometry representation and results in reduced memory footprints in comparison to traditional representations. By shifting geometry generation to the GPU, the resulting volatile geometry can be controlled flexibly, i.e., its position, parameterization, and even the type of geometry can be modified without requiring a re-upload of geometry data. An evaluation suggests improved rendering times and reduced memory transmission through the rendering pipeline.
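The transfer-function concept can be illustrated with a CPU-side sketch; in the actual technique this expansion runs in shader programs during rendering, and the attribute layout below (a center and an extent per vertex) is a simplified assumption:

```python
import itertools
import numpy as np

def cuboid_transfer_function(center, extent):
    """Expand one attributed vertex into the 8 corners of a cuboid.
    On the GPU, this per-vertex expansion would be performed by shaders."""
    offsets = np.array(list(itertools.product((-0.5, 0.5), repeat=3)))
    return center + offsets * extent  # (8, 3) corner positions

# An attributed vertex cloud: one compact record per target primitive.
avc = [
    (np.array([0.0, 0.0, 0.0]), np.array([1.0, 2.0, 1.0])),
    (np.array([3.0, 0.0, 0.0]), np.array([0.5, 0.5, 0.5])),
]
cuboids = [cuboid_transfer_function(c, e) for c, e in avc]
```

Because only the compact attributes reside in GPU memory, changing a center or extent modifies the resulting geometry without re-uploading expanded vertex data.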
Dürschmid, T., Trapp, M. and Döllner, J. 2017. Towards Architectural Styles for Android App Software Product Lines. 2017 IEEE/ACM 4th International Conference on Mobile Software Engineering and Systems (MOBILESoft) (2017).
Limberger, D., Scheibel, W., Trapp, M. and Döllner, J. 2017. Mixed-Projection Treemaps: A Novel Approach Mixing 2D and 2.5D Treemaps. Proceedings of the International Conference on Information Visualization 2017 (2017).
Stojanovic, V., Richter, R., Döllner, J. and Trapp, M. 2017. Comparative Visualization of BIM Geometry and Corresponding Point Clouds. International Journal of Sustainable Development and Planning. 13, 1 (2017). DOI:https://doi.org/10.2495/SDP-V13-N1-12-23.
Hagedorn, B. 2016. Konzepte und Techniken zur servicebasierten Visualisierung von geovirtuellen 3D-Umgebungen.
In general, geodata and georeferenced data are distributed, heterogeneous in content and form, comprise very large data volumes, and must be integrated into various IT information systems and used in various application contexts. This thesis therefore focuses on concepts and techniques for the integration, visualization, analysis, provision, and use of 2D and 3D geodata as well as georeferenced data. It pursues an approach that, on the one hand, uses virtual 3D environments as a conceptual and technical framework and, on the other hand, is based on service-oriented software architectures and geo-standards. The presented concepts and methods thus represent key building blocks for realizing novel IT solutions and applications for 3D geoinformation, e.g., as components of spatial data infrastructures. In the area of service-based 3D geovisualization, this thesis describes how virtual 3D city models can be used to integrate heterogeneous and distributed geodata sources. To this end, integration requirements are identified, a concept for integration at the data level and at the visualization level is designed, and its implementation is described and demonstrated using complex 3D building information models as an example. In the area of service-based, image-based 3D portrayal services, the Web View Service (WVS) is designed and developed as a specialized software service for the visualization of, interaction with, and analysis of geovirtual 3D environments. The core concepts of this service are server-side data integration and management as well as server-side image generation. With this consistently server-side approach, very large amounts of 3D geodata can be provided even on devices that lack sufficient memory and computing power for storing, processing, and rendering 3D models. The developed WVS thus enables interactive exploration and analysis of 3D geodata on tablet PCs and in web browsers, among others. The practical use of the WVS is demonstrated by means of a reference implementation and a client application. In the area of the composition of Web View Services, it is investigated how the WVS can be used as a building block of a complex, distributed visualization and rendering pipeline. By composing the WVS with other portrayal and processing services, complex rendering effects can be achieved, or individual 3D objects can be subsequently integrated into a 3D view. To this end, a concept for service-based, depth-image-based image composition is described, implemented, and demonstrated using the occlusion-free annotation of 3D views as an example. In the area of interaction with Web View Services, this thesis provides foundations, a concept, and an implementation for intelligent, assisting, and automated interaction and navigation techniques that are based on the affordances of 3D scene objects as well as on sketch- and gesture-based input of user intentions. These inputs are evaluated and interpreted with respect to their shape and the semantics of the 3D scene objects, and are then translated into application-specific navigation commands from which semi-automatic camera movements are derived.
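The depth-image-based composition mentioned above can be sketched as a per-pixel depth test over server-rendered layers; the array layout below is an assumption for illustration:

```python
import numpy as np

def compose(color_a, depth_a, color_b, depth_b):
    """Composite two RGB+depth layers: per pixel, keep the closer layer.
    color_*: (H, W, 3) arrays; depth_*: (H, W) arrays, smaller = closer."""
    a_closer = depth_a <= depth_b
    color = np.where(a_closer[..., None], color_a, color_b)
    depth = np.minimum(depth_a, depth_b)  # composed depth for further passes
    return color, depth
```

Chaining this operation lets a client combine views from several portrayal services, e.g., to insert a single 3D object or an annotation into an already rendered 3D view with correct occlusion.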
Semmo, A. 2016. Design and Implementation of Non-Photorealistic Rendering Techniques for 3D Geospatial Data.
Geospatial data has become a natural part of a growing number of information systems and services in the economy, society, and people's personal lives. In particular, virtual 3D city and landscape models constitute valuable information sources within a wide variety of applications such as urban planning, navigation, tourist information, and disaster management. Today, these models are often visualized in detail to provide realistic imagery. However, a photorealistic rendering does not automatically lead to high image quality with respect to an effective information transfer, which requires important or prioritized information to be interactively highlighted in a context-dependent manner. Approaches in non-photorealistic rendering particularly consider a user's task and camera perspective when attempting optimal expression, recognition, and communication of important or prioritized information. However, the design and implementation of non-photorealistic rendering techniques for 3D geospatial data pose a number of challenges, especially when inherently complex geometry, appearance, and thematic data must be processed interactively. Hence, a promising technical foundation is established by the programmable and parallel computing architecture of graphics processing units. This thesis proposes non-photorealistic rendering techniques that enable both the computation and selection of the abstraction level of 3D geospatial model contents according to user interaction and dynamically changing thematic information. To achieve this goal, the techniques integrate with hardware-accelerated rendering pipelines using shader technologies of graphics processing units for real-time image synthesis. Unlike photorealistic rendering, the techniques employ principles of artistic rendering, cartographic generalization, and 3D semiotics to synthesize illustrative renditions of geospatial feature types such as water surfaces, buildings, and infrastructure networks. In addition, this thesis contributes a generic system that enables the integration of different graphic styles (photorealistic and non-photorealistic) and provides seamless transitions between them according to user tasks, camera view, and image resolution. Evaluations of the proposed techniques have demonstrated their significance to the field of geospatial information visualization, including topics such as spatial perception, cognition, and mapping. In addition, the applications in illustrative and focus+context visualization have reflected their potential impact on optimizing the information transfer regarding factors such as cognitive load, integration of non-realistic information, visualization of uncertainty, and visualization on small displays.
Semmo, A., Trapp, M., Pasewaldt, S. and Döllner, J. 2016. Interactive Oil Paint Filtering On Mobile Devices. Expressive 2016 - Posters, Artworks, and Bridging Papers (2016).
Image stylization enjoys a growing popularity on mobile devices to foster casual creativity. However, the implementation and provision of high-quality image filters for artistic rendering still face the inherent limitations of mobile graphics hardware, such as computing power and memory resources. This work presents a mobile implementation of a filter that transforms images into an oil paint look, thereby highlighting concepts and techniques on how to perform multi-stage nonlinear image filtering on mobile devices. The proposed implementation is based on OpenGL ES and the OpenGL ES shading language, and supports on-screen painting to interactively adjust the appearance in local image regions, e.g., to vary the level of abstraction, brush, and stroke direction. Evaluations of the implementation indicate interactive performance and results of similar aesthetic quality to the original desktop variant.
Semmo, A., Döllner, J. and Schlegel, F. 2016. BeCasso: Image Stylization by Interactive Oil Paint Filtering on Mobile Devices. Proceedings ACM SIGGRAPH Appy Hour (2016).
BeCasso is a mobile app that enables users to transform photos into an oil paint look that is inspired by traditional painting elements. In contrast to stroke-based approaches, the app uses state-of-the-art nonlinear image filtering techniques based on smoothed structure information to interactively synthesize oil paint renderings with soft color transitions. BeCasso empowers users to easily create aesthetic oil paint renderings by implementing a two-fold strategy. First, it provides parameter presets that may serve as a starting point for a custom stylization based on global parameter adjustments. Second, it introduces a novel interaction approach that operates within the parameter spaces of the stylization effect to facilitate creative control over the visual output: on-screen painting enables users to locally adjust the appearance in image regions, e.g., to vary the level of abstraction, brush and stroke direction. This way, the app provides tools for both higher-level interaction and low-level control [Isenberg 2016] to serve the different needs of non-experts and digital artists. References: Isenberg, T. 2016. Interactive NPAR: What Type of Tools Should We Create? In Proc. NPAR, The Eurographics Association, Goslar, Germany, 89–96
Semmo, A., Trapp, M., Dürschmid, T., Döllner, J. and Pasewaldt, S. 2016. Interactive Multi-scale Oil Paint Filtering on Mobile Devices. Proceedings ACM SIGGRAPH Posters (2016).
This work presents an interactive mobile implementation of a filter that transforms images into an oil paint look. To this end, a multi-scale approach is introduced that processes image pyramids and uses flow-based joint bilateral upsampling to achieve deliberate levels of abstraction at multiple scales and interactive frame rates. The approach facilitates the implementation of interactive tools that adjust the appearance of filtering effects at run-time, which is demonstrated by an on-screen painting interface for per-pixel parameterization that fosters the casual creativity of non-artists.
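The upsampling step can be sketched as plain joint bilateral upsampling, leaving out the flow alignment the paper adds: a low-resolution filter result is upsampled with weights that combine spatial distance and similarity in a full-resolution guide image. Parameter names and defaults are illustrative, and a single-channel guide is assumed for brevity:

```python
import numpy as np

def joint_bilateral_upsample(low, guide, scale, sigma_s=4.0, sigma_r=0.1, r=2):
    """Upsample `low` (h, w) to the size of `guide` (H, W), edge-aware.
    Simplified reference version; real-time variants run as shaders."""
    H, W = guide.shape
    h, w = low.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            cy, cx = int(y / scale), int(x / scale)  # nearest low-res sample
            acc = wsum = 0.0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    py, px = cy + dy, cx + dx
                    if 0 <= py < h and 0 <= px < w:
                        gy = min(int(py * scale), H - 1)  # guide-space position
                        gx = min(int(px * scale), W - 1)
                        ws = np.exp(-((y - gy)**2 + (x - gx)**2) / (2 * sigma_s**2))
                        wr = np.exp(-(guide[y, x] - guide[gy, gx])**2 / (2 * sigma_r**2))
                        acc += ws * wr * low[py, px]
                        wsum += ws * wr
            out[y, x] = acc / max(wsum, 1e-8)
    return out
```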
Semmo, A., Dürschmid, T., Trapp, M., Klingbeil, M., Döllner, J. and Pasewaldt, S. 2016. Interactive Image Filtering with Multiple Levels-of-Control on Mobile Devices. Proceedings ACM SIGGRAPH Asia Symposium on Mobile Graphics and Interactive Applications (2016).
With the continuous development of mobile graphics hardware, interactive high-quality image stylization based on nonlinear filtering is becoming feasible and increasingly used in casual creativity apps. However, these apps often only offer high-level controls to parameterize image filters and generally lack support for low-level (artistic) control, thus automating art creation rather than assisting it. This work presents a GPU-based framework that enables parameterizing image filters at three levels of control: (1) presets, followed by (2) global parameter adjustments, can be interactively refined by (3) complementary on-screen painting that operates within the filters' parameter spaces for local adjustments. The framework provides a modular XML-based effect scheme to effectively build complex image processing chains, using these interactive filters as building blocks, that can be efficiently processed on mobile devices. Thereby, global and local parameterizations are directed with higher-level algorithmic support to ease the interactive editing process, which is demonstrated by state-of-the-art stylization effects, such as oil paint filtering and watercolor rendering.
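The three levels of control can be modeled as successive parameter refinement: a preset supplies defaults, global adjustments override them uniformly, and painted masks override them per pixel. All names and values below are illustrative, not the framework's actual effect scheme:

```python
import numpy as np

# Level 1: presets (hypothetical parameters of a stylization effect).
PRESETS = {"oil_soft": {"abstraction": 0.7, "brush_size": 5.0}}

def parameter_fields(shape, preset, global_overrides=None, painted=None):
    """Build per-pixel parameter maps: preset -> global -> local painting."""
    params = dict(PRESETS[preset])
    params.update(global_overrides or {})          # Level 2: global sliders
    fields = {k: np.full(shape, v) for k, v in params.items()}
    for name, (mask, value) in (painted or {}).items():
        # Level 3: on-screen painting; mask in [0, 1] blends toward `value`.
        fields[name] = fields[name] * (1.0 - mask) + value * mask
    return fields

mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0  # a painted region
fields = parameter_fields((4, 4), "oil_soft",
                          {"brush_size": 6.0},
                          {"abstraction": (mask, 0.2)})
```

A filter stage would then read, e.g., fields["abstraction"] per pixel instead of a single global value.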
Schoedon, A., Trapp, M., Hollburg, H. and Döllner, J. 2016. Interactive Web-based Visualization for Accessibility Mapping of Transportation Networks. Proceedings of EuroVis 2016 - Short Papers (2016).
Accessibility is a fundamental aspect of transportation, routing, and leisure-activity planning in modern cities. In this context, interactive web-based accessibility-map visualization techniques and systems are important tools for the provision, exploration, analysis, and assessment of multi-modal, location-based travel time data and routing information. To be effective, such interactive visualization techniques demand flexible mappings with respect to user-adjustable parameters such as maximum travel times, the types of transportation used, or the color schemes applied. However, traditional approaches for web-based visualization of accessibility maps do not allow this degree of parametrization without significant latencies introduced by the required data processing and transmission between the routing server and the visualization client. This paper presents a novel web-based visualization technique that allows for efficient client-side mapping and rendering of accessibility data onto transportation networks using WebGL and the OpenGL transmission format. A performance evaluation and comparison shows the superior performance of the approach over alternative implementations.
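The client-side mapping can be reduced to a lookup that re-runs instantly when a parameter changes, so adjusting, e.g., the maximum travel time needs no server round-trip; the color ramp below is a stand-in scheme:

```python
import numpy as np

def travel_time_to_color(times, t_max):
    """Map per-edge travel times (seconds) to RGB colors on the client.
    `times`: (N,) array, one entry per transportation-network edge."""
    t = np.clip(np.asarray(times, float) / t_max, 0.0, 1.0)
    colors = np.stack([t, 1.0 - t, np.zeros_like(t)], axis=-1)  # green -> red
    colors[np.asarray(times) > t_max] = (0.6, 0.6, 0.6)  # beyond the budget
    return colors
```

In the WebGL setting, the same mapping would live in a shader, with the travel times stored once as vertex attributes of the network geometry.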
Vollmer, J.O., Trapp, M. and Döllner, J. 2016. Interactive GPU-based Image Deformation for Mobile Devices. Computer Graphics and Visual Computing (CGVC) (2016).
Interactive image deformation is an important feature of modern image processing pipelines. It is often used to create caricatures and animations from input images, especially photos. State-of-the-art image deformation techniques are based on transforming the vertices of a mesh, which is textured by the input image, using affine transformations such as translation and scaling. However, the resulting visual quality of the output image depends on the geometric resolution of the mesh. Performing these transformations on the CPU often further inhibits performance and quality. This is especially problematic on mobile devices, where the limited computational power reduces the maximum achievable quality. To overcome these issues, we propose the concept of an intermediate deformation buffer that stores deformation information at a resolution independent of the mesh resolution. This allows the combination of a high-resolution buffer with a low-resolution mesh for interactive preview, as well as with a high-resolution mesh to export the final image. Further, we present a fully GPU-based implementation of this concept, taking advantage of modern OpenGL ES features such as compute shaders.
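The intermediate deformation buffer can be sketched as a fixed-resolution grid of 2D offsets that meshes of any resolution sample bilinearly, so the same buffer drives both a coarse preview mesh and a fine export mesh. A simplified CPU version of the idea (the paper's implementation uses OpenGL ES compute shaders):

```python
import numpy as np

def sample_bilinear(buf, u, v):
    """Bilinearly sample an (H, W, 2) offset buffer at normalized (u, v)."""
    H, W = buf.shape[:2]
    x, y = u * (W - 1), v * (H - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    fx, fy = x - x0, y - y0
    top = buf[y0, x0] * (1 - fx) + buf[y0, x1] * fx
    bot = buf[y1, x0] * (1 - fx) + buf[y1, x1] * fx
    return top * (1 - fy) + bot * fy

def deform_mesh(vertices_uv, deformation_buffer):
    """Displace mesh vertices (N, 2) in [0, 1]^2 by the stored deformation.
    Works unchanged for low-res preview and high-res export meshes."""
    return np.array([uv + sample_bilinear(deformation_buffer, *uv)
                     for uv in vertices_uv])
```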
Limberger, D., Fiedler, C., Hahn, S., Trapp, M. and Döllner, J. 2016. Evaluation of Sketchiness as a Visual Variable for 2.5D Treemaps. Proceedings of the 20th International Conference of Information Visualization (IV'16) (2016).
Interactive 2.5D treemaps serve as an effective tool for the visualization of attributed hierarchies, enabling the exploration of non-spatial, multi-variate, hierarchical data. In this paper, the suitability of sketchiness as a visual variable, e.g., for uncertainty, is evaluated. To this end, a design space for sketchy rendering in 2.5D and integration details for real-time applications are presented. The results of three user studies indicate that sketchiness is a promising candidate for a visual variable that can be used independently and in addition to others, e.g., color and height.
Discher, S., Richter, R. and Döllner, J. 2016. Interactive and View-Dependent See-Through Lenses for Massive 3D Point Clouds. Advances in 3D Geoinformation (2016).
Pasewaldt, S., Semmo, A., Döllner, J. and Schlegel, F. 2016. BeCasso: Artistic Image Processing and Editing on Mobile Devices. Proceedings ACM SIGGRAPH Asia Symposium on Mobile Graphics and Interactive Applications (Demo) (2016).
BeCasso is a mobile app that enables users to transform photos into high-quality, high-resolution non-photorealistic renditions, such as oil and watercolor paintings, cartoons, and colored pencil drawings, which are inspired by real-world paintings or drawing techniques. In contrast to neural-network and physically-based approaches, the app employs state-of-the-art nonlinear image filtering. For example, oil paint and cartoon effects are based on smoothed structure information to interactively synthesize renderings with soft color transitions. BeCasso empowers users to easily create aesthetic renderings by implementing a two-fold strategy: First, it provides parameter presets that may serve as a starting point for a custom stylization based on global parameter adjustments. Thereby, users can obtain initial renditions that may be fine-tuned afterwards. Second, it enables local style adjustments: using on-screen painting metaphors, users are able to locally adjust different stylization features, e.g., to vary the level of abstraction, pen, brush and stroke direction, or the contour lines. In this way, the app provides tools for both higher-level interaction and low-level control [Isenberg 2016] to serve the different needs of non-experts and digital artists. References: Isenberg, T. 2016. Interactive NPAR: What Type of Tools Should We Create? In Proc. NPAR, The Eurographics Association, Goslar, Germany, 89–96
Scheibel, W., Trapp, M. and Döllner, J. 2016. Interactive Revision Exploration using Small Multiples of Software Maps. 7th International Conference on Information Visualization Theory and Applications (2016), 131-138.
Exploring and comparing different revisions of complex software systems is a challenging task, as it requires constantly switching between different revisions and the corresponding information visualization. This paper proposes to combine the concept of small multiples and focus+context techniques for software maps to facilitate the comparison of multiple software map themes and revisions simultaneously on a single screen. This approach reduces the number of switches and helps to preserve the mental map of the user. Given a software project, the small multiples are based on a common dataset but are specialized by specific revisions and themes. The small multiples are arranged in a matrix whose rows and columns represent different themes and revisions, respectively. To ensure scalability of the visualization technique, we also discuss two rendering pipelines that provide interactive frame rates. The capabilities of the proposed visualization technique are demonstrated in a collaborative exploration setting using a high-resolution, multi-touch display.
Limberger, D., Scheibel, W., Lemme, S. and Döllner, J. 2016. Dynamic 2.5D Treemaps using Declarative 3D on the Web. Proceedings of the 21st International Conference on Web3D Technology (Web3D) (2016), 33--36.
The 2.5D treemap represents a general-purpose visualization technique to map multi-variate hierarchical data in a scalable, interactive, and consistent way and is used in a number of application fields. In this paper, we explore the capabilities of Declarative 3D for the web-based implementation of 2.5D treemap clients. In particular, we investigate how X3DOM and XML3D can be used to implement clients with equivalent features that interactively display 2.5D treemaps with dynamic mapping of attributes. We also show a first step towards a glTF-based implementation. These approaches are benchmarked with a focus on their interaction capabilities with respect to rendering and speed of dynamic data mapping. We discuss the results for our representative example of a complex 3D interactive visualization technique and summarize recommendations for improvements towards operational web clients.
Semmo, A., Limberger, D., Kyprianidis, J.E. and Döllner, J. 2016. Image Stylization by Interactive Oil Paint Filtering. Computers & Graphics. 55, (2016), 157--171. DOI:https://doi.org/10.1016/j.cag.2015.12.001.
This paper presents an interactive system for transforming images into an oil paint look. The system comprises two major stages. First, it derives dominant colors from an input image for feature-aware recolorization and quantization to conform with a global color palette. Afterwards, it employs non-linear filtering based on the smoothed structure adapted to the main feature contours of the quantized image to synthesize a paint texture in real-time. Our filtering approach leads to homogeneous outputs in the color domain and enables creative control over the visual output, such as color adjustments and per-pixel parametrizations by means of interactive painting. To this end, our system introduces a generalized brush-based painting interface that operates within parameter spaces to locally adjust the level of abstraction of the filtering effects. Several results demonstrate the various applications of our filtering approach to different genres of photography.
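The first stage, palette-based quantization, can be approximated by k-means clustering in color space: dominant colors are estimated, and each pixel is recolored with its nearest palette entry. This is a simplified stand-in; the paper's recolorization is feature-aware rather than plain k-means:

```python
import numpy as np

def dominant_palette(image, k=8, iters=10, seed=0):
    """Estimate k dominant colors of an (H, W, 3) float image via k-means."""
    rng = np.random.default_rng(seed)
    pixels = image.reshape(-1, 3)
    palette = pixels[rng.choice(len(pixels), k, replace=False)].copy()
    for _ in range(iters):
        # Assign every pixel to its nearest palette color ...
        dists = np.linalg.norm(pixels[:, None, :] - palette[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # ... and move each palette color to the mean of its members.
        for i in range(k):
            members = pixels[labels == i]
            if len(members):
                palette[i] = members.mean(axis=0)
    return palette, labels

def quantize(image, k=8):
    palette, labels = dominant_palette(image, k)
    return palette[labels].reshape(image.shape)  # conform to the global palette
```

The quantized image would then feed the second stage, where the smoothed structure guides the real-time synthesis of the paint texture.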
Trapp, M. and Döllner, J. 2015. Geometry Batching Using Texture-Arrays. Proceedings of the 10th International Conference on Computer Graphics Theory and Applications (GRAPP 2015) (2015), 239-246.
High-quality rendering of 3D virtual environments typically depends on high-quality 3D models with significant geometric complexity and texture data. One major bottleneck for real-time image synthesis is the number of state changes that a specific rendering API has to perform. To improve performance, batching can be used to group and sort geometric primitives into batches to reduce the number of required state changes; the size of the batches determines the number of required draw calls and is therefore critical for rendering performance. For example, in the case of texture atlases, which provide an approach for efficient texture management, the batch size is limited by the efficiency of the texture-packing algorithm and the texture resolution itself. This paper presents a pre-processing approach and rendering technique that overcomes these limitations by further grouping textures or texture atlases, and thus enables the creation of larger geometry batches. It is based on texture arrays in combination with an additional indexing schema that is evaluated at run-time using shader programs. This type of texture management is especially suitable for real-time rendering of large-scale, texture-rich 3D virtual environments, such as virtual city and landscape models.
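The batching scheme can be illustrated as a data-layout sketch: textures of equal size are stacked as layers of one texture array, each primitive stores its layer index as a vertex attribute, and primitives that previously needed one draw call per texture collapse into a single batch. Names are illustrative; the index is resolved in shader programs at run-time:

```python
from collections import defaultdict

def build_batches(primitives):
    """primitives: iterable of (vertices, texture_id, texture_size) tuples.
    Group equally sized textures into arrays; emit one batch per array."""
    by_size = defaultdict(list)
    for verts, tex, size in primitives:
        by_size[size].append((verts, tex))
    batches = []
    for size, prims in by_size.items():
        layer_of = {}  # texture_id -> layer index within the texture array
        vertices = []
        for verts, tex in prims:
            layer = layer_of.setdefault(tex, len(layer_of))
            # Each vertex carries the layer index; the shader samples the
            # texture array with it, avoiding per-texture state changes.
            vertices += [(v, layer) for v in verts]
        batches.append({"array_texture_size": size,
                        "layer_count": len(layer_of),
                        "vertices": vertices})
    return batches  # one draw call per batch instead of one per texture
```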
Würfel, H., Trapp, M., Limberger, D. and Döllner, J. 2015. Natural Phenomena as Metaphors for Visualization of Trend Data in Interactive Software Maps. Computer Graphics and Visual Computing (CGVC) (2015).
Software maps are a commonly used tool for code-quality monitoring in software-development projects and decision-making processes. While providing an important visualization technique for the hierarchical system structure of a single software revision, they lack capabilities with respect to the visualization of changes over multiple revisions. This paper presents a novel technique for visualizing the evolution of the software system structure based on software metric trends. These trend maps extend software maps by using real-time rendering techniques for natural phenomena, yielding additional visual variables that can be effectively used for the communication of changes. To this end, trend data is automatically computed by hierarchically aggregating software metrics. We demonstrate and discuss the presented technique using two real-world data sets of complex software systems.
Hahn, S., Trapp, M., Wuttke, N. and Döllner, J. 2015. ThreadCity: Combined Visualization of Structure and Activity for the Exploration of Multi-threaded Software Systems. Proceedings of the 19th International Conference of Information Visualization (IV'15) (2015).
This paper presents a novel visualization technique for the interactive exploration of multi-threaded software systems. It combines the visualization of the static system structure based on the EvoStreets approach with an additional traffic metaphor to communicate the runtime characteristics of multiple threads simultaneously. To improve visual scalability with respect to the visualization of complex software systems, we further present an effective level-of-detail visualization based on hierarchical aggregation of system components that takes viewing parameters into account. We demonstrate our technique by means of a prototypical implementation and compare our results with existing visualization techniques.
Oehlke, C., Richter, R. and Döllner, J. 2015. Automatic Detection and Large-Scale Visualization of Trees for Digital Landscapes and City Models based on 3D Point Clouds. 16th Conference on Digital Landscape Architecture (DLA 2015) (2015), 151-160.
Buschmann, S., Trapp, M. and Döllner, J. 2015. Real-Time Visualization of Massive Movement Data in Digital Landscapes. 16th Conference on Digital Landscape Architecture (DLA 2015) (2015), 213-220.
Due to continuing advances in sensor technology and increasing availability of digital infrastructure that allows for acquisition, transfer, and storage of big data sets, large amounts of movement data (e.g., road, naval, or air-traffic) become available. In the near future, movement data such as traffic data may even be available in real-time. In a growing number of application fields (e.g., landscape planning and design, urban development, and infrastructure planning), movement data enables new analysis and simulation applications. In this paper, we present an interactive technique for visualizing massive 3D movement trajectories. It is based on mapping massive movement data to graphics primitives and their visual variables in real-time, supporting a number of visualization schemes such as sphere, line, or tube-based trajectories, including animations of direction and speed. This generic technique enhances the functionality of VR and interactive 3D systems using virtual environments such as digital landscape models, city models, or virtual globes by adding support for this important category of spatio-temporal data.
Meier, B.-H., Trapp, M. and Döllner, J. 2015. VideoMR: A Map and Reduce Framework for Real-time Video Processing. International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG 2015) (2015).
This paper presents VideoMR: a novel map and reduce framework for real-time video processing on graphics processing units (GPUs). Using the advantages of implicit parallelism and bounded memory allocation, our approach enables developers to focus on implementing video operations without having to deal with GPU memory handling or the details of code parallelization. To this end, a new concept for map and reduce is introduced, redefining both operations to fit the specific requirements of video processing. A prototypical implementation using OpenGL supports various operating platforms, including mobile development, and is designed to be widely interoperable with other state-of-the-art video processing frameworks.
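The redefined operations can be sketched on the CPU with generators: map applies a per-pixel operation to each frame, and reduce folds frames into a bounded accumulator, e.g., a running per-pixel maximum. This is a hypothetical minimal analogue of the GPU framework, not its API:

```python
import numpy as np

def video_map(frames, fn):
    """Map: apply a per-frame (per-pixel) operation lazily."""
    for frame in frames:
        yield fn(frame)

def video_reduce(frames, fn, state):
    """Reduce: fold frames into a fixed-size accumulator (bounded memory)."""
    for frame in frames:
        state = fn(state, frame)
    return state

# Example: grayscale conversion, then a running per-pixel maximum.
frames = (np.random.rand(4, 4, 3) for _ in range(10))  # stand-in video stream
gray = video_map(frames, lambda f: f.mean(axis=2))
peak = video_reduce(gray, np.maximum, np.zeros((4, 4)))
```

On the GPU, both stages would run as shader passes over frame textures, with the reduce state kept in a render target of fixed size.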
Discher, S. 2015. Echtzeit-Rendering-Techniken für 3D-Punktwolken basierend auf semantischen und topologischen Attributen. Shortlist Karl Kraus Young Scientists Award 2015, 35. Wissenschaftlich-Technische Jahrestagung der DGPF (2015).
Semmo, A., Limberger, D., Kyprianidis, J.E. and Döllner, J. 2015. Image Stylization by Oil Paint Filtering using Color Palettes. Proceedings International Symposium on Computational Aesthetics in Graphics, Visualization, and Imaging (CAe) (2015), 149--158.
This paper presents an approach for transforming images into an oil paint look. To this end, a color quantization scheme is proposed that performs feature-aware recolorization using the dominant colors of the input image. In addition, an approach for real-time computation of paint textures is presented that builds on the smoothed structure adapted to the main feature contours of the quantized image. Our stylization technique leads to homogeneous outputs in the color domain and enables creative control over the visual output, such as color adjustments and per-pixel parametrizations by means of interactive painting.
Trapp, M., Semmo, A. and Döllner, J. 2015. Interactive Rendering and Stylization of Transportation Networks Using Distance Fields. Proceedings of the 10th International Conference on Computer Graphics Theory and Applications (GRAPP 2015) (2015), 207--219.
Transportation networks, such as streets, railroads, or metro systems, constitute primary elements in cartography for reckoning and navigation. In recent years, they have become an increasingly important part of 3D virtual environments for the interactive analysis and communication of complex hierarchical information, for example in routing, logistics optimization, and disaster management. A variety of rendering techniques have been proposed for integrating transportation networks into these environments, but they have so far neglected the many challenges of an interactive design process that adapts spatial and thematic granularity (i.e., level-of-detail and level-of-abstraction) to a user's context. This paper presents an efficient technique for the real-time, view-dependent rendering of geometrically complex transportation networks within 3D virtual environments. Our technique is based on distance fields using deferred texturing, which shifts the design process to the shading stage for real-time stylization. We demonstrate and discuss our approach by means of street networks using cartographic design principles for context-aware stylization, including view-dependent scaling for clutter reduction, contour-lining to support figure-ground perception, handling of street crossings via shading-based blending, and task-dependent colorization. Finally, we present potential usage scenarios and applications together with a performance evaluation of our implementation.
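The following Python sketch illustrates the underlying idea of distance-field-based stylization: the distance to the network geometry is evaluated (or looked up) per pixel, and styling decisions such as street width and contour lining become simple threshold tests at shading time. Names, widths, and colors are illustrative assumptions; the paper's technique uses GPU deferred texturing rather than this CPU evaluation.

```python
import numpy as np

def segment_distance(points, a, b):
    # Unsigned distance from points (N,2) to the line segment a-b.
    a, b = np.asarray(a, float), np.asarray(b, float)
    ab, ap = b - a, points - a
    t = np.clip((ap @ ab) / (ab @ ab), 0.0, 1.0)
    return np.linalg.norm(ap - np.outer(t, ab), axis=1)

def shade(dist, half_width=3.0, contour=1.5):
    # Style at "shading time" by thresholding the distance field:
    # background everywhere, a dark contour band, and a street fill.
    img = np.full(dist.shape + (3,), 255, dtype=np.uint8)   # background
    img[dist < half_width + contour] = (40, 40, 40)         # contour line
    img[dist < half_width] = (230, 220, 120)                # street fill
    return img

# Rasterize a 128x128 view containing one street segment.
ys, xs = np.mgrid[0:128, 0:128]
pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
dist = segment_distance(pts, (10, 20), (110, 100)).reshape(128, 128)
image = shade(dist)
```

Because the geometry is encoded once as a distance field, restyling (wider streets, different contour widths, task-dependent colors) only changes the thresholds in shade and requires no re-tessellation of the network.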