Program

The Eurographics (EG) Digital Library collection containing all publications for this year’s EuroVis can be found here.

All times are given in CEST (UTC+2), Europe/Berlin.

Legend

CGF: Invited CGF-Paper
TVCG: Invited TVCG-Paper

Monday, 12 June 2023

08:00 – 17:00 Registration
09:00 – 10:30 EuroVA 1: Opening, Keynote speech, and Best Paper Fast Forward Video
Chair: Marco Angelini
Room 1A
Opening
Keynote Bridging AI and Visual Analytics Paper
Alvitta Ottley

Abstract: Visualization research has long been dedicated to finding innovative approaches to represent complex data sets and convey insights to analysts. However, the advent of artificial intelligence (AI) introduces a paradigm shift, presenting new opportunities for visual analytics. This talk will examine the role of machine learning (ML) algorithms in expediting visual analysis, revealing data patterns, and fostering the discovery of novel insights. However, as we embrace the potential of AI, we must also confront the challenges and limitations it introduces, such as data bias, interpretability, and user trust. We will discuss these and other ethical considerations that arise when developing AI-powered visualizations. Overall, this talk aims to demonstrate the potential of AI and ML research to transform visual analytics and provide insights into how researchers and developers can leverage these techniques to create more impactful and engaging tools.

Best Paper Award
Best Paper ShaRP: Shape-Regularized Multidimensional Projections Paper
Alister Machado dos Reis, Alexandru Telea, and Michael Behrisch

Abstract: Projections, or dimensionality reduction methods, are techniques of choice for the visual exploration of high-dimensional data. Many such techniques exist, each one of them having a distinct visual signature — i.e., a recognizable way to arrange points in the resulting scatterplot. Such signatures are implicit consequences of algorithm design, such as whether the method focuses on local vs global data pattern preservation; optimization techniques; and hyperparameter settings. We present a novel projection technique — ShaRP — that provides users explicit control over the visual signature of the created scatterplot, which can cater better to interactive visualization scenarios. ShaRP scales well with dimensionality and dataset size, generically handles any quantitative dataset, and provides this extended functionality of controlling projection shapes at a small, user-controllable cost in terms of quality metrics.

Presenter: Alister Machado dos Reis

 
 
09:00 – 10:30 EnvirVis 1: Analysis and Exploration of Hydrological Data Fast Forward Video
Chair: Michael Böttinger
Room 1B
pyParaOcean: A System for Visual Analysis of Ocean Data Paper
Toshit Jain, Varun Singh, Vijay Kumar Boda, Upkar Singh, Ingrid Hotz, P.N. Vinayachandran, and Vijay Natarajan

Abstract: Visual analysis is well adopted within the field of oceanography for the analysis of model simulations, detection of different phenomena and events, and tracking of dynamic processes. With increasing data sizes and the availability of multivariate dynamic data, there is a growing need for scalable and extensible tools for visualization and interactive exploration. We describe pyParaOcean, a visualization system that supports several tasks routinely used in the visual analysis of ocean data. The system is available as a plugin to Paraview and is hence able to leverage its distributed computing capabilities and its rich set of generic analysis and visualization functionalities. pyParaOcean provides modules to support different visual analysis tasks specific to ocean data, such as eddy identification and salinity movement tracking. These modules are available as Paraview filters and this seamless integration results in a system that is easy to install and use. A case study on the Bay of Bengal illustrates the utility of the system for the study of ocean phenomena and processes.

Presenter: Toshit Jain

A hybrid 3D eddy detection technique based on sea surface height and velocity field Paper
Weiping Hua, Karen Bemis, Dujuan Kang, Sedat Ozer, and Deborah Silver

Abstract: Eddy detection is a critical task for ocean scientists to understand and analyze ocean circulation. In this paper, we introduce a hybrid eddy detection approach that combines sea surface height (SSH) and velocity fields with geometric criteria defining eddy behavior. Our approach searches for SSH minima and maxima, which oceanographers expect to find at the center of eddies. Geometric criteria are used to verify expected velocity field properties, such as net rotation and symmetry, by tracing velocity components along a circular path surrounding each eddy center. Progressive searches outward and into deeper layers yield each eddy’s 3D region of influence. Isolating each eddy structure from the dataset, using its cylindrical footprint, facilitates visualization of internal eddy structures using horizontal velocity, vertical velocity, temperature, and salinity. A quantitative comparison of Okubo-Weiss vorticity (OW) thresholding, the standard winding angle, and this new SSH-velocity hybrid method of eddy detection as applied to the Red Sea dataset suggests that detection results are highly dependent on the choices of method, thresholds, and criteria. Our new SSH-velocity hybrid detection approach has the advantages of providing eddy structures with verified rotation properties, 3D visualization of the internal structure of physical properties, and rapid, efficient estimation of eddy footprints without calculating streamlines. Our approach combines visualization of internal structure and tracking of overall movement to support the study of the transport mechanisms key to understanding the interaction of nutrient distribution and ocean circulation. Our method is applied to three different datasets to showcase the generality of its application.

Presenter: Karen Bemis
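
The SSH-extrema search described in the abstract can be sketched with standard array operations. The snippet below is a generic illustration only (the function name, window size, and field layout are assumptions), not the authors’ implementation.

```python
import numpy as np
from scipy import ndimage

def ssh_extrema_candidates(ssh, window=7):
    """Return boolean masks of local SSH minima/maxima (candidate eddy centers).

    `ssh` is a 2D sea-surface-height field; `window` is the neighborhood size.
    Hypothetical helper for illustration only -- not the paper's code.
    """
    local_min = ndimage.minimum_filter(ssh, size=window)
    local_max = ndimage.maximum_filter(ssh, size=window)
    minima = (ssh == local_min)   # candidate cyclonic centers
    maxima = (ssh == local_max)   # candidate anticyclonic centers
    return minima, maxima

# Toy usage on synthetic data
rng = np.random.default_rng(0)
field = ndimage.gaussian_filter(rng.normal(size=(64, 64)), sigma=4)
lows, highs = ssh_extrema_candidates(field)
print(lows.sum(), highs.sum())
```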

Spatially Immersive Visualization Domes as a Marine Geoscientific Research Tool Paper
Tom Kwasnitschka, Markus Schlüter, Jens Klimmeck, Armin Bernstetter, Felix Gross, and Isabella Peters

Abstract: This paper describes the development of a series of four spatially immersive visualization environments featuring dome projection screens, a concept borrowed from digital planetariums and science theatres. We outline the potential offered by domes as an architecture and a mature visualization technology in light of current challenges in marine geosciences. However, visualization in domes has historically focused on narrative rather than the exploratory workflows required by scientific visualization. The lasting advantage proven by all of our spatially immersive setups is their potential to catalyze scientific communication.

Presenter: Armin Bernstetter

Visualization Environment for Analyzing Extreme Rainfall Events: A Case Study Paper
James Kress, Shehzad Afzal, Hari Prasad Dasari, Sohaib Ghani, Arjan Zamreeq, Ayman Ghulam, and Ibrahim Hoteit

Abstract: Extreme rainfall events can devastate infrastructure and public life and potentially induce substantial financial and life losses. Although weather alert systems generate early rainfall warnings, predicting the impact areas, duration, magnitude, occurrence, and characterization as an extreme event is challenging. Scientists analyze previous extreme rainfall events to examine factors such as meteorological conditions, large-scale features, relationships and interactions between large-scale and mesoscale features, and the success of simulation models in capturing these conditions at different resolutions and parameterizations. In addition, they may also be interested in understanding the sources of anomalous amounts of moisture that may fuel such events. Many factors play a role in the development of these events, and these vary depending on the location. In this work, we implement a visualization environment that supports domain scientists in analyzing the outputs of simulation models configured to predict and analyze extreme precipitation events. This environment enables visualization of important local features and facilitates understanding of the mechanisms contributing to such events. We present a case study of the Jeddah extreme precipitation event on November 24, 2022, which caused severe flooding and infrastructure damage. We also present a detailed discussion of the study’s results, feedback from the domain experts, and future extensions.

Presenter: Shehzad Afzal

 
 
09:00 – 10:30 FAIRvis 1: Overview and Perspectives
Chair: Heike Leitte
Room 1C
Opening and Goals
Christoph Garth
Keynote 1: FAIR Visualization in Plant Research
Timo Mühlhaus
Keynote 2: Toward FAIR Visualization of Visualization Research
Tobias Isenberg

Abstract: Findable, accessible, interoperable, and reproducible research is gaining increasing importance in our field. For visualization researchers, this often relates to the data that we work with and the software we write to analyze it. In the past, my colleagues and I have tried to analyze our field of visualization itself by looking at its evaluation practices, its use of keywords in papers, and the papers we publish and present at our conferences. In this talk I will share some of our experiences in doing so, and the challenges we faced and are facing to make and keep the respective data and software accessible to the community. I will talk about our projects KeyVis (keyvis.org), Vispubdata (vispubdata.org), and, most recently, VIS30K (visimagenavigator.github.io). I will report on the challenges of acquiring the data, sharing it, and creating and maintaining software tools to work with them.

 
 
09:00 – 10:30 MolVA 1: First Session
Chair: Jan Byška
Room 1D
Opening
Invited Talk: 3D Modeling of Cellular Mesoscale Paper
Ivan Viola

Abstract: In this talk I will present the vision of reconstructing entire biological systems, observable at the level of the cellular mesoscale, from microscopy imaging experiments to geometric full-atom models. The cellular mesoscale can now be created through procedural modeling, by expressing the spatial rule set controlling the placement of full-atom molecular models. In addition to procedural modeling, the cellular mesoscale can be directly modelled from cryo-electron microscopy and cryo-electron tomography imaging data. First, optimization-based tomographic reconstruction methods are used to convert the tilt-series micrographs into a volumetric representation of the mesoscale. At a more detailed magnification, structurally identical instances of the same protein are identified and superimposed. These mesoscale or molecular details can be visually inspected using volume visualization. Volume visualization of the mesoscale can be further improved by integrating detail-rich reconstructed molecular volumetric detail. The molecular volumetric representation is used to estimate the full-atom structure of a given protein, which can be automatically placed in the scene to progressively create a full-atom representation of the cellular mesoscale. While these methods still require substantial human involvement, in the future I foresee many of the microscopy interpretation tasks becoming automated.

Moliverse: Contextually embedding the microcosm into the universe Paper
Mathis Brossier, Robin Skånberg, Lonni Besançon, Mathieu Linares, Tobias Isenberg, Anders Ynnerman, and Alexander Bock

Abstract: We present Moliverse, an integration of the molecular visualization framework VIAMD into the astronomical visualization software OpenSpace, allowing us to bridge the two extreme ends of the scale spectrum to show, for example, the gas composition in a planet’s atmosphere or molecular structures in comet trails, and empowering the creation of educational exhibitions. For that purpose we do not use a linear scale traversal but break the scale continuity and show molecular simulations as the focus in the context of celestial bodies. We demonstrate the application of our concept in two storytelling scenarios and envision its use both for science presentations to lay audiences and for dedicated exploration, potentially also in a molecule-only environment.

 
 
09:00 – 10:30 EGPGV 1: First Session
Chair: Roxana Bujack
Room 2AB
Opening
Keynote Rethinking High Performance Data Analysis for the Exascale and Beyond Paper
Gunther Weber

Abstract: Transitioning towards exascale computing exacerbates existing challenges in supercomputing, such as the widening gap between compute power and I/O bandwidth, as well as the need to move to even higher levels of parallel processing. Limitations imposed by I/O bandwidth create a growing necessity for adopting in situ analysis techniques. Topological data analysis provides a versatile framework to define high-level features across a broad spectrum of simulations and to automate the selection of visualization parameters such as isovalues for isosurface extraction. The global nature of topological methods hinders their parallelization, posing additional difficulties. A fresh perspective on topological data analysis enables the development of novel algorithms that effectively leverage exascale machines. Looking forward and beyond simulations, the need to analyze the ever-increasing volume of experimental and observational data provides additional challenges and opportunities for high-performance computing.

A GPU-based out-of-core architecture for interactive visualization of AMR time series data Paper
Welcome Alexandre-Barff, Hervé Deleau, Jonathan Sarton, Franck Ledoux, and Laurent Lucas

Abstract: This paper presents a scalable approach for large-scale Adaptive Mesh Refinement (AMR) time series interactive visualization. We can define AMR data as a dynamic gridding format of cells hierarchically refined from a computational domain described in this study as a regular Cartesian grid. This adaptive feature is essential for tracking time-dependent evolutionary phenomena and makes the AMR format an essential representation for 3D numerical simulations. However, the visualization of numerical simulation data highlights one critical issue: the significant increase in the generated data’s memory footprint, reaching petabytes and thus greatly exceeding the memory capabilities of the most recent graphics hardware. Therefore, the question is how to access this massive data – AMR time series in particular – for interactive visualization on a simple workstation. To overcome this problem, we present an out-of-core GPU-based architecture. Our proposal is a cache system based on ad hoc bricking indexed by a Space-Filling Curve (SFC) and managed by a GPU-based page table that loads required AMR data on the fly from disk to GPU memory.
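
Space-filling-curve indexing of bricks, as mentioned in the abstract, is commonly realized with Morton (Z-order) keys; the sketch below illustrates that general idea under this assumption and is not the paper’s actual indexing code.

```python
def morton3d(x, y, z, bits=10):
    """Interleave the bits of integer brick coordinates (x, y, z) into a
    single Morton (Z-order) key, so spatially nearby bricks get nearby keys.

    Generic illustration of SFC indexing, not the authors' implementation.
    """
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (3 * i)
        key |= ((y >> i) & 1) << (3 * i + 1)
        key |= ((z >> i) & 1) << (3 * i + 2)
    return key

# Example: key for the brick at grid coordinates (3, 1, 2)
print(morton3d(3, 1, 2))   # 43
```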

 
 
10:30 – 11:00 Coffee Break
Lobby
 
 
11:00 – 12:30 EuroVA 2: Paper session 1: Patterns and Multidimensional Projections Fast Forward Video
Chair: Jürgen Bernard
Room 1A
Human-Based and Automatic Feature Ideation for Time Series Data: A Comparative Study Paper
Johanna Schmidt, Harald Piringer, Thomas Mühlbacher, and Jürgen Bernard

Abstract: Feature ideation is a crucial early step in the feature extraction process, where new features are extracted from raw data. For phenomena existing in time series data, this often includes the ideation of statistical parameters, representations of trends and periodicity, or other geometrical and shape-based characteristics. The strengths of automatic feature ideation methods are their generalizability, applicability, and robustness across cases, whereas human-based feature ideation is most useful in uncharted real-world applications, where incorporating domain knowledge is key. Naturally, both types of methods have proven their right to exist. The motivation for this work is our observation that for time series data, surprisingly few human-based feature ideation approaches exist. In this work, we discuss requirements for human-based feature ideation for VA applications and outline a set of characteristics to assess the goodness of feature sets. Ultimately, we present the results of a comparative study of human-based and automated feature ideation methods, for time series data in a real-world Industry 4.0 setting. One of our results and discussion items is a call to arms for more human-based feature ideation approaches.

Presenter: Johanna Schmidt

ChatKG: Visualizing Temporal Patterns as Knowledge Graph Paper
Leonardo Christino and Fernando V Paulovich

Abstract: Line-chart visualizations of temporal data enable users to identify interesting patterns to inquire about. Using oracles, such as chat AIs, Visual Analytics tools can automatically uncover explicit knowledge related to said patterns. Yet, visualizing the association of data, patterns, and knowledge is not straightforward. In this paper, we present ChatKG, a novel visualization strategy that allows exploratory data analysis of a Knowledge Graph which associates a dataset of temporal sequences, the patterns found in each sequence, the temporal overlap between patterns, and related explicit knowledge for each given pattern. We exemplify and informally evaluate ChatKG by analyzing the world’s life expectancy. For this, we implement an oracle that automatically extracts relevant or interesting patterns, queries ChatGPT for related information, and populates the Knowledge Graph which is visualized. Our tests and an interview showed that ChatKG is well suited for the analysis of temporal patterns and their related knowledge when applied to history studies.

Presenter: Leonardo Christino

Extracting Movement-based Topics for Analysis of Space Use Paper
Gennady Andrienko, Natalia Andrienko, and Dirk Hecker

Abstract: We present a novel approach to analyze spatio-temporal movement patterns using topic modeling. Our approach represents trajectories as sequences of place visits and moves, applies topic modeling separately to each collection of sequences, and synthesizes the results. This supports the identification of dominant topics for both place visits and moves and the exploration of spatial and temporal patterns of movement, enabling an understanding of space use. The approach is applied to two real-world data sets of car movements in Milan and UK road traffic, demonstrating the ability to uncover meaningful patterns and insights.

Presenter: Gennady Andrienko
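
Treating each trajectory as a “document” of place-visit tokens and fitting a standard topic model gives a rough illustration of the idea. The toy data, token names, and the use of LDA via gensim below are assumptions made for illustration only; note that this bag-of-words model ignores visit order, unlike the sequence-based representation described in the abstract.

```python
from gensim import corpora, models

# Each trajectory is a "document" of place-visit tokens (illustrative data).
trajectories = [
    ["home", "cafe", "office", "office", "gym", "home"],
    ["home", "school", "park", "home"],
    ["office", "cafe", "office", "restaurant", "home"],
]

dictionary = corpora.Dictionary(trajectories)
corpus = [dictionary.doc2bow(t) for t in trajectories]

# Fit a small LDA model; topics group places that tend to co-occur within
# the same trajectories -- a rough analogue of "movement-based topics".
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10)
for topic_id, terms in lda.print_topics():
    print(topic_id, terms)
```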

Multi-Ensemble Visual Analytics via Fuzzy Sets Paper
Nikolaus Piccolotto, Markus Bögl, and Silvia Miksch

Abstract: Analysis of ensemble datasets, i.e., collections of complex elements such as geochemical maps, is widespread in science and industry. The elements’ complexity arises from the data they capture, which are often multivariate or spatio-temporal. We speak of multi-ensemble datasets when the analysis pertains to multiple ensembles. While many visualization approaches were suggested for ensemble datasets, multi-ensemble datasets remain comparatively underexplored. Our years-long collaboration with statisticians and geochemists taught us that they frame many questions about multi-ensemble data as set operations. E.g., what are the most common members (intersection of ensembles), or what features exist in one member but not another (difference of members)? As classical crisp set relations cannot account for the elements’ complexity, we propose to model multi-ensembles as fuzzy relations. We provide examples of fuzzy set-based queries on a multi-ensemble of geochemical maps and integrate this approach into an existing ensemble visualization pipeline. We evaluated two visualizations obtained by applying this pipeline with experts in geochemistry and statistics. The experts confirmed known information and got directions for further research, which is one Visual Analytics (VA) goal. Hence, our proposal is highly promising for an interactive VA approach.

Presenter: Nikolaus Piccolotto
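
The set-style questions mentioned in the abstract map naturally onto the standard fuzzy operators (elementwise min, max, and complement over membership values). The NumPy sketch below shows these textbook operators on toy membership maps; it is not the paper’s exact formulation.

```python
import numpy as np

# Toy "membership maps": per grid cell, the degree in [0, 1] to which some
# feature is present in each member (illustrative values only).
member_a = np.array([[0.9, 0.2], [0.4, 0.7]])
member_b = np.array([[0.8, 0.5], [0.1, 0.6]])

# Standard fuzzy set operations:
intersection = np.minimum(member_a, member_b)        # features common to both
union        = np.maximum(member_a, member_b)        # features in either
difference   = np.minimum(member_a, 1.0 - member_b)  # in a but not in b

print(intersection, union, difference, sep="\n")
```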

Nonparametric Dimensionality Reduction Quality Assessment based on Sortedness of Unrestricted Neighborhood Paper
Davi Pereira-Santos, Tácito Trindade de Araújo Tiburtino Neves, André de Carvalho, and Fernando V Paulovich

Abstract: High-dimensional data are known to be challenging to explore visually. Dimensionality Reduction (DR) techniques are good options for making high-dimensional data sets more interpretable and computationally tractable. An inherent question regarding their use is how much relevant information is lost during the layout generation process. In this study, we aim to provide means to objectively quantify the quality of a DR layout regarding the intuitive notion of sortedness of the data points. For such, we propose a straightforward measure with Kendall τ at its core to provide a meaningful value of quality. The two major variations presented, sortedness and pairwise sortedness, are suitable replacements for trustworthiness and stress, respectively, when assessing projection quality. We present the formulation, its rationale and scope, and experimental results showing their strength compared to the state of the art.

Presenter: Dr. Fernando V Paulovich
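
To make the Kendall τ idea concrete, the sketch below computes an average per-point rank correlation between high-dimensional and projected pairwise distances. This is a rough, illustrative reading of a τ-based quality measure, not the paper’s sortedness or pairwise sortedness definition.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import kendalltau

def sortedness_like_score(X_high, X_low):
    """Average Kendall tau between each point's distance ranking in the
    high-dimensional data and in the 2D projection (illustrative only)."""
    D_high = squareform(pdist(X_high))
    D_low = squareform(pdist(X_low))
    taus = []
    for i in range(len(X_high)):
        mask = np.arange(len(X_high)) != i       # compare against all others
        tau, _ = kendalltau(D_high[i, mask], D_low[i, mask])
        taus.append(tau)
    return float(np.mean(taus))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
P = X[:, :2]                 # a trivial "projection" just for demonstration
print(sortedness_like_score(X, P))
```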

 
 
11:00 – 12:30 EnvirVis 2: Urban Planning and Renaturation Fast Forward Video
Chair: Stefan Jänicke
Room 1B
Seeing Clearly: A Situated Air Quality Visualization with AR Egocentric Viewpoint Extension Paper
Nuno Martins, Bernardo Marques, Paulo Dias, Beatriz Sousa Santos, and Sandra Rafael

Abstract: Raising public awareness about air quality is crucial for promoting individual and collective actions to mitigate the harmful effects of air pollution and achieve a healthier and more sustainable environment. This article presents an application that uses Augmented Reality (AR) and Situated Visualization (SV) to increase public awareness of air quality-related issues. The application, created according to the Human-Centered Design (HCD) methodology, overlays a visual representation of real-time air quality data onto the user’s immediate environment, taking advantage of SV’s contextualization capabilities. However, this kind of AR application faces some challenges, namely the AR egocentric viewpoint limitation of users when using SV. The application incorporates two solutions to mitigate this problem: multi-dynamic camera feeds (using the front and rear cameras of the mobile phone to extend the user’s field of view) and side-by-side dynamic AR and Virtual Reality (VR) camera feeds (a transitional interface with an AR camera and a 3D virtual/digital representation of the area where the user is). Finally, the article evaluates the usability of the application and proposes solutions to mitigate egocentric viewpoint limitations. A study was conducted in which seven participants with no prior experience in air quality visualization or AR completed a task that involved pollution information retrieval using only the AR camera, as well as the side-by-side dynamic AR and VR camera feeds. The results showed that by using the solutions, the task completion time decreased by 42%. Additionally, the application received positive feedback regarding ease of understanding, complexity, and involvement, suggesting that it can be truly helpful.

Presenter: Nuno Cid Martins

Potential of 3D Visualisation and VR as Boundary Object for Redesigning Green Infrastructure – a Case Study Paper
Carolin Helbig, Janine Poessneck, Daniel Hertel, and Özgür Ozan Şen

Abstract: Faced with various challenges (e.g., climate change, biodiversity loss, population growth) that will affect people’s future lives in cities, analysis, planning, and communication tools are needed that bring together data from different areas and thus create a holistic picture of the environment. This includes data with a socio-economic background as well as data on urban structure, vegetation, climate data and many others. The integration of heterogeneous data and their visualisation are part of the presented case study. The aim was to create a boundary object that facilitates the communication between actors with different social and disciplinary backgrounds in the process of redesigning green infrastructure. 3D visualisation and virtual reality were demonstrated to various stakeholders in transfer events. They confirmed the visualisation’s potential to serve as a boundary object. It represents an appropriate group-specific communication tool for a thematic Digital Twin that supports the transformation to a sustainable and resilient city in light of future changes.

Presenter: Janine Poessneck

A Visual-Scenario-Based Environmental Analysis Approach to the Model-Based Management of Water Extremes in Urban Regions Paper
Özgür Ozan Şen, Lars Backhaus, Siamak Farrokhzadeh, Nico Graebling, Sara Guemar, Feliks K. Kiszkurno, Peter Krebs, Diego Novoa Vazquez, Jürgen Stamm, Olaf Kolditz, and Karsten Rink

Abstract: Due to the present climate crisis, the increasing frequency of extreme water events around urban regions in river basins may result in drastic losses. One of the most effective preventive measures is a prior analysis of the potential effects to comprehend the future risks of such water extremes. In addition to the analysis of historical impacts, the model-based management of water extremes also has a crucial role. Therefore, we present a 3-dimensional, visual-scenario-based environmental analysis framework that utilises a Virtual Geographic Environment for the visualisation and exploration of the model-based management of hydrological events in urban regions. Within the study, we focused on the City of Dresden in eastern Germany, located in the basin of the Elbe River. We integrated a large set of historical observation data and the results of numerical simulations to explore the consequences of modelled heavy precipitation events within different scenarios. Utilising a framework developed in Unity, the resulting visualisation of different scenarios dealing with water extremes simulated with coupled numerical models constitutes the overall focus of this particular study. The resulting application is intended as a collaboration platform for knowledge transfer among domain scientists, stakeholders, and the interested public.


Presenter: Özgür Ozan Şen

An Interactive Decision Support System for Analyzing Time Related Restrictions in Renaturation and Redevelopment Planning Projects Paper
Yves Annanias, Christofer Meinecke, and Daniel Wiegreffe

Abstract: The operation of open-cast lignite mines is a large intervention in nature, making the areas uninhabitable even after the mines are closed unless renaturation processes are carried out. Renaturation of these large areas requires a regional planning process that is tied to many conditions and restrictions, such as environmental protection laws. The related information is available only as unstructured text in a variety of documents. Associated temporal aspects and geographical borders have so far had to be linked to this textual information manually. This process is highly time-consuming, error-prone, and tedious. Therefore, the knowledge of experts is often used, but this does not necessarily include all the relevant information. In this paper, we present a system to support experts in the decision-making of urban planning, renaturation, and redevelopment projects. The system allows new projects to be planned while considering spatial and temporal restrictions extracted from text documents. With this, our system can also be used to verify compliance with certain legal regulations, such as nature conservation laws.

Presenter: Yves Annanias

 
 
11:00 – 12:30 FAIRvis 2: Insights and Discussion
Chair: Christina Gillmann
Room 1C
Lightning Talk 1: Making Vis FAIR – Experiences from Computational Biology
Daniel Wiegreffe

Abstract: Visualization has many important tasks and is often used, for example, for the exploration and analysis of data in computational biology. In this field of research, many experiments are conducted multiple times; sometimes experiments are repeated years later. The software for analyzing the data derived from these experiments must therefore meet high quality standards so that the results of experiments are reproducible. Therefore, the FAIR principles are often applied to the software used in this field. In my talk I want to share my experiences with the implementation of the FAIR principles for visualization software in the field of computational biology. These are to a large extent also transferable to general visualization software.

Lightning Talk 2: Towards FAIR visualization of FAIR climate data
Michael Böttinger
Lightning Talk 3: Rules, Regulations, and the “I” in FAIR
Guido Reina

Abstract: We should make source code a mandatory part of submissions where appropriate. It is not fit for a discipline as close to computer science as ours to ignore this when other domains have been requiring complete reproducibility for years. Having the code to reproduce paper figures, at least, is not an unreasonable requirement. Any improvement on that can be considered a net win. Code is also critical when considering Interoperability in the FAIR principles: abstractly specifying visualizations has been researched in the past, but the building blocks of visualizations and visualization tools are so diverse that mixing and matching them is still going to be a challenge for a while, especially when aiming for scalability. Even if combining them were easier, the data itself presents challenges. Many of our current problems do not scale with easily portable exchange formats, and container file formats give a false sense of accomplishment. Specialized, high-performance formats bring the discussion full circle to specialized approaches and the necessity of publishing sources.

Lightning Talk 4: The Data Science Infrastructure Project
James P. Ahrens

Abstract: The Data Science Infrastructure (DSI) project focuses on automated data-driven collection approaches to make metadata and data more readily available for use in scientific workflows. DSI provides an API for storing, searching, and accessing metadata and associated data including collections of simulation and experimental runs across different filesystems and computing environments. DSI supports working with ensembles of data, provenance data, machine learning models input and outputs, and performance data. An open-source release of DSI can be found at https://github.com/lanl/dsi.

Lightning Talk 5: Identifying Manipulation in Scientific Datasets
Devin Lange
Lightning Talk 6: Towards a Unifying Theory: Hypothesis Grammar for Data, Task, and Visualization
Kai Xu

Abstract: Data, task, and visualizations form the foundation of data visualization, where the effectiveness of a visualization depends on its alignment with the data and the user’s task. While existing grammar frameworks like “The Grammar of Graphics” and interaction specifications in tools like vega-lite cover graphics and interaction, respectively, a comprehensive grammar for task remains elusive, despite numerous proposed task taxonomies. These taxonomies are challenging to operationalize, lacking the ability to easily translate into code that can generate visualizations and interactions. To bridge this gap, we propose a preliminary step towards a task grammar by introducing a hypothesis grammar. Complex tasks can be deconstructed into simpler hypotheses, drawing from our understanding of scientific hypotheses. One key advantage of this grammar is its potential to automatically generate hypotheses from a given dataset and subsequently generate visualizations for hypothesis testing, leveraging existing graphics and interaction grammars. Moreover, integrating hypothesis grammar can greatly support the FAIR principles. For instance, data can be annotated with the hypotheses they address, providing a deeper understanding of the “why” behind the data, surpassing conventional metadata like timestamps and authors. This annotation opens up possibilities such as searching for data based on specific hypotheses.

Open Discussion
Next Steps and Closing
Christoph Garth
 
 
11:00 – 12:30 MolVA 2: Second Session
Chair: Bjorn Sommer
Room 1D
A virtual and mixed reality platform for molecular design & drug discovery – Nanome version 1.24 Paper
Simon Bennie, Martina Maritan, Jonathon M Gast, Marc Loschen, Daniel Gruffat, Roberta Bartolotta, Sam Hessenauer, Edgardo Leija, and Steve McCloskey

Abstract: The success of the design and improvement of nanoscale biomolecules like proteins and small molecule drugs relies on a proper understanding of their three-dimensional structures. Nanome’s virtual reality/mixed reality (VR/MR) platform provides an immersive and collaborative environment that offers a unique view into the nanoscale world. The platform enables faster and more effective ideation, improved communication of scientific concepts, and multiple tools for lead optimization of molecules. The latest 1.24 version of the Nanome platform integrates multi-user collaboration, mixed reality, enhanced avatars, and a flexible Python API for easy integration with various modeling techniques. We describe key elements of this state-of-the-art framework and how it can accelerate the pace of discovery through empowering industry-standard algorithms across domains of digital science. Nanome is available for download at https://home.nanome.ai/setup

Presenter: Dr Simon J. Bennie

Invited Talk: Evolving aesthetics in biomolecular graphics Paper
Laura Garrison

Abstract: Visual aesthetics in representing biomolecular structures is an ever-changing landscape that responds to technological advances, modes of dissemination, and user requirements. In this talk, I will discuss the goals, challenges, and solutions that have shaped current practices in biomolecular imagery with a focused discussion on rendering, color, narrative, and human-computer interface. The design space for aesthetics in biomolecular graphics will continue to evolve with increasing collaboration between domains, offering numerous opportunities and challenges to explore in the future.

Closing Remarks
 
 
11:00 – 12:30 EGPGV 2: Second Session
Chair: Filip Sadlo
Room 2AB
FunMC^2: A Filter for Uncertainty Visualization of Marching Cubes on Multi-Core Devices Paper
Zhe Wang, Tushar M. Athawale, Kenneth Moreland, Jieyang Chen, and Chris R. Johnson

Abstract: Visualization is an important tool for scientists to extract understanding from complex scientific data. Scientists need to understand the uncertainty inherent in all scientific data in order to interpret the data correctly. Uncertainty visualization has been an active and growing area of research to address this challenge. Algorithms for uncertainty visualization can be expensive, and research efforts have been focused mainly on structured grid types. Further, support for uncertainty visualization in production tools is limited. In this paper, we adapt an algorithm for computing key metrics for visualizing uncertainty in Marching Cubes (MC) to multi-core devices and present the design, implementation, and evaluation of a Filter for uncertainty visualization of Marching Cubes on Multi-Core devices (FunMC^2). FunMC^2 accelerates the uncertainty visualization of MC significantly, and it is portable across multi-core CPUs and GPUs. Evaluation results show that FunMC^2 based on OpenMP runs around 11x to 41x faster on multi-core CPUs than the corresponding serial version using one CPU core. FunMC^2 based on a single GPU is around 5x to 9x faster than FunMC^2 running with OpenMP. Moreover, FunMC^2 is flexible enough to process ensemble data with both structured and unstructured mesh types. Furthermore, we demonstrate that FunMC^2 can be seamlessly integrated as a plugin into ParaView, a production visualization tool for post-processing.
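
One common key quantity in uncertainty visualization of Marching Cubes is the probability that the isosurface crosses a given cell across ensemble members. The sketch below estimates such a probability empirically on toy data; it illustrates the concept only and is not FunMC^2’s metric or implementation.

```python
import numpy as np

def cell_crossing_probability(ensemble_values, isovalue):
    """Empirical probability that an isosurface crosses a cell.

    `ensemble_values` has shape (n_members, n_vertices): each row holds one
    ensemble member's values at the cell's vertices. A member's isosurface
    crosses the cell iff its vertex values straddle the isovalue.
    Illustrative only -- not the FunMC^2 code.
    """
    above = ensemble_values > isovalue
    crossed = np.logical_and(above.any(axis=1), (~above).any(axis=1))
    return crossed.mean()

# Toy example: 100 ensemble members, one hexahedral cell with 8 vertices.
rng = np.random.default_rng(1)
values = rng.normal(loc=0.0, scale=1.0, size=(100, 8))
print(cell_crossing_probability(values, isovalue=0.2))
```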

Parallel Compositing of Volumetric Depth Images for Interactive Visualization of Distributed Volumes at High Frame Rates Paper
Aryaman Gupta, Pietro Incardona, Anton Brock, Guido Reina, Steffen Frey, Stefan Gumhold, Ulrik Günther, and Ivo F. Sbalzarini

Abstract: We present a parallel compositing algorithm for Volumetric Depth Images (VDIs) of large three-dimensional volume data. Large distributed volume data are routinely produced in both numerical simulations and experiments, yet it remains challenging to visualize them at smooth, interactive frame rates. VDIs are view-dependent piecewise constant representations of volume data that offer a potential solution. They are more compact and less expensive to render than the original data. So far, however, there is no method for generating VDIs from distributed data. We propose an algorithm that enables this by sort-last parallel generation and compositing of VDIs with automatically chosen content-adaptive parameters. The resulting composited VDI can then be streamed for remote display, providing responsive visualization of large, distributed volume data.

Efficient Sphere Rendering Revisited Paper
Patrick Gralka, Guido Reina, and Thomas Ertl

Abstract: Glyphs are an intuitive way of displaying the results of atomistic simulations, usually as spheres. Raycasting of camera-aligned billboards is considered the state-of-the-art technique to render large sets of spheres in a rasterization-based pipeline since the approach was first proposed by Gumhold. Over time, various acceleration techniques have been proposed, such as the rendering of point primitives as billboards, which are trivial to rasterize and avoid a high workload in the vertex pipeline. Other techniques attempt to optimize data upload and access patterns in shader programs, both relevant aspects for dynamic data. Recent advances in graphics hardware raise the question of whether these optimizations are still valid. We evaluate several rendering and data access scheme combinations on real-world datasets and derive recommendations for efficient rasterization-based sphere rendering.
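
At the core of billboard-based sphere raycasting is an analytic ray-sphere intersection evaluated per covered pixel. The CPU-side Python sketch below shows that test in isolation (in practice it runs in a fragment shader); it is a generic illustration, not tied to any of the evaluated renderers.

```python
import numpy as np

def ray_sphere_hit(origin, direction, center, radius):
    """Analytic ray-sphere intersection: returns the distance to the nearest
    hit, or None if the ray misses. `direction` is assumed normalized."""
    oc = origin - center
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return None                      # ray misses the sphere
    t = -b - np.sqrt(disc)               # nearest intersection distance
    return t if t >= 0.0 else None

# Ray from (0, 0, -5) along +z toward a unit sphere at the origin -> hits at t = 4
print(ray_sphere_hit(np.array([0.0, 0.0, -5.0]),
                     np.array([0.0, 0.0, 1.0]),
                     np.array([0.0, 0.0, 0.0]), 1.0))
```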

Extended Visual Programming for Complex Parallel Pipelines in ParaView Paper
Marvin Petersen, Jonas Lukasczyk, Charles Gueunet, Timothée Chabat, and Christoph Garth

Abstract: Modern visualization software facilitates the creation of visualization pipelines combining a plethora of algorithms to achieve high-fidelity visualization. When the complexity of the pipelines to be created increases, additional techniques are needed to ensure that reasoning about a pipeline’s structure and its performance remains feasible. This paper presents three additions to ParaView with the goal of improving the presentation of complex, parallel pipelines and thereby benefiting pipeline realization. More specifically, we provide a runtime performance annotation visualization integrated into a visual programming node editor, allowing all users to reason about basic performance and intuitively manipulate the structure and configuration of pipelines. Further, we extend the list of available filters with control flow filters, supporting for- and while-loops with a comprehensible representation in the node editor. Our extension is based on graphical manipulation of a node graph that expresses the flow of data and computation in a VTK pipeline, and draws upon a long tradition and positive experience with similar interfaces across a wide range of software systems such as the visualization tools SCIRun and VTK Designer, or the rendering systems Blender and Houdini. The extension we provide integrates seamlessly into the existing ParaView architecture as a plug-in, i.e., it does not require any modifications to ParaView itself or VTK’s execution model.

Closing
 
 
12:30 – 14:00 Lunch Break
Lobby
 
 
14:00 – 15:30 EuroVA 3: Honorable Mention Paper and Panel Fast Forward Video
Chair: Mennatallah El-Assady
Room 1A
Honorable Mention A Methodology for Task-Driven Guidance Design Paper
Ignacio Pérez-Messina, Davide Ceneda, and Silvia Miksch

Abstract: Mixed-initiative Visual Analytics (VA) systems are becoming increasingly important; however, the design of such systems still needs to be formulated. We present a methodology to aid and structure the design of guidance for mixed-initiative VA systems consisting of four steps: (1) defining the target of analysis, (2) identifying the user search tasks, (3) describing the system guidance tasks, and (4) specifying which guidance is provided and when. In summary, it specifies a space of possible user tasks and then maps it to the corresponding space of guidance tasks, using recent VA task typologies for guidance and visualizations. We illustrate these steps through a case study in a real-world model-building task involving decision-making with unevenly-spaced time-oriented data. Our methodology’s goal is to enrich existing VA systems with guidance, its output being a structured description of a complex guidance task schema.

Presenter: Ignacio Pérez-Messina

Panel: “Disrupting the Status Quo: Provocations in Visual Analytics”
 
 
14:00 – 15:30 EnvirVis 3: Climate, Land use, and Biodiversity Fast Forward Video
Chair: Karsten Rink
Room 1B
Interactive Visual Analysis of Regional Time Series Correlation in Multi-field Climate Ensembles Paper
Marina Evers, Michael Böttinger, and Lars Linsen

Abstract: Spatio-temporal multi-field data resulting from ensemble simulations are commonly used in climate research to investigate possible climatic developments and their certainty. One analysis goal is the investigation of possible correlations among different spatial regions in the different fields to find regions of related behavior. We propose an interactive visual analysis approach that focuses on the analysis of correlations in spatio-temporal ensemble data. Our approach allows for finding correlations between spatial regions in different fields. Detection of clusters of strongly correlated spatial regions is supported by lower-dimensional embeddings. Groups can then be selected and investigated in detail, e.g., to study the temporal evolution of the selected group, its Fourier spectra, or the distribution of the correlations over the different ensemble members. We apply our approach to selected 2D scalar fields of a large ensemble climate simulation and demonstrate the utility of our tool with several use cases.

Presenter: Marina Evers
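
A minimal illustration of correlating spatial regions across fields is to average each region’s time series and correlate the resulting series. The function below is a toy sketch on synthetic data (names, shapes, and the use of Pearson correlation are assumptions), not the authors’ analysis pipeline.

```python
import numpy as np

def regional_correlation(field_a, field_b, mask_a, mask_b):
    """Pearson correlation between the spatially averaged time series of two
    regions taken from two (possibly different) fields.

    `field_*` have shape (time, y, x); `mask_*` are boolean region masks.
    """
    series_a = field_a[:, mask_a].mean(axis=1)
    series_b = field_b[:, mask_b].mean(axis=1)
    return np.corrcoef(series_a, series_b)[0, 1]

rng = np.random.default_rng(2)
temp = rng.normal(size=(120, 20, 30))     # toy monthly temperature field
precip = rng.normal(size=(120, 20, 30))   # toy precipitation field
mask1 = np.zeros((20, 30), bool); mask1[:5, :5] = True
mask2 = np.zeros((20, 30), bool); mask2[10:, 20:] = True
print(regional_correlation(temp, precip, mask1, mask2))
```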

User-Centered Engineering of an Interactive Land Use Exploration Tool Paper
Tobias Buhl, David Marcomin, Stefan Fallert, Jana Blechschmidt, Franziska Bönisch, Robert Mark, Juliano Sarmento Cabral, and Sebastian von Mammen

Abstract: In this paper we showcase a system for visualizing and predicting land-use data. The time series-based visualization application strives to improve science communication by facilitating the understanding of land-use change and is backed by a machine learning-based land-use prediction application that imputes historic data and generates predictions of future land use. To present the project, we discuss the system’s requirements, which were developed by means of a User-Centered Engineering approach, elaborate on its current, early state of development and the corresponding results, and finally discuss areas of potential improvement.

Presenter: Tobias Buhl

MultiSat4Slows system for detecting and assessing potentially active landslide regions — initial results from an ongoing interdisciplinary collaboration Paper
Mike Sips, Magdalena Vassileva, Daniel Eggert, and Mahdi Motagh

Abstract: Landslides represent one of the major threats worldwide to human life, settlements, and infrastructure. Their occurrence is increasing due to anthropogenic activities and environmental changes. Detecting slow-moving landslides in geographical space, monitoring their kinematic behavior in time, and correlating their changes in displacement with potential influencing factors (i.e., precipitation, land use change, and earthquakes) can contribute to forecasting possible future landslide collapses. Satellite Earth Observation (EO) technology, such as Multi-temporal Synthetic Aperture Interferometry (MTI), provides millions of ground displacement time series that enable EO data scientists to detect slow-moving landslides in geographical space. In this short paper, we discuss our current Visual Analytics (VA) concept and system that supports EO data scientists in analyzing ground displacement time series in a semi-automatic and exploratory manner. The goal is to derive helpful information for landslide hazard assessment, such as the location of slow-moving landslides, main kinematic parameters, changes in displacement trend, and possible correlation with external triggering factors. This paper presents the initial results of our VA system in supporting displacement classification and clustering, depicting detected clusters in the cluster overview visualization, and enabling exploratory data analysis and interactive steering.

Presenter: Mike Sips

Visualizing National Threat Assessments of Tree Species Paper
Christina Schnoor, Kristoffer Bargisen Rieck, Emily Beech, Malin Rivers, Jakob Kusnick, and Stefan Jänicke

Abstract: Trees are important to ecosystems around the world, and therefore it is vital to know which species are in particular need of conservation. The GlobalTree Portal primarily focuses on threat assessments at the global level, but nation-level investigations of threat assessments are not yet supported. Regional or national assessments are also displayed, even if the species was not evaluated in a country. This paper presents a visualization framework that enables domain experts to analyze national assessments inspired by the GlobalTree Portal. This visualization first provides a global overview of nation-level threat assessment efforts by highlighting those with many national assessments on a choropleth map. For a selected country, the experts can inspect how the tree species assessments are distributed across BGCI’s threat level categories Not Evaluated, Data Deficient, Not Threatened, Possibly Threatened, Threatened, and Extinct. The core component is a tree map visualization that displays the genera and the species within the selected country. These are color-coded according to the BGCI threat level, and thus, provide a quick overview of nation-level threat assessments at species and genus levels. The system was developed in close collaboration with biologists from BGCI, who evaluated the visualizations on a regular basis to fit their needs. The results certify the value of our solution for gaining quantitative insights about threat assessments on a national level, and BGCI researchers included the system in their work routines to impact decision making processes on national conservation actions.

Visualization-based Scrollytelling of Coupled Threats for Biodiversity, Species and Music Cultures Paper
Jakob Kusnick, Silke Lichtenberg, and Stefan Jänicke

Abstract: Biodiversity loss, land use change, and international trade are the main causes of an increasing number of endangered species. As a consequence, resource scarcity due to endangered species also threatens cultural heritage. To depict such coupled threats and their interconnections for the specific case of musical instruments of a symphony orchestra, the MusEcology project developed a platform to analyze dependencies between musical instrument manufacturing for symphony orchestras and threat assessments of the plant and animal species used as resources. Non-experts are rarely aware of this intertwined threat; therefore, low-threshold information distribution is urgently needed. We extended the MusEcology platform with scrollytelling functionalities that help domain experts draft stories that use the visualizations of different dimensions throughout various zoom levels. We outline the utility of our approach with a particular scrollytelling example of the threatened pau-brasil wood (Paubrasilia echinata (Lam.) Gagnon, H.C.Lima & G.P.Lewis), endemic to the Brazilian Mata Atlântica and used since 1800 for the sticks of high-quality string instrument bows. The story of the natural material from forests to instrument-making workshops, musicians, and audiences is told through informative texts, interviews, sound recordings, photographs, and schematic drawings. By bringing together expertise from different fields, this story highlights the interconnected dependencies between ecosystems, culture, and music. The interactive storytelling experiences are aimed at casual users and policy makers to raise awareness of the underlying complexity of biodiversity and instrument making, to support related and induce necessary decision-making processes, and to unfold possible pathways towards a more harmonic and sustainable music ecosystem.

Presenter: Jakob Kusnick

 
 
14:00 – 15:30 VisGap 1: Keynote; Paper Session 1: Software Infrastructure
Chair: Christina Gillmann
Room 1C
Welcome Address
Keynote Approaches for the successful delivery of open-source visualization software
James Ahrens

Abstract: In this talk, I will describe approaches to the successful creation of open-source visualization software. In summary, these approaches include defining and following clear project objectives and community policies, the use of agile software engineering methods, and the use of continuous integration and deployment practices. I believe these approaches are scalable from small to large teams. These approaches were developed and refined over the course of my career. During my career, I have researched, developed, and deployed open-source software tools including ParaView, a large-scale scientific visualization tool; Cinema, an image database approach for visual analysis; PISTON, a portable data parallel visualization library; ALPINE, in situ visualization infrastructure and algorithms; and DSI, a data science infrastructure project. Real-world successes and failures during the development of these approaches and tools will be discussed. In addition, specific challenges facing researchers and developers of visualization software, such as user interface development, user testing, the use of graphics software and hardware libraries, and performance and portability concerns, will also be discussed.

Better Information Visualization Software Through Packages for Data Science Ecosystems Paper
Rafael Henkin

Abstract: Good software development practices are important factors for the successful translation of visualization research into software. This paper argues for the creation of packages for data science ecosystems, with Python and R as case studies, as a way to employ existing tools and infrastructure towards better information visualization software. The paper describes open practices, sustainability and FAIR software to motivate package development. The ecosystems of Python and R are then reviewed based on general software development aspects and how common features of visualization software, such as rendering and interactivity, are supported. It concludes with the software engineering benefits related to creating packages in Python and R and initiatives to overcome obstacles that may hinder the development of better software.

Reflections on the Developments of Visual Analytics Systems for the K Computer System Log Data Paper
Jorji Nonaka, Keijiro Fujita, Takanori Fujiwara, Naohisa Sakamoto, Keiji Yamamoto, Masaaki Terai, Toshiyuki Tsukamoto, and Fumiyoshi Shoji

Abstract: Flagship-class high-performance computing (HPC) systems, also known as supercomputers, are large, complex systems that require particular attention for continuous and long-term stable operations. The K computer was a Japanese flagship-class supercomputer ranked as the fastest supercomputer in the Top500 ranking when it first appeared. It was composed of more than eighty thousand compute nodes and consumed more than 12 MW when running the LINPACK benchmark for the Top500 submission. A combined power substation, with a natural gas co-generation system (CGS), was used for the power supply, and also a large air/water cooling facility was used to extract the massive heat generated from this HPC system. During the years of its regular operation, a large log dataset has been generated from the K computer system and its facility, and several visual analytics systems have been developed to better understand the K computer’s behavior during the operation as well as the probable correlation of operational temperature with the critical hardware failures. In this paper, we will reflect on these visual analytics systems, mainly developed by graduate students, intended to be used by different types of end users on the HPC site. In addition, we will discuss the importance of collaborative development involving the end users, and also the importance of technical people in the middle for assisting in the deployment and possible continuation of the developed systems.

Presenter: Jorji Nonaka

 
 
14:00 – 15:30 MLVis 1: First Session
Room 1D
Opening
Ian Nabney
Keynote Genomic Maps – Visualization-driven analysis of big data in the life sciences and in biomedicine
Hans Binder
Tutorial 1: Supervision and fairness in neighbor embedding
Jaakko Peltonen
 
 
14:00 – 15:30 Inv. CG&A 1: Evaluation and Application Fast Forward Video
Chair: Tatiana von Landesberger
Room 2AB
Evaluating Representation Learning and Graph Layout Methods for Visualization Paper
Edith Heiter, Bo Kang, Tijl De Bie, and Jefrey Lijffijt

Abstract: Graphs and other structured data have come to the forefront in machine learning over the past few years due to the efficacy of novel representation learning methods boosting the prediction performance in various tasks. Representation learning methods embed the nodes in a low-dimensional real-valued space, enabling the application of traditional machine learning methods on graphs. These representations have been widely premised to be also suited for graph visualization. However, no benchmarks or encompassing studies on this topic exist. We present an empirical study comparing several state-of-the-art representation learning methods with two recent graph layout algorithms, using readability and distance-based measures as well as the link prediction performance. Generally, no method consistently outperformed the others across quality measures. The graph layout methods provided qualitatively superior layouts when compared to representation learning methods. Embedding graphs in a higher dimensional space and applying t-distributed stochastic neighbor embedding for visualization improved the preservation of local neighborhoods, albeit at substantially higher computational cost.
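
The evaluated strategy of embedding nodes in a higher-dimensional space and then projecting with t-SNE can be sketched with scikit-learn. The random embeddings below merely stand in for learned node representations (e.g., from node2vec or a graph neural network); this is an illustration, not the study’s experimental setup.

```python
import numpy as np
from sklearn.manifold import TSNE

# Toy stand-in for learned node representations: 200 nodes in 64 dimensions.
rng = np.random.default_rng(3)
node_embeddings = rng.normal(size=(200, 64))

# Embed in a higher-dimensional space first, then project to 2D with t-SNE
# for visualization, as evaluated in the study.
layout_2d = TSNE(n_components=2, perplexity=30, random_state=3).fit_transform(node_embeddings)
print(layout_2d.shape)   # (200, 2)
```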

An Interactive Decision Support System for Land Reuse Tasks Paper
Yves Annanias, Dirk Zeckzer, Gerik Scheuermann, and Daniel Wiegreffe

Abstract: Experts face the task of deciding where and how land reuse—transforming previously used areas into landscape and utility areas—can be performed. This decision is based on which area should be used, which restrictions exist, and which conditions have to be fulfilled for reusing this area. Information about the restrictions and the conditions is available as mostly textual, nonspatial data associated with areas overlapping the target areas. Due to the large number of possible combinations of restrictions and conditions overlapping (partially) the target area, this decision process becomes quite tedious and cumbersome. Moreover, it proves useful to identify similar regions within the dataset that have reached different stages of development, which in turn allows common tasks for these regions to be determined. We support the experts in accomplishing these tasks by providing aggregated representations as well as multiple coordinated views together with category filters and selection mechanisms implemented in an interactive decision support system. Textual information is linked to these visualizations, enabling the experts to justify their decisions. Evaluating our approach using a standard SUS questionnaire suggests that especially the experts were very satisfied with the interactive decision support system.

VisLitE: Visualization Literacy and Evaluation Paper
Elif E. Firat, Alark Joshi, and Robert S. Laramee

Abstract: With the widespread advent of visualization techniques to convey complex data, visualization literacy (VL) is growing in importance. Two noteworthy facets of literacy are user understanding and the discovery of visual patterns with the help of graphical representations. The research literature on VL provides useful guidance and opportunities for further studies in this field. This introduction summarizes and presents research on VL that examines how well users understand basic and advanced data representations. To the best of our knowledge, this is the first tutorial article on interactive VL. We describe evaluation categories of existing relevant research into unique subject groups that facilitate and inform comparisons of literacy literature and provide a starting point for interested readers. In addition, the introduction also provides an overview of the various evaluation techniques used in this field of research and their challenging nature. Our introduction provides researchers with unexplored directions that may lead to future work. This starting point serves as a valuable resource for beginners interested in the topic of VL.

 
 
15:30 – 16:00 Coffee Break
Lobby
 
 
16:00 – 17:40 EuroVA 4: Paper session 2: Decision-making and explanation Fast Forward Video
Chair: Andreas Kerren
Room 1A
A Practical Approach to Provenance Capturing for Reproducible Visual Analytics at an Ocean Research Institute Paper
Armin Bernstetter, Tom Kwasnitschka, and Isabella Peters

Abstract: Reproducibility – and the lack thereof – has been an important topic for some time in the field of Human-Computer Interaction. Visual analytics workflows, and by extension immersive analytics workflows, are no exception and benefit from being more transparent and reproducible. At our research institute, domain scientists in ocean research are using interactive visualization workflows for sensemaking processes. We are building a framework that supports these workflows by shifting the focus from lying solely on the end product (i.e., published insights and visualizations) towards the generation process. We do this by capturing, organizing, and visualizing provenance artifacts using a modular and extensible web-based application. We not only apply this framework to conventional 2D display-based work but also to workflows inside a unique and spatially immersive projection dome.

Presenter: Armin Bernstetter

A Visual Analytics Framework for Renewable Energy Profiling and Resource Planning Paper
Ramakrishna P. Pammi, Shehzad Afzal, Hari Prasad Dasari, Muhammad Yousaf, Sohaib Ghani, Murali Sankar Venkatraman, and Ibrahim Hoteit

Abstract: Renewable energy growth is one of the global focus areas against the backdrop of the global energy crisis and climate change. Energy planners are looking into clean, safe, affordable, and reliable energy generation sources for a net-zero future. Countries are setting energy targets and policies that prioritize renewable energy, shifting the dependence away from fossil fuels. The selection of renewable energy sources depends on the suitability of the region under consideration and requires analyzing relevant environmental datasets. In this work, we present a visual analytics framework that enables users to explore solar and wind energy datasets consisting of Global Horizontal Irradiance (GHI), Direct Normal Irradiance (DNI), Diffusive Horizontal Irradiance (DHI), and Wind Power (WP) spanning a 40-year period. The framework provides a suite of interactive decision support tools to analyze spatiotemporal patterns, variability in the variables across space and time at different temporal resolutions, and Typical Meteorological Year (TMY) data with varying percentiles, and it provides the capability to interactively explore and evaluate potential solar and wind energy equipment installation locations and study different energy acquisition scenarios. This work was conducted in collaboration with domain experts involved in sustainable energy planning. Different use case scenarios are explained in detail, along with domain experts' feedback and future directions.

Presenter: Shehzad Afzal
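As a minimal, hypothetical sketch of the multi-resolution temporal analysis mentioned in the abstract (not the authors' framework), the following snippet aggregates a synthetic daily GHI series to monthly means and computes percentile bands across years; the data, units, and percentile choices are assumptions.

```python
# Minimal, hypothetical sketch: aggregate a synthetic daily GHI series to
# monthly means and compute percentile bands across a 40-year period.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
days = pd.date_range("1980-01-01", "2019-12-31", freq="D")

# Crude synthetic GHI in kWh/m^2/day: seasonal cycle plus noise, clipped at 0.
doy = days.dayofyear.to_numpy()
ghi = np.clip(5.0 + 2.5 * np.sin(2 * np.pi * (doy - 80) / 365)
              + rng.normal(0.0, 1.0, len(days)), 0.0, None)
df = pd.DataFrame({"GHI": ghi}, index=days)

# Aggregate to a coarser temporal resolution (monthly means).
monthly = df.resample("MS").mean()

# Percentile bands per calendar month across all years (P10 / P50 / P90).
bands = (monthly.groupby(monthly.index.month)["GHI"]
                .quantile([0.10, 0.50, 0.90])
                .unstack())
print(bands.round(2))
```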

KidCAD: An Interactive Cohort Analysis Dashboard of Patients with Chronic Kidney Diseases Paper
Markus Höhn, Sarah Schwindt, Sara Hahn, Sammy Patyna, Stefan Büttner, and Jörn Kohlhammer

Abstract: Chronic Kidney Diseases (CKD) are a prominent health problem. As the disease progresses, CKD leads to impaired kidney function with a decreased ability to filter the patient's blood, culminating in multiple complications, such as heart disease and, ultimately, death. We developed a prototype to support nephrologists in gaining an overview of their CKD patients. The prototype visualizes the patients in cohorts according to their pairwise similarity. The user can interactively modify the similarity by changing the underlying weights of the included features. The prototype was developed in response to the needs of physicians identified in a context-of-use analysis. A qualitative user study shows the need for and suitability of our new approach.

Presenter: Markus Höhn
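A minimal sketch of the interactively weighted pairwise similarity idea (assumed details, not the KidCAD implementation): per-feature weights scale a Euclidean distance between patients, which is then converted into a similarity that could drive the cohort view.

```python
# Hedged sketch: weighted pairwise patient similarity from a feature matrix.
# Features, weights, and the distance-to-similarity conversion are illustrative.
import numpy as np

rng = np.random.default_rng(1)
# 8 synthetic patients x 4 features (e.g., eGFR, creatinine, age, blood pressure).
X = rng.normal(size=(8, 4))
X = (X - X.mean(axis=0)) / X.std(axis=0)        # z-normalize each feature

weights = np.array([2.0, 1.0, 0.5, 1.0])        # user-adjustable feature weights

# Weighted Euclidean distance between all pairs of patients.
diff = X[:, None, :] - X[None, :, :]            # shape (8, 8, 4)
dist = np.sqrt((weights * diff**2).sum(axis=-1))

# Turn distances into similarities in (0, 1]; larger means more similar.
sim = 1.0 / (1.0 + dist)
print(np.round(sim, 2))
```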

Scaling Up the Explanation of Multidimensional Projections Paper
Julian Thijssen, Zonglin Tian, and Alexandru Telea

Abstract: We present a set of interactive visual analysis techniques aiming at explaining data patterns in multidimensional projections. Our novel techniques include a global value-based encoding that highlights point groups having outlier values in any dimension as well as several local tools that provide details on the statistics of all dimensions for a user-selected projection area. Our techniques generically apply to any projection algorithm and scale computationally well to hundreds of thousands of points and hundreds of dimensions. We describe a user study that shows that our visual tools can be quickly learned and applied by users to obtain non-trivial insights in real-world multidimensional datasets.

Presenter: Alexandru Telea
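One possible reading of the global value-based encoding, sketched here under assumptions rather than as the authors' exact definition: each projected point is tagged with the dimension in which it is most extreme, provided that dimension's z-score exceeds a threshold; the tag can then color any 2D projection of the data.

```python
# Hedged sketch: tag each point with its most "outlying" dimension.
# The z-score threshold and labeling scheme are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(size=(1000, 5))               # high-dimensional data (5 dims)
data[:50, 3] += 6.0                             # inject outliers in dimension 3

z = (data - data.mean(axis=0)) / data.std(axis=0)
max_dim = np.abs(z).argmax(axis=1)              # most extreme dimension per point
is_outlier = np.abs(z).max(axis=1) > 3.0        # threshold: |z| > 3

# 'labels' could now color a scatterplot of any 2D projection of 'data':
labels = np.where(is_outlier, max_dim, -1)      # -1 = no outlying dimension
print(np.bincount(labels[labels >= 0], minlength=5))
```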

Why am I reading this? Explaining Personalized News Recommender Systems Paper
Sverrir Arnórsson, Florian Abeillon, Ibrahim Al-Hazwani, Jürgen Bernard, Hanna Hauptmann, and Mennatallah El-Assady

Abstract: Social media and online platforms significantly impact what millions of people get exposed to daily, mainly through recommended content. Hence, recommendation processes have to benefit individuals and society. With this in mind, we present the visual workspace NewsRecXplain, with the goals of (1) Explaining and raising awareness about recommender systems, (2) Enabling individuals to control and customize news recommendations, and (3) Empowering users to contextualize their news recommendations to escape from their filter bubbles. This visual workspace achieves these goals by allowing users to configure their own individualized recommender system, whose news recommendations can then be explained within the workspace by way of embeddings and statistics on content diversity.

Presenter: Ibrahim Al-Hazwani

Closing
Marco Angelini and Mennatallah El-Assady
 
 
16:00 – 17:30 VisGap 2: Paper Session 2: Design and Applications; Capstone
Chair: Michael Krone
Room 1C
The Lack of Specialized Symbology and Visual Interaction Design Guidance for Sub-Sea Military Operations Paper
Gareth Patrick Walsh, Nicklas Sindlev Andersen, Nikolai Stoianov, and Stefan Jänicke

Abstract: This paper addresses the lack of, and need for, specialized and visually effective interaction design guidance for sub-sea military operations. We identify gaps in the implementation of best-practice visualization techniques, building upon our recently published survey on visual interfaces used in military decision support systems. Our analysis focuses on the current NATO symbology standard and several sub-sea military frontend systems to identify deficiencies and their underlying causes. Such deficiencies originate, among other factors, from inadequate design consideration of environmental conditions, as well as incomplete hardware and software requirements for sub-sea conditions. While many such gaps exist, for the purposes of this paper we narrow our focus to exploring the potential for a new sub-sea symbology for the maritime domain, drawing from insights gained and developed through our participation in the EDIDP (European Defence Industrial Development Programme) project CUIIS (Comprehensive Underwater Intervention Information System). We propose extending existing NATO military standards by creating a comprehensive framework for a new sub-sea symbology and visual interaction design. This framework includes a set of semiotic communication symbols for military divers, which can easily be combined based on the most common messages required for effective communication between command and military divers. The paper concludes by highlighting the opportunities for improvement in NATO Military Symbology for sub-sea military operations.

Many types of design needed for effective visualizations Paper
Richard Brath

Abstract: For effective visualizations, there are many types of design to consider. Visualization design focuses on the core theory of tasks, data, and visual encodings. Workflow design, user interface design, and graphic design all contribute to successful visualizations. All design aspects range from initial design exploration to iterative design refinement. Guidelines can help, but have limitations. Examples illustrate visualization issues arising from missing domain knowledge, facilitating alternative designs, refining labeling, layouts to aid workflow, frankenvis, 3D time series, and ineffective design collaboration.

Capstone From tiny brains through raging rivers to Mars – the winding path from research prototypes to mature and sustainable software frameworks
Katja Bühler

Abstract: Software prototypes developed as part of research projects are often a rich source of novel innovative approaches to solving real-world problems. Yet there are few examples where such prototypes have evolved into mature and sustainable software or even products. The challenges involved are manifold – from the right composition of the team to the selection and maintenance of the technology to sustainable funding over many years, just to name a few. VRVis is an Austrian research center for visual computing with the mission to bring scientific research results into application. Since its founding in 2000, several software frameworks have emerged that today form a foundation for basic research at VRVis, but are also the subject of large-scale applied research projects supported by industry, government, and academia. Many of these frameworks have evolved from initial research ideas and prototypes into a large software base that is actively used and subject to constant evolution and change. I will present a selection of these frameworks and provide practical insights into the history and strategies of the various teams behind the software creating a sustainable product. Showing that there is not a single and straight path to success, but many, I invite you to take this as an inspiration for finding your way to develop your own software towards a sustainable framework.

Closing
 
 
16:00 – 17:30 MLVis 2: Second Session
Room 1D
Interactive dense pixel visualizations for time series and model attribution explanations Paper
Udo Schlegel and Daniel Keim

Abstract: The field of Explainable Artificial Intelligence (XAI) for Deep Neural Network models has developed significantly, offering numerous techniques to extract explanations from models. However, evaluating explanations is often not trivial, and differences in applied metrics can be subtle, especially with non-intelligible data. Thus, there is a need for visualizations tailored to explore explanations for domains with such data, e.g., time series. We propose DAVOTS, an interactive visual analytics approach to explore raw time series data, activations of neural networks, and attributions in a dense-pixel visualization to gain insights into the data, models' decisions, and explanations. To further support users in exploring large datasets, we apply clustering approaches to the visualized data domains to highlight groups and present ordering strategies for individual and combined data exploration to facilitate finding patterns. We visualize a CNN trained on the FordA dataset to demonstrate the approach.

Presenter: Udo Schlegel
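A minimal illustration of the dense-pixel idea (not DAVOTS itself): many time series, or per-time-step attribution vectors, are stacked row-wise and rendered as an image, with a simple row ordering standing in for the clustering and ordering strategies mentioned in the abstract.

```python
# Minimal sketch of a dense-pixel view: stack many time series as image rows
# and order them to expose groups. Synthetic data; the ordering is a stand-in
# for the clustering/ordering strategies described in the abstract.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
n_series, length = 200, 500
t = np.linspace(0, 1, length)

# Synthetic series: each has a bump at a random position plus noise.
peaks = rng.uniform(0.1, 0.9, n_series)
series = np.exp(-((t[None, :] - peaks[:, None]) / 0.05) ** 2)
series += rng.normal(0, 0.1, size=series.shape)

order = np.argsort(peaks)                       # simple ordering strategy
plt.imshow(series[order], aspect="auto", cmap="viridis", interpolation="nearest")
plt.xlabel("time step")
plt.ylabel("series (ordered)")
plt.title("Dense-pixel view of 200 time series")
plt.show()
```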

Tutorial 2: Evaluation of data projection methods
Ian Nabney
Tutorial 3: Perception of order in visualization
Daniel Archambault
Panel Discussion
 
 
16:00 – 17:30 Inv. CG&A 2: Trust and Visualization Fast Forward Video
Chair: Alvitta Ottley
Room 2AB
Which Biases and Reasoning Pitfalls Do Explanations Trigger? Decomposing Communication Processes in Human–AI Interaction Paper
Mennatallah El-Assady and Caterina Moruzzi

Abstract: Collaborative human–AI problem-solving and decision making rely on effective communications between both agents. Such communication processes comprise explanations and interactions between a sender and a receiver. Investigating these dynamics is crucial to avoid miscommunication problems. Hence, in this article, we propose a communication dynamics model, examining the impact of the sender’s explanation intention and strategy on the receiver’s perception of explanation effects. We further present potential biases and reasoning pitfalls with the aim of contributing to the design of hybrid intelligence systems. Finally, we propose six desiderata for human-centered explainable AI and discuss future research opportunities.

Can Image Data Facilitate Reproducibility of Graphics and Visualizations? Toward a Trusted Scientific Practice Paper
Guido Reina

Abstract: Reproducibility is a cornerstone of good scientific practice; however, the ongoing “reproducibility crisis” shows that we still need to improve the way we are doing research currently. Reproducibility is crucial because it enables both the comparison to existing techniques as well as the composition and improvement of existing approaches. It can also increase trust in the respective results, which is paramount for adoption in further research and applications. While there are already many initiatives and approaches with different complexity aimed at enabling reproducible research in the context of visualization, we argue for an alternative, lightweight approach that documents the most relevant parameters with minimal overhead. It still complements complex approaches well, and integration with any existing tool or system is simple. Our approach uses the images produced by visualizations and seamlessly piggy-backs on everyday communication and research collaborations, publication authoring, public outreach, and internal note-taking. We exemplify how our approach supports day-to-day work and discuss limitations and how they can be countered.
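One lightweight way to realize the image-based idea, sketched here with Pillow as an assumed mechanism (the paper does not prescribe this particular implementation): write the relevant parameters into PNG text chunks so the shared image itself documents how it was produced, and read them back later.

```python
# Hedged sketch: piggy-back visualization parameters on the rendered image by
# storing them as PNG text metadata. Pillow is one possible means; the
# key/value scheme shown here is an illustrative assumption.
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

params = {"colormap": "viridis", "isovalue": 0.42, "camera": [1.0, 2.0, 3.0]}

img = Image.new("RGB", (640, 480), "white")     # stand-in for a rendered frame
meta = PngInfo()
meta.add_text("vis_parameters", json.dumps(params))
img.save("rendering.png", pnginfo=meta)

# Later: recover the parameters from the image that was shared or archived.
recovered = json.loads(Image.open("rendering.png").text["vis_parameters"])
print(recovered)
```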

The Flow of Trust: A Visualization Framework to Externalize, Explore, and Explain Trust in ML Applications Paper
Stef van den Elzen, Gennady Andrienko, Natalia Andrienko, Brian D. Fisher, Rafael M. Martins, Jaakko Peltonen, Alexandru C. Telea, and Michel Verleysen

Abstract: We present a conceptual framework for the development of visual interactive techniques to formalize and externalize trust in machine learning (ML) workflows. Currently, trust in ML applications is an implicit process that takes place in the user’s mind. As such, there is no method of feedback or communication of trust that can be acted upon. Our framework will be instrumental in developing interactive visualization approaches that will help users to efficiently and effectively build and communicate trust in ways that fit each of the ML process stages. We formulate several research questions and directions that include: 1) a typology/taxonomy of trust objects, trust issues, and possible reasons for (mis)trust; 2) formalisms to represent trust in machine-readable form; 3) means by which users can express their state of trust by interacting with a computer system (e.g., text, drawing, marking); 4) ways in which a system can facilitate users’ expression and communication of the state of trust; and 5) creation of visual interactive techniques for representation and exploration of trust over all stages of an ML pipeline.

 
 
 
 

| Tuesday, 13 June, 2023

08:00 – 17:00 Registration
09:00 – 11:10 Opening & Keynote
Chair: Gerik Scheuermann
Room 1ABCD
Opening
Message from the Chairs
EuroVis PhD Award

Presenter: Thomas Ertl

Early Career Award
Keynote The Role of Visualization in Structural Biology and Drug Discovery Paper
Jens Meiler

Abstract: Research in computational structural biology is advancing at a breath-taking pace driven by several developments. Artificial Intelligence-driven algorithms such as AlphaFold give ready access to an accurate three-dimensional structure of every human protein including mutations that cause disease. In silico ultra-large library screening identifies hit compounds that can be the starting point for drug discovery. Protein design algorithms allow for the engineering of therapeutic antibodies and vaccine candidates. These developments depend on efficient visualization of biomolecules and their interactions, a formidable challenge for systems consisting of thousands of atoms with millions of interactions. Highlighting aspects critical for function in detail while simplifying or omitting less important aspects of protein structure is key to many visualization techniques in structural biology. Recent developments in the field will be reviewed accompanied by visualization examples.

 
 
11:10 – 11:30 Coffee Break
Lobby
 
 
11:30 – 12:45 FP 1: Best Papers: Awards Session Fast Forward Video
Chair: Daniel Archambault, Tobias Schreck, and Roxana Bujack
Room 1ABCD
Best Paper Mini-VLAT: A Short and Effective Measure of Visualization Literacy Paper
Saugat Pandey and Alvitta Ottley

Abstract: The visualization community regards visualization literacy as a necessary skill. Yet, despite the recent increase in research into visualization literacy by the education and visualization communities, we lack practical and time-effective instruments for the widespread measurement of people’s comprehension and interpretation of visual designs. We present Mini-VLAT, a brief but practical visualization literacy test. The Mini-VLAT is a 12-item short form of the 53-item Visualization Literacy Assessment Test (VLAT). The Mini-VLAT is reliable (coefficient omega = 0.72) and strongly correlates with the VLAT. Five visualization experts validated the Mini-VLAT items, yielding an average content validity ratio (CVR) of 0.6. We further validate Mini-VLAT by demonstrating a strong positive correlation between study participants’ Mini-VLAT scores and their aptitude for learning an unfamiliar visualization using a Parallel Coordinate Plot test. Overall, the Mini-VLAT items showed a similar pattern of validity and reliability as the 53-item VLAT. The results show that Mini-VLAT is a psychometrically sound and practical short measure of visualization literacy.

Presenter: Saugat Pandey
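For readers unfamiliar with the content validity ratio mentioned above, Lawshe's CVR for an item is (n_e - N/2) / (N/2), where n_e of N experts rate the item as essential; with five experts, an item rated essential by four of them yields (4 - 2.5) / 2.5 = 0.6. A tiny sketch:

```python
# Lawshe's content validity ratio (CVR) for a single test item.
def cvr(n_essential: int, n_experts: int) -> float:
    """CVR = (n_e - N/2) / (N/2)."""
    half = n_experts / 2
    return (n_essential - half) / half

print(cvr(4, 5))   # 0.6, e.g., 4 of 5 experts rate the item as essential
```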

Honorable Mention ChemoGraph: Interactive Visual Exploration of the Chemical Space Paper
Bharat Kale, Austin Clyde, Maoyuan Sun, Arvind Ramanathan, Rick Stevens, and Michael E. Papka

Abstract: Exploratory analysis of the chemical space is an important task in the field of cheminformatics. For example, in drug discovery research, chemists investigate sets of thousands of chemical compounds in order to identify novel yet structurally similar synthetic compounds to replace natural products. Manually exploring the chemical space inhabited by all possible molecules and chemical compounds is impractical, and therefore presents a challenge. To fill this gap, we present ChemoGraph, a novel visual analytics technique for interactively exploring related chemicals. In ChemoGraph, we formalize a chemical space as a hypergraph and apply novel machine learning models to compute related chemical compounds. It uses a database to find related compounds from a known space and a machine learning model to generate new ones, which helps enlarge the known space. Moreover, ChemoGraph highlights interactive features that support users in viewing, comparing, and organizing computationally identified related chemicals. With a drug discovery usage scenario and initial expert feedback from a case study, we demonstrate the usefulness of ChemoGraph.

Presenter: Bharat Kumar Kale

Honorable Mention A Fully Integrated Pipeline for Visual Carotid Morphology Analysis Paper
Pepe Eulzer, Fabienne von Deylen, Wei-Chan Hsu, Ralph Wickenhoefer, Carsten Klingner, and Kai Lawonn

Abstract: Analyzing stenoses of the internal carotids – local constrictions of the artery – is a critical clinical task in cardiovascular disease treatment and prevention. For this purpose, we propose a self-contained pipeline for the visual analysis of carotid artery geometries. The only inputs are computed tomography angiography (CTA) scans, which are already recorded in clinical routine. We show how integrated model extraction and visualization can help to efficiently detect stenoses and we provide means for automatic, highly accurate stenosis degree computation. We directly connect multiple sophisticated processing stages, including a neural prediction network for lumen and plaque segmentation and automatic global diameter computation. We enable interactive and retrospective user control over the processing stages. Our aims are to increase user trust by making the underlying data validatable on the fly, to decrease adoption costs by minimizing external dependencies, and to optimize scalability by streamlining the data processing. We use interactive visualizations for data inspection and adaption to guide the user through the processing stages. The framework was developed and evaluated in close collaboration with radiologists and neurologists. It has been used to extract and analyze over 100 carotid bifurcation geometries and is built with a modular architecture, available as an extendable open-source platform.

Presenter: Pepe Eulzer
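As background for the stenosis degree computation mentioned in the abstract, the NASCET criterion, used here as an assumed, simplified example rather than the paper's exact method, relates the narrowest lumen diameter to the normal distal diameter:

```python
# NASCET-style stenosis degree from two diameters (illustrative only; the
# paper's automatic computation works on the extracted 3D vessel geometry).
def stenosis_degree_percent(d_stenosis_mm: float, d_distal_mm: float) -> float:
    """Percent stenosis = (1 - d_stenosis / d_distal) * 100."""
    return (1.0 - d_stenosis_mm / d_distal_mm) * 100.0

print(stenosis_degree_percent(2.1, 6.0))   # 65.0, i.e., ~65% stenosis
```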

Best Short Paper Award Ceremony

Presenter: Ivan Viola

 
 
12:45 – 14:20 Lunch Break
Lobby
 
 
14:20 – 15:30 FP 2: Scalar and Vector Fields Fast Forward Video
Chair: Ingrid Hotz
Room 1A
Doppler Volume Rendering: A Dynamic, Piecewise Linear Spectral Representation for Visualizing Astrophysics Simulations Paper
Reem Alghamdi, Thomas Mueller, Alberto Jaspe-Villanueva, Markus Hadwiger, and Filip Sadlo

Abstract: We present a novel approach for rendering volumetric data including the Doppler effect of light. Similar to the acoustic Doppler effect, which is caused by relative motion between a sound emitter and an observer, light waves also experience compression or expansion when emitter and observer exhibit relative motion. We account for this by employing spectral volume rendering in an emission-absorption model, with the volumetric matter moving according to an accompanying vector field, and emitting and attenuating light at wavelengths subject to the Doppler effect. By introducing a novel piecewise linear representation of the involved light spectra, we achieve accurate volume rendering at interactive frame rates. We compare our technique to rendering with traditional point-based spectral representation, and demonstrate its utility using a simulation of galaxy formation.

Presenter: Reem Alghamdi

Memory-Efficient GPU Volume Path Tracing of AMR Data Using the Dual Mesh Paper
Stefan Zellmann, Qi Wu, Kwan-Liu Ma, and Ingo Wald

Abstract: A common way to render cell-centric adaptive mesh refinement (AMR) data is to compute the dual mesh and visualize that with a standard unstructured element renderer. While the dual mesh provides a high-quality interpolator, the memory requirements of the dual mesh data structure are significantly higher than those of the original grid, which prevents rendering very large data sets. We introduce a GPU-friendly data structure and a clustering algorithm that allow for efficient AMR dual mesh rendering with a competitive memory footprint. Fundamentally, any off-the-shelf unstructured element renderer running on GPUs could be extended to support our data structure just by adding a gridlet element type in addition to the standard tetrahedra, pyramids, wedges, and hexahedra supported by default. We integrated the data structure into a volumetric path tracer to compare it to various state-of-the-art unstructured element sampling methods. We show that our data structure easily competes with these methods in terms of rendering performance, but is much more memory-efficient.

Presenter: Stefan Zellmann

xOpat: eXplainable Open Pathology Analysis Tool Paper
Jiří Horák, Katarína Furmanová, Barbora Kozlikova, Tomáš Brázdil, Petr Holub, Martin Kačenga, Matej Gallo, Rudolf Nenutil, Jan Byška, and Vit Rusnak

Abstract: Histopathology research quickly evolves thanks to advances in whole slide imaging (WSI) and artificial intelligence (AI). However, existing WSI viewers are tailored either for clinical or research environments, but none suits both. This hinders the adoption of new methods and communication between the researchers and clinicians. The paper presents xOpat, an open-source, browser-based WSI viewer that addresses these problems. xOpat supports various data sources, such as tissue images, pathologists’ annotations, or additional data produced by AI models. Furthermore, it provides efficient rendering of multiple data layers, their visual representations, and tools for annotating and presenting findings. Thanks to its modular, protocol-agnostic, and extensible architecture, xOpat can be easily integrated into different environments and thus helps to bridge the gap between research and clinical practice. To demonstrate the utility of xOpat, we present three case studies, one conducted with a developer of AI algorithms for image segmentation and two with a research pathologist.

Presenter: Jiří Horák

 
 
14:20 – 15:30 FP 3: Methodology and Design Studies Fast Forward Video
Chair: Michael Sedlmair
Room 1BCD
Process and Pitfalls of Online Teaching and Learning with Design Study “Lite” Methodology: A Retrospective Analysis Paper
Uzma Haque Syeda, Cody Dunne, and Michelle A. Borkin

Abstract: Design studies are an integral method of visualization research with hundreds of instances in the literature. Although taught as a theory, the practical implementation of design studies is often excluded from visualization pedagogy due to the lengthy time commitments associated with such studies. Recent research has addressed this challenge and developed an expedited design study framework, the Design Study “Lite” Methodology (DSLM), which can implement design studies with novice students within just 14 weeks. The framework was developed and evaluated based on five semesters of in-person data visualization courses with 30 students or less and was implemented in conjunction with Service-Learning (S-L). With the growth and popularity of the data visualization field—and the teaching environment created by the COVID-19 pandemic—more academic institutions are offering visualization courses online. Therefore, in this paper, we strengthen and validate the epistemological foundations of the DSLM framework by testing its (1) adaptability to online learning environments and conditions and (2) scalability to larger classes with up to 57 students. We present two online implementations of the DSLM framework, with and without Service-Learning (S-L), to test the adaptability and scalability of the framework. We further demonstrate that the framework can be applied effectively without the S-L component. We reflect on our experience with the online DSLM implementations and contribute a detailed retrospective analysis using thematic analysis and grounded theory methods to draw valuable recommendations and guidelines for future applications of the framework. This work verifies that DSLM can be used successfully in online classes to teach design study methodology. Finally, we contribute novel additions to the DSLM framework to further enhance it for teaching and learning design studies in the classroom. The preprint and supplementary materials for this paper can be found at https://osf.io/6bjx5/.

Presenter: Uzma Haque Syeda

CGF Visual Exploration of Financial Data with Incremental Domain Knowledge Paper
Alessio Arleo, Christos Tsigkanos, Roger A. Leite, Schahram Dustdar, Silvia Miksch, and Johannes Sorger

Abstract: Modelling the dynamics of a growing financial environment is a complex task that requires domain knowledge, expertise and access to heterogeneous information types. Such information can stem from several sources at different scales, complicating the task of forming a holistic impression of the financial landscape, especially in terms of the economical relationships between firms. Bringing this scattered information into a common context is, therefore, an essential step in the process of obtaining meaningful insights about the state of an economy. In this paper, we present Sabrina 2.0, a Visual Analytics (VA) approach for exploring financial data across different scales, from individual firms up to nation-wide aggregate data. Our solution is coupled with a pipeline for the generation of firm-to-firm financial transaction networks, fusing information about individual firms with sector-to-sector transaction data and domain knowledge on macroscopic aspects of the economy. Each network can be created to have multiple instances to compare different scenarios. We collaborated with experts from finance and economy during the development of our VA solution, and evaluated our approach with seven domain experts across industry and academia through a qualitative insight-based evaluation. The analysis shows how Sabrina 2.0 enables the generation of insights, and how the incorporation of transaction models assists users in their exploration of a national economy.

TVCG Tasks and visualizations used for data profiling: A survey and interview study Paper
Roy A. Ruddle, James Cheshire, and Sara Johansson Fernstad

Abstract: The use of good-quality data to inform decision making is entirely dependent on robust processes to ensure it is fit for purpose. Such processes vary between organisations, and between those tasked with designing and following them. In this paper we report on a survey of 53 data analysts from many industry sectors, 24 of whom also participated in in-depth interviews, about computational and visual methods for characterizing data and investigating data quality. The paper makes contributions in two key areas. The first is to data science fundamentals, because our lists of data profiling tasks and visualization techniques are more comprehensive than those published elsewhere. The second concerns the application question “what does good profiling look like to those who routinely perform it?,” which we answer by highlighting the diversity of profiling tasks, unusual practice and exemplars of visualization, and recommendations about formalizing processes and creating rulebooks.

 
 
15:30 – 16:00 Coffee Break
Lobby
 
 
16:00 – 17:10 Dirk Bartz Prize Fast Forward Video
Chair: Tim Gerrits
Room 1A
Opening

Transdisciplinary Visualization of Aortic Dissections Paper
Gabriel Mistelbauer, Kathrin Baeumler, Domenico Mastrodicasa, Lewis Hahn, Antonio Pepe, Veit Sandfort, Virginia Hinostroza, Kai Ostendorf, Aaron Schroeder, Anna Sailer, Martin Willemink, Shannon Walters, Bernhard Preim, and Dominik Fleischmann

Abstract: Aortic dissection is a life-threatening condition caused by the abrupt formation of a secondary blood flow channel within the vessel wall. Patients surviving the acute phase remain at high risk for late complications, such as aneurysm formation and aortic rupture. The timing of these complications is variable, making long-term imaging surveillance crucial for aortic growth monitoring. Morphological characteristics of the aorta, its hemodynamics, and, ultimately, risk models impact treatment strategies. Providing such a wealth of information demands expertise across a broad spectrum to understand the complex interplay of these influencing factors. We present results of our longstanding transdisciplinary efforts to confront this challenge. Our team has identified four key disciplines, each requiring specific expertise overseen by radiology: lumen segmentation and landmark detection, risk predictors and inter-observer analysis, computational fluid dynamics simulations, and visualization and modeling. In each of these disciplines, visualization supports analysis and serves as communication medium between stakeholders, including patients. For each discipline, we summarize the work performed, the related work, and the results.

Presenter: Gabriel Mistelbauer

Visualizing Carotid Stenoses for Stroke Treatment and Prevention Paper
Pepe Eulzer, Kevin Richter, Anna Hundertmark, Monique Meuschke, Ralph Wickenhoefer, Carsten Klingner, and Kai Lawonn

Abstract: Analyzing carotid stenoses – potentially lethal constrictions of the brain-supplying arteries – is a critical task in clinical stroke treatment and prevention. Determining the ideal type of treatment and point for surgical intervention to minimize stroke risk is considerably challenging. We propose a collection of visual exploration tools to advance the assessment of carotid stenoses in clinical applications and research on stenosis formation. We developed methods to analyze the internal blood flow, anatomical context, vessel wall composition, and to automatically and reliably classify stenosis candidates. We do not presume already segmented and extracted surface meshes but integrate streamlined model extraction and pre-processing along with the result visualizations into a single framework. We connect multiple sophisticated processing stages in one user interface, including a neural prediction network for vessel segmentation and automatic global diameter computation. We enable retrospective user control over each processing stage, greatly simplifying error detection and correction. The framework was developed and evaluated in multiple iterative user studies, involving a group of eight specialists working in stroke care (radiologists and neurologists). It is publicly available, along with a database of over 100 carotid bifurcation geometries that were extracted with the framework from computed tomography data. Further, it is a vital part of multiple ongoing studies investigating stenosis pathophysiology, stroke risk, and the necessity for surgical intervention.

Presenter: Pepe Eulzer

Visual Exploration, Analysis, and Communication of Physiological Processes Paper
Laura Garrison and Stefan Bruckner

Abstract: Describing the myriad biological processes occurring in living beings over time, the science of physiology is complex and critical to our understanding of how life works. Physiology spans many spatio-temporal scales to combine and bridge from the basic sciences (biology, physics, and chemistry) to medicine. Recent years have seen an explosion of new and finer-grained experimental and acquisition methods to characterize these data. The volume and complexity of these data necessitate effective visualizations to complement standard analysis practice. Visualization approaches must carefully consider and be adaptable to the user’s main task, be it exploratory, analytical, or communication-oriented. This research contributes to the areas of theory, empirical findings, methods, applications, and research replicability in visualizing physiology. Our overarching theme is the cross-disciplinary application of medical illustration and visualization techniques to address challenges in exploring, analyzing, and communicating aspects of human physiology to audiences with differing expertise.

Presenter: Laura Garrison

Awards Ceremony
 
 
16:00 – 17:30 FP 4: Graphs and Hypergraphs Fast Forward Video
Chair: Andreas Kerren
Room 1BCD
CGF Evonne: A Visual Tool for Explaining Reasoning with OWL Ontologies and Supporting Interactive Debugging Paper
J. Méndez, C. Alrabbaa, P. Koopmann, R. Langner, F. Baader, and R. Dachselt

Abstract: OWL is a powerful language to formalize terminologies in an ontology. Its main strength lies in its foundation on description logics, allowing systems to automatically deduce implicit information through logical reasoning. However, since ontologies are often complex, understanding the outcome of the reasoning process is not always straightforward. Unlike already existing tools for exploring ontologies, our visualization tool Evonne is tailored towards explaining logical consequences. In addition, it supports the debugging of unwanted consequences and allows for an interactive comparison of the impact of removing statements from the ontology. Our visual approach combines (1) specialized views for the explanation of logical consequences and the structure of the ontology, (2) employing multiple layout modes for iteratively exploring explanations, (3) detailed explanations of specific reasoning steps, (4) cross-view highlighting and colour coding of the visualization components, (5) features for dealing with visual complexity and (6) comparison and exploration of possible fixes to the ontology. We evaluated Evonne in a qualitative study with 16 experts in logics, and their positive feedback confirms the value of our concepts for explaining reasoning and debugging ontologies.

CGF ComBiNet: Visual Query and Comparison of Bipartite Multivariate Dynamic Social Networks Paper
A. Pister, C. Prieur, and J.-D. Fekete

Abstract: We present ComBiNet, a visualization, query, and comparison system for exploring bipartite multivariate dynamic social networks. Historians and sociologists study social networks constructed from textual sources mentioning events related to people, such as marriage acts, birth certificates, and contracts. We model this type of data using bipartite multivariate dynamic networks to maintain a representation faithful to the original sources while not too complex. Relying on this data model, ComBiNet allows exploring networks using both visual and textual queries using the Cypher language, the two being synchronized to specify queries using the most suitable modality; simple queries are easy to express visually and can be refined textually when they become complex. These queries are used for applying topological and attribute-based selection on the network. Query results are visualized in the context of the whole network and over a geographical map for geolocalized entities. We also present the design of our interaction techniques for querying social networks to visually compare the selections in terms of topology, measures, and attribute distributions. We validate the query and comparison systems by showing how they have been used to answer historical questions and by explaining how they have been improved through a usability study conducted with historians.

CGF Faster Edge-Path Bundling Through Graph Spanners Paper
Markus Wallinger, Daniel Archambault, David Auber, Martin Nöllenburg, and Jaakko Peltonen

Abstract: Edge-Path bundling is a recent edge bundling approach that does not incur ambiguities caused by bundling disconnected edges together. Although the approach produces less ambiguous bundlings, it suffers from high computational cost. In this paper, we present a new Edge-Path bundling approach that increases the computational speed of the algorithm without reducing the quality of the bundling. First, we demonstrate that biconnected components can be processed separately in an Edge-Path bundling of a graph without changing the result. Then, we present a new edge bundling algorithm that is based on observing and exploiting a strong relationship between Edge-Path bundling and graph spanners. Although the worst case complexity of the approach is the same as of the original Edge-Path bundling algorithm, we conduct experiments to demonstrate that the new approach is 5–256 times faster than Edge-Path bundling depending on the dataset, which brings its practical running time more in line with traditional edge bundling algorithms.
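To illustrate the graph-spanner ingredient named in the abstract, the following sketch builds a textbook greedy multiplicative t-spanner (an assumed, generic construction, not the paper's bundling algorithm): an edge is kept only if the spanner built so far cannot already connect its endpoints within a stretch factor t.

```python
# Hedged sketch: greedy multiplicative t-spanner of a weighted graph.
# This is the textbook greedy construction, not the paper's Edge-Path bundling.
import networkx as nx

def greedy_spanner(G: nx.Graph, t: float) -> nx.Graph:
    spanner = nx.Graph()
    spanner.add_nodes_from(G.nodes)
    # Process edges in order of increasing weight.
    for u, v, w in sorted(G.edges(data="weight", default=1.0), key=lambda e: e[2]):
        try:
            d = nx.shortest_path_length(spanner, u, v, weight="weight")
        except nx.NetworkXNoPath:
            d = float("inf")
        if d > t * w:                   # keep the edge only if it is needed
            spanner.add_edge(u, v, weight=w)
    return spanner

G = nx.random_geometric_graph(150, 0.15, seed=0)
for u, v in G.edges:                    # Euclidean edge weights from positions
    pu, pv = G.nodes[u]["pos"], G.nodes[v]["pos"]
    G[u][v]["weight"] = ((pu[0] - pv[0])**2 + (pu[1] - pv[1])**2) ** 0.5

S = greedy_spanner(G, t=2.0)
print(G.number_of_edges(), "->", S.number_of_edges())
```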

RectEuler: Visualizing Intersecting Sets using Rectangles Paper
Patrick Paetzold, Rebecca Kehlbeck, Hendrik Strobelt, Yumeng Xue, Sabine Storandt, and Oliver Deussen

Abstract: Euler diagrams are a popular technique to visualize set-typed data. However, creating diagrams using simple shapes remains a challenging problem for many complex, real-life datasets. To solve this, we propose RectEuler: a flexible, fully automatic method using rectangles to create Euler-like diagrams. We use an efficient mixed-integer optimization scheme to place set labels and element representatives (e.g., text or images) in conjunction with rectangles describing the sets. By defining appropriate constraints, we adhere to well-formedness properties and aesthetic considerations. If a diagram cannot be created within a reasonable time or at all, we iteratively split it into multiple components until a drawable solution is found. Redundant encoding of the set membership using dots and set lines improves the readability of the diagram. Our web tool lets users see how the layout changes throughout the optimization process and provides interactive explanations. For evaluation, we perform quantitative and qualitative analysis across different datasets and compare our method to state-of-the-art Euler diagram generation methods.

Presenter: Patrick Paetzold

 
 
17:30 – 19:30 Poster Viewing and Job Fair more information
Lobby
A Conversational Data Visualisation Platform for Hierarchical Multivariate Data Paper
Ecem Kavaz, Anna Puig, Inmaculada Rodríguez, and Eduard Vives

Abstract: This paper presents a novel data visualisation platform that integrates both direct manipulation and conversational interaction styles for analysing hierarchical multivariate data. The proposed architecture is based on the Rasa conversational AI framework. We show its full potential in a real-life case study for analysing hate speech in online news.

A Dashboard for Interactive Convolutional Neural Network Training And Validation Through Saliency Maps Paper
Tim Cech, Furkan Simsek, Willy Scheibel, and Jürgen Döllner

Abstract: Quali-quantitative methods provide ways for interrogating Convolutional Neural Networks (CNN). To this end, we propose a dashboard using a quali-quantitative method based on quantitative metrics and saliency maps. By these means, a user can discover patterns during the training of a CNN. With this, they can adapt the training hyperparameters of the model, obtaining a CNN that has learned the patterns desired by the user, and discard CNNs that have learned undesirable patterns. This improves users’ agency over the model training process.

A tour through the zoo of scented widgets Paper
Vasile Ciorna, Nicolas Medoc, Guy Melançon, Frank Petry, and Mohammad Ghoniem

Abstract: Software developers can choose among many popular widget toolkits to build graphical user interfaces quickly. Widgets like sliders, buttons and scrollbars are widespread and well-understood by the public. They allow users to interact with the software, to trigger operations or to set parameters. For a long time, it was also recognized that widgets could be enhanced to provide a better user experience. Embedded visualizations, also known as scents, could be added to standard widgets to enhance their capabilities. In this article, we survey the scientific literature concerning scented widgets. We propose a two-level taxonomy structured around four main dimensions: Purpose, Widget Type, Number of Scents and Scent Type, and provide related examples of scented widgets. We also discuss our current thoughts about the scope of this survey, and call for feedback to further improve and extend the proposed taxonomy.

An initial visual analysis of German city dashboards Paper
Christoph Huber, Till Nagel, and Heiner Stuckenschmidt

Abstract: City dashboards are powerful tools for quickly understanding various urban phenomena through visualizing urban data using various techniques. In this paper, we investigate the common data sets used, and the most frequently employed visualization techniques in city dashboards. We reviewed 16 publicly available dashboards from 42 cities that are part of German smart city programs and have a high level of digitization. Through analysis of the visualization techniques used, we present our results visually and discuss our findings.

CitadelPolice: An Interactive Visualization Environment for Scenario Testing on Criminal Networks Paper
Liza Anna Sofie Roelofsen, Robert G. Belleman, Miles van der Lely, Frederike Oetker, and Rick Quax

Abstract: Criminal networks have proven to be highly resilient against law enforcement interventions. This resiliency has driven researchers to investigate these networks further. However, the obtained insights reaching law enforcement agencies are generally highly case-dependent or extremely general. Therefore, CitadelPolice aims to provide an environment for visualizing criminal network models on a comprehensive and interactive dashboard. The main advantage of CitadelPolice is that it allows law enforcement to independently test specific scenarios and discover the most effective disruption strategy before deploying it. To achieve this, we used a computational network model based on collaboration with and data from the Dutch Police Force, named the Criminal Cocaine Replacement Model and implemented this on a web-based graph visualization and simulation tool named Citadel. Using this, we can interactively visualize the network while running simulations. To test the effectiveness of the network visualization and implementation of the model, we performed sequential usability testing and compared the results over time.

CohExplore: Visually Supporting Students in Exploring Text Cohesion Paper
Carina Liebers, Shivam Agarwal, and Fabian Beck

Abstract: A cohesive text allows readers to follow the described ideas and events. Exploring cohesion in text might aid students in enhancing their academic writing. We introduce CohExplore, which promotes exploring and reflecting on the cohesion of a given text by visualizing computed cohesion-related metrics on an overview and a detailed level. Detected topics are color-coded, semantic similarity is shown via lines, while connectives and co-references in a paragraph are encoded using text decoration. Demonstrating the system, we share insights about a student-authored text.

Comparative Visualization of Longitudinal 24-hour Ambulatory Blood Pressure Measurements in Pediatric Patients with Chronic Kidney Disease Paper
Mahmut Özmen, Mohamed Yaseen Jabarulla, Carl Robert Grabitz, Anette Melk, Elke Wühl, and Steffen Oeltze-Jafra

Abstract: Pediatric chronic kidney disease (CKD) increases the risk of cardiovascular disease, stroke, and other life-threatening conditions. Monitoring blood pressure in CKD patients is crucial to managing these risks. 24-hour ambulatory blood pressure monitoring (ABPM) is recommended for its comprehensive and accurate assessment of blood pressure over 24 hours. Analyzing and comparing 24-hour ABPM data of multiple diagnostic visits is a challenging task. Traditional methods involve comparing individual visits using paper print-outs, which is time-consuming and lacks a systematic overview of deviations over time. In this work, we present a dashboard visualization that allows clinicians (i) to assess the evolution of ABPM data over multiple diagnostic visits, (ii) to compare ABPM data of CKD patients with reference data of a healthy cohort, and (iii) to perform a detailed intra-individual comparison of the ABPM data acquired at two subsequent diagnostic visits. We present a case study of a patient with mild-to-moderate-stage CKD to demonstrate how our dashboard can assist clinicians in analyzing ABPM data, and we collect the feedback of three pediatricians.

Constructing Hierarchical Continuity in Hilbert & Moore Treemaps Paper
Willy Scheibel and Jürgen Döllner

Abstract: The Hilbert and Moore treemap layout algorithms are based on the space-filling Hilbert and Moore curves, respectively, to map tree-structured datasets to a 2D treemap layout. Considering multiple snapshots of a time-variant dataset, one of the design goals for Hilbert and Moore treemaps is layout stability, i.e., low changes in the layout for low changes in the underlying tree-structured data. For this, their underlying space-filling curve is expected to be continuous across all nodes and hierarchy levels, which has to be considered throughout the layouting process. We propose optimizations to subdivision templates and their orientation, and discuss the continuity of the underlying space-filling curve. We show real-world examples of Hilbert and Moore treemaps for small and large datasets with continuous space-filling curves, allowing for improved layout stability.
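For reference, the index-to-coordinate mapping of the Hilbert curve that such layouts build on can be written compactly; the sketch below is the standard conversion (a generic illustration, not the authors' treemap code):

```python
# Standard Hilbert curve index-to-coordinate conversion on a 2^order x 2^order grid.
def hilbert_d2xy(order, d):
    """Map index d along the Hilbert curve to grid coordinates (x, y)."""
    x = y = 0
    t = d
    s = 1
    n = 1 << order
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                     # rotate the quadrant if necessary
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# First few curve positions on a 4x4 grid (order 2): adjacent cells throughout.
print([hilbert_d2xy(2, d) for d in range(16)])
```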

Explorative Study on Semantically Resonant Colors for Combinations of Categories with Application to Meteorological Data Paper
Laura Pelchmann, Sebastian Bremm, Kerstin Ebell, and Tatiana von Landesberger

Abstract: We present an exploratory study of semantically resonant colors for combinations of categories. The goal is to support color selection of multi-labeled classes of classified data. We asked participants to assign colors to different categories in the meteorological domain and then to their combinations. Our results show that the colors chosen for the combinations are related to the colors for the individual categories. We also found indications that people tend to prefer darker color values for combinations of categories. Our results can be used to color code meteorological data.

Interaction Tasks for Explainable Recommender Systems Paper
Ibrahim Al-Hazwani, Turki Alahmadi, Mennatallah El-Assady, Kathrin Wardatzky, Oana Inel, and Jürgen Bernard

Abstract: In the modern web experience, users interact with various types of recommender systems. In this literature study, we investigate the role of interaction in explainable recommender systems using 27 relevant papers from recommender systems, human-computer interaction, and visualization fields. We structure interaction approaches into 1) the task, 2) the interaction intent, 3) the interaction technique, and 4) the interaction effect on explainable recommender systems. We present a preliminary interaction taxonomy for designers and developers to improve the interaction design of explainable recommender systems. Findings based on exploiting the descriptive power of the taxonomy emphasize the importance of interaction in creating effective and user-friendly explainable recommender systems.

MILADY (Matrix+Linear Diagrams): Visual Exploration and Edition of Multivariate Graphs for Computer Networks Management Paper
Mathieu Guglielmino, Francesco Bronzino, Arnaud Sallaberry, and Sébastien Monnet

Abstract: This poster introduces MILADY (Matrix+Linear Diagram), a visual method for exploring and editing multivariate graphs from a computer networks perspective. Existing methods usually require multiple views, but our integrated approach enables users to visualize and edit both aspects with drag gestures and in a single view. We demonstrate the usefulness of our method for computer networks management.

Multi-Criteria Optimization for Automatic Dashboard Design Paper
Jiwon Choi and Jaemin Jo

Abstract: We present Gleaner, an automatic dashboard design system that optimizes the design in terms of four design criteria, namely Specificity, Interestingness, Diversity, and Coverage. With these criteria, Gleaner not only optimizes for the expressiveness and interestingness of a single visualization but also improves the diversity and coverage of the dashboard as a whole. Users are able to express their intent for desired dashboard design to Gleaner, including specifying preferred or constrained attributes and adjusting the weight of the Oracle. This flexibility in expressing intent enables Gleaner to design dashboards that are well-aligned with the user’s own analytic goals leading to more effective and efficient data exploration and analysis.

ORD-Xplore: Bridging Open Research Data Collections through Modality Abstractions Paper
Madhav Sachdeva, Michael Blum, Yann Stricker, Tobias Schreck, Rudolf Mumenthaler, and Jürgen Bernard

Abstract: We present ORD-Xplore, an approach to bridge gaps between digital editions, which represent valuable collections of multiple digitized research artifacts. However, digital editions often co-exist in isolation, making it difficult for researchers to access, find, and re-use open research data from multiple digital editions. An ultimate goal is to unify library services across editions, even for heterogeneous editions. In ORD-Xplore, we utilize abstraction methods from visualization research to help digital librarians identify unifying data modalities, as one important step towards the standardization of heterogeneous digital editions.

Project iMuse: an Interactive Visualizer of Lyrical Sentiment Paper
Jack Anstey, Anna Lu, Ruohe Wang, and Zack Zhang

Abstract: Our interactive visualization, Project iMuse, provides the unique ability to view the most used words in popular music over the past 50+ years in conjunction with how they were used in songs through sentiment analysis. To aid in more detailed analyses, Project iMuse has the ability to dynamically consider a variety of user-defined subsets. These subsets are created through user interaction, which includes changing the range of years considered and selecting particular word types.

PTMVision: An Interactive Visualization Platform for Post-Translational Modifications of Proteins Paper
Simon Tim Hackl, Caroline Jachmann, Theresa Harbig, Mathias Witte Paz, and Kay Nieselt

Abstract: In recent years, proteins have been shown to carry many more post-translational modifications (PTMs) than originally thought. The visualization of proteins along with their PTMs facilitates exploration and understanding of the effects of these PTMs on the protein structure, function, and interactions with other proteins. Therefore, we developed PTMVision, an interactive web-based visualization. We combine information about PTMs in the primary sequence with a two-dimensional representation of the protein’s tertiary structure using a presence-absence map and a modified contact map that relates PTMs with the spatial arrangement of proteins without the need of a 3D structure. The prototype of PTMVision is part of the TueVis Visualization Server and is available at https://ptmvision-tuevis.cs.uni-tuebingen.de/.

Putting Annotations to the Test Paper
Franziska Becker and Thomas Ertl

Abstract: When users work with interactive visualization systems, they get to see more accessible representations of raw data and interact with these, e.g., by filtering the data or modifying visualization parameters like color. Internal representations such as hunches about trends, outliers or data points of interest, relationships, and more are usually not visualized and integrated in systems, i.e., they are not externalized. In addition, how externalizations in visualization systems can affect users in terms of memory, post-analysis recall, speed, or analysis quality is not yet completely understood. We present a visualization-agnostic externalization framework that lets users annotate visualizations, automatically connect them to related data, and store them for later retrieval. In addition, we conducted a pilot study to test the framework’s usability and users’ recall of exploratory analysis results. In two tasks, one without and one with annotation features available, we asked participants to answer a question with the help of visualizations and report their findings with concrete examples afterwards. Qualitative analysis of the summaries showed that there are only minor differences in terms of detail or completeness, which we suspect is due to the short task time and the consequently more shallow analyses made by participants. We discuss how to improve our framework’s usability and modify our study design for future research to gain more insight into externalization effects on post-analysis recall.

TVCG Scanpath Prediction on Information Visualisations Paper
You Wang, Mihai Bâce, and Andreas Bulling

Abstract: We propose the Unified Model of Saliency and Scanpaths (UMSS), a model that learns to predict multi-duration saliency and scanpaths (i.e., sequences of eye fixations) on information visualisations. Although scanpaths provide rich information about the importance of different visualisation elements during the visual exploration process, prior work has been limited to predicting aggregated attention statistics, such as visual saliency. We present in-depth analyses of gaze behaviour for different information visualisation elements (e.g., Title, Label, Data) on the popular MASSVIS dataset. We show that while, overall, gaze patterns are surprisingly consistent across visualisations and viewers, there are also structural differences in gaze dynamics for different elements. Informed by our analyses, UMSS first predicts multi-duration element-level saliency maps, then probabilistically samples scanpaths from them. Extensive experiments on MASSVIS show that our method consistently outperforms state-of-the-art methods with respect to several widely used scanpath and saliency evaluation metrics. Our method achieves a relative improvement in sequence score of 11.5% for scanpath prediction, and a relative improvement in Pearson correlation coefficient of up to 23.6%. These results are auspicious and point towards richer user models and simulations of visual attention on visualisations without the need for any eye tracking equipment.

Supporting Medical Personnel at Analyzing Chronic Lung Diseases with Interactive Visualizations Paper
Rene Pascal Warnking, Jan Scheer, Frederik Trinkmann, Fabian Siegel, and Till Nagel

Abstract: We present a visualization system for medical practitioners to analyze lung function data collected at different points in time. In particular, our approach addresses a problem practitioners encounter in their daily work: to access the relevant data, they currently have to consult several separate text-based documents, whereas our system provides the same data in a single interface. To test the suitability of our system, we conducted a formative study in which participants used our system to answer both simple and complex questions designed beforehand in collaboration with a domain expert. Our results indicate that our target users can easily work with our system and use it to answer both types of questions.

Symbolic Event Visualization for Analyzing User Input and Behavior of Augmented Reality Sessions Paper
Solveig Rabsahl, Thomas Satzger, Snehanjali Kalamkar, Jens Grubert, and Fabian Beck

Abstract: Interacting with augmented reality (AR) systems involves different domains and is more complex than interacting with traditional user interfaces. To analyze AR interactions, we suggest an event visualization approach that discerns different event layers on a timeline. It is based on symbolic event representations of typical user actions, such as physical movement or interaction with scene objects. Although the approach focuses on the Microsoft HoloLens 2, it can generalize to similar environments and provide a basis for developing a more comprehensive visual analytics and annotation solution for AR usage sessions.

The Challenge of Branch-Aware Data Manifold Exploration Paper
Daniël M. Bot, Jannes Peeters, and Jan Aerts

Abstract: Branches within clusters can represent meaningful subgroups that should be explored. In general, automatically detecting branching structures within clusters requires analysing the distances between data points and a centrality metric, resulting in a complex two-dimensional hierarchy. This poster describes abstractions for this data and formulates requirements for a visualisation, building towards a comprehensive branch-aware cluster exploration interface.

Towards Visualisation Specifications from Multi-Lingual Natural Language Queries using Large Language Models Paper
Maeve Hutchinson, Pranava Madhyastha, Aidan Slingsby, and Radu Jianu

Abstract: In this paper, we present an empirical demonstration of a prompt-based learning approach, which utilizes pre-trained Large Language Models to generate visualization specifications from user queries expressed in natural language. We showcase the approach’s flexibility in generating valid specifications in languages other than English (e.g., Spanish) despite lacking access to any training samples. Our findings represent the first steps towards the development of multilingual interfaces for data visualization that transcend English-centric systems, making them more accessible to a wider range of users.

Unveiling the Dispersal of Historical Books from Religious Orders Paper
Yiwen Xing, Dengyi Yan, Cristina Dondi, Rita Borgo, and Alfie Abdul-Rahman

Abstract: In this paper, we introduce a visualization prototype designed to assist historians in exploring the dispersal of books from religious orders throughout Europe during the sixteenth century and beyond. The prototype is the result of a collaboration between visualization researchers and a historical book researcher, aiming to apply visualization techniques to address real-world domain challenges. Over two months, we engaged in an intensive collaboration with the domain expert to analyze domain issues and requirements and subsequently developed a prototype featuring two interfaces. Weekly discussions with the domain expert guided the design and the ongoing evaluation of the prototype. Although still in an early stage, the prototype shows promise for further enhancement and scaling. Future efforts will target systematic evaluations of usability and practicality.

VESPA: VTK Enhanced with Surface Processing Algorithms Paper
Charles Gueunet and Tiffany L Chhim

Abstract: This work introduces the VESPA project, a bridge between geometry processing with the Computational Geometry Algorithms Library (CGAL) and scientific visualization with the Visualization Toolkit (VTK) and ParaView. After a brief description of these tools, we motivate the use of VESPA through the example of a full processing pipeline detailing the construction of a mold from an initial 3D surface model. This paper illustrates the use of several robust geometry operations as well as the benefits of added interactivity and visualization. As an open-source project, VESPA is already publicly available and open to contributions.

Visual linkage and interactive features of Evidente for an enhanced analysis of SNP-based phylogenies Paper
Mathias Witte Paz, Theresa Harbig, Dolores Varga, Eileen Kränzle, and Kay Nieselt

Abstract: Phylogenetic trees of a set of bacterial strains are often used to analyze their evolutionary relationships and they are commonly based on genomic features, such as single nucleotide polymorphisms (SNPs). Evidente – a recently published tool – provides visual and analytical linkage across a phylogenetic tree, SNP data and metadata, and integrates them into one interactive visual analytics platform. In contrast to other approaches, Evidente shows how SNPs agree with the tree structure. Evidente is part of the TueVis server (https://evidente-tuevis.cs.uni-tuebingen.de/). Here, we give an overview of the tasks supported by Evidente. The version of Evidente described in the publication can seamlessly visualize up to 150 strains. We thus introduce further enhancements for larger trees, such as data-driven aggregation and semantic zooming.

Visual Planning and Analysis of Latin Formation Dance Patterns Paper
Samuel Beck, Nina Doerr, Fabian Schmierer, Michael Sedlmair, and Steffen Koch

Abstract: Latin formation dancing is a team sport in which up to eight couples perform a coordinated choreography. A central element is the set of patterns formed by the dancers on the dance floor and the transitions between them. Planning and practicing patterns are among the most challenging aspects of Latin formation dancing. Interactive visualization approaches can support instructors as well as dancers in tackling these challenges. We present a web-based visualization prototype that assists with the planning, training, and analysis of patterns. Its design was iteratively developed with the involvement of experienced formation instructors. The interface offers views of the dancers’ positions and orientations, pattern transitions, poses, and analytical information such as dance floor utilization and movement distances. In a first expert study with formation instructors, the prototype was well received.

Visualizing Pairwise Feature Interactions in Neural Additive Models Paper
Christian Alexander Steinparz, Andreas Hinterreiter, and Marc Streit

Abstract: We present an approach for incorporating feature interactions into Neural Additive Models (NAMs), building upon existing work in this area, to enhance their predictive capabilities while maintaining interpretability. Our contribution focuses on the visual exploration and management of the increased number of feature maps resulting from the addition of pairwise feature combinations to NAMs. This method allows for effectively visualizing individual and pairwise feature interactions using line plots and heatmaps, respectively. To address the potential explosion in the number of feature maps, we apply different scoring functions to compute the importance of a feature map and then filter and sort them based on their importance. The proposed interactive dashboard effectively manages large sets of feature maps, while preserving the white-box properties of NAMs.
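The additive structure described above can be written as f(x) = b + sum_i f_i(x_i) + sum_{i<j} f_ij(x_i, x_j), where each term contributes one feature map. The sketch below illustrates that structure and one possible importance score (variance of a term's contribution) used for filtering and sorting; it is a schematic assumption, not the paper's model or scoring functions, and the shape functions here are arbitrary stand-ins for the small neural networks a NAM would learn.

```python
import numpy as np

def nam_predict(x, bias, unary, pairwise):
    """Additive prediction: bias + univariate terms + pairwise terms."""
    y = bias + sum(f(x[i]) for i, f in unary.items())
    y += sum(f(x[i], x[j]) for (i, j), f in pairwise.items())
    return y

def importance(shape_fn, samples):
    """One possible scoring function: variance of the term's contribution
    over sampled inputs; low-variance feature maps could be filtered out."""
    return float(np.var([shape_fn(*s) for s in samples]))

# Hypothetical two-feature model with one pairwise interaction term.
unary = {0: lambda a: 0.5 * a, 1: lambda b: np.sin(b)}
pairwise = {(0, 1): lambda a, b: 0.1 * a * b}
prediction = nam_predict(np.array([1.2, 0.7]), 0.0, unary, pairwise)
samples = [(np.random.rand(), np.random.rand()) for _ in range(200)]
scores = {key: importance(f, samples) for key, f in pairwise.items()}
```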

Visualizing Sunlight Radiation in the Arctic Ocean Paper
Esben Bay Sørensen, Karl Attard, Jean-Pierre Gattuso, Jakob Kusnick, and Stefan Jänicke

Abstract: The Arctic is experiencing dramatic environmental transformation due to rising temperatures and melting ice, which are affecting its environment, wildlife, and human communities. Remote sensing technologies (e.g., satellites) are increasingly being used to understand environmental change in the remote and understudied Arctic Ocean across broad spatial and temporal scales, generating vast data sets that require interactive visualization to be dynamically explored. We present a prototype visualization that aggregates data at different zoom levels to allow exploring a 200 GB data set on sunlight in the Arctic Ocean, consisting of monthly time series (1998 to 2018) of coastal pixels with photosynthetically available radiation (PAR), the light attenuation coefficient (KPAR), and PAR estimated at the seafloor (PARBOTTOM). Our main example, the analysis of trends in sunlight radiation levels along the west coast of Greenland, demonstrates the tool’s value for marine biologists by giving them a concise and interactive overview of sunlight radiation levels, which allows them to study potential impacts on the Arctic ecosystem.
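The “aggregation at different zoom levels” mentioned above can be pictured as spatial block averaging: coarse zoom levels load block means instead of every coastal pixel. The sketch below shows this idea on a placeholder (time, lat, lon) array; the array sizes, variable names, and averaging scheme are illustrative assumptions, not the prototype’s implementation.

```python
import numpy as np

def aggregate_for_zoom(values, factor):
    """Average a (time, lat, lon) array over factor x factor spatial blocks,
    so a zoomed-out view needs far fewer values. NaN cells (e.g., land)
    are ignored by the mean."""
    t, h, w = values.shape
    h2, w2 = h - h % factor, w - w % factor          # trim to full blocks
    blocks = values[:, :h2, :w2].reshape(t, h2 // factor, factor, w2 // factor, factor)
    return np.nanmean(blocks, axis=(2, 4))

monthly_par = np.random.rand(24, 256, 512)            # placeholder monthly PAR grid
overview = aggregate_for_zoom(monthly_par, factor=8)  # coarse level for zoomed-out views
```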

WebGPU for Scalable Client-Side Aggregate Visualization Paper
Gerald Kimmersdorfer, Dominik Wolf, and Manuela Waldner

Abstract: WebGPU is a new graphics API that provides compute shaders for general-purpose GPU computation in web browsers. We demonstrate the potential of this new technology for scalable information visualization by showing how to filter and aggregate a spatio-temporal dataset with millions of temperature measurements for real-time interactive exploration of climate change.

 
 
 
 

| Wednesday, 14 June, 2023

08:30 – 17:00 Registration
09:00 – 10:30 STAR 1: Parameter Spaces and Chart Corpora Fast Forward Video
Chair: Menna El-Assady
Room 1A
The State of the Art in Creating Visualization Corpora for Automated Chart Analysis Paper
Chen Chen and Zhicheng Liu

Abstract: We present a state-of-the-art report on visualization corpora in automated chart analysis research. We survey 56 papers that created or used a visualization corpus as the input of their research techniques or systems. Based on a multi-level task taxonomy that identifies the goal, method, and outputs of automated chart analysis, we examine the property space of existing chart corpora along five dimensions: format, scope, collection method, annotations, and diversity. Through the survey, we summarize common patterns and practices of creating chart corpora, identify research gaps and opportunities, and discuss the desired properties of future benchmark corpora and the required tools to create them.

CGF Visual Parameter Space Exploration in Time and Space Paper
Nikolaus Piccolotto, Markus Bögl, and Silvia Miksch

Abstract: Computational models, such as simulations, are central to a wide range of fields in science and industry. Those models take input parameters and produce some output. To fully exploit their utility, relations between parameters and outputs must be understood. These include, for example, which parameter setting produces the best result (optimization) or which ranges of parameter settings produce a wide variety of results (sensitivity). Such tasks are often difficult to achieve for various reasons, for example, the size of the parameter space, and are therefore commonly supported with visual analytics. In this paper, we survey visual parameter space exploration (VPSE) systems involving spatial and temporal data. We focus on interactive visualizations and user interfaces. Through thematic analysis of the surveyed papers, we identify common workflow steps and approaches to support them. We also identify topics for future work that will help enable VPSE on a greater variety of computational models.

 
 
09:00 – 10:30 FP 5: Cognition, Perception and Stories Fast Forward Video
Chair: Alexandra Diehl
Room 1BCD
Data stories of water: Studying the communicative role of data visualizations within long-form journalism Paper
Manuela Garreton, Francesca Morini, Daniela Paz Moyano, Gianna-Carina Grün, Denis Parra, and Marian Dörk

Abstract: We present a methodology for making sense of the communicative role of data visualizations in journalistic storytelling and share findings from surveying water-related data stories. Data stories are a genre of long-form journalism that integrates text, data visualization, and other visual expressions (e.g., photographs, illustrations, videos) for the purpose of data-driven storytelling. In the last decade, a considerable number of data stories about a wide range of topics have been published worldwide. Authors use a variety of techniques to make complex phenomena comprehensible and use visualizations as communicative devices that shape the understanding of a given topic. Despite the popularity of data stories, we, as scholars, still lack a methodological framework for assessing the communicative role of visualizations in data stories. To this end, we draw from data journalism, visual culture, and multimodality studies to propose an interpretative framework in six stages. The process begins with the analysis of content blocks and framing elements and ends with the identification of dimensions, patterns, and relationships between textual and visual elements. The framework is put to the test by analyzing 17 data stories about water-related issues. Our observations from the survey illustrate how data visualizations can shape the framing of complex topics.

Presenter: Manuela Garreton

Belief Decay or Persistence? A Mixed-method Study on Belief Movement Over Time Paper
Shrey Gupta, Alireza Karduni, and Emily Wall

Abstract: When individuals encounter new information (data), that information is incorporated with their existing beliefs (prior) to form a new belief (posterior) in a process referred to as belief updating. While most studies on rational belief updating in visual data analysis elicit beliefs immediately after data is shown, we posit that there may be critical movement in an individual’s beliefs between elicitation immediately after data is shown and elicitation after a temporal delay (e.g., due to forgetfulness or weak incorporation of the data). Our paper investigates the hypothesis that posterior beliefs elicited after a time interval will “decay” back towards the prior beliefs compared to the posterior beliefs elicited immediately after new data is presented. In this study, we recruit 101 participants to complete three tasks where beliefs are elicited immediately after seeing new data and again after a brief distractor task. We conduct (1) a quantitative analysis of the results to understand if there are any systematic differences in beliefs elicited immediately after seeing new data or after a distractor task and (2) a qualitative analysis of participants’ reflections on the reasons for their belief update. While we find no statistically significant global trends across the participants’ beliefs elicited immediately vs. after the delay, the qualitative analysis provides rich insight into the reasons for an individual’s belief movement across 9 prototypical scenarios, which include (i) decay of beliefs as a result of either forgetting the information shown or strongly held prior beliefs, (ii) strengthening of confidence in updated beliefs by positively integrating the new data, and (iii) maintaining a consistently updated belief over time, among others. These results can guide subsequent experiments to disambiguate when and by what mechanism new data is truly incorporated into one’s belief system.

Presenter: Alireza Karduni

Do Disease Stories need a Hero? Effects of Human Protagonists on a Narrative Visualization about Cerebral Small Vessel Disease Paper
Sarah Mittenentzwei, Veronika Weiß, Stefanie Schreiber, Laura Garrison, Stefan Bruckner, Malte Pfister, Bernhard Preim, and Monique Meuschke

Abstract: Authors use various media formats to convey disease information to a broad audience, from articles and videos to interviews or documentaries. These media often include human characters, such as patients or treating physicians, who are involved with the disease. While artistic media, such as hand-crafted illustrations and animations are used for health communication in many cases, our goal is to focus on data-driven visualizations. Over the last decade, narrative visualization has experienced increasing prominence, employing storytelling techniques to present data in an understandable way. Similar to classic storytelling formats, narrative medical visualizations may also take a human character-centered design approach. However, the impact of this form of data communication on the user is largely unexplored. This study investigates the protagonist’s influence on user experience in terms of engagement, identification, self-referencing, emotional response, perceived credibility, and time spent in the story. Our experimental setup utilizes a character-driven story structure for disease stories derived from Joseph Campbell’s Hero’s Journey. Using this structure, we generated three conditions for a cerebral small vessel disease story that vary by their protagonist: (1) a patient, (2) a physician, and (3) a base condition with no human protagonist. These story variants formed the basis for our hypotheses on the effect of a human protagonist in disease stories, which we evaluated in an online study with 30 participants. Our findings indicate that a human protagonist exerts various influences on the story perception and that these also vary depending on the type of protagonist.

Presenter: Sarah Mittenentzwei

Don’t Peek at My Chart: Privacy-preserving Visualization for Mobile Device Paper
Songheng Zhang, Dong Ma, and Yong Wang

Abstract: Data visualizations have been widely used on mobile devices like smartphones for various tasks (e.g., visualizing personal health and financial data), making it convenient for people to view such data anytime and anywhere. However, others nearby can also easily peek at the visualizations, resulting in personal data disclosure. In this paper, we propose a perception-driven approach to transform mobile data visualizations into privacy-preserving ones. Specifically, based on human visual perception, we develop a masking scheme to adjust the spatial frequency and luminance contrast of colored visualizations. The resulting visualization retains its original information in close proximity but reduces visibility when viewed from a certain distance or farther away. We conducted two user studies to inform the design of our approach (N=16) and systematically evaluate its performance (N=18), respectively. The results demonstrate the effectiveness of our approach in terms of privacy preservation for mobile data visualizations.

Presenter: Songheng Zhang
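As a very rough illustration of the perceptual idea behind the masking scheme (carrying the chart’s information in high spatial frequencies at reduced luminance contrast, so it fades with viewing distance), here is a sketch operating on a luminance image. It is not the authors’ actual scheme, and the filter, parameters, and names are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def perceptual_mask(luminance, sigma=2.0, contrast=0.15):
    """Keep only the high-spatial-frequency part of a chart image at low
    luminance contrast around mid-gray. `luminance` is a 2D array in [0, 1]."""
    low = gaussian_filter(luminance, sigma)   # low-frequency component
    high = luminance - low                    # fine detail, legible up close
    high = high / (np.abs(high).max() + 1e-9)
    return np.clip(0.5 + contrast * high, 0.0, 1.0)

chart = np.random.rand(200, 300)              # placeholder chart luminance
masked = perceptual_mask(chart)
```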

 
 
10:30 – 11:00 Coffee Break
Lobby
 
 
11:00 – 12:30 Panel 1: The Future of Interactive Data Analysis and Visualization
Chair: Jürgen Bernard and Mennatallah El-Assady
Room 1A
The Future of Interactive Data Analysis and Visualization Paper
Johanna Schmidt, Timo Ropinski, Alvitta Ottley, Michael Sedlmair, and Marc Streit

Abstract: The interactive data analysis and visualization (VIS) community has prospered for over thirty years. Generation after generation, the community has evolved its understanding of research problems and, along the way, contributed various techniques, applications, and research methods. While some of the developed techniques have stood the test of time, we will consider what else needs to be remembered or even revitalized from the good old days in this panel. Further, VIS is currently facing exciting times, with great changes and trends within and outside the community. Thus, in this panel, we want to analyze current research trends and discuss our most exciting ideas and directions. Looking ahead, it can already be anticipated that the future of VIS is subject to change. In this panel, we want to map out future research directions for our community. Along these three lines, the guiding theme of our interactive panel will be three types of (provoking) statements: (i) “In the good old days, I liked when we did ...”; (ii) “Currently, a most exciting trend is ...”; and (iii) “In the future, we will be doing ...”. Come and join us to reflect on past and present trends, daring a look ahead to an exciting future for the interactive data analysis and visualization community!

 
 
11:00 – 12:30 FP 6: Visualization Techniques I: Sequences and High-dimensional Data Fast Forward Video
Chair: Alexandru Telea
Room 1BCD
TVCG COMPO*SED: Composite Parallel Coordinates for Co-Dependent Multi-Attribute Choices Paper
Lena Cibulski, Thorsten May, Johanna Schmidt, and Jörn Kohlhammer

Abstract: We propose Composite Parallel Coordinates, a novel parallel coordinates technique to effectively represent the interplay of component alternatives in a system. It builds upon a dedicated data model that formally describes the interaction of components. Parallel coordinates can help decision-makers identify the most preferred solution among a number of alternatives. Multi-component systems require one such multi-attribute choice for each component. Each of these choices might have side effects on the system’s operability and performance, making them co-dependent. Common approaches employ complex multi-component models or involve back-and-forth iterations between single components until an acceptable compromise is reached. A simultaneous visual exploration across independently modeled but connected components is needed to make system design more efficient. Using dedicated layout and interaction strategies, our Composite Parallel Coordinates allow analysts to explore both individual properties of components as well as their interoperability and joint performance. We showcase the effectiveness of Composite Parallel Coordinates for co-dependent multi-attribute choices by means of three real-world scenarios from distinct application areas. In addition to the case studies, we reflect on observing two domain experts collaboratively working with the proposed technique and communicating along the way.

VisCoMET: Visually Analyzing Team Collaboration in Medical Emergency Trainings Paper
Carina Liebers, Shivam Agarwal, Maximilian Krug, Karola Pitsch, and Fabian Beck

Abstract: Handling emergencies requires efficient and effective collaboration of medical professionals. To analyze their performance, in an application study, we have developed VisCoMET, a visual analytics approach displaying interactions of healthcare personnel in a triage training of a mass casualty incident. The application scenario stems from social interaction research, where the collaboration of teams is studied from different perspectives. We integrate recorded annotations from multiple sources, such as recorded videos of the sessions, transcribed communication, and eye-tracking information. For each session, an information-rich timeline visualizes events across these different channels, specifically highlighting interactions between the team members. We provide algorithmic support to identify frequent event patterns and to search for user-defined event sequences. Comparing different teams, an overview visualization aggregates each training session into a visual glyph as a node, connected to similar sessions through edges. An application example shows the usage of the approach in the comparative analysis of triage training sessions, where multiple teams encountered the same scene, and highlights discovered insights. The approach was evaluated through feedback from visualization and social interaction experts. The results show that the approach supports reflecting on teams’ performance by exploratory analysis of collaboration behavior while particularly enabling the comparison of triage training sessions.

Presenter: Carina Liebers

FlexEvent: going beyond Case-Centric Exploration and Analysis of Multivariate Event Sequences Paper
Sanne van der Linden, Bernice Wulterkens, Merel van Gilst, Sebastiaan Overeem, Carola van Pul, Anna Vilanova, and Stef van den Elzen

Abstract: In many domains, multivariate event sequence data is collected around an entity (the case). Typically, each event has multiple attributes; for example, in healthcare a patient has events such as hospitalization, medication, and surgery. In addition to the multivariate events, the case itself (a specific attribute, e.g., the patient) has associated multivariate data (e.g., age, gender, weight). Current work typically visualizes only one attribute per event (the label) in the event sequences. As a consequence, events can only be explored from a predefined case-centric perspective. However, to find complex relations from multiple perspectives (e.g., from different case definitions, such as the doctor), users also need event- and attribute-centric perspectives. In addition, support is needed to effortlessly switch between and within perspectives. To support such a rich exploration, we present FlexEvent: an exploration and analysis method that enables investigation beyond a fixed case-centric perspective. Based on an adaptation of existing visualization techniques, such as scatterplots and juxtaposed small multiples, we enable flexible switching between different perspectives to explore the multivariate event sequence data needed to answer multi-perspective hypotheses. We evaluated FlexEvent with three domain experts in two use cases with sleep disorder and neonatal ICU data, which show that our method facilitates experts in exploring and analyzing real-world multivariate event sequence data from different perspectives.

Presenter: Sanne van der Linden
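The switch between case definitions described above can be pictured with a plain event table: the same rows grouped once per patient (case-centric), once per clinician (a different case definition), and once per event type (event-centric). The columns and data below are hypothetical and only illustrate the idea, not FlexEvent’s data model or interface.

```python
import pandas as pd

events = pd.DataFrame({
    "patient":   ["p1", "p1", "p2", "p2", "p2"],
    "clinician": ["d1", "d2", "d1", "d1", "d3"],
    "event":     ["admission", "medication", "admission", "surgery", "medication"],
    "timestamp": pd.to_datetime(
        ["2023-01-02", "2023-01-03", "2023-01-05", "2023-01-06", "2023-01-08"]),
}).sort_values("timestamp")

per_patient   = events.groupby("patient")["event"].apply(list)    # fixed case-centric view
per_clinician = events.groupby("clinician")["event"].apply(list)  # alternative case definition
per_event     = events.groupby("event")["patient"].nunique()      # event-centric view
```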

A Comparative Evaluation of Visual Summarization Techniques for Event Sequences Paper
Kazi Tasnim Zinat, Jinhua Yang, Arjun Gandhi, Nistha Mitra, and Zhicheng Liu

Abstract: Real-world event sequences are often complex and heterogeneous, making it difficult to create meaningful visualizations using simple data aggregation and visual encoding techniques. Consequently, visualization researchers have developed numerous visual summarization techniques to generate concise overviews of sequential data. These techniques vary widely in terms of summary structures and contents, and currently there is a knowledge gap in understanding the effectiveness of these techniques. In this work, we present the design and results of an insight-based crowdsourcing experiment evaluating three existing visual summarization techniques: CoreFlow, SentenTree, and Sequence Synopsis. We compare the visual summaries generated by these techniques across three tasks, on six datasets, at six levels of granularity. We analyze the effects of these variables on summary quality as rated by participants and completion time of the experiment tasks. Our analysis shows that Sequence Synopsis produces the highest-quality visual summaries for all three tasks, but understanding Sequence Synopsis results also takes the longest time. We also find that the participants evaluate visual summary quality based on two aspects: content and interpretability. We discuss the implications of our findings on developing and evaluating new visual summarization techniques.

Presenter: Kazi Tasnim Zinat

 
 
12:30 – 14:00 Lunch Break
Lobby
 
 
14:00 – 15:30 STAR 2: Networks Fast Forward Video
Chair: Stef van den Elzen
Room 1A
CGF Are We There Yet? A Roadmap of Network Visualization from Surveys to Task Taxonomies Paper
Velitchko Filipov, Alessio Arleo, and Silvia Miksch

Abstract: Networks are abstract and ubiquitous data structures, defined as a set of data points and relationships between them. Network visualization provides meaningful representations of these data, supporting researchers in understanding the connections, gathering insights, and detecting and identifying unexpected patterns. Research in this field is focusing on increasingly challenging problems, such as visualizing dynamic, complex, multivariate, and geospatial networked data. This ever-growing, and widely varied, body of research led to several surveys being published, each covering one or more disciplines of network visualization. Despite this effort, the variety and complexity of this research represents an obstacle when surveying the domain and building a comprehensive overview of the literature. Furthermore, there exists a lack of clarification and uniformity between the terminology used in each of the surveys, which requires further effort when mapping and categorizing the plethora of different visualization techniques and approaches. In this paper, we aim at providing researchers and practitioners alike with a “roadmap” detailing the current research trends in the field of network visualization. We design our contribution as a meta-survey where we discuss, summarize, and categorize recent surveys and task taxonomies published in the context of network visualization. We identify more and less saturated disciplines of research and consolidate the terminology used in the surveyed literature. We also survey the available task taxonomies, providing a comprehensive analysis of their varying support to each network visualization discipline and by establishing and discussing a classification for the individual tasks. With this combined analysis of surveys and task taxonomies, we provide an overarching structure of the field, from which we extrapolate the current state of research and promising directions for future work.

The State of the Art in Visualizing Dynamic Multivariate Networks Paper
Bharat Kale, Maoyuan Sun, and Michael E. Papka

Abstract: Most real-world networks are both dynamic and multivariate in nature, meaning that the network is associated with various attributes and that both the network structure and the attributes evolve over time. Visualizing dynamic multivariate networks is of great significance to the visualization community because of their wide applications across multiple domains. However, it remains challenging because techniques need to represent the network structure, the attributes, and their evolution concurrently. Many real-world network analysis tasks require the concurrent usage of these three aspects of dynamic multivariate networks. In this paper, we analyze current techniques and present a taxonomy that classifies the existing visualization techniques based on three aspects: temporal encoding, topology encoding, and attribute encoding. Finally, we survey application areas and evaluation methods, and discuss challenges for future research.

 
 
14:00 – 15:30 FP 7: Visual Analysis and Processes Fast Forward Video
Chair: Lars Linsen
Room 1BCD
Ferret: Reviewing Tabular Datasets for Manipulation Paper
Devin Lange, Shaurya Sahai, Jeff M. Phillips, and Alexander Lex

Abstract: How do we ensure the veracity of science? The act of manipulating or fabricating scientific data has led to many high-profile fraud cases and retractions. Detecting manipulated data, however, is a challenging and time-consuming endeavor. Automated detection methods are limited due to the diversity of data types and manipulation techniques. Furthermore, patterns automatically flagged as suspicious can have reasonable explanations. Instead, we propose a nuanced approach where experts analyze tabular datasets, e.g., as part of the peer-review process, using a guided, interactive visualization approach. In this paper, we present an analysis of how manipulated datasets are created and the artifacts these techniques generate. Based on these findings, we propose a suite of visualization methods to surface potential irregularities. We have implemented these methods in Ferret, a visualization tool for data forensics work. Ferret makes potential data issues salient and provides guidance on spotting signs of tampering and differentiating them from truthful data.

Presenter: Devin Lange

CGF HardVis: Visual Analytics to Handle Instance Hardness Using Undersampling and Oversampling Techniques Paper
A. Chatzimparmpas, F. V. Paulovich, and A. Kerren

Abstract: Despite the tremendous advances in machine learning (ML), training with imbalanced data still poses challenges in many real-world applications. Among a series of diverse techniques to solve this problem, sampling algorithms are regarded as an efficient solution. However, the problem is more fundamental, with many works emphasizing the importance of instance hardness. This issue refers to the significance of managing unsafe or potentially noisy instances that are more likely to be misclassified and serve as the root cause of poor classification performance. This paper introduces HardVis, a visual analytics system designed to handle instance hardness mainly in imbalanced classification scenarios. Our proposed system assists users in visually comparing different distributions of data types, selecting types of instances based on local characteristics that will later be affected by the active sampling method, and validating which suggestions from undersampling or oversampling techniques are beneficial for the ML model. Additionally, rather than uniformly undersampling/oversampling a specific class, we allow users to find and sample easy- and difficult-to-classify training instances from all classes. Users can explore subsets of data from different perspectives to decide all those parameters, while HardVis keeps track of their steps and evaluates the model’s predictive performance in a test set separately. The end result is a well-balanced data set that boosts the predictive power of the ML model. The efficacy and effectiveness of HardVis are demonstrated with a hypothetical usage scenario and a use case. Finally, we also look at how useful our system is based on feedback we received from ML experts.

Human-Computer Collaboration for Visual Analytics: an Agent-based Framework Paper
Shayan Monadjemi, Mengtian Guo, David Gotz, Roman Garnett, and Alvitta Ottley

Abstract: The visual analytics community has long aimed to understand users better and assist them in their analytic endeavors. As a result, numerous conceptual models of visual analytics aim to formalize common workflows, techniques, and goals leveraged by analysts. While many of the existing approaches are rich in detail, they each are specific to a particular aspect of the visual analytic process. Furthermore, with an ever-expanding array of novel artificial intelligence techniques and advances in visual analytic settings, existing conceptual models may not provide enough expressivity to bridge the two fields. In this work, we propose an agent-based conceptual model for the visual analytic process by drawing parallels from the artificial intelligence literature. We present three examples from the visual analytics literature as case studies and examine them in detail using our framework. Our simple yet robust framework unifies the visual analytic pipeline to enable researchers and practitioners to reason about scenarios that are becoming increasingly prominent in the field, namely mixed-initiative, guided, and collaborative analysis. Furthermore, it will allow us to characterize analysts, visual analytic settings, and guidance from the lenses of human agents, environments, and artificial agents, respectively.

Presenter: Shayan Monadjemi

CGF Seeking patterns of visual pattern discovery for knowledge building Paper
N. Andrienko, G. Andrienko, S. Chen, and B. Fisher

Abstract: Currently, the methodological and technical developments in visual analytics, as well as the existing theories, are not sufficiently grounded by empirical studies that can provide an understanding of the processes of visual data analysis, analytical reasoning and derivation of new knowledge by humans. We conducted an exploratory empirical study in which participants analysed complex and data‐rich visualisations by detecting salient visual patterns, translating them into conceptual information structures and reasoning about those structures to construct an overall understanding of the analysis subject. Eye tracking and voice recording were used to capture this process. We analysed how the data we had collected match several existing theoretical models intended to describe visualisation‐supported reasoning, knowledge building, decision making or use and development of mental models. We found that none of these theoretical models alone is sufficient for describing the processes of visual analysis and knowledge generation that we observed in our experiments, whereas a combination of three particular models could be apposite. We also pondered whether empirical studies like ours can be used to derive implications and recommendations for possible ways to support users of visual analytics systems. Our approaches to designing and conducting the experiments and analysing the empirical data were appropriate to the goals of the study and can be recommended for use in other empirical studies in visual analytics.

 
 
15:30 – 16:00 Coffee Break
Lobby
 
 
16:00 – 17:45 SP 1: VA and Perception Fast Forward Video
Chair: Johanna Schmidt
Room 1A
Visual Exploration of Indirect Bias in Language Models Paper
Judith Louis-Alexandre and Manuela Waldner

Abstract: Language models are trained on large text corpora that often include stereotypes. This can lead to direct or indirect bias in downstream applications. In this work, we present a method for interactive visual exploration of indirect multiclass bias learned by contextual word embeddings. We introduce a new indirect bias quantification score and present two interactive visualizations to explore interactions between multiple non-sensitive concepts (such as sports, occupations, and beverages) and sensitive attributes (such as gender or year of birth) based on this score.

Presenter: Manuela Waldner

Bolt: A Natural Language Interface for Dashboard Authoring Paper
Arjun Srinivasan and Vidya Setlur

Abstract: Authoring dashboards is often a complex process, requiring expertise in both data analysis and visualization design. With current tools, authors lack the means to express their objectives for creating a dashboard (e.g., summarizing data changes or comparing data categories), making it difficult to discover and assemble content relevant to the dashboard. Addressing this challenge, we propose the idea of employing natural language (NL) for dashboard authoring with a prototype interface, BOLT. In this paper, we detail BOLT’s design and implementation, describing how the system maps NL utterances to prevalent dashboard objectives and generates dashboard recommendations. Utilizing BOLT as a design probe, we validate the proposed idea of NL-based dashboard authoring through a preliminary user study. Based on the study feedback, we highlight promising application scenarios and future directions to support richer dashboard authoring workflows.

RiskFix: Supporting Expert Validation of Predictive Timeseries Models in High-Intensity Settings Paper
Gabriela Morgenshtern, Arnav Verma, Sana Tonekaboni, Robert Greer, Jürgen Bernard, Mjaye Mazwi, Anna Goldenberg, and Fanny Chevalier

Abstract: Many real-world machine learning workflows exist in longitudinal, interactive machine learning (ML) settings. This longitudinal nature is often due to the incremental increase of data, e.g., in clinical settings, where observations about patients evolve over their care period. Additionally, experts may become a bottleneck in the workflow, as their limited availability, combined with their role as human oracles, often leads to a lack of ground truth data. In such cases where ground truth data is scarce, the validation of interactive machine learning workflows relies on domain experts. Only those humans can assess the validity of a model prediction, especially in new situations that are covered only weakly by the available training data. Based on our experiences working with domain experts of a pediatric hospital’s intensive care unit, we derive requirements for the design of support interfaces for the validation of interactive ML workflows in fast-paced, high-intensity environments. We present RiskFix, a software package optimized for the validation workflow of domain experts in such contexts. RiskFix is adapted to the cognitive resources and needs of domain experts in validating and giving feedback to the model. Also, RiskFix supports data scientists in their model-building work, with appropriate data structuring for the re-calibration (and possible retraining) of ML models.

Presenter: Gabriela Morgenshtern

A Business Intelligence Dashboard for the Phone: Small-scale Visualizations Embedded into a Mobile Analysis and Monitoring Solution Paper
Nils Höll, Shahid Latif, and Fabian Beck

Abstract: Although smartphones have become ubiquitous, most visualization applications are still designed for large-screen devices. In a business intelligence context, dashboard solutions for monitoring key performance indicators and performing simple analysis tasks can profit from being available on the phone. We identify usage scenarios and design requirements by interviewing 20 business experts. Our solution adapts existing diagrams, proposes novel visualizations for the small-screen environment, and integrates them into an easy-to-use visual dashboard.

Presenter: Fabian Beck

LOKI: Reusing Custom Concepts in Interactive Analytic Workflows Paper
Vidya Setlur and Andrew Beers

Abstract: Natural language (NL) interaction enables users to be expressive with their queries when exploring data. Users often specify complex NL queries that involve a combination of grouping, aggregations, and conditionals of data attributes and values. Such queries are often reused several times by users during their analytical workflows. Existing systems offer limited support to save these bespoke queries as concepts that can be referenced in subsequent NL queries, leading to users having to respecify these queries repeatedly. To address this issue, we describe a system, LOKI, that allows users to save complex and bespoke queries as reusable concepts and use these concepts in other NL queries and analytics tools. For example, users can save an NL query, “show me the opportunity amount by customer for open opportunities” in a sales dataset, as a concept ‘followup customers’ and reference this custom concept in a query such as “show me the total opportunity amount for followup customers.” A qualitative evaluation of LOKI indicates the usefulness of supporting the reuse of custom concepts across various analytical workflows. We identify future research directions around in-situ semantic enrichment and dynamic concept maps for data exploration.

Presenter: Vidya Setlur

Effect of color palettes in heatmaps perception: a study Paper
Elena Molina López, Carolina Middel Soria, and Pere-Pau Vázquez

Abstract: Heatmaps are a widely used technique in visualization. Unfortunately, they have not been investigated in depth, and little is known about which parameterizations best support correct interpretation. The effect of different palettes on our ability to read values is still unknown. To address this issue, we conducted a user study in which we analyzed the effect of two commonly used color palettes, Blues and Viridis, on value estimation and value search. As a result, we provide some suggestions on what to expect from the analyzed heatmap configurations.

Presenter: Elena Molina López
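For readers who want to reproduce the kind of stimuli compared in the study, the following matplotlib sketch renders the same matrix with both palettes; the data, sizes, and settings are placeholders, not the study’s actual stimuli.

```python
import numpy as np
import matplotlib.pyplot as plt

data = np.random.rand(12, 12)                 # placeholder matrix
fig, axes = plt.subplots(1, 2, figsize=(8, 3.5))
for ax, cmap in zip(axes, ["Blues", "viridis"]):
    im = ax.imshow(data, cmap=cmap)
    ax.set_title(cmap)
    fig.colorbar(im, ax=ax, shrink=0.8)
plt.tight_layout()
plt.show()
```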

cols4all: a color palette analysis tool Paper
Martijn Tennekes and Marco Puts

Abstract: cols4all is a software tool for analysing and comparing color palettes based on several properties, including color-blind friendliness and fairness, i.e., whether all palette colors stand out about equally.

 
 
16:00 – 17:30 FP 8: Social Sciences and Sport Fast Forward Video
Chair: Stefan Jänicke
Room 1BCD
Exploring Interpersonal Relationships in Historical Voting Records Paper
Gabriel Dias Cantareira, Yiwen Xing, Nicholas Cole, Rita Borgo, and Alfie Abdul-Rahman

Abstract: Historical records from democratic processes and negotiation of constitutional texts are a complex type of data to navigate due to the many different elements that are constantly interacting with one another: people, timelines, different proposed documents, changes to such documents, and voting to approve or reject those changes. In particular, voting records can offer various insights about relationships between people of note in that historical context, such as alliances that can form and dissolve over time and people with unusual behavior. In this paper, we present a toolset developed to aid users in exploring relationships in voting records from a particular domain of constitutional conventions. The toolset consists of two elements: a dataset visualizer, which shows the entire timeline of a convention and allows users to investigate relationships at different moments in time via dimensionality reduction, and a person visualizer, which shows details of a given person’s activity in that convention to aid in understanding the behavior observed in the dataset visualizer. We discuss our design choices and how each tool in those elements works towards our goals, and how they were perceived in an evaluation conducted with domain experts.

Presenter: Gabriel Dias Cantareira

CGF PDViz: a Visual Analytics Approach for State Policy Diffusion Paper
Dongyun Han, Abdullah-Al-Raihan Nayeem, Jason Windett, and Isaac Cho

Abstract: Sub-national governments across the United States implement a variety of policies to address large societal problems and needs. Many policies are picked up or adopted in other states. This process is called policy diffusion and allows researchers to analyse and compare the social, political, and contextual characteristics that lead to adopting certain policies, as well as the efficacy of these policies once adopted. In this paper, we introduce PDViz, a visual analytics approach that allows social scientists to dynamically analyse policy diffusion history and its underlying patterns. It is designed for analysing and answering a list of research questions and tasks posed by social scientists in prior work. To evaluate our system, we present two usage scenarios and conduct interviews with domain experts in political science. The interviews highlight that PDViz reveals policy diffusion patterns that align with the experts’ domain knowledge and that it has the potential to be a learning tool for students to understand the concept of policy diffusion.

CGF ComVis-Sail: Comparative sailing performance visualization for coaching Paper
M. Pieras, R. Marroquim, D. Broekens, E. Eisemann, and A. Vilanova

Abstract: During training sessions, sailors rely on feedback provided by the coaches to reinforce their skills and improve their performance. Nowadays, the incorporation of sensors on the boats enables coaches to potentially provide more informed feedback to the sailors. A common exercise during practice sessions consists of two boats of the same class sailing side by side in a straight line with different boat handling techniques. Coaches try to understand which techniques make one boat go faster than the other. The analysis of the data obtained from the boats is challenging given its multi-dimensional, time-varying, and spatial nature. At present, coaches only rely on aggregated statistics that reduce the complexity of the data, thereby losing local and temporal information. We describe a new domain characterization and present a visualization design that allows coaches to analyse the data, structure their analysis, and explore the data from different perspectives. A central element of the tool is the glyph design that intuitively represents and aggregates multiple aspects of the sensor data. We have conducted multiple user studies with naive users, sailors, and coaches to evaluate the design and the potential of the overall tool.

Tac-Anticipator: Visual Analytics of Anticipation Behaviors in Table Tennis Matches Paper
Jiachen Wang, Yihong Wu, Xiaolong (Luke) Zhang, Yixin Zeng, Zheng Zhou, Hui Zhang, Xiao Xie, and Yingcai Wu

Abstract: Anticipation skill is important for elite racquet sports players. Successful anticipation allows them to better predict the actions of the opponent and take early actions in matches. Existing studies of anticipation behaviors, largely based on the analysis of in-lab behaviors, fail to capture the characteristics of in-situ anticipation behaviors in real matches. This research proposes a data-driven approach for the study of anticipation behaviors to gain more accurate and reliable insight into anticipation skills. Collaborating with domain experts in table tennis, we develop a complete solution that includes data collection, the development of a model to evaluate anticipation behaviors, and the design of a visual analytics system called Tac-Anticipator. Our case study reveals the strengths and weaknesses of top table tennis players’ anticipation behaviors. In summary, our work enriches the research methods and guidelines for visual analytics of anticipation behaviors.

Presenter: Jiachen Wang

 
 
 
 

| Thursday, 15 June, 2023

08:30 – 17:00 Registration
09:00 – 10:30 STAR 3: Volumes and Particles Fast Forward Video
Chair: Ingrid Hotz
Room 1A
State-of-the-art in Large-Scale Volume Visualization Beyond Structured Data Paper
Jonathan Sarton, Stefan Zellmann, Serkan Demirci, Ugur Gudukbay, Welcome Alexandre-Barff, Laurent Lucas, Jean-Michel Dischler, Stefan Wesner, and Ingo Wald

Abstract: Volume data these days is usually massive in terms of its topology, multiple fields, or temporal component. With the gap between compute and memory performance widening, the memory subsystem becomes the primary bottleneck for scientific volume visualization. Simple, structured, regular representations are often infeasible because the buses and interconnects involved need to accommodate the data required for interactive rendering. In this state-of-the-art report, we review works focusing on large-scale volume rendering beyond those typical structured and regular grid representations. We focus primarily on hierarchical and adaptive mesh refinement representations, unstructured meshes, and compressed representations that gained recent popularity. We review works that approach this kind of data using strategies such as out-of-core rendering, massive parallelism, and other strategies to cope with the sheer size of the ever-increasing volume of data produced by today’s supercomputers and acquisition devices. We emphasize the data management side of large-scale volume rendering systems and also include a review of tools that support the various volume data types discussed.

State-of-the-Art Report on Optimizing Particle Advection Performance Paper
Abhishek Yenpure, Sudhanshu Sane, Roba Binyahib, David Pugmire, Christoph Garth, and Hank Childs

Abstract: The computational work to perform particle advection-based flow visualization techniques varies based on many factors, including number of particles, duration, and mesh type. In many cases, the total work is significant, and total execution time (“performance”) is a critical issue. This state-of-the-art report considers existing optimizations for particle advection, using two high-level categories: algorithmic optimizations and hardware efficiency. The sub-categories for algorithmic optimizations include solvers, cell locators, I/O efficiency, and precomputation, while the sub-categories for hardware efficiency all involve parallelism: shared-memory, distributed-memory, and hybrid. Finally, this STAR concludes by identifying current gaps in our understanding of particle advection performance and its optimizations.
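For readers unfamiliar with the solvers mentioned above, a single fourth-order Runge-Kutta (RK4) advection step looks roughly as follows. This is a generic textbook sketch, not code from the report; real flow-visualization codes evaluate the velocity by interpolating a mesh (which is where cell locators and I/O efficiency come in) rather than from an analytic function.

```python
import numpy as np

def rk4_step(pos, t, dt, velocity):
    """Advance one particle by one RK4 step through a velocity field.
    `velocity(pos, t)` returns the flow velocity at a position and time."""
    k1 = velocity(pos, t)
    k2 = velocity(pos + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = velocity(pos + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = velocity(pos + dt * k3, t + dt)
    return pos + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Example: advect one particle through a simple analytic rotating field.
vel = lambda p, t: np.array([-p[1], p[0], 0.0])
p = np.array([1.0, 0.0, 0.0])
for step in range(100):
    p = rk4_step(p, t=step * 0.01, dt=0.01, velocity=vel)
```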

 
 
09:00 – 10:30 FP 9: Visualization Techniques II: Diagrams and Glyphs Fast Forward Video
Chair: Rita Borgo
Room 1BCD
Teru Teru Bōzu: Defensive Raincloud Plots Paper
Michael Correll

Abstract: Univariate visualizations like histograms, rug plots, or box plots provide concise visual summaries of distributions. However, each individual visualization may fail to robustly distinguish important features of a distribution, or provide sufficient information for all of the relevant tasks involved in summarizing univariate data. One solution is to juxtapose or superimpose multiple univariate visualizations in the same chart, as in Allen et al.’s “raincloud plots.” In this paper I examine the design space of raincloud plots, and, through a series of simulation studies, explore designs where the component visualizations mutually “defend” against situations where important distribution features are missed or trivial features are given undue prominence. I suggest a class of “defensive” raincloud plot designs that provide good mutual coverage for surfacing distributional features of interest.

Presenter: Michael Correll

VENUS: A Geometrical Representation for Quantum State Visualization Paper
Shaolun Ruan, Ribo Yuan, Yong Wang, Yanna Lin, Ying Mao, Weiwen Jiang, Zhepeng Wang, Wei Xu, and Qiang Guan

Abstract: Visualizations have played a crucial role in helping quantum computing users explore quantum states in various quantum computing applications. Among them, Bloch Sphere is the widely-used visualization for showing quantum states, which leverages angles to represent quantum amplitudes. However, it cannot support the visualization of quantum entanglement and superposition, the two essential properties of quantum computing. To address this issue, we propose VENUS, a novel visualization for quantum state representation. By explicitly correlating 2D geometric shapes based on the math foundation of quantum computing characteristics, VENUS effectively represents quantum amplitudes of both the single qubit and two qubits for quantum entanglement. Also, we use multiple coordinated semicircles to naturally encode probability distribution, making the quantum superposition intuitive to analyze. We conducted two well-designed case studies and an in-depth expert interview to evaluate the usefulness and effectiveness of VENUS. The result shows that VENUS can effectively facilitate the exploration of quantum states for the single qubit and two qubits.

Presenter: Shaolun Ruan
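For context on the angle-based encoding that VENUS is contrasted with, the conventional Bloch-sphere mapping of a single-qubit state alpha|0> + beta|1> to polar and azimuthal angles can be sketched as below. This illustrates the standard mapping only, not VENUS itself.

```python
import cmath
import math

def bloch_angles(alpha, beta):
    """Map a single-qubit state alpha|0> + beta|1> to the Bloch-sphere
    angles (theta, phi); the state is normalized first."""
    norm = math.sqrt(abs(alpha) ** 2 + abs(beta) ** 2)
    alpha, beta = alpha / norm, beta / norm
    theta = 2.0 * math.acos(min(1.0, abs(alpha)))   # polar angle
    phi = cmath.phase(beta) - cmath.phase(alpha)    # relative phase
    return theta, phi

# |+> state: equal superposition of |0> and |1>.
theta, phi = bloch_angles(1 / math.sqrt(2), 1 / math.sqrt(2))
```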

TVCG Out of the Plane: Flower vs. Star Glyphs to Support High-Dimensional Exploration in Two-Dimensional Embeddings Paper
Christian van Onzenoodt, Pere-Pau Vázquez, and Timo Ropinski

Abstract: Exploring high-dimensional data is a common task in many scientific disciplines. To address this task, two-dimensional embeddings, such as tSNE and UMAP, are widely used. While these determine the 2D position of data items, effectively encoding the first two dimensions, suitable visual encodings can be employed to communicate higher-dimensional features. To investigate such encodings, we have evaluated two commonly used glyph types, namely flower glyphs and star glyphs. To evaluate their capabilities for communicating higher-dimensional features in two-dimensional embeddings, we ran a large set of crowd-sourced user studies using real-world data obtained from data.gov. During these studies, participants completed a broad set of relevant tasks derived from related research. This paper describes the evaluated glyph designs, details our tasks, and the quantitative study setup before discussing the results. Finally, we will present insights and provide guidance on the choice of glyph encodings when exploring high-dimensional data.

CGF iFUNDit: Visual Profiling of Fund Investment Styles
Rong Zhang, Bon Kyung Ku, Yong Wang, Xuewu Yue, Siyuan Liu, Ke Li, and Huamin Qu
 
 
10:30 – 11:00 Coffee Break
Lobby
 
 
11:00 – 12:30 SP 2: 3D Fast Forward Video
Chair: Markus Hadwiger
Room 1A
Detection and Visual Analysis of Pathological Abnormalities in Diffusion Tensor Imaging with an Anomaly Lens Paper
Marlo Bareth, Samuel Groeschel, Johannes Grün, Pablo Pretzel, and Thomas Schultz

Abstract: In clinical practice, Diffusion Tensor Magnetic Resonance Imaging (DT-MRI) is usually evaluated by visual inspection of grayscale maps of Fractional Anisotropy or mean diffusivity. However, the fact that those maps only contain part of the information that is captured in DT-MRI implies a risk of missing signs of disease. In this work, we propose a visualization system that supports a more comprehensive analysis with an anomaly score that accounts for the full diffusion tensor information. It is computed by comparing the DT-MRI scan of a given patient to a control group of healthy subjects, after spatial coregistration. Moreover, our system introduces an Anomaly Lens which visualizes how a user-specified region of interest deviates from the controls, indicating which aspects of the tensor (norm, anisotropy, mode, rotation) differ most, whether they are elevated or reduced, and whether their covariation matches the covariances within the control group. Applying our system to patients with metachromatic leukodystrophy clearly indicates regions affected by the disease, and permits their detailed analysis.

Presenter: Thomas Schultz
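
The anomaly score described above compares a patient's diffusion tensors against a coregistered control group. As a rough, generic illustration of that idea (not the authors' implementation), a per-voxel Mahalanobis-style distance over tensor-derived features could look as follows; array shapes and the regularization constant are placeholder assumptions.

    import numpy as np

    def anomaly_scores(patient_feats, control_feats):
        """Voxel-wise Mahalanobis-style anomaly score.

        patient_feats: (n_voxels, n_features) tensor-derived features
            (e.g., norm, anisotropy, mode) for one patient.
        control_feats: (n_controls, n_voxels, n_features) for the control group.
        Assumes all scans are already spatially coregistered.
        """
        mean = control_feats.mean(axis=0)              # per-voxel feature means
        scores = np.empty(patient_feats.shape[0])
        for v in range(patient_feats.shape[0]):
            cov = np.cov(control_feats[:, v, :], rowvar=False)
            cov += 1e-6 * np.eye(cov.shape[0])         # keep small groups invertible
            diff = patient_feats[v] - mean[v]
            scores[v] = np.sqrt(diff @ np.linalg.inv(cov) @ diff)
        return scores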

Accelerated Volume Rendering with Volume Guided Neural Denoising Paper
Susmija Jabbireddy, Shuo Li, Xiaoxu Meng, Judith E Terrill, and Amitabh Varshney

Abstract: Monte Carlo path tracing techniques create stunning visualizations of volumetric data. However, a large number of computationally expensive light paths are required for each sample to produce a smooth and noise-free image, trading performance for quality. High-quality interactive volume rendering is valuable in various fields, especially education, communication, and clinical diagnosis. To accelerate the rendering process, we combine learning-based denoising techniques with direct volumetric rendering. Our approach uses additional volumetric features that improve the performance of the denoiser in the post-processing stage. We show that our method significantly improves the quality of Monte Carlo volume-rendered images for various datasets through qualitative and quantitative evaluation. Our results show that we can achieve volume rendering quality comparable to the state-of-the-art at a significantly faster rate using only one sample path per pixel.

Presenter: Susmija Jabbireddy

Level of Detail Visual Analysis of Structures in Solid-State Materials Paper
Signe Sidwall Thygesen, Alexei I. Abrikosov, Peter Steneteg, Talha Bin Masood, and Ingrid Hotz

Abstract: We propose a visual analysis method for the comparison and evaluation of structures in solid-state materials based on the electron density field using topological analysis.
The work has been motivated by a material science application, specifically looking for new so-called layered materials whose physical properties are required in many modern technological developments. Due to the incredibly large search space, this is a slow and tedious process, requiring efficient data analysis to characterize and understand the material properties. The core of our proposed analysis pipeline is an abstract bar representation that serves as a concise signature of the material, supporting direct comparison and also an exploration of different material candidates.

Multi-attribute Visualization and Improved Depth Perception for the Interactive Analysis of 3D Truss Structures Paper
Michael Becher, Anja Groß, Peter Werner, Mathias Maierhofer, Guido Reina, Thomas Ertl, Achim Menges, and Daniel Weiskopf

Abstract: In architecture, engineering, and construction (AEC), load-bearing truss structures are commonly modeled as a set of connected beam elements. For complex 3D structures, rendering beam elements as line segments presents several challenges due to densely overlapping elements, including visual clutter and general depth perception issues. Furthermore, line segments provide very little area for displaying additional element attributes. In this paper, we investigate the effectiveness of rendering effects for reducing visual clutter and improving depth perception for truss structures specifically, such as distance-based brightness attenuation and screen-space ambient occlusion (SSAO). Additionally, we provide multiple options for multi-attribute visualization directly on the structure and evaluate both aspects with two expert interviews.

Presenter: Michael Becher

TGVE: a tool for analysis and visualization of geospatial data Paper
Layik Hama, Roger Beecham, and Nik Lomax

Abstract: We introduce the Turing Geovisualisation Engine (TGVE), a web-based, open-source tool for interactive visualization and analysis of geospatial data. Built on ReactJS and R, TGVE is designed to support a variety of users, including data scientists and stakeholders who wish to engage the wider public with geospatial data. In this short paper, we provide an overview of TGVE’s features and capabilities, including its ability to publish data and customize visualization settings using URL parameters. We highlight the potential impact of TGVE for geospatial research and offer examples of its use in practice. Additionally, we discuss current limitations of the tool and outline future work, such as improving compatibility with other geospatial data formats and addressing performance issues for large datasets.

Presenter: Dr Layik Hama

Honorable Mention ARrow: A Real-Time AR Rowing Coach Paper
Elena Iannucci, Zhutian Chen, Iro Armeni, Marc Pollefeys, Hanspeter Pfister, and Johanna Beyer

Abstract: Rowing requires physical strength and endurance in athletes as well as a precise rowing technique. The ideal rowing stroke is based on biomechanical principles and typically takes years to master. Except for time-consuming video analysis after practice, coaches currently have no means to quantitatively analyze a rower's stroke sequence and body movement. We propose ARrow, an AR application for coaches and athletes that provides real-time and situated feedback on a rower's body position and stroke. We use computer vision techniques to extract the rower's 3D skeleton and to detect the rower's stroke cycle. ARrow provides visual feedback on three levels: Tracking of basic performance metrics over time, visual feedback and guidance on a rower's stroke sequence, and a rowing ghost view that helps synchronize the body movement of two rowers. We developed ARrow in close collaboration with international rowing coaches and demonstrate its usefulness in a user study with athletes and coaches.

Presenter: Elena Iannucci

 
 
11:00 – 12:30 FP 10: Visualization for Life Sciences Fast Forward Video
Chair: Kay Nieselt
Room 1BCD
TVCG A Visual Interface for Exploring Hypotheses about Neural Circuits Paper
Sumit K. Vohra, Philipp Harth, Yasuko Isoe, Armin Bahl, Haleh Fotowat, Florian Engert, Hans-Christian Hege, and Daniel Baum

Abstract: One of the fundamental problems in neurobiological research is to understand how neural circuits generate behaviors in response to sensory stimuli. Elucidating such neural circuits requires anatomical and functional information about the neurons that are active during the processing of the sensory information and generation of the respective response, as well as an identification of the connections between these neurons. With modern imaging techniques, both morphological properties of individual neurons as well as functional information related to sensory processing, information integration and behavior can be obtained. Given the resulting information, neurobiologists are faced with the task of identifying the anatomical structures down to individual neurons that are linked to the studied behavior and the processing of the respective sensory stimuli. Here, we present a novel interactive tool that assists neurobiologists in the aforementioned task by allowing them to extract hypothetical neural circuits constrained by anatomical and functional data. Our approach is based on two types of structural data: brain regions that are anatomically or functionally defined, and morphologies of individual neurons. Both types of structural data are interlinked and augmented with additional information. The presented tool allows the expert user to identify neurons using Boolean queries. The interactive formulation of these queries is supported by linked views, using, among other things, two novel 2D abstractions of neural circuits. The approach was validated in two case studies investigating the neural basis of vision-based behavioral responses in zebrafish larvae. Despite this particular application, we believe that the presented tool will be of general interest for exploring hypotheses about neural circuits in other species, genera and taxa.

visMOP – A Visual Analytics Approach for Multi-omics Pathways Paper
Nicolas Brich, Nadine Schacherer, Miriam Hoene, Cora Weigert, Rainer Lehmann, and Michael Krone

Abstract: We present an approach for the visual analysis of multi-omics data obtained using high-throughput methods.
The term “omics” denotes measurements of different types of biologically relevant molecules, like the products of gene transcription (transcriptomics) or the abundance of proteins (proteomics).
Current popular visualization approaches often only support analyzing each of these omics separately.
This, however, disregards the interconnectedness of different biologically relevant molecules and processes.
Consequently, it describes the actual events in the organism suboptimally or only partially.
Our visual analytics approach for multi-omics data provides a comprehensive overview and details-on-demand by integrating the different omics types in multiple linked views.
To give an overview, we map the measurements to known biological pathways and use a combination of a clustered network visualization, glyphs, and interactive filtering.
To ensure the effectiveness and utility of our approach, we designed it in close collaboration with domain experts and assessed it using an exemplary workflow with real-world transcriptomics, proteomics, and lipidomics measurements from mice.

Presenter: Nicolas Brich

GO-Compass: Visual Navigation of Multiple Lists of GO terms Paper
Theresa Harbig, Mathias Witte Paz, and Kay Nieselt

Abstract: Analysis pipelines in genomics, transcriptomics, and proteomics commonly produce lists of genes, e.g., differentially expressed genes. Often these lists overlap only partly or not at all and contain too many genes for manual comparison. However, using background knowledge, such as the functional annotations of the genes, the lists can be abstracted to functional terms. One approach is to run Gene Ontology (GO) enrichment analyses to determine over- and/or underrepresented functions for every list of genes. Due to the hierarchical structure of the Gene Ontology, lists of enriched GO terms can contain many closely related terms, rendering the lists still long, redundant, and difficult to interpret for researchers.

In this paper, we present GO-Compass (Gene Ontology list comparison using Semantic Similarity), a visual analytics tool for the dispensability reduction and visual comparison of lists of GO terms. For dispensability reduction, we adapted the REVIGO algorithm, a summarization method based on the semantic similarity of GO terms, to perform hierarchical dispensability clustering on multiple lists.
In an interactive dashboard, GO-Compass offers several visualizations for the comparison and improved interpretability of GO terms lists. The hierarchical dispensability clustering is visualized as a tree, where users can interactively filter out dispensable GO terms and create flat clusters by cutting the tree at a chosen dispensability. The flat clusters are visualized in animated treemaps and are compared using a correlation heatmap, UpSet plots, and bar charts.

With two use cases on published datasets from different omics domains, we demonstrate the general applicability and effectiveness of our approach.
In the first use case, we show how the tool can be used to compare lists of differentially expressed genes from a transcriptomics pipeline and incorporate gene information into the analysis. In the second use case using genomics data, we show how GO-Compass facilitates the analysis of many hundreds of GO terms. For qualitative evaluation of the tool, we conducted feedback sessions with five domain experts and received positive comments.
GO-Compass is part of the TueVis Visualization Server as a web application available at https://go-compass-tuevis.cs.uni-tuebingen.de/

Presenter: Theresa Harbig
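
The "cutting the tree at a chosen dispensability" step corresponds to flat-clustering a hierarchical clustering at a threshold. The generic SciPy sketch below shows only that operation; it is not the GO-Compass code, and a random symmetric matrix stands in for GO-term semantic similarities.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    rng = np.random.default_rng(0)
    n_terms = 12
    sim = rng.uniform(0.0, 1.0, size=(n_terms, n_terms))   # stand-in similarities
    sim = (sim + sim.T) / 2.0
    np.fill_diagonal(sim, 1.0)

    dist = 1.0 - sim                                        # similarity -> distance
    Z = linkage(squareform(dist, checks=False), method="average")

    # "Cut the tree" at a chosen threshold to obtain flat clusters of terms.
    flat_clusters = fcluster(Z, t=0.6, criterion="distance")
    print(flat_clusters)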

DASS Good: Explainable Data Mining of Oncology Imaging and Toxicity Data Paper
Andrew Wentzel, Carla Gabriela Floricel, Guadalupe Canahuate, Mohamed A Naser, Abdallah Mohamed, Clifton David Fuller, Lisanne van Dijk, and G. Elisabeta Marai

Abstract: Developing applicable clinical machine learning models is a difficult task when the data includes spatial information, for example, radiation dose distributions across adjacent organs at risk. We describe the co-design of a modeling system, DASS, to support the hybrid human-machine development and validation of predictive models for estimating long-term toxicities related to radiotherapy doses in head and neck cancer patients. Developed in collaboration with domain experts in oncology and data mining, DASS incorporates human-in-the-loop visual steering, spatial data, and explainable AI to augment domain knowledge with automatic data mining. We demonstrate DASS with the development of two practical clinical stratification models and report feedback from domain experts. Finally, we describe the design lessons learned from this collaborative experience.

Presenter: Andrew Wentzel

 
 
12:30 – 14:00 Lunch Break
Lobby
 
 
14:00 – 15:30 STAR 4: VA + AI Fast Forward Video
Chair: Silvia Miksch
Room 1A
CGF State of the Art of Visual Analytics for eXplainable Deep Learning Paper
B. La Rosa, G. Blasilli, R. Bourqui, D. Auber, G. Santucci, R. Capobianco, E. Bertini, R. Giot, and M. Angelini

Abstract: The use and creation of machine-learning-based solutions to solve problems or reduce their computational costs are becoming increasingly widespread in many domains. Deep Learning plays a large part in this growth. However, it has drawbacks such as a lack of explainability and behaving as a black-box model. During the last few years, Visual Analytics has provided several proposals to cope with these drawbacks, supporting the emerging eXplainable Deep Learning field. This survey aims to (i) systematically report the contributions of Visual Analytics for eXplainable Deep Learning; (ii) spot gaps and challenges; (iii) serve as an anthology of visual analytical solutions ready to be exploited and put into operation by the Deep Learning community (architects, trainers and end users) and (iv) prove the degree of maturity, ease of integration and results for specific domains. The survey concludes by identifying future research challenges and bridging activities that are helpful to strengthen the role of Visual Analytics as effective support for eXplainable Deep Learning and to foster the adoption of Visual Analytics solutions in the eXplainable Deep Learning community. An interactive explorable version of this survey is available online at https://aware-diag-sapienza.github.io/VA4XDL.

VA + Embeddings STAR: A State-of-the-Art Report on the Use of Embedding Approaches in Visual Analytics Paper
Zeyang Huang, Daniel Witschard, Kostiantyn Kucher, and Andreas Kerren

Abstract: Over the past years, an increasing number of publications in information visualization, especially within the field of visual analytics, have mentioned the term “embedding” when describing the computational approach. Within this context, embeddings are usually (relatively) low-dimensional, distributed representations of various data types (such as texts or graphs), and since they have proven to be extremely useful for a variety of data analysis tasks across various disciplines and fields, they have become widely used. Existing visualization approaches aim to either support exploration and interpretation of the embedding space through visual representation and interaction, or aim to use embeddings as part of the computational pipeline for addressing downstream analytical tasks. To the best of our knowledge, this is the first survey that takes a detailed look at embedding methods through the lens of visual analytics, and the purpose of our survey article is to provide a systematic overview of the state of the art within the emerging field of embedding visualization. We design a categorization scheme for our approach, analyze the current research frontier based on peer-reviewed publications, and discuss existing trends, challenges, and potential research directions for using embeddings in the context of visual analytics. Furthermore, we provide an interactive survey browser for the collected and categorized survey data, which currently includes 122 entries that appeared between 2007 and 2023.

 
 
14:00 – 15:30 FP 11: Interaction and Accessibility Fast Forward Video
Chair: Tobias Isenberg
Room 1BCD
Unfolding Edges: Adding Context to Edges in Multivariate Graph Visualization Paper
Mark-Jan Bludau, Marian Dörk, and Christian Tominski

Abstract: Existing work on visualizing multivariate graphs is primarily concerned with representing the attributes of nodes. Even though edges are the constitutive elements of networks, there have been only few attempts to visualize attributes of edges. In this work, we focus on the critical importance of edge attributes for interpreting network visualizations and building trust in the underlying data. We propose ‘unfolding of edges’ as an interactive approach to integrate multivariate edge attributes dynamically into existing node-link diagrams. Unfolding edges is an in-situ approach that gradually transforms basic links into detailed representations of the associated edge attributes. This approach extends focus+context, semantic zoom, and animated transitions for network visualizations to accommodate edge details on-demand without cluttering the overall graph layout. We explore the design space for the unfolding of edges, which covers aspects of making space for the unfolding, of actually representing the edge context, and of navigating between edges. To demonstrate the utility of our approach, we present two case studies in the context of historical network analysis and computational social science. For these, web-based prototypes were implemented based on which we conducted interviews with domain experts. The experts’ feedback suggests that the proposed unfolding of edges is a useful tool for exploring rich edge information of multivariate graphs.

Presenter: Mark-Jan Bludau

WYTIWYR: A User Intent-Aware Framework with Multi-modal Inputs for Visualization Retrieval Paper
Shishi Xiao, Yihan Hou, Cheng Jin, and Wei Zeng

Abstract: Retrieving charts from a large corpus is a fundamental task that can benefit numerous applications such as visualization recommendations. The retrieved results are expected to conform to both explicit visual attributes (e.g., chart type, colormap) and implicit user intents (e.g., design style, context information) that vary upon application scenarios. However, existing example-based chart retrieval methods are built upon non-decoupled and low-level visual features that are hard to interpret, while definition-based ones are constrained to pre-defined attributes that are hard to extend. In this work, we propose a new framework, namely WYTIWYR (What-You-Think-Is-What-You-Retrieve), that integrates user intents into the chart retrieval process. The framework consists of two stages: first, the Annotation stage disentangles the visual attributes within the bitmap query chart; and second, the Retrieval stage embeds the user's intent with a customized text prompt as well as the query chart to recall the targeted retrieval results. We develop a prototype WYTIWYR system leveraging a contrastive language-image pre-training (CLIP) model to achieve zero-shot classification, and test the prototype on a large corpus with charts crawled from the Internet. Quantitative experiments, case studies, and qualitative interviews are conducted. The results demonstrate the usability and effectiveness of our proposed framework.

Presenter: Shishi Xiao
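
The zero-shot classification of chart attributes mentioned in the abstract can be approximated with a public CLIP checkpoint from Hugging Face transformers, as in the hedged sketch below. The prompt labels and the file name query_chart.png are placeholders, and this is not the WYTIWYR codebase.

    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    # Candidate chart-type attributes expressed as natural-language prompts.
    labels = ["a bar chart", "a line chart", "a scatter plot", "a pie chart"]
    image = Image.open("query_chart.png")   # hypothetical bitmap query chart

    inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
    probs = model(**inputs).logits_per_image.softmax(dim=-1)   # zero-shot scores
    print(dict(zip(labels, probs[0].tolist())))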

Beyond Alternative Text and Tables: Comparative Analysis of Visualization Tools and Accessibility Methods Paper
Nam Wook Kim, Grace Ataguba, Shakila Cherise S Joyner, Chuangdian Zhao, and Hyejin Im

Abstract: Modern visualization software and programming libraries have made data visualization construction easier for everyone. However, the extent of accessibility design they support for blind and low-vision people is relatively unknown. It is also unclear how they can improve chart content accessibility beyond conventional alternative text and data tables. To address these issues, we examined the current accessibility features in popular visualization tools, revealing limited support for the standard accessibility methods and scarce support for chart content exploration. Next, we investigate two promising accessibility approaches that provide off-the-shelf solutions for chart content accessibility: structured navigation and conversational interaction. We present a comparative evaluation study and discuss what to consider when incorporating them into visualization tools.

Presenter: Nam Wook Kim

ParaDime: A Framework for Parametric Dimensionality Reduction Paper
Andreas Hinterreiter, Christina Humer, Bernhard Kainz, and Marc Streit

Abstract: ParaDime is a framework for parametric dimensionality reduction (DR). In parametric DR, neural networks are trained to embed high-dimensional data items in a low-dimensional space while minimizing an objective function. ParaDime builds on the idea that the objective functions of several modern DR techniques result from transformed inter-item relationships. It provides a common interface for specifying these relations and transformations and for defining how they are used within the losses that govern the training process. Through this interface, ParaDime unifies parametric versions of DR techniques such as metric MDS, t-SNE, and UMAP. It allows users to fully customize all aspects of the DR process. We show how this ease of customization makes ParaDime suitable for experimenting with interesting techniques such as hybrid classification/embedding models and supervised DR. This way, ParaDime opens up new possibilities for visualizing high-dimensional data.

Presenter: Andreas Hinterreiter
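
To make the notion of parametric DR concrete, the minimal, generic PyTorch sketch below (not the ParaDime API) trains a small network to map items to 2D by matching pairwise distances between data space and embedding space, i.e., a metric-MDS-style loss built from inter-item relations. Because the result is a trained mapping rather than fixed per-point coordinates, new items can be projected with a single forward pass.

    import torch
    import torch.nn as nn

    X = torch.randn(500, 50)                      # placeholder high-dimensional data
    encoder = nn.Sequential(nn.Linear(50, 64), nn.ReLU(), nn.Linear(64, 2))
    optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

    for step in range(200):
        idx = torch.randperm(X.shape[0])[:128]    # mini-batch of items
        batch = X[idx]
        high_d = torch.cdist(batch, batch)        # relations in data space
        low_d = torch.cdist(encoder(batch), encoder(batch))   # relations in 2D
        loss = ((high_d - low_d) ** 2).mean()     # stress-like objective
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    embedding = encoder(X).detach()               # reusable parametric projection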

 
 
15:30 – 16:00 Coffee Break
Lobby
 
 
16:00 – 17:30 SP 3: Graphs and High-Dimensional Data Fast Forward Video
Chair: Alessio Arleo
Room 1A
Ask and You Shall Receive (a Graph Drawing): Testing ChatGPT’s Potential to Apply Graph Layout Algorithms Paper
Sara Di Bartolomeo, Giorgio Severi, Victor Schetinger, and Cody Dunne

Abstract: Large language models (LLMs) have recently taken the world by storm. They can generate coherent text, hold meaningful conversations, and be taught concepts and basic sets of instructions—such as the steps of an algorithm. In this context, we are interested in exploring the application of LLMs to graph drawing algorithms by performing experiments on ChatGPT, one of the most recent cutting-edge LLMs made available to the public. These algorithms are used to create readable graph visualizations. The probabilistic nature of LLMs presents challenges to implementing algorithms correctly, but we believe that LLMs' ability to learn from vast amounts of data and apply complex operations may lead to interesting graph drawing results. For example, we could enable users with limited coding backgrounds to use simple natural language to create effective graph visualizations. Natural language specification would make data visualization more accessible and user-friendly for a wider range of users. Exploring LLMs' capabilities for graph drawing can also help us better understand how to formulate complex algorithms for LLMs, a type of knowledge that could transfer to other areas of computer science. Overall, our goal is to shed light on the exciting possibilities of using LLMs for graph drawing—using the Sugiyama algorithm as a sample case—while providing a balanced assessment of the challenges and opportunities they present. A free copy of this paper with all supplemental materials to reproduce our results is available on osf.io/n5rxd.

Presenter: Sara Di Bartolomeo

Best Paper Identifying Cluttering Edges in Near-Planar Graphs Paper
Simon van Wageningen, Tamara Mchedlidze, and Alexandru Telea

Abstract: Planar drawings of graphs tend to be favored over non-planar drawings. Testing planarity and creating a planar layout of a planar graph can be done in linear time. However, creating readable drawings of nearly planar graphs remains a challenge. We therefore seek to answer which edges of nearly planar graphs create clutter in their drawings generated by mainstream graph drawing algorithms. We present a heuristic to identify problematic edges in nearly planar graphs and adjust their weights in order to produce higher quality layouts with spring-based drawing algorithms. Our experiments show that our heuristic produces significantly higher quality drawings for augmented grid graphs, augmented triangulations, and deep triangulations.

Presenter: Simon van Wageningen
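
The paper's contribution is the heuristic that flags cluttering edges; that heuristic is not reproduced here. The sketch below only illustrates the subsequent weight-adjustment step with NetworkX, down-weighting a hand-picked set of edges before a spring layout; the direction and magnitude of the adjustment are assumptions.

    import networkx as nx

    def layout_with_downweighted_edges(G, cluttering_edges, factor=0.2):
        """Reduce the influence of flagged edges on a spring-based layout."""
        H = G.copy()
        nx.set_edge_attributes(H, 1.0, "weight")
        for u, v in cluttering_edges:
            if H.has_edge(u, v):
                H[u][v]["weight"] = factor   # weaker attraction for flagged edges
        return nx.spring_layout(H, weight="weight", seed=42)

    # Example: a grid graph augmented with two long "shortcut" edges that
    # a clutter heuristic might flag.
    G = nx.grid_2d_graph(5, 5)
    shortcuts = [((0, 0), (4, 4)), ((0, 4), (4, 0))]
    G.add_edges_from(shortcuts)
    pos = layout_with_downweighted_edges(G, shortcuts)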

CatNetVis: Semantic Visual Exploration of Categorical High-Dimensional Data with Force-Directed Graph Layouts Paper
Michael Thane, Kai Michael Blum, and Dirk Joachim Lehmann

Abstract: We introduce CatNetVis, a novel method for representing semantic relations in categorical high-dimensional data. Traditional methods provide insights into many aspects of visual exploration of data. However, most of them lack information on relations between categories or even clusters of categories. The force-directed network layout utilized by CatNetVis enables a lightweight approach to exploring such semantic relations. The connections within the network are perceived as an intuitive metaphor for clusters of connections/relations in categorical data, denoted as communities. While the user interacts, visual encodings such as information about entropy and frequencies allow fast perception of the relations between categories and their frequencies. We illustrate how CatNetVis performs as an effective addition to traditional methods by demonstrating it on example data sets and comparing it to conventional methods.

Presenter: Michael Thane

Visualizing Element Interactions in Dynamic Overlapping Sets Paper
Shivam Agarwal

Abstract: Elements, the members in sets, may change their memberships over time. Moreover, elements also directly interact with each other, indicating an explicit connection between them. Visualizing both together becomes challenging. Using an existing dynamic set visualization as a basis, we propose an approach to encode the interactions of elements together with changing memberships in sets. We showcase the value of visually analyzing both aspects of elements together through two application examples. The first example shows the evolution of business portfolios and interactions (e.g., acquisitions and partnerships) among companies. A second example analyzes the dynamic collaborative interactions among researchers in computer science.

Presenter: Shivam Agarwal

Semantic Hierarchical Exploration of Large Image Datasets Paper
Alex Bäuerle, Christian van Onzenoodt, Daniel Jönsson, and Timo Ropinski

Abstract: We present a method for exploring and comparing large sets of images with metadata using a hierarchical interaction approach. Browsing many images at the same time requires either a large screen space or an abundance of scrolling interaction. We address this problem by projecting the images onto a two-dimensional Cartesian coordinate system by combining the latent space of vision neural networks and dimensionality reduction techniques. To alleviate overdraw of the images, we integrate a hierarchical layout and navigation, where each group of similar images is represented by the image closest to the group center. Advanced interactive analysis of images in relation to their metadata is enabled through integrated, flexible filtering based on expressions. Furthermore, groups of images can be compared through selection and automated aggregated visualization of their metadata. We showcase our method in three case studies involving the domains of photography, machine learning, and medical imaging.

MoneyVis: Open Bank Transaction Data for Visualization and Beyond Paper
Elif E. Firat, Dharmateja Vytla, Navya Vasudeva Singh, Zhuoqun Jiang, and Robert S. Laramee

Abstract: With the rapid evolution of financial technology (FinTech), the analysis of financial transactions is growing in importance. As the prevalence and number of financial transactions grow, so does the necessity of visual analysis tools to study the behavior represented by these transactions. However, real bank transaction data is generally private for security and confidentiality reasons, thus preventing its use for visual analysis and research. We present MoneyData, an anonymized open bank data set spanning seven years' worth of transactions for research and analysis purposes. To our knowledge, this is the first real-world retail bank transaction data that has been anonymized and made public for visualization and analysis by other researchers. We describe the data set, its characteristics, and the anonymization process and present some preliminary analysis and images as a starting point for future research. The transactions are also categorized to facilitate understanding. We believe the availability of this open data will greatly benefit the research community and facilitate further study of finance.

Presenter: Robert S. Laramee

 
 
16:00 – 17:30 FP 12: Where to Look? AR, VR, and Attention Fast Forward Video
Chair: Holger Theisel
Room 1BCD
Evaluating View Management for Situated Visualization in Web-based Handheld AR Paper
Andrea Batch, Sungbok Shin, Julia Liu, Peter William Scott Butcher, Panagiotis D. Ritsos, and Niklas Elmqvist

Abstract: As visualization makes the leap to mobile and situated settings, where data is increasingly integrated with the physical world using mixed reality, there is a corresponding need for effectively managing the immersed user’s view of situated visualizations. In this paper we present an analysis of view management techniques for situated 3D visualizations in handheld augmented reality: a shadowbox, a world-in-miniature metaphor, and an interactive tour. We validate these view management solutions through a concrete implementation of all techniques within a situated visualization framework built using a web-based augmented reality visualization toolkit, and present results from a user study in augmented reality accessed using handheld mobile devices.

Presenter: Andrea Batch

Illustrative Motion Smoothing for Attention Guidance in Dynamic Visualizations Paper
Johannes Eschner, Peter Mindek, and Manuela Waldner

Abstract: 3D animations are an effective method to learn about complex dynamic phenomena, such as mesoscale biological processes. The animators' goals are to convey a sense of the scene's overall complexity while, at the same time, visually guiding the user through a story of subsequent events embedded in the chaotic environment. Animators use a variety of visual emphasis techniques to guide the observers' attention through the story, such as highlighting, halos, or manipulating motion parameters of the scene. In this paper, we investigate the effect of smoothing the motion of contextual scene elements to attract attention to focus elements of the story exhibiting high-frequency motion. We conducted a crowdsourced study with 108 participants observing short animations with two illustrative motion smoothing strategies: geometric smoothing through noise reduction of contextual motion trajectories and visual smoothing through motion blur of context items. We investigated the observers' ability to follow the story as well as the effect of the techniques on speed perception in a molecular scene. Our results show that moderate motion blur significantly improves users' ability to follow the story. Geometric motion smoothing is less effective but increases the visual appeal of the animation. However, both techniques also slow down the perceived speed of the animation. We discuss the implications of these results and derive design guidelines for animators of complex dynamic visualizations.

Presenter: Johannes Eschner
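
The "geometric smoothing" strategy described above amounts to low-pass filtering the contextual motion trajectories. A minimal sketch of that idea follows, using a Gaussian filter from SciPy on a synthetic trajectory; the sampling rate and smoothing strength are arbitrary assumptions, not values from the study.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    # Synthetic 3D trajectory: slow drift plus high-frequency jitter.
    t = np.linspace(0.0, 1.0, 600)
    trajectory = np.stack([np.sin(2 * np.pi * t),
                           np.cos(2 * np.pi * t),
                           0.1 * t], axis=1)
    trajectory += 0.02 * np.random.randn(*trajectory.shape)   # jitter

    # Smooth each coordinate independently along the time axis to suppress
    # high-frequency motion of contextual elements.
    smoothed = gaussian_filter1d(trajectory, sigma=8, axis=0)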

Visual Gaze Labeling for Augmented Reality Studies Paper
Seyda Öney, Nelusa Pathmanathan, Michael Becher, Michael Sedlmair, Daniel Weiskopf, and Kuno Kurzhals

Abstract: Augmented Reality (AR) provides new ways for situated visualization and human-computer interaction in physical environments. Current evaluation procedures for AR applications rely primarily on questionnaires and interviews, providing qualitative means to assess usability and task solution strategies. Eye tracking extends these existing evaluation methodologies by providing indicators for visual attention to virtual and real elements in the environment. However, the analysis of viewing behavior, especially the comparison of multiple participants, is difficult to achieve in AR.
Specifically, the definition of areas of interest (AOIs), which is often a prerequisite for such analysis, is cumbersome and tedious with existing approaches.
To address this issue, we present a new visualization approach to define AOIs, label fixations, and investigate the resulting annotated scanpaths. Our approach utilizes automatic annotation of gaze on virtual objects and an image-based approach that also considers spatial context for the manual annotation of objects in the real world. Our results show that, with our approach, eye tracking data from AR scenes can be annotated and analyzed flexibly with respect to data aspects and annotation strategies.

Presenter: Seyda Öney

Been There, Seen That: Visualization of Movement and 3D Eye Tracking Data from Real-World Environments Paper
Nelusa Pathmanathan, Seyda Öney, Michael Becher, Michael Sedlmair, Daniel Weiskopf, and Kuno Kurzhals

Abstract: The distribution of visual attention can be evaluated using eye tracking, providing valuable insights into usability issues and interaction patterns. However, when used in real, augmented, and collaborative environments, new challenges arise that go beyond desktop scenarios and purely virtual environments. Toward addressing these challenges, we present a visualization technique that provides complementary views on the movement and eye tracking data recorded from multiple people in real-world environments. Our method is based on a space-time cube visualization and a linked 3D replay of recorded data. We showcase our approach with an experiment that examines how people investigate an artwork collection. The visualization provides insights into how people moved and inspected individual pictures in their spatial context over time. In contrast to existing methods, this analysis is possible for multiple participants without extensive annotation of areas of interest. Our technique was evaluated with a think-aloud experiment to investigate analysis strategies and an interview with domain experts to examine the applicability in other research fields.

Presenter: Nelusa Pathmanathan

 
 
 
 

| Friday, 16 June, 2023

08:30 – 12:00 Registration
09:00 – 10:30 STAR 5: Virtual Environments Fast Forward Video
Chair: Tim Gerrits
Room 1A
TVCG Visual Cue Based Corrective Feedback for Motor Skill Training in Mixed Reality: A Survey Paper
Florian Diller, Gerik Scheuermann, and Alexander Wiebel

Abstract: When learning a motor skill, it is helpful to get corrective feedback from an instructor. This will support the learner in executing the movement correctly. With modern technology, it is possible to provide this feedback via mixed reality. In most cases, this involves visual cues to help the user understand the corrective feedback. We analyzed recent research approaches utilizing visual cues for feedback in mixed reality. The scope of this paper is visual feedback for motor skill learning, which involves physical therapy, exercise, rehabilitation, etc. While some of the surveyed literature discusses therapeutic effects of the training, this paper focuses on visualization techniques. We categorized the literature from a visualization standpoint, including visual cues, technology, and characteristics of the feedback. This provided insights into how visual feedback in mixed reality is applied in the literature and how different aspects of the feedback are related. The insights obtained can help to better adjust future feedback systems to the target group and their needs. This paper also provides a deeper understanding of the characteristics of the visual cues in general and promotes future, more detailed research on this topic.

CGF State of the Art of Molecular Visualization in Immersive Virtual Environments Paper
David Kuťák, Pere-Pau Vázquez, Tobias Isenberg, Michael Krone, Marc Baaden, Jan Byška, Barbora Kozlíková, and Haichao Miao

Abstract: Visualization plays a crucial role in molecular and structural biology. It has been successfully applied to a variety of tasks, including structural analysis and interactive drug design. While some of the challenges in this area can be overcome with more advanced visualization and interaction techniques, others are challenging primarily due to the limitations of the hardware devices used to interact with the visualized content. Consequently, visualization researchers are increasingly trying to take advantage of new technologies to facilitate the work of domain scientists. Some typical problems associated with classic 2D interfaces, such as regular desktop computers, are a lack of natural spatial understanding and interaction, and a limited field of view. These problems could be solved by immersive virtual environments and corresponding hardware, such as virtual reality head-mounted displays. Thus, researchers are investigating the potential of immersive virtual environments in the field of molecular visualization. There is already a body of work ranging from educational approaches to protein visualization to applications for collaborative drug design. This review focuses on molecular visualization in immersive virtual environments as a whole, aiming to cover this area comprehensively. We divide the existing papers into different groups based on their application areas, and types of tasks performed. Furthermore, we also include a list of available software tools. We conclude the report with a discussion of potential future research on molecular visualization in immersive environments.

 
 
09:00 – 10:30 FP 13: Visualization and Machine Learning Fast Forward Video
Chair: Yong Wang
Room 1BCD
VISITOR: Visual Interactive State Sequence Exploration for Reinforcement Learning Paper
Yannick Metz, Eugene Bykovets, Lucas Joos, Daniel Keim, and Mennatallah El-Assady

Abstract: Understanding the behavior of deep reinforcement learning agents is a crucial requirement throughout their development.
Existing work has addressed the identification of observable behavioral patterns in state sequences or analysis of isolated internal representations; however, the overall decision-making of deep-learning RL agents remains opaque.
To tackle this, we present VISITOR, a visual analytics system enabling the analysis of entire state sequences, the diagnosis of singular predictions, and the comparison between agents.

A sequence embedding view enables the multiscale analysis of state sequences, utilizing custom embedding techniques for a stable spatialization of the observations and internal states.

We provide multiple layers: (1) a state space embedding, highlighting different groups of states inside the state-action sequences, (2) a trajectory view, emphasizing decision points, (3) a network activation mapping, visualizing the relationship between observations and network activations, (4) a transition embedding, enabling the analysis of state-to-state transitions. The embedding view is accompanied by an interactive reward view that captures the temporal development of metrics, which can be linked directly to states in the embedding. Lastly, a model list allows for the quick comparison of models across multiple metrics. Annotations can be exported to communicate results to different audiences.

Our two-stage evaluation with eight experts confirms the effectiveness in identifying states of interest, comparing the quality of policies, and reasoning about the internal decision-making processes.

Presenter: Yannick Metz

LINGO: Visually Debiasing Natural Language Instructions to Support Task Diversity Paper
Anjana Arunkumar, Shubham Sharma, Rakhi Agrawal, Sriram Chandrasekaran, and Chris Bryan

Abstract: Cross-task generalization is a significant outcome that defines mastery in natural language understanding. Humans show a remarkable aptitude for this, and can solve many different types of tasks, given definitions in the form of textual instructions and a small set of examples. Recent work with pre-trained language models mimics this learning style: users can define and exemplify a task for the model to attempt as a series of natural language prompts or instructions. While prompting approaches have led to higher cross-task generalization compared to traditional supervised learning, analyzing ‘bias’ in the task instructions given to the model is a difficult problem, and has thus been relatively unexplored. For instance, are we truly modeling a task, or are we modeling a user’s instructions? To help investigate this, we develop LINGO, a novel visual analytics interface that supports an effective, task-driven workflow to (1) help identify bias in natural language task instructions, (2) alter (or create) task instructions to reduce bias, and (3) evaluate pre-trained model performance on debiased task instructions. To robustly evaluate LINGO, we conduct a user study with both novice and expert instruction creators, over a dataset of 1,616 linguistic tasks and their natural language instructions, spanning 55 different languages. For both user groups, LINGO promotes the creation of more difficult tasks for pre-trained models, that contain higher linguistic diversity and lower instruction bias. We additionally discuss how the insights learned in developing and evaluating LINGO can aid in the design of future dashboards that aim to minimize the effort involved in prompt creation across multiple domains.

Presenter: Anjana Arunkumar

Doom or Deliciousness: Challenges and Opportunities for Visualization in the Age of Generative Models Paper
Victor Schetinger, Sara Di Bartolomeo, Mennatallah El-Assady, Andrew M McNutt, Matthias Miller, João Paulo Apolinário Passos, and Jane L. Adams

Abstract: Generative text-to-image models (as exemplified by DALL-E, MidJourney, and Stable Diffusion) have recently made enormous technological leaps, demonstrating impressive results in many graphical domains—from logo design to digital painting to photographic composition. However, the quality of these results has led to existential crises in some fields of art, leading to questions about the role of human agency in the production of meaning in a graphical context. Such issues are central to visualization, and while these generative models have yet to be widely applied in visualization, it seems only a matter of time until their integration is manifest. Seeking to circumvent similar ponderous dilemmas, we attempt to understand the roles that generative models might play across visualization. We do so by constructing a framework that characterizes what these technologies offer at various stages of the visualization workflow, augmented and analyzed through semi-structured interviews with 21 experts from related domains. Through this work, we map the space of opportunities and risks that might arise in this intersection, identifying doomsday prophecies and delicious low-hanging fruits that are ripe for research.

Presenter: Victor Schetinger

Visual Analytics on Network Forgetting for Task-Incremental Learning Paper
Ziwei Li, Jiayi Xu, Wei-Lun Chao, and Han-Wei Shen

Abstract: Task-incremental learning (Task-IL) aims to enable an intelligent agent to continuously accumulate knowledge from new learning tasks without catastrophically forgetting what it has learned in the past. It has drawn increasing attention in recent years, with many algorithms being proposed to mitigate neural network forgetting. However, none of the existing strategies is able to completely eliminate the issues. Moreover, explaining and fully understanding what knowledge is being forgotten, and how, during the incremental learning process still remains under-explored. In this paper, we propose KnowledgeDrift, a visual analytics framework, to interpret the network forgetting with three objectives: (1) to identify when the network fails to memorize the past knowledge, (2) to visualize what information has been forgotten, and (3) to diagnose how knowledge attained in the new model interferes with the one learned in the past. Our analytical framework first identifies the occurrence of forgetting by tracking the task performance under the incremental learning process and then provides in-depth inspections of drifted information via various levels of data granularity. KnowledgeDrift allows analysts and model developers to enhance their understanding of network forgetting and compare the performance of different incremental learning algorithms. Three case studies are conducted in the paper to further provide insights and guidance for users to effectively diagnose catastrophic forgetting over time.

Presenter: Ziwei Li

 
 
10:30 – 11:00 Coffee Break
Lobby
 
 
11:00 – 12:30 Capstone & Closing
Chair: Mario Hlawitschka
Room 1ABCD
Capstone Seeing is learning in high dimensions Paper
Alexandru C. Telea

Abstract: Multidimensional projections (MPs) are one of the techniques of choice for visually exploring large high-dimensional data. In parallel, machine learning (ML) and in particular deep learning applications are one of the most prominent generators of large, high-dimensional, and complex datasets which need visual exploration. In this talk, I will explore the connections, challenges, and potential synergies between these two fields. These involve “seeing to learn”, or how to deploy MP techniques to open the black box of ML models, and “learning to see”, or how to use ML to create better MP techniques for visualizing high-dimensional data. Specific questions I will cover include selecting suitable MP methods from the wide arena of such available techniques; using ML to create faster and simpler to use MP methods; assessing projections from the novel perspectives of stability and ability to handle time-dependent data; extending the projection metaphor to create dense representations of classifiers; and using projections not only to explain, but also to improve, ML models.

Closing

Presenter: Gerik Scheuermann