TUTORIALS
Monday, May 3, 2021
Target audience: Introductory to intermediate
Outline:
In recent decades, geometry processing has attracted growing interest thanks to the wide availability of new devices and software that make 3D digital data accessible and manipulable to everyone. Typical issues faced by geometry processing algorithms include the variety of discrete representations for 3D data (point clouds, polygonal or tetrahedral meshes, and voxels) and the types of deformation this data may undergo. Powerful approaches to these issues come from the spectral decomposition of canonical differential operators, such as the Laplacian, which provides a rich, informative, robust, and invariant representation of 3D objects. Reasoning about spectral quantities is at the core of spectral geometry, which has enabled unprecedented performance in many tasks of computer graphics (e.g., shape matching with functional maps, shape retrieval, compression, and texture transfer), as well as opening new directions of research.
The focus of this tutorial is on inverse computational spectral geometry. We will offer a different perspective on spectral geometric techniques, supported by recent successful methods in the graphics and 3D vision communities, as well as older, but often overlooked, results. Here, the interest shifts from the "forward" path typical of spectral geometry pipelines (e.g., computing Laplacian eigenvalues and eigenvectors of a given shape) to the inverse path (e.g., recovering a shape from given Laplacian eigenvalues, as in the classical "hearing the shape of the drum" problem). As is typical of inverse problems, the ill-posed nature of the reverse direction requires additional effort, but the benefits can be considerable, as showcased on several challenging tasks in graphics and geometry processing.
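The "forward" direction mentioned above can be sketched in a few lines: build the Laplacian of a discretized shape and compute its spectrum. A minimal sketch using NumPy, where a 3-vertex path graph stands in for a discretized 1D shape (a toy choice for illustration, not part of the tutorial material):

```python
import numpy as np

# Laplacian of a 3-vertex path graph (a toy stand-in for a
# discretized shape): L = D - A, with D the degree matrix
# and A the adjacency matrix.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

# The "forward" problem: compute the spectrum of the given shape.
eigenvalues = np.linalg.eigvalsh(L)  # sorted ascending: approx. [0, 1, 3]
```

The inverse problem discussed in the tutorial runs this pipeline backwards: given the eigenvalues, recover a shape that produces them.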
The purpose of the tutorial is to overview the foundations and the current state of the art in inverse computational spectral geometry, and to highlight the main benefits of inverse spectral pipelines, as well as their current limitations and future developments in the context of computer graphics. The tutorial is aimed at a wide audience with a basic understanding of geometry processing, and will be accessible and interesting to students, researchers, and practitioners from both academia and industry.
Syllabus:
- Introduction: Motivation and historical overview
- Problem foundations: Shape-from-spectrum as a classical problem in mathematical physics, inverse eigenvalue problems in matrix calculus, shape-from-metric and shape-from-intrinsic operators
- Background: The forward direction of classical spectral geometry processing
- Inverse spectral geometry in CG: Motivations, applications, and examples in graphics
- Computational techniques: Existing approaches based on formulating an optimization problem, numerical methods and machine learning-based techniques
- Applications: Inverse spectral geometric pipelines addressing practical problems in graphics
- Open problems and future directions: Main limitations of current approaches, next steps and open problems
- Conclusions and Q&A
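As a toy instance of the shape-from-spectrum problem listed in the syllabus: for a 1D string of length ℓ with fixed ends, the Dirichlet Laplacian eigenvalues are λ_k = (kπ/ℓ)², so the "shape" (here, just the length) can be recovered in closed form from the first eigenvalue. A hedged Python sketch of this special case (the general problem, as the tutorial stresses, is ill-posed and needs optimization or learning):

```python
import math

def string_spectrum(length, k_max=3):
    """Dirichlet Laplacian eigenvalues of a 1D string: (k*pi/length)**2."""
    return [(k * math.pi / length) ** 2 for k in range(1, k_max + 1)]

# Forward: a string of length 2.0 produces this spectrum.
target = string_spectrum(2.0)

# Inverse ("hearing the shape"): recover the length from the
# first eigenvalue, since length = pi / sqrt(lambda_1).
recovered = math.pi / math.sqrt(target[0])
print(recovered)  # 2.0
```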
Target audience: Intermediate
Outline:
Volumetric video, free-viewpoint video, and 4D reconstruction all refer to the process of reconstructing 3D content over time from a multi-view setup. The approach is steadily gaining popularity in both research and industry: volumetric video is increasingly used to acquire dynamic photorealistic content instead of relying on traditional 3D content creation pipelines. The aim of the tutorial is to provide an overview of the entire volumetric video pipeline. Furthermore, it presents existing projects that may serve as a starting point into this topic at the intersection of computer vision and graphics.
The first part of the tutorial will focus on the process of computing 3D models from captured videos. Topics include content acquisition with affordable hardware, photogrammetry, and surface reconstruction from point clouds. Notably, the presenters will not only give an overview of their topics but have also open-sourced their implementations. The second part will focus on the usage and distribution of volumetric video, including data compression, streaming, and post-processing such as pose modification or seamless blending. The tutorial will conclude with an overview of perceptual studies on quality assessment of 3D and 4D content.
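Surface reconstruction methods such as Poisson reconstruction take oriented points as input, and the normals are commonly estimated from each point's local neighborhood. A minimal, hedged sketch of the standard PCA-based normal estimate (NumPy, with a synthetic neighborhood sampled from the plane z = 0; this is a generic building block, not code from the presented projects):

```python
import numpy as np

def estimate_normal(neighbors):
    """Estimate a surface normal as the eigenvector of the neighbors'
    covariance matrix with the smallest eigenvalue (classic PCA fit)."""
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered
    eigenvalues, eigenvectors = np.linalg.eigh(cov)
    return eigenvectors[:, 0]  # eigh sorts eigenvalues ascending

# Synthetic neighborhood sampled from the plane z = 0:
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, size=(20, 2)), np.zeros(20)])
n = estimate_normal(pts)  # approx. +/- [0, 0, 1], the plane normal
```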
Syllabus:
Introduction (5 min)
Part I – Capturing Volumetric Video with Open-Source Tools
Meshroom, an open-source photogrammetry pipeline (Fabien Castan and Simone Gasparini – 25 min)
Low-cost volumetric video with consumer-grade sensors (Dimitris Zarpalas and Nikolaos Zioulis – 25 min)
Poisson surface reconstruction (Misha Kazhdan – 25 min)
Part II – Beyond Capturing
Perceptual aspects on volumetric video quality (Eduard Zell – 20 min)
Interactive volumetric videos (Anna Hilsmann – 30 min)
4D compression and streaming (Andrea Tagliasacchi – 20 min)
Conclusion (10 min)
Tuesday, May 4, 2021
Target audience: Introductory to intermediate
Outline:
This tutorial will present the challenges and unique aspects of mixed reality visualization applications, such as the organization of data for visualization, real-world data sources for visualization, real-time photo-realistic rendering techniques, diminished reality rendering techniques, and cognitive and perceptual issues. We are also interested in ways that the existing body of research in the graphics and visualization community can be applied in this research area. We expect the tutorial to be a working event, presenting both the state of the art and the open challenges in this research area. Our tutorial is open to both academia and industry, and is intended as a community hub for everyone interested in an introduction to the unique challenges of mixed reality visualization. We welcome a diverse audience of students, researchers, and developers with a basic understanding of computer graphics and computer vision.
Syllabus:
Part I – Visually Coherent Mixed Reality
• Light Estimation and Camera Simulation (David Mandl)
• Material Estimation (Kasper Ladefoged)
• Diminished Reality (Shohei Mori)
Part II – Dynamic Mixed Reality
• Perceptual Issues in Mixed Reality (Markus Tatzgern)
• Displaying Mixed Reality Environments (Christoph Ebner)
• Authoring Dynamic MR Environments (Peter Mohr)
Wednesday, May 5, 2021
Target audience: Introductory to intermediate
Outline: Since its inception, the CUDA programming model has been continuously evolving. Because the CUDA toolkit aims to consistently expose cutting-edge capabilities for general-purpose compute jobs, the features added in each new version reflect the rapid changes we observe in GPU architectures. Over the years, changes in hardware, a growing scope of built-in functions and libraries, and advancing C++ standard compliance have expanded the design choices when coding for CUDA and significantly altered the guidelines for achieving peak performance. In this tutorial, we give a thorough introduction to the CUDA toolkit, demonstrate how a contemporary application can benefit from recently introduced features, and show how they can be applied to task-based GPU scheduling in particular.
To provide a solid understanding of how CUDA applications can achieve peak performance, Part 1 of this tutorial outlines the modern CUDA architecture. Following a basic introduction, we show how language features are linked to, and constrained by, the underlying physical hardware components. Furthermore, we describe common applications of massively parallel programming, offer a detailed breakdown of potential issues, and list ways to mitigate performance impacts. A sample analysis of PTX and SASS snippets illustrates how code patterns in CUDA are mapped to actual hardware instructions.
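The kernel/grid/block execution model covered in Part 1 can be illustrated even without a GPU: every thread derives a global index from its block and thread coordinates and processes one element. A minimal Python simulation of this indexing scheme (an analogy only, not the actual CUDA API; see the linked course code samples for real CUDA):

```python
def saxpy_kernel(block_idx, thread_idx, block_dim, a, x, y, out):
    """One simulated CUDA thread: i = blockIdx.x * blockDim.x + threadIdx.x."""
    i = block_idx * block_dim + thread_idx
    if i < len(x):            # guard against out-of-range threads
        out[i] = a * x[i] + y[i]

def launch(kernel, grid_dim, block_dim, *args):
    """Simulate a 1D grid launch by visiting every (block, thread) pair."""
    for block_idx in range(grid_dim):
        for thread_idx in range(block_dim):
            kernel(block_idx, thread_idx, block_dim, *args)

n = 10
x = list(range(n))
y = [1.0] * n
out = [0.0] * n
grid_dim = (n + 3) // 4       # enough 4-thread blocks to cover n elements
launch(saxpy_kernel, grid_dim, 4, 2.0, x, y, out)
print(out)  # [1.0, 3.0, 5.0, ..., 19.0]
```

On a real GPU the blocks run concurrently, which is why the out-of-range guard and the grid-size calculation matter.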
Course Notes and Code samples: Please find them on https://cuda-tutorial.github.io/
Syllabus:
- Fundamentals of CUDA
- History of the GPU
- The CUDA execution model
- Kernels, grids, blocks and warps
- Building CUDA applications
- Debugging and Profiling
- Common CUDA libraries
- Understanding the GPU hardware
- The CUDA memory model
- Warp scheduling and latency hiding
- Independent thread scheduling
- Performance metrics and optimization
- Basics of PTX and SASS
Thursday, May 6, 2021
Target audience: Introductory to intermediate
Outline: In Part 2, we focus on novel features enabled by the arrival of the CUDA 10+ toolkits and the Volta+ architectures, such as independent thread scheduling (ITS), tensor cores, and the graph API. In addition to basic use-case demonstrations, we outline our own experiences with these capabilities and their potential performance benefits. We also discuss how long-standing best practices are affected by these changes and describe common caveats for dealing with legacy code on recent GPU models. We show how these considerations can be implemented in practice by presenting state-of-the-art research on task-based GPU scheduling, and how the dynamic adjustment of thread roles and group configurations can significantly increase performance.
Course Notes and Code samples: Please find them on https://cuda-tutorial.github.io/
Syllabus:
- Recent CUDA features and trends
- Synchronization with independent thread scheduling
- Graph API
- Arrival-wait barriers
- Tensor cores
- Set-aside L2 cache
- libcu++: a standard library for CUDA
- Global memory vs. texture memory
- Shared memory vs. the L1 cache
- Task-based CUDA programming
- Programming on different levels of the GPU hierarchy
- Persistent threads and megakernels
- Dynamic parallelism and task-queues
- GPU queues
- Dynamic memory management
- Mixed-parallelism usage scenarios: image processing, software rasterization, mesh subdivision, building spatial acceleration structures and more
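The persistent-threads and task-queue patterns listed above replace one-kernel-launch-per-job execution with long-lived workers that keep pulling work from a shared queue. A CPU-side Python analogy of that control flow (the real GPU pattern uses device-resident queues and atomics, not `queue.Queue`; squaring stands in for arbitrary task work):

```python
import queue
import threading

def persistent_worker(tasks, results):
    """A long-lived worker: pulls tasks until a stop marker arrives,
    mirroring how a persistent GPU block drains a device-side queue."""
    while True:
        task = tasks.get()
        if task is None:          # stop marker
            break
        results.put(task * task)  # stand-in for the task's actual work

tasks, results = queue.Queue(), queue.Queue()
workers = [threading.Thread(target=persistent_worker, args=(tasks, results))
           for _ in range(4)]
for w in workers:
    w.start()
for item in range(8):             # tasks can be enqueued dynamically
    tasks.put(item)
for _ in workers:                 # one stop marker per worker
    tasks.put(None)
for w in workers:
    w.join()
out = sorted(results.get() for _ in range(8))
print(out)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Keeping the workers resident avoids repeated launch overhead and lets completed tasks spawn new ones, which is the core appeal of the megakernel and task-queue designs above.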