Following the tradition of previous conferences, SGP 2014 will feature a two-day school on geometry processing, specifically targeted towards graduate students at the beginning of their PhD studies. The courses will focus on fundamental concepts and important aspects of digital geometry processing.
This school is intended for SGP participants who are not yet familiar with the overall field and who will therefore benefit from a more thorough introduction to the topics covered at the subsequent SGP event, whose presentations leave little room for introductory material.
Programme:
Day 1 (Monday 7 July)
Day 2 (Tuesday 8 July)
Laplace-Beltrami: The Swiss Army Knife of Geometry Processing
A remarkable variety of basic geometry processing tools can be expressed in terms of the Laplace-Beltrami operator on a surface—understanding these tasks in terms of fundamental PDEs such as heat flow, Poisson equations, and eigenvalue problems leads to an efficient, unified treatment at the computational level. The central goal of this tutorial is to show students 1. how to build the Laplacian on a triangle mesh, and 2. how to use this operator to implement a diverse array of geometry processing tasks. We will also discuss alternative discretizations of the Laplacian (e.g., on point clouds and polygon meshes), recent developments in discretization (e.g., via power diagrams), and important properties of the Laplacian in the smooth setting that become essential in geometry processing (e.g., existence of solutions, boundary conditions, etc.).
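The two central steps named above can be made concrete with a minimal sketch, in Python with NumPy/SciPy (an illustration only, not the course material): assembling the cotangent Laplacian and lumped mass matrix of a triangle mesh, then using them for one implicit step of heat flow.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import spsolve

def cotan_laplacian(V, F):
    """Assemble the cotangent Laplacian L and lumped mass matrix M for a
    triangle mesh with vertices V (n x 3) and faces F (m x 3)."""
    n = V.shape[0]
    I, J, W = [], [], []
    areas = np.zeros(n)
    for tri in F:
        for k in range(3):
            i, j, o = tri[k], tri[(k + 1) % 3], tri[(k + 2) % 3]
            # cotangent of the angle at vertex o, opposite edge (i, j)
            u, v = V[i] - V[o], V[j] - V[o]
            w = 0.5 * np.dot(u, v) / np.linalg.norm(np.cross(u, v))
            I += [i, j, i, j]; J += [j, i, i, j]; W += [w, w, -w, -w]
        # distribute one third of the triangle area to each corner
        a = 0.5 * np.linalg.norm(np.cross(V[tri[1]] - V[tri[0]],
                                          V[tri[2]] - V[tri[0]]))
        areas[tri] += a / 3.0
    L = coo_matrix((W, (I, J)), shape=(n, n)).tocsr()
    M = coo_matrix((areas, (np.arange(n), np.arange(n))), shape=(n, n)).tocsr()
    return L, M

def heat_step(V, F, u0, t):
    """One implicit Euler step of heat flow: solve (M - t L) u = M u0."""
    L, M = cotan_laplacian(V, F)
    return spsolve((M - t * L).tocsc(), M @ u0)
```

With this sign convention L has zero row sums and is negative semi-definite, so M - tL is positive definite and the implicit step is unconditionally stable; with natural (Neumann) boundary conditions the total heat 1ᵀMu is conserved.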
Tutorial on Generative Modeling
This tutorial introduces the concepts and techniques of generative modeling. It starts with introductory examples in the first learning unit to motivate the main idea: describing a shape by an algorithm. After explaining the technical terms, the second unit focuses on the technical details of algorithm descriptions, programming languages, grammars, and compiler construction, which play an important role in generative modeling. The purely geometric aspects are covered by the third learning unit, which comprises the concepts of geometric building blocks and advanced modeling operations. Notes on semantic modeling aspects – i.e. the meaning of a shape – complete this unit and introduce the inverse problem: what is the perfect generative description for a real object? The answer to this question is discussed in the fourth learning unit, while its application is shown (among other applications of generative and inverse-generative modeling) in the fifth unit. A discussion of open research questions concludes the tutorial.
The assumed background knowledge of the audience comprises the basics of computer science (including algorithm design and the principles of programming languages) as well as a general knowledge of computer graphics. The tutorial takes approximately 120 minutes and enables the attendees to take an active part in future research on generative modeling.
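The main idea – a shape described by an algorithm – can be illustrated with a toy rewriting grammar (a hypothetical Python sketch, not the tutorial's own system): production rules derive a command string, which a turtle interpreter then evaluates into explicit geometry.

```python
import math

# Hypothetical example rule set: an L-system-style grammar whose single
# production expands "F" into a Koch-like refinement.
RULES = {"F": "F+F-F-F+F"}

def derive(axiom, rules, depth):
    """Apply the production rules `depth` times to the axiom string."""
    s = axiom
    for _ in range(depth):
        s = "".join(rules.get(c, c) for c in s)
    return s

def interpret(program, step=1.0, angle=math.pi / 2):
    """Turtle interpretation: evaluate the generative description (a
    command string) into explicit geometry (a 2D polyline)."""
    x, y, heading = 0.0, 0.0, 0.0
    pts = [(x, y)]
    for c in program:
        if c == "F":                     # move forward, drawing
            x += step * math.cos(heading)
            y += step * math.sin(heading)
            pts.append((x, y))
        elif c == "+":                   # turn left
            heading += angle
        elif c == "-":                   # turn right
            heading -= angle
    return pts
```

The inverse problem discussed in the fourth unit asks the opposite question: given the polyline, recover a compact rule set such as `RULES`.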
LIBIGL: a C++ library for geometry processing without a mesh data structure
LIBIGL is a new open-source C++ library for geometry processing research and development. Dropping the heavy data structures of traditional geometry libraries, LIBIGL is a simple header-only library of encapsulated functions. This combines the rapid prototyping familiar to MATLAB or Python programmers with the performance and versatility of C++. The tutorial is a self-contained, hands-on introduction to LIBIGL. Via live coding and interactive examples, we demonstrate how to accomplish various common geometry processing tasks, such as computation of differential quantities and operators, real-time deformation, global parametrization, numerical optimization, and mesh repair. Accompanying lecture notes contain further details and cross-platform example applications for each topic.
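The style the library advocates – encapsulated functions operating directly on plain vertex and face matrices, with no mesh data structure – can be sketched in Python with NumPy (this is an illustration of the style, not LIBIGL's actual C++ API; only the name `doublearea` is borrowed from `igl::doublearea`).

```python
import numpy as np

def per_face_normals(V, F):
    """Unit normal per triangle, computed directly from the plain
    (n x 3) vertex matrix V and (m x 3) face index matrix F."""
    e1 = V[F[:, 1]] - V[F[:, 0]]
    e2 = V[F[:, 2]] - V[F[:, 0]]
    N = np.cross(e1, e2)
    return N / np.linalg.norm(N, axis=1, keepdims=True)

def doublearea(V, F):
    """Twice the area of each triangle (named after igl::doublearea;
    this Python version is just an illustration of the interface style)."""
    e1 = V[F[:, 1]] - V[F[:, 0]]
    e2 = V[F[:, 2]] - V[F[:, 0]]
    return np.linalg.norm(np.cross(e1, e2), axis=1)
```

Because every function takes and returns plain matrices, results compose freely and prototyping feels much like MATLAB, which is the point the tutorial makes about the C++ original.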
Feature extraction, matching and evaluation for shape registration
Feature extraction and matching (FEM) of 3D shapes has numerous applications in computer graphics and vision, including object modelling, retrieval, recognition, and animation. However, due to imaging noise, occlusion, appearance and disappearance of points, cluttered background, or featureless simple geometry, such methods almost unavoidably introduce false matches, which complicates the task.
This tutorial will first state the main problems in 3D shape registration and then survey the most widely used methods for the task. These methods can be broadly classified into three main categories: FEM, the iterative closest point (ICP) algorithm and its variants, and randomized search. FEM is of particular interest due to its wide applicability to shapes of varying complexity, degree of overlap, and magnitude of transformation. Two state-of-the-art FEM methods, the signature of histograms of orientations (SHOT) and the universal shape context (USC), will therefore be described in detail, so that the audience gets a clear idea of the main ideas and steps inside these techniques and of how FEM methods work in general. While state-of-the-art FEM methods generally work well, they usually introduce many false matches, up to 95%, leading the underlying transformation to be estimated inaccurately and the overlapping shapes to be aligned incorrectly. To improve the performance of FEM methods, this tutorial will then describe two state-of-the-art methods for evaluating the established point matches, so that the underlying transformation can be estimated as accurately as possible from the evaluated/weighted point matches, and so that refinement using the ICP algorithm or one of its variants, if necessary, is more likely to succeed. To demonstrate the performance of various feature extraction, matching, and evaluation techniques, experimental results on real data will be presented, with the performance of each method measured using various metrics: average registration error, root mean square error, relative differences in the estimated rotation axis, rotation angle, and translation vector of the underlying transformation, computational time, etc. Finally, conclusions will be drawn regarding existing feature extraction, matching, and evaluation methods, and future research directions will be indicated.
This tutorial will place particular emphasis on the ideas and the basic computational steps without assuming any advanced knowledge of the topics. Some knowledge of digital imaging and the potential applications of 3D imaging will, however, help the audience to follow.
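One computational step mentioned above – estimating the underlying rigid transformation from (weighted) point matches – can be sketched as follows. This is the standard SVD-based (Kabsch/Umeyama) least-squares solution in Python, shown for illustration; it is not one of the specific evaluation methods presented in the tutorial.

```python
import numpy as np

def rigid_from_matches(P, Q, w=None):
    """Least-squares rigid transform (R, t) mapping points P (k x 3) onto
    their matches Q (k x 3), optionally weighted by match-evaluation
    scores w, via the standard SVD (Kabsch/Umeyama) solution."""
    if w is None:
        w = np.ones(len(P))
    w = w / w.sum()
    # weighted centroids
    cp = (w[:, None] * P).sum(axis=0)
    cq = (w[:, None] * Q).sum(axis=0)
    # weighted cross-covariance of the centered point sets
    H = (w[:, None] * (P - cp)).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    # reflection guard: force det(R) = +1
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

Down-weighting false matches via w is exactly why the match-evaluation step matters: the closed-form estimate is a least-squares fit and is easily skewed by outliers.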
Structure preserving representations of Euclidean geometry through conformal geometric algebra
Conformal Geometric Algebra (CGA) is used to encode Euclidean geometry compactly, resulting in software with fewer exceptions for the usual primitives (points, lines, planes), and extending the Euclidean primitives to spheres, circles, tangents, et cetera in a consistent algebraic manner. Its power lies in being a computational framework in which constructions are represented in a structure-preserving manner: moving an element constructed from primitives is identical to moving the primitives and constructing the element (trivial mathematically, but our usual linear-algebra representations fail at this). I show the essential steps to get from standard linear algebra to CGA, with a focus on the representation of transformations (especially Euclidean motions); the primitives then follow.
The presentation will interleave geometric equations with interactive software, and should give you a full overview of how CGA works. It will provide an entry point to other texts (such as my book Geometric Algebra for Computer Science, Morgan Kaufmann, 2009).
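The structure-preservation property is easiest to see in the simpler, non-conformal setting of quaternions, the rotation versors of 3D geometric algebra. The sketch below (an illustration only, not CGA itself) shows that transforming a constructed element – here the cross product of two vectors – via the sandwich product gives the same result as constructing it from the transformed primitives; CGA extends this pattern to all Euclidean motions and to primitives such as spheres and circles.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2])

def sandwich(q, v):
    """Rotate 3-vector v by unit quaternion q via the sandwich q v q*."""
    p = np.concatenate([[0.0], v])            # embed v as a pure quaternion
    qc = q * np.array([1.0, -1, -1, -1])      # conjugate of q
    return qmul(qmul(q, p), qc)[1:]
```

In CGA the same sandwich construction, applied with rotors and translators, moves any element – and hence any construction built from elements – consistently, which is the "structure-preserving" claim above.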
IQmulus Workshop Invited Talks
SpatialHadoop: A MapReduce Framework for Spatial Data
Mohamed F. Mokbel, University of Minnesota, US
The talk is about SpatialHadoop, a full-fledged MapReduce framework with native support for spatial data. SpatialHadoop is a comprehensive extension to Hadoop that injects spatial data awareness into each Hadoop layer, namely the language, storage, MapReduce, and operations layers. In the language layer, SpatialHadoop adds a simple and expressive high-level language for spatial data types and operations. In the storage layer, SpatialHadoop adapts traditional spatial index structures, namely the grid, the R-tree, and the R+-tree, to form a two-level spatial index. SpatialHadoop enriches the MapReduce layer with new components for efficient and scalable spatial data processing. In the operations layer, SpatialHadoop is already equipped with three basic operations, range query, kNN, and spatial join, as case studies. Other spatial operations can be added following a similar approach. We will also discuss various projects that we are carrying out, based on SpatialHadoop, to manage NASA satellite data, Twitter data, and OpenStreetMap data.
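The benefit of the two-level index can be illustrated with a toy sketch (hypothetical Python, not SpatialHadoop's actual API or on-disk format): a global grid assigns records to partitions, and a range query scans only the partitions whose cells overlap the query window, instead of the whole data set.

```python
from collections import defaultdict

def partition(points, cell=10.0):
    """Global index: assign each 2D point to a grid-cell partition,
    loosely analogous to SpatialHadoop's partitioning of HDFS blocks."""
    grid = defaultdict(list)
    for x, y in points:
        grid[(int(x // cell), int(y // cell))].append((x, y))
    return grid

def range_query(grid, xmin, ymin, xmax, ymax, cell=10.0):
    """Answer a range query by visiting only overlapping partitions."""
    hits = []
    for cx in range(int(xmin // cell), int(xmax // cell) + 1):
        for cy in range(int(ymin // cell), int(ymax // cell) + 1):
            # local refinement inside each candidate partition
            for x, y in grid.get((cx, cy), ()):
                if xmin <= x <= xmax and ymin <= y <= ymax:
                    hits.append((x, y))
    return hits
```

In the real system the "partitions" are HDFS blocks and the refinement step runs as map tasks, so pruning at the global level directly reduces the number of tasks launched.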
Point Cloud Data Management
Peter van Oosterom, Delft University of Technology, NL
Point cloud data are an important source of 3D geo-information. Modern acquisition technologies, such as laser scanning, dense image matching from photos, or multibeam echo-sounding, generate point clouds with billions or even trillions of points, especially with repeated scans of the same area (the temporal dimension). These point clouds are too massive to be handled efficiently by common geo-ICT infrastructures. Therefore, core support for point cloud data types in existing spatial DBMSs is needed, alongside the existing vector and raster data types. Further, a new, specific web-services protocol for point cloud data is being investigated, supporting progressive transfer based on multiple resolutions. The eScience project investigates solutions to better exploit the rich potential of point cloud data. The project partners are Rijkswaterstaat, Fugro, Oracle, the Netherlands eScience Centre, and TU Delft. An inventory of user requirements has been made through structured interviews with users from different backgrounds: government, industry, and academia. Based on these requirements, a benchmark has been developed to compare various point cloud data management solutions with respect to functionality and performance. The main test data set is the second national height map of the Netherlands, AHN2, with 6 to 10 samples for every square metre of the country, resulting in more than 100 billion points with 3 cm accuracy. The AHN2 data is specified and financed by the Dutch government (Rijkswaterstaat and the regional water boards) and produced under contract by engineering firms such as Fugro, initially for water management applications (flood modelling, dike monitoring); more and more other government, commercial, and scientific applications are now being developed (forest mapping, generation of 3D city models, etc.).
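A common building block for point cloud support in a DBMS – shown here only as a generic illustration, not necessarily the scheme used in this project – is to group points into storage blocks along a space-filling curve, e.g. by sorting on Morton (Z-order) keys, so that spatially nearby points land in the same block and spatial queries touch few blocks.

```python
def morton2d(x, y, bits=16):
    """Interleave the bits of non-negative integer cell coordinates x and
    y into a Morton (Z-order) key; sorting by key groups spatially
    nearby points together."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)        # x bits at even positions
        key |= ((y >> i) & 1) << (2 * i + 1)    # y bits at odd positions
    return key

def blocks(points, scale=100.0, block_size=4):
    """Sort 2D points by Morton key of their quantized coordinates and
    cut the sequence into fixed-size storage blocks."""
    keyed = sorted(points,
                   key=lambda p: morton2d(int(p[0] * scale),
                                          int(p[1] * scale)))
    return [keyed[i:i + block_size]
            for i in range(0, len(keyed), block_size)]
```

Blocking along such a curve is what makes "more than 100 billion points" tractable: the DBMS stores and indexes compact blocks rather than individual rows per point.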