19 January 2018 (Time: 15:15-17:00 Location: BBG-214)

Title: GANerated Hands for Real-Time 3D Hand Tracking from Monocular RGB
Authors: Franziska Mueller, Florian Bernard, Oleksandr Sotnychenko, Dushyant Mehta, Srinath Sridhar, Dan Casas, Christian Theobalt

Speakers: Geert Beuneker and Finn van der Heide
Summary: We address the highly challenging problem of real-time 3D hand tracking based on a monocular RGB-only sequence. Our tracking method combines a convolutional neural network with a kinematic 3D hand model, such that it generalizes well to unseen data, is robust to occlusions and varying camera viewpoints, and leads to anatomically plausible as well as temporally smooth hand motions. For training our CNN we propose a novel approach for the synthetic generation of training data that is based on a geometrically consistent image-to-image translation network. To be more specific, we use a neural network that translates synthetic images to “real” images, such that the so-generated images follow the same statistical distribution as real-world hand images. For training this translation network we combine an adversarial loss and a cycle-consistency loss with a geometric consistency loss in order to preserve geometric properties (such as hand pose) during translation. We demonstrate that our hand tracking system outperforms the current state-of-the-art on challenging RGB-only footage.
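A minimal sketch (my own, not the authors' code) of how such a training objective can combine an adversarial term, a cycle-consistency term, and a geometric-consistency term; the loss weights and the mocked network outputs are purely illustrative assumptions:

```python
# Sketch of a GeoConGAN-style generator objective: adversarial + cycle-consistency
# + geometric-consistency terms. Network outputs are mocked with NumPy arrays.
import numpy as np

def adversarial_loss(disc_scores_on_fake):
    # Generator wants the discriminator to score translated images as real (1).
    eps = 1e-7
    return -np.mean(np.log(disc_scores_on_fake + eps))

def cycle_consistency_loss(original, reconstructed):
    # L1 distance between a synthetic image and its synth -> real -> synth round trip.
    return np.mean(np.abs(original - reconstructed))

def geometric_consistency_loss(silhouette_before, silhouette_after):
    # Penalize changes in hand geometry (e.g. a binary silhouette) caused by translation.
    return np.mean(np.abs(silhouette_before - silhouette_after))

# Mock tensors standing in for network outputs on one batch.
rng = np.random.default_rng(0)
synthetic = rng.random((4, 64, 64, 3))
reconstructed = synthetic + 0.01 * rng.standard_normal(synthetic.shape)
sil_before = (rng.random((4, 64, 64)) > 0.5).astype(float)
sil_after = sil_before.copy()
disc_scores = rng.uniform(0.4, 0.9, size=(4,))

lambda_cyc, lambda_geo = 10.0, 100.0   # assumed weights, for illustration only
total = (adversarial_loss(disc_scores)
         + lambda_cyc * cycle_consistency_loss(synthetic, reconstructed)
         + lambda_geo * geometric_consistency_loss(sil_before, sil_after))
print(f"total generator loss: {total:.4f}")
```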

Title: Design and Development of Intelligent AGV Using Computer Vision and Artificial Intelligence
Authors: Saurin Sheth, Anand Ajmera, Arpit Sharma, Shivang Patel, Chintan Kathrecha
Speakers: Bram Jonkers and Midas Schonewille

Summary: The main aim of this paper is to develop a smart material handling system using an AGV (automated guided vehicle). The task is to transport a container of a fixed size from a defined start point to a defined end point. An overhead camera is located at the boundary of the arena, positioned so that the complete arena can be seen in a single frame. The camera captures real-time images of the vehicle to determine its position and orientation using the OpenCV library. The computer also performs the task of path planning using artificial intelligence algorithms such as RRT (rapidly exploring random tree) and A* (A Star). The outcome of this process is the shortest path from the start point to the end point while avoiding the obstacles. The commands should be sufficient for the robot to understand where it should go next, i.e., the next pose for the robot. This process continues until the goal is reached. To achieve this, a few algorithms are developed for shape detection and edge detection; they help in determining the obstacles and the free area/path where the robot can traverse. The image from the overhead camera is used to compute the shortest global path from start to end using image processing. The computer does this using various packages in ROS (robot operating system). This global path generates waypoints for the robot to traverse, and the image also provides the current pose of the robot. Even though the orientation of the obstacles varies the path of the AGV, it will always follow the shortest path. Thus, the AGV exhibits artificial intelligence.
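As an illustration of the global planning step, here is a minimal grid-based A* sketch; the occupancy grid, the 4-connected neighbourhood, and the Manhattan heuristic are my own illustrative assumptions, not the paper's implementation:

```python
# Grid-based A*: the overhead camera image is reduced to an occupancy grid
# (hand-written here) and A* returns the shortest obstacle-free path as waypoints.
import heapq
from itertools import count

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    tie = count()                      # tie-breaker so the heap never compares cells
    open_set = [(h(start), next(tie), start)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, _, node = heapq.heappop(open_set)
        if node == goal:               # reconstruct the waypoint list
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1]
        r, c = node
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nb
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g_cost[node] + 1
                if ng < g_cost.get(nb, float("inf")):
                    g_cost[nb] = ng
                    came_from[nb] = node
                    heapq.heappush(open_set, (ng + h(nb), next(tie), nb))
    return None                        # no obstacle-free path exists

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],   # 1 = obstacle detected in the camera image
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 3)))
```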

12 January 2018 (Time: 15:15-17:00 Location: BBG-214)

Title: Game Channels for Trustless Off-Chain Interactions in Decentralized Virtual Worlds
Author: Daniel Kraft

Speakers: Marthe Hegeman and Tobias van Driessel
Summary: Blockchains can be used to build multi-player online games and virtual worlds that require no central server. This concept is pioneered by Huntercoin, but it leads to large growth of the blockchain and heavy resource requirements. In this paper, we present a new protocol inspired by payment channels and sidechains that allows for trustless off-chain interactions of players in private turn-based games. These interactions are usually performed without requiring space in the public blockchain, but if a dispute arises, the public network can be used to resolve the conflict. We also analyze the resulting security guarantees and describe possible extensions to games with shared turns and for near real-time interaction. Our proposed concept can be used to scale Huntercoin to very large or even infinite worlds and to enable almost real-time interactions between players.
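To illustrate the channel idea, here is a toy dispute-resolution sketch (not Kraft's actual protocol): both players co-sign every off-chain state, and in a dispute the chain accepts the highest-numbered validly co-signed state. HMAC stands in for real digital signatures:

```python
# Toy state-channel dispute resolution: latest state signed by both players wins.
import hmac, hashlib, json

KEYS = {"alice": b"alice-secret", "bob": b"bob-secret"}

def sign(player, state):
    msg = json.dumps(state, sort_keys=True).encode()
    return hmac.new(KEYS[player], msg, hashlib.sha256).hexdigest()

def verify(player, state, signature):
    return hmac.compare_digest(sign(player, state), signature)

def resolve_dispute(submitted_states):
    """On-chain arbiter: pick the valid co-signed state with the highest turn number."""
    valid = [s for s, sigs in submitted_states
             if all(verify(p, s, sigs[p]) for p in KEYS)]
    return max(valid, key=lambda s: s["turn"]) if valid else None

# Off-chain play: each turn both players sign the new game state.
state_3 = {"turn": 3, "positions": {"alice": [1, 2], "bob": [4, 4]}}
state_5 = {"turn": 5, "positions": {"alice": [2, 2], "bob": [3, 4]}}
submissions = [
    (state_3, {p: sign(p, state_3) for p in KEYS}),   # Bob submits a stale state
    (state_5, {p: sign(p, state_5) for p in KEYS}),   # Alice answers with the latest one
]
print(resolve_dispute(submissions))   # the turn-5 state wins
```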

Title: Interactive Sound Propagation with Bidirectional Path Tracing
Authors: Chunxiao Cao, Zhong Ren, Carl Schissler, Dinesh Manocha, Kun Zhou
Speakers: Ermis Chalkiadakis and Ricky van den Waardenburg

 Summary: We introduce Bidirectional Sound Transport (BST), a new algorithm that simulates sound propagation by bidirectional path tracing using multiple importance sampling. Our approach can handle multiple sources in large virtual environments with complex occlusion, and can produce plausible acoustic effects at an interactive rate on a desktop PC. We introduce a new metric based on the signal-to-noise ratio (SNR) of the energy response and use this metric to evaluate the performance of ray-tracing-based acoustic simulation methods. Our formulation exploits temporal coherence in terms of using the resulting sample distribution of the previous frame to guide the sample distribution of the current one. We show that our sample redistribution algorithm converges and better balances between early and late reflections. We evaluate our approach on different benchmarks and demonstrate significant speedup over prior geometric acoustic algorithms.
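A minimal sketch of multiple importance sampling with the balance heuristic, the estimator such bidirectional methods rely on when combining source-side and listener-side samples; the 1-D integrand and the two sampling strategies are illustrative assumptions, not the paper's acoustic setup:

```python
# Multiple importance sampling with the balance heuristic on a 1-D toy integral.
import random, math

f = lambda x: x * x                      # integrand; exact integral over [0,1] is 1/3

# Strategy A: uniform sampling on [0,1].  Strategy B: pdf p(x) = 2x, sampled via sqrt.
pdf_a = lambda x: 1.0
pdf_b = lambda x: 2.0 * x
sample_a = lambda: random.random()
sample_b = lambda: math.sqrt(random.random())

def balance_weight(x, pdf_self, n_self, pdf_other, n_other):
    # w_i(x) = n_i p_i(x) / (n_a p_a(x) + n_b p_b(x))
    return n_self * pdf_self(x) / (n_self * pdf_self(x) + n_other * pdf_other(x))

def mis_estimate(n_a=5000, n_b=5000):
    total = 0.0
    for _ in range(n_a):
        x = sample_a()
        total += balance_weight(x, pdf_a, n_a, pdf_b, n_b) * f(x) / pdf_a(x) / n_a
    for _ in range(n_b):
        x = sample_b()
        total += balance_weight(x, pdf_b, n_b, pdf_a, n_a) * f(x) / pdf_b(x) / n_b
    return total

print(mis_estimate())   # should be close to 1/3
```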

15 December 2017 (Time: 15:15-17:00 Location: BBG-214)

Title: Navigation for Characters and Crowds in Complex Virtual Environments
Speaker: Wouter van Toll (PhD)
 Bio: Dr. Wouter van Toll is a lecturer at Utrecht University. He currently teaches the GMT course "Crowd simulation", and he has been a lecturer in the courses "Geometric algorithms" and "Game programming". For his PhD and MSc theses, Wouter developed and implemented algorithms for efficient path planning and crowd simulation in multi-layered 3D environments. Furthermore, Wouter has been in charge of the group's crowd simulation software, which is used both in research and in the simulation industry.
Summary: In a crowd simulation, virtual walking characters need to compute and traverse paths through a virtual environment while avoiding collisions. Real-time crowd simulation requires efficient data structures and algorithms. These were the topics of Wouter van Toll’s PhD thesis. In this presentation, Wouter will give a broad overview of his thesis, and he will zoom in on two chapters: "A comparative study of navigation meshes" and "A generic crowd simulation framework". He will also give a glimpse of what it was like to be a PhD student.

Title: Globally and Locally Consistent Image Completion
Authors: Satoshi Iizuka, Edgar Simo-Serra, Hiroshi Ishikawa
Speakers: Nikita Iefymov and Karim Machlab

Summary: We present a novel approach for image completion that results in images that are both locally and globally consistent. With a fully-convolutional neural network, we can complete images of arbitrary resolutions by filling in missing regions of any shape. To train this image completion network to be consistent, we use global and local context discriminators that are trained to distinguish real images from completed ones. The global discriminator looks at the entire image to assess if it is coherent as a whole, while the local discriminator looks only at a small area centered at the completed region to ensure the local consistency of the generated patches. The image completion network is then trained to fool both context discriminator networks, which requires it to generate images that are indistinguishable from real ones with regard to overall consistency as well as in details. We show that our approach can be used to complete a wide variety of scenes. Furthermore, in contrast with the patch-based approaches such as PatchMatch, our approach can generate fragments that do not appear elsewhere in the image, which allows us to naturally complete the images of objects with familiar and highly specific structures, such as faces.
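A minimal sketch (not the authors' network) of how the two discriminators and a reconstruction term can be combined into the completion network's training loss; the mocked discriminators and the weighting alpha are illustrative assumptions:

```python
# Global critic scores the whole completed image, local critic scores a patch
# centred on the filled-in region; the generator loss combines both with MSE.
import numpy as np

rng = np.random.default_rng(1)

def global_discriminator(image):           # stand-in: returns a "realness" score in (0, 1)
    return 1.0 / (1.0 + np.exp(-image.mean()))

def local_discriminator(patch):            # stand-in for the patch-level critic
    return 1.0 / (1.0 + np.exp(-patch.mean()))

def crop_around_hole(image, hole_center, size=32):
    r, c = hole_center
    half = size // 2
    return image[r - half:r + half, c - half:c + half]

def generator_loss(completed, ground_truth, hole_center, alpha=4e-4):
    eps = 1e-7
    mse = np.mean((completed - ground_truth) ** 2)            # reconstruction term
    patch = crop_around_hole(completed, hole_center)
    adv = -(np.log(global_discriminator(completed) + eps) +   # fool the global critic
            np.log(local_discriminator(patch) + eps))          # ...and the local one
    return mse + alpha * adv

ground_truth = rng.random((128, 128))
completed = ground_truth + 0.05 * rng.standard_normal((128, 128))
print(generator_loss(completed, ground_truth, hole_center=(64, 64)))
```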

24 November 2017 (Time: 15:15-17:00 Location: BBG-214)

Title: Embodiment in Multimodal Augmented Reality
Speaker: Nina Rosa (PhD candidate)
Bio: Nina Rosa is a PhD candidate in the Game Research Graduate Program at Utrecht University. She previously obtained her bachelor's degrees in Computer Science and Mathematics, and her master's degree in Game and Media Technology. Her master's thesis won the Ngi-NGN Informatie Scriptieprijs 2015, awarded by the KHMW in Haarlem. Nina is currently also a member of the Faculty Council for the Faculty of Science, and a member of the ICS PhD Council.
Summary: Until recently, both virtual reality (VR) and augmented reality (AR) research were primarily focused on improving the visual fidelity of the technological systems. Currently there is a noticeable shift towards more experience-based research, where we focus not only on the visual sense but also on other senses. Using multiple senses in virtual and augmented environments creates new and interesting research opportunities, one of which is the aspect of embodiment. In this presentation I will describe a few challenges and opportunities that multimodality brings in the areas of VR and AR, show what roles the body can serve in these matters, and discuss how we can manipulate the conventional schema of the body using VR and AR technology.

Title: A Dose of Reality: Overcoming Usability Challenges in VR Head-Mounted Displays
Authors: Mark McGill, Daniel Boland, Roderick Murray-Smith and Stephen Brewster
Speakers: Chrit Hameleers and Sam de Redelijkheid

 Summary: We identify usability challenges facing consumers adopting Virtual Reality (VR) head-mounted displays (HMDs) in a survey of 108 VR HMD users. Users reported significant issues in interacting with, and being aware of their real-world context when using a HMD. Building upon existing work on blending real and virtual environments, we performed three design studies to address these usability concerns. In a typing study, we show that augmenting VR with a view of reality significantly corrected the performance impairment of typing in VR. We then investigated how much reality should be incorporated and when, so as to preserve users' sense of presence in VR. For interaction with objects and peripherals, we found that selectively presenting reality as users engaged with it was optimal in terms of performance and users' sense of presence. Finally, we investigated how this selective, engagement-dependent approach could be applied in social environments, to support the user's awareness of the proximity and presence of others.


3 November 2017 (Time: 15:15-17:00 Location: Bestuurs-Lieregg)

Title: Balanced by Design: Predicates in Game Mechanics
Speaker: Ronnie Vanderfeesten (PhD candidate)
Bio: Ron Vanderfeesten is currently a PhD student at Utrecht University, with a Master's degree in Computer Science and Engineering and a Bachelor's degree in Applied Physics. He also works part-time at an indie game development studio based in The Hague. He has over 15 years of experience in 3D modeling, game design and programming, with an emphasis on the creation of virtual characters.
Summary: If you play a lot of games you might hear things like: "that's so OP" and "They should really nerf class/item/character X, so imba". Addressing those issues and maintaining a good game balance keeps a game healthy and fun for all players. However, the current approach to making game mechanics, "let's design something cool, and we'll balance it later in a patch", leads to frequent (minor) updates and sometimes even to situations where the game mechanics are very difficult to balance due to the inherent design of that part of the game. This presentation will show, using simple examples, how one can model the intuitive notion of balance in game mechanics and then use predicates to ensure that a set of game mechanics fulfills certain desirable properties by design. We will show how randomness can be used to improve game balance and show a way to systematically detect and exclude loopholes in a game system.
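As a toy illustration of the predicate idea (the talk's actual formalism may differ), the sketch below checks the balance property "no unit strictly dominates another" over a small made-up unit table:

```python
# A balance property expressed as a predicate over game-mechanic parameters,
# checked at design time rather than fixed in a post-release patch.
from itertools import permutations

UNITS = {
    "knight": {"damage": 30, "health": 120, "speed": 4},
    "archer": {"damage": 45, "health": 70,  "speed": 5},
    "mage":   {"damage": 60, "health": 60,  "speed": 3},
}

def dominates(a, b):
    """a strictly dominates b if it is at least as good everywhere and better somewhere."""
    stats = a.keys()
    return all(a[s] >= b[s] for s in stats) and any(a[s] > b[s] for s in stats)

def no_dominant_unit(units):
    """Balance predicate: no unit should strictly dominate another."""
    return not any(dominates(units[x], units[y]) for x, y in permutations(units, 2))

print(no_dominant_unit(UNITS))                     # True: the table above is balanced
UNITS["dragon"] = {"damage": 99, "health": 300, "speed": 9}
print(no_dominant_unit(UNITS))                     # False: the dragon dominates everyone
```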

Title: Interactive Reconstruction of Monte Carlo Image Sequences using a Recurrent Denoising Autoencoder
Authors: Chakravarty R. Alla Chaitanya, Anton Kaplanyan, Christoph Schied, Marco Salvi, Aaron Lefohn, Derek Nowrouzezahrai and Timo Aila
Speakers: Matthijs Lardinoije and Martijn Visser

Summary: We describe a machine learning technique for reconstructing image sequences rendered using Monte Carlo methods. Our primary focus is on reconstruction of global illumination with extremely low sampling budgets at interactive rates. Motivated by recent advances in image restoration with deep convolutional networks, we propose a variant of these networks better suited to the class of noise present in Monte Carlo rendering. We allow for much larger pixel neighborhoods to be taken into account, while also improving execution speed by an order of magnitude. Our primary contribution is the addition of recurrent connections to the network in order to drastically improve temporal stability for sequences of sparsely sampled input images. Our method also has the desirable property of automatically modeling relationships based on auxiliary per-pixel input channels, such as depth and normals. We show significantly higher quality results compared to existing methods that run at comparable speeds, and furthermore argue a clear path for making our method run at real-time rates in the near future.
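A minimal sketch of the recurrent data flow only: each frame, the denoiser sees the noisy colour plus auxiliary channels and its own hidden state from the previous frame. The learned convolutions are replaced by a fixed box filter and the auxiliary channels are mocked, so this is an illustrative assumption rather than the trained network:

```python
# Per-frame recurrent denoising step: the hidden state carried from frame t-1
# is what gives temporal stability on sparsely sampled sequences.
import numpy as np

def box_filter3(img):
    """Stand-in for a learned 3x3 convolution (edge pixels are left as-is)."""
    out = img.copy()
    out[1:-1, 1:-1] = sum(img[1 + dr:img.shape[0] - 1 + dr, 1 + dc:img.shape[1] - 1 + dc]
                          for dr in (-1, 0, 1) for dc in (-1, 0, 1)) / 9.0
    return out

def recurrent_denoise_step(noisy, depth, normal_z, hidden_prev):
    # "Encoder": mix the per-pixel inputs; auxiliary channels guide the filtering.
    features = 0.5 * noisy + 0.25 * depth + 0.25 * normal_z
    # Recurrent connection: blend in the hidden state carried over from frame t-1.
    hidden = box_filter3(0.7 * features + 0.3 * hidden_prev)
    # "Decoder": produce the denoised frame from the hidden state.
    denoised = box_filter3(hidden)
    return denoised, hidden

rng = np.random.default_rng(2)
hidden = np.zeros((64, 64))
for frame in range(3):                       # process a short sequence
    noisy = rng.random((64, 64))             # low-sample-count noisy radiance (mocked)
    depth = np.ones((64, 64)) * 0.5          # auxiliary per-pixel channels (mocked)
    normal_z = np.ones((64, 64))
    denoised, hidden = recurrent_denoise_step(noisy, depth, normal_z, hidden)
    print(f"frame {frame}: output mean = {denoised.mean():.3f}")
```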


13 October 2017 (Time: 15:15-17:00 Location: Bestuurs-Lieregg)

Title: Introduction to GMT Colloquium (pdf)
Speaker: Dr. Zerrin Yumak

Title: Bounce Maps: An Improved Restitution Model for Real-time Rigid-Body Impact
Authors: Jui-Hsien Wang, Rajsekhar Setaluri, Doug L. James, and Dinesh K. Pai
Speakers: Jeroen Huisen and Navid Saremi
Summary: We present a novel method to enrich standard rigid-body impact models with a spatially varying coefficient of restitution map, or Bounce Map. Even state-of-the-art methods in computer graphics assume that, for a single rigid body, post- and pre-impact dynamics are related by a single global constant, namely the coefficient of restitution. We first demonstrate that this assumption is highly inaccurate, even for simple objects. We then present a technique to efficiently and automatically generate a function which maps locations on the object's surface, along with impact normals, to a scalar coefficient of restitution value. Furthermore, we propose a method for two-body restitution analysis, and, based on numerical experiments, estimate a practical model for combining one-body Bounce Map values to approximate the two-body coefficient of restitution. We show that our method not only improves accuracy, but also enables visually richer rigid-body simulations.
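A minimal sketch of what a spatially varying restitution lookup changes in the impact response; the toy Bounce Map table and the patch ids are illustrative assumptions, not data generated by the paper's method:

```python
# Impact response that looks up the coefficient of restitution per surface patch
# instead of using one global constant for the whole rigid body.
import numpy as np

# Assumed toy Bounce Map: restitution indexed by a coarse surface-patch id.
BOUNCE_MAP = {0: 0.9,   # thin shell region: very bouncy
              1: 0.5,   # stiff central region
              2: 0.2}   # region near an internal support: almost dead impact

def restitution(patch_id):
    return BOUNCE_MAP[patch_id]

def post_impact_velocity(v, n, patch_id):
    """Reflect the normal component of velocity v, scaled by the local restitution."""
    n = n / np.linalg.norm(n)
    v_n = np.dot(v, n) * n                 # normal component (into the surface)
    v_t = v - v_n                          # tangential component, left unchanged here
    return v_t - restitution(patch_id) * v_n

v_in = np.array([1.0, -3.0, 0.0])          # incoming velocity, hitting a y-up floor
n = np.array([0.0, 1.0, 0.0])
for patch in (0, 1, 2):
    print(patch, post_impact_velocity(v_in, n, patch))
```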

Title: A Review of Building Evacuation Models
Authors: Erica D. Kuligowski, Richard D. Peacock and Bryan L. Hoskins
Speaker: Yiran Zhao

Summary: Evacuation calculations are increasingly becoming a part of performance-based analyses to assess the level of life safety provided in buildings. In some cases, engineers are using back-of-the-envelope (hand) calculations to assess life safety, and in others, computational evacuation models are being used. This paper presents a review of 26 current computer evacuation models, and is an updated version of a previous review published in 2005. Models are categorized by their availability, overarching method of simulating occupants, purpose, type of grid/structure, perspective of the occupants, perspective of the building, internal algorithms for simulating occupant behavior and movement, the incorporation of fire effects, the use of computer-aided design drawings, visualization methods, and validation techniques. Models are also categorized based upon whether they simulate special features of an evacuation, including counterflow, exit blockages, fire conditions that affect behavior, incapacitation of the occupants due to toxic smoke products, group behavior, disabled or slower-moving occupant effects, pre-evacuation delays, elevator usage, and occupant route choice.
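To illustrate what a back-of-the-envelope (hand) calculation looks like next to a full evacuation model, here is a sketch using commonly quoted rule-of-thumb values for walking speed and exit flow; both numbers are assumptions for illustration only:

```python
# Hand-calculation estimate: total egress time is roughly the larger of the
# walking time to the exit and the queueing time through the exit.
def evacuation_time(occupants, travel_distance_m, door_width_m,
                    walking_speed=1.2,      # m/s, unimpeded walking speed (assumed)
                    specific_flow=1.3):     # persons/s per metre of door width (assumed)
    travel_time = travel_distance_m / walking_speed
    flow_time = occupants / (specific_flow * door_width_m)
    return max(travel_time, flow_time)

# Example: 200 people, 40 m to a single 1.5 m wide exit.
print(f"estimated egress time: {evacuation_time(200, 40, 1.5):.0f} s")
```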