
FG2026 Tutorial, Kyoto, Japan

Tutorial description

Event-based cameras are recently introduced sensors that asynchronously detect light intensity changes at each pixel. They can be applied to multiple recognition problems of interest to the community, in particular because they continuously encode sparse appearance and motion information at very high speed, with low latency and a high dynamic range. In the last few years, there has been great interest in these devices, including their use in problems related to the analysis of faces and gestures. Event-based cameras are particularly interesting for the analysis of faces and gestures thanks to their high temporal resolution and high dynamic range, which eliminate motion blur and allow handling challenging illumination. Moreover, they allow analyzing subtle changes in faces in a continuous stream of data.
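The per-pixel, asynchronous sensing described above can be sketched in a few lines of code. The following is an illustrative simulation, not part of the tutorial material: it approximates the events a DVS-style sensor would emit between two intensity frames, firing an event (t, x, y, polarity) whenever a pixel's log-intensity changes by more than a contrast threshold C. The function name and threshold value are hypothetical.

```python
import math

def events_from_frames(frame_prev, frame_curr, t, C=0.2):
    """Approximate the events an event camera would emit between two
    intensity frames (illustrative; real sensors fire asynchronously,
    not per frame). Each event is a tuple (t, x, y, polarity)."""
    events = []
    for y, (row_prev, row_curr) in enumerate(zip(frame_prev, frame_curr)):
        for x, (i0, i1) in enumerate(zip(row_prev, row_curr)):
            # Event cameras respond to *log*-intensity changes,
            # which is what gives them their high dynamic range.
            delta = math.log(i1 + 1e-6) - math.log(i0 + 1e-6)
            if abs(delta) >= C:
                polarity = 1 if delta > 0 else -1
                events.append((t, x, y, polarity))
    return events
```

Note that unchanged pixels produce no output at all, which is where the sparsity and low latency of the data stream come from.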

In this context, this tutorial gives an introductory and comprehensive overview of event-based cameras and discusses their use in face and gesture recognition problems. The tutorial is organized in three parts. First, we give an overview of event cameras, existing sensors, and the problems they have been applied to, from low-level vision (e.g., optic flow, tracking, feature detection) to high-level vision (e.g., reconstruction, segmentation, recognition). Second, we discuss existing techniques to process trains of events, including learning-based ones. Finally, we present an overview of recent work on face and gesture recognition using event-based cameras, including a discussion of the existing datasets and methods used on these problems, as well as possible open research directions. The tutorial is intended for researchers with no prior experience with event-based cameras, as well as researchers who have worked with event-based cameras in the past but want to review recent advances in this area.
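As a minimal illustration of what "processing trains of events" can mean in practice (this example is not from the tutorial itself), one common preprocessing step for learning-based methods is to accumulate the asynchronous events of a time window into a fixed-size 2D histogram, sometimes called an event frame. The function below is a hypothetical sketch of that idea.

```python
def events_to_frame(events, width, height):
    """Accumulate (t, x, y, polarity) events into a signed 2D histogram.

    Positive polarities add +1 and negative polarities add -1 to the
    corresponding pixel, turning a sparse event train into a dense
    frame that standard vision models can consume.
    """
    frame = [[0] * width for _ in range(height)]
    for t, x, y, p in events:
        frame[y][x] += p
    return frame
```

Richer representations exist (e.g., voxel grids or time surfaces that also keep timestamps), but the accumulation idea shown here is the simplest bridge between event data and conventional recognition pipelines.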

Preliminary Schedule:

Requirements (e.g., facilities, Internet access, etc.) and Additional Information:

Organizers:

Rodrigo Verschae

Daniel Acevedo

Nicolas Mastropasqua

Ignacio Bugueno-Cordova

Experience

The instructors have experience on face and gesture detection and recognition problems with event-based cameras [1-5] (see the list of related publications at the end of this document). They also have previous experience in face recognition, detection, and analysis with standard RGB cameras. The instructors have given tutorials and talks on this topic, including the tutorial “Introduction to face and gesture recognition using event-based cameras” at the 15th IEEE International Conference on Automatic Face and Gesture Recognition (2020). This new tutorial builds upon the 2020 edition, updating all of its content to take into account the important body of work published in the last couple of years.