FG2026 Tutorial, Kyoto, Japan
Tutorial description
Event-based cameras are recently introduced devices that asynchronously sense the light intensity changes of each pixel. They can be applied to many recognition problems of interest to the community, in particular because they continuously encode sparse appearance and motion information at very high speed, with low latency and a high dynamic range. In the last few years there has been great interest in these devices, including their use in problems related to the analysis of faces and gestures. Event-based cameras are particularly attractive for the analysis of faces and gestures because their high temporal resolution and high dynamic range eliminate motion blur and allow handling challenging illumination. Moreover, they make it possible to analyze subtle changes in faces in a continuous stream of data.
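To make the sensing model concrete, the following is a minimal, purely illustrative sketch (not part of the tutorial material): each pixel fires an asynchronous event whenever its log intensity drifts from a reference level by more than a contrast threshold. The function name, the threshold value, and the `(t, x, y, polarity)` tuple layout are assumptions chosen for illustration.

```python
def events_for_pixel(log_intensities, timestamps, x, y, C=0.2):
    """Emit (t, x, y, polarity) events for one pixel's log-intensity trace.

    C is an assumed contrast threshold; real sensors have their own
    (often asymmetric, per-pixel) thresholds.
    """
    events = []
    ref = log_intensities[0]  # reference level, updated after each event
    for t, L in zip(timestamps[1:], log_intensities[1:]):
        while L - ref >= C:       # brightness rose enough: ON event
            ref += C
            events.append((t, x, y, +1))
        while ref - L >= C:       # brightness fell enough: OFF event
            ref -= C
            events.append((t, x, y, -1))
    return events

# A pixel whose log intensity ramps up and then drops produces a sparse,
# asynchronous stream of ON events followed by an OFF event:
print(events_for_pixel([0.0, 0.25, 0.5, 0.1], [0, 1, 2, 3], x=10, y=20))
```

Unlike a frame-based camera, a static pixel produces no output at all, which is the source of the sparsity mentioned above.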
In this context, this tutorial gives an introductory and comprehensive overview of event-based cameras and discusses their use in face and gesture recognition problems. The tutorial is organized in three parts. First, we give an overview of event cameras, existing sensors, and the problems they have been applied to, from low-level vision (e.g. optic flow, tracking, feature detection) to high-level vision (e.g. reconstruction, segmentation, recognition). Second, we discuss existing techniques to process trains of events, including learning-based ones. Finally, we present an overview of recent work on face and gesture recognition using event-based cameras, including a discussion of the existing datasets and methods used on these problems, as well as possible open research directions.
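As a taste of the event-processing techniques covered in the second part, one of the simplest representations is to accumulate a train of events into a 2D frame that conventional vision models can consume. The sketch below is an assumed, illustrative implementation; all names are our own.

```python
def accumulate_events(events, width, height, t_start, t_end):
    """Sum event polarities per pixel over the time window [t_start, t_end).

    events is an iterable of (t, x, y, polarity) tuples, with polarity
    +1 for ON events and -1 for OFF events.
    """
    frame = [[0] * width for _ in range(height)]
    for t, x, y, p in events:
        if t_start <= t < t_end:
            frame[y][x] += p
    return frame

# Two ON events and one OFF event inside the window; the last event
# (t=20) falls outside it and is discarded:
stream = [(5, 0, 0, +1), (7, 1, 0, +1), (9, 1, 0, -1), (20, 0, 1, +1)]
print(accumulate_events(stream, width=2, height=2, t_start=0, t_end=10))
```

Richer representations discussed in the tutorial (e.g. time surfaces or voxel grids) keep more of the temporal structure that this simple accumulation discards.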
The tutorial is intended for researchers with no prior experience with event-based cameras, as well as researchers who have worked with event-based cameras in the past and want to review recent advances in this area.
Preliminary Schedule:
- No special requirements.
- Primary target audience: researchers with no prior experience with event-based cameras, as well as researchers who have worked with event-based cameras in the past and want to review recent advances in this area.
- Background: we assume knowledge in image processing, computer vision, basics of Deep Learning, and basics of signal processing.
- Slides and prepared material will be made available.
Organizers:
Rodrigo Verschae
- Email: rodrigo@verschae.org
- Web: http://rodrigo.verschae.org
- https://scholar.google.com/citations?user=Fv1lZNkAAAAJ&hl=en
- Affiliation: Universidad Técnica Federico Santa María, Chile
- Short bio: Rodrigo Verschae is currently with the Universidad Técnica Federico Santa María, Chile. He holds a doctorate in Electrical Engineering and a master's degree in Applied Mathematics, and is interested in Computer Vision, Machine Learning, and Robotics, with experience in various application areas. Rodrigo has been Director and Associate Professor at the Institute of Engineering Sciences, Universidad de O’Higgins, assistant professor at Kyoto University, Japan (2015-2018), a postdoctoral fellow at the Advanced Mining Technology Center AMTC (2011-2013), a research fellow at the Kyushu Institute of Technology, Japan (2009-2011), and an associated researcher at the Fraunhofer IPK Institute, Germany (2004-2005), among others.
Daniel Acevedo
- Email: dacevedo@dc.uba.ar
- Web: https://scholar.google.com/citations?user=1Yv2P6oAAAAJ&hl=en
- Affiliation: Universidad de Buenos Aires. Facultad de Ciencias Exactas y Naturales. Departamento de Computación. Buenos Aires, Argentina.
- Short bio: Daniel Acevedo is currently an assistant researcher at the ICC (Instituto de Ciencias de la Computación), part of UBA (University of Buenos Aires) and CONICET (Consejo Nacional de Investigaciones Científicas y Técnicas). Daniel also works at the Department of Computer Sciences (FCEyN - UBA) as a professor. His research mainly focuses on digital image processing topics: facial expression recognition, texture analysis, and retrieval. He has also worked on satellite data compression.
Nicolas Mastropasqua
- Email: nmastropasqua@dc.uba.ar
- Web: https://scholar.google.com/citations?user=m-mTz2kAAAAJ&hl=en
- Affiliation: Universidad de Buenos Aires. Facultad de Ciencias Exactas y Naturales. Departamento de Computación. Buenos Aires, Argentina.
- Short bio: Nicolas Mastropasqua is a Ph.D. student in Computer Science at Universidad de Buenos Aires. He received his degree in Computer Science from the Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, in 2023. His research interests include computer vision, neuromorphic vision, facial expression analysis, and driver monitoring systems.
Ignacio Bugueno-Cordova
- Email: i.bugueno@ieee.org
- Web: https://ibugueno.github.io/
- Affiliation: Universidad de O’Higgins
- Short bio: Ignacio Bugueno-Cordova obtained his Electrical Engineering degree in 2018 and his Master of Science degree in 2025, both from Universidad de Chile, Santiago, Chile. He has been a research assistant at the Robotics and Intelligent Systems Laboratory at Universidad de O’Higgins since 2020. In 2025, he was awarded an IEEE CIS Graduate Student Research Grant to conduct a research stay at the L3S Research Center, Hanover, Germany. His research interests include artificial intelligence, computer/event vision, deep learning, IoT, telecommunications, and robotics.
Experience
The instructors have experience in face and gesture detection and recognition problems with event-based cameras [1-5] (see the list of related publications at the end of this document). They also have previous experience in face recognition, detection, and analysis with standard RGB cameras. The instructors have given tutorials and talks on this topic, including a tutorial on “Introduction to face and gesture recognition using event-based cameras” at the 15th IEEE International Conference on Automatic Face and Gesture Recognition (2020). This new tutorial builds upon the 2020 one, updating all of its content to take into account the important body of work published in the last couple of years.
The main instructors have given tutorials and talks on related computer vision topics at the following international events:
- IEEE RAS ICRA 2025 LA@Chile HANDS-ON: “Gentle introduction to Event-based Robot Vision” given at the satellite event of the ICRA 2025 conference sponsored by IEEE RAS.
- LACORO 2024 HANDS-ON: “Computer/Event vision and deep learning for dynamic environments” given at the 3rd Latin American Summer School on Robotics.
- Invited spotlight speaker at the KHIPU AI conference, Santiago, Chile, March 2025.
- Tutorial on “Introduction to face and gesture recognition using event-based cameras” given at the 15th IEEE International Conference on Automatic Face and Gesture Recognition, November, 2020 (online).
- Tutorial presentation on “Deep Photovoltaic Prediction” at the IEEE RAS Summer School on Deep Learning for Robot Vision, Santiago, Chile, December 2019.
- Tutorial presentation on “Efficient object detection” at the IEEE RAS Summer School on Robot Vision, Santiago, Chile, December 2012
- Tutorial presentation on “Multiclass object detection” at the IEEE LA-RAS Summer School, Santiago, Chile, December 2010
- Tutorial presentation on “Face detection” at the IEEE LA-RAS Summer School, Santiago, Chile, December 2006 (together with Javier Ruiz-del-Solar)
- [1] R Verschae, I Bugueno-Cordova, “evTransFER: A Transfer Learning Framework for Event-based Facial Expression Recognition”, Neurocomputing, 2026, 132641.
- [2] N Mastropasqua, I Bugueno-Cordova, R Verschae, D Acevedo, P Negri, ME Buemi, “Event-based facial microexpression analysis using Spiking Neural Networks”, ICPRS, 2025.
- [3] N Mastropasqua, D Acevedo, I Bugueno-Cordova, R Verschae, “Exploring spatial-temporal dynamics in event-based facial microexpression analysis”, 2nd Workshop on Neuromorphic Vision, ICCV, 2025.
- [4] R Verschae, I Bugueno-Cordova, “Event-based Gesture and Facial Expression Recognition: A Comparative Analysis”, IEEE Access, vol. 11, pp. 121269-121283, 2023, doi: 10.1109/ACCESS.2023.3328220.
- [5] I Bugueno-Cordova, R Verschae, “Event-based Facial Expression Recognition”, LatinX in CV Workshop, International Conference on Computer Vision, 2023.