Tutorials

Tutorial 1 (August 6, 13:10-14:40)

Lecturer: Ikuro Sato (Denso IT Laboratory/Tokyo Institute of Technology)

Title: Vision Technologies for Autonomous Driving

Abstract:

Computer vision provides indispensable technologies for autonomous driving. Achieving safe and reliable autonomous driving requires solving various visual tasks in real time with sufficiently high accuracy, including object detection, tracking, 3D localization, traffic sign recognition, road structure recognition, and even estimation of the intentions of road users. In this respect, autonomous driving is like a “dojo for vision technology.” In this tutorial, we will introduce recent vision technologies related to autonomous driving, and discuss remaining challenges and future prospects.

Bio:

He received his Ph.D. in Physics from the University of Maryland, USA, in 2005. After working as a postdoctoral fellow at Lawrence Berkeley National Laboratory, USA, he joined Denso IT Laboratory, Inc. in 2008. Since 2020, he has held a concurrent position at the School of Computing, Tokyo Institute of Technology. His research interests include image recognition and machine learning related to autonomous driving.

Tutorial 2 (August 6, 14:50-16:20)

Lecturer: Mayu Otani (CyberAgent)

Title: Performance Evaluation for Computer Vision Research

Abstract:

Research in computer vision inherently involves evaluation experiments, which validate ideas and facilitate comparisons between various methods. The development of standardized evaluation methods significantly boosts the advancement of this field. In this tutorial, I will explore these evaluation methods, focusing on aspects such as data, measures, and tasks. Furthermore, this tutorial will discuss the requirements of an effective evaluation and the impact that the adoption of a specific evaluation method can have on its research field. This discussion will be based on trends in computer vision as well as in related fields such as natural language processing.

Bio:

After completing the doctoral program in the Graduate School of Information Sciences at Nara Institute of Science and Technology in 2018, she joined CyberAgent. She is engaged in research on computer vision and machine learning.

Tutorial 3 (August 6, 16:30-18:00)

Lecturer: Mariko Isogawa (Keio University)

Title: Scene State Estimation Using Various Sensors and Modalities

Abstract:

When estimating the state of a real-world scene, we need to consider various aspects such as the impact of lighting conditions or occlusions on estimation accuracy, resource costs such as memory and power consumption, and the privacy of personal information in captured data. On the other hand, constraints that are difficult to overcome with one measurement method can sometimes be partially resolved by using other measurements or modalities. This tutorial discusses the advantages and disadvantages of various measurement methods using visible-light sensors such as event cameras and transient cameras, as well as modalities other than visible light such as wireless signals and acoustic signals. I will also discuss the challenges and prospects of applying these methods to scene state estimation tasks such as human pose estimation and 3D shape reconstruction, based on the latest research trends.

Bio:

She received her M.S. and Ph.D. degrees from Osaka University, Japan, in 2013 and 2019, respectively. From 2013 to 2022, she worked at Nippon Telegraph and Telephone Corporation as a Researcher. She was a Visiting Scholar at Carnegie Mellon University from 2019 to 2020. Since 2022, she has been affiliated with Keio University, where she is currently an Associate Professor in the Department of Information and Computer Science, Faculty of Science and Technology. Her research interests include computer vision, in particular scene and human state estimation with various modalities.


Copyright (c) 2024, MIRU2024; all rights reserved.