Targeted Adversarial Attacks on Generalizable Neural Radiance Fields

Description of video

Date: 11/15/23
Speaker: Horváth András (PPKE)

    Contemporary robotics depends on solving key tasks such as odometry, localization, depth perception, semantic segmentation, novel view synthesis, and navigation with precision and efficiency. Implicit neural representations, notably Neural Radiance Fields (NeRFs) and Generalizable NeRFs (GeNeRFs), are increasingly employed to tackle these tasks.

    This talk focuses on exposing critical but subtle flaws inherent in GeNeRFs. Adversarial attacks are not new; they have been demonstrated against many machine learning frameworks and remain a significant threat. The presentation briefly reviews these adversarial tactics to illustrate how serious a vulnerability they pose to machine learning applications, and then shows that applications of neural radiance fields are also susceptible to such attacks, underscoring the need for heightened security measures in their deployment.
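
    To illustrate the kind of attack discussed here, the sketch below implements a generic targeted projected-gradient-descent (PGD) perturbation against a differentiable image-conditioned model in PyTorch. The function name, the placeholder model, the target output, and the hyperparameters are assumptions chosen for this example; the specific attack on GeNeRF inputs presented in the talk may differ.

        import torch

        def targeted_pgd(model, source, target, eps=8 / 255, alpha=2 / 255, steps=40):
            """Perturb `source` within an L-infinity ball of radius `eps` so that
            model(source + delta) moves toward `target` (a targeted attack)."""
            adv = source.clone().detach()
            for _ in range(steps):
                adv.requires_grad_(True)
                # Distance between the model's current output and the attacker's target.
                loss = torch.nn.functional.mse_loss(model(adv), target)
                grad = torch.autograd.grad(loss, adv)[0]
                with torch.no_grad():
                    # Targeted attack: step against the gradient to shrink the loss.
                    adv = adv - alpha * grad.sign()
                    # Project back into the eps-ball around the clean input and the valid pixel range.
                    adv = source + torch.clamp(adv - source, -eps, eps)
                    adv = adv.clamp(0.0, 1.0)
                adv = adv.detach()
            return adv

    In a GeNeRF setting, `model` would map a set of source views to a rendered novel view, so `target` could be a rendering the attacker wants the network to produce, while the L-infinity constraint keeps the perturbed source views visually close to the originals.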
