
Defending machine learning and computer vision
Defending machine learning models from attacks and manipulation.
As machine learning models increasingly run in production environments, a deeper understanding of adversarial attacks on these models, and of how to prevent them, is needed.
A large body of research addresses this problem, much of it concerned with identity obfuscation and manipulation of a model's intended behavior (think defacing or removing the road signs that guide autonomous vehicles, or fooling person-tracking models).
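A canonical illustration of how such attacks work is the Fast Gradient Sign Method (FGSM): nudge the input in the direction of the sign of the loss gradient so a small, often imperceptible change flips the model's prediction. The sketch below applies the idea to a hypothetical toy logistic-regression "classifier" rather than a real vision model; the weights, input, and epsilon are invented for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """FGSM sketch: x' = x + eps * sign(dL/dx) for binary cross-entropy loss."""
    p = sigmoid(w @ x + b)        # model's predicted probability of class 1
    grad_x = (p - y_true) * w     # gradient of the loss w.r.t. the input x
    return x + eps * np.sign(grad_x)

# Hypothetical toy model and a correctly classified input.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0

x_adv = fgsm_perturb(x, w, b, y, eps=1.0)
p_clean = sigmoid(w @ x + b)      # ≈ 0.82, classified as class 1
p_adv = sigmoid(w @ x_adv + b)    # ≈ 0.18, flipped to class 0
```

The same gradient-sign step, applied per pixel with a much smaller epsilon, is what produces the visually unchanged but misclassified images seen in the adversarial-examples literature.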
Other Thoughts