IEEE CVPR WORKSHOP ON

FAIR, DATA EFFICIENT AND TRUSTED COMPUTER VISION

in conjunction with IEEE CVPR 2020
June 14 and 15, 2020, Seattle, Washington

Keynotes

Opening Keynote: MIT Connection Science

Alex Pentland

Keywords: fairness, legality

Talk Video PDF

Decision-making in Robotics with Vision-in-the-Loop: Best Practices and Open Problems

Debadeepta Dey

Keywords: robotics, UAVs, drones, anytime neural networks, pipeline optimization, AirSim, simulation

Talk Video PDF

Tackling Data Scarcity Through 3D Simulation

Manolis Savva

Keywords: trust, data efficient

Talk Video PDF

How Biased is My Dataset? Reasoning About Dataset Bias with Task Transferability

Tal Hassner

Keywords: bias, fairness

Talk Video

Synthesis of High-Quality Face Videos

Christian Theobalt

Keywords: trust, data efficient

Talk Video

Detecting Deep-Fake Videos from Appearance and Behavior

Hany Farid

Keywords: trust, data efficient

Talk Video PDF

Enabling Safe, Reliable and Trustworthy Artificial Intelligence

Pushmeet Kohli

Keywords: trust, reliability and safety

Talk Video PDF

Understanding the Perils of Black Box Explanations

Hima Lakkaraju

Keywords: trust, explainability

Talk Video PDF

Trustworthy AI

Aleksandra (Saška) Mojsilović

Keywords: trust, reliability and safety

Talk Video PDF

Data Ethics

Timnit Gebru

Keywords: trust, explainability

Talk Video

Learning with Less (More) Data

Tsung-Yi Lin

Keywords: data efficient, trust

Talk Video PDF

Schedule

PDT Time

June 14

CVPR Virtual 
(same program repeats from 21:00)

  •  09:00-09:25 
    Opening Keynote: MIT Connection Science 
    Alex Pentland
  •  09:25-09:30 
    An Analytical Framework for Trusted Machine Learning and Computer Vision Running with Blockchain
    Tao X Wang; Maggie Du; Xinmin Wu; Taiping He
    Paper
  •  09:30-09:40 
    Privacy Enhanced Decision Tree Inference
    Kanthi K. Sarpatwar; Nalini Ratha; Karthik Nandakumar; Karthikeyan Shanmugam; James Rayfield; Sharath Pankanti; Roman Vaculin
    Paper
  •  09:40-09:50 
    Invited Talk: ENSEI: Efficient Secure Inference via Frequency-Domain Homomorphic Convolution for Privacy-Preserving Visual Recognition
    Song Bian, Tianchen Wang, Masayuki Hiromoto, Yiyu Shi, Takashi Sato
    Paper
  •  09:50-10:00 
    Invited Talk: StegaStamp: Invisible Hyperlinks in Physical Photos
    Matt Tancik, Ben Mildenhall, Ren Ng
    Paper
  •  10:00-10:30 
    Keynote: Decision-making in Robotics with Vision-in-the-Loop: Best Practices and Open Problems 
    Debadeepta Dey
  •  10:35-11:05 
    Keynote: Tackling Data Scarcity Through 3D Simulation 
    Manolis Savva
  •  11:05-11:15 
    Invited Talk: Unsupervised Learning of Probably Symmetric Deformable 3D Objects from Images in the Wild
    Shangzhe Wu, Christian Rupprecht, Andrea Vedaldi
    Paper
  •  11:15-11:25 
    Invited Talk: Towards Efficient Model Compression via Learned Global Ranking
    Ting-Wu (Rudy) Chin, Ruizhou Ding, Cha Zhang, Diana Marculescu
    Paper
  •  11:25-11:30 
    Minimizing Supervision in Multi-label Categorization
    Rajat; Munender Varshney; Pravendra Singh; Vinay P. Namboodiri
    Paper
  •  13:00-13:30 
    Keynote: How Biased is My Dataset? Reasoning About Dataset Bias with Task Transferability 
    Tal Hassner
  •  13:30-13:40 
    Enhancing Facial Data Diversity with Style-based Face Aging
    Markos Georgopoulos; James A. Oldfield; Mihalis A. Nicolaou; Yannis Panagakis; Maja Pantic
    Paper
  •  13:40-13:50 
    Imparting Fairness to Pre-Trained Biased Representations
    Bashir Sadeghi; Vishnu Boddeti
    Paper
  •  13:50-14:00 
    Exploring Racial Bias within Face Recognition via per-subject Adversarially-Enabled Data Augmentation
    Seyma Yucer; Samet Akcay; Noura Al Moubayed; Toby Breckon
    Paper
  •  14:00-14:30 
    Keynote: Synthesis of High-Quality Face Videos 
    Christian Theobalt
  •  14:30-15:00 
    Keynote: Detecting Deep-Fake Videos from Appearance and Behavior 
    Hany Farid
  •  15:00-15:05 
    DNDNet: Reconfiguring CNN for Adversarial Robustness
    Akhil Goel; Akshay Agarwal; Mayank Vatsa; Richa Singh; Nalini Ratha
    Paper
  •  15:05-15:15 
    Plug-And-Pipeline: Efficient Regularization for Single-Step Adversarial Training
    Vivek B. S.; Ambareesh Revanur; Naveen Venkat; Venkatesh Babu Radhakrishnan
    Paper
  •  15:15-15:25 
    Invited Talk: Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization
    Saehyung Lee, Hyungyu Lee, Sungroh Yoon
    Paper
  •  15:25-15:35 
    Invited Talk: A U-Net Based Discriminator for Generative Adversarial Networks
    Edgar Schönfeld, Bernt Schiele, Anna Khoreva
    Paper
  •  15:40-16:40 
    Live Q & A 
    All Authors

June 15

CVPR Virtual
(same program repeats from 21:00)

  •  09:00-09:30 
    Keynote: Trustworthy AI 
    Aleksandra (Saška) Mojsilović
  •  09:30-09:40 
    SAM: The Sensitivity of Attribution Methods to Hyperparameters
    Naman Bansal; Chirag Agarwal; Anh Nguyen
    Paper
  •  09:40-09:50 
    Revisiting the Evaluation of Uncertainty Estimation and Its Application to Explore Model Complexity-Uncertainty Trade-Off
    Yukun Ding; Jinglan Liu; Jinjun Xiong; Yiyu Shi
    Paper
  •  09:50-10:00 
    Interpreting Interpretations: Organizing Attribution Methods by Criteria
    Zifan Wang; Piotr Mardziel; Matt Fredrikson; Anupam Datta
    Paper
  •  10:00-10:30 
    Keynote: Enabling Safe, Reliable and Trustworthy Artificial Intelligence 
    Pushmeet Kohli
  •  10:30-11:00 
    Keynote: Data Ethics 
    Timnit Gebru
  •  11:00-11:10 
    Explaining Failure: Investigation of Surprise and Expectation in CNNs
    Thomas Hartley; Kirill Sidorov; Chris Willis; David Marshall
    Paper
  •  11:10-11:20 
    Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks
    Haofan Wang; Zifan Wang; Mengnan Du; Fan Yang; Zijian Zhang; Sirui Ding; Piotr Mardziel; Xia Hu
    Paper
  •  11:20-11:25 
    On Privacy Preserving Anonymization of Finger-selfies
    Aakarsh Malhotra; Saheb Chhabra; Mayank Vatsa; Richa Singh
    Paper
  •  11:25-11:55 
    Keynote: Learning with Less (More) Data 
    Tsung-Yi Lin
  •  13:00-13:30 
    Keynote: Understanding the Perils of Black Box Explanations 
    Hima Lakkaraju
  •  13:30-13:40 
    Identity Preserve Transform: Understand What Activity Classification Models Have Learnt
    Jialing Lyu; Weichao Qiu; Alan Yuille
    Paper
  •  13:40-13:50 
    Invited Talk: Deep Active Learning for Biased Datasets via Fisher Kernel Self-Supervision
    Denis Gudovskiy, Alec Hodgkinson, Takuya Yamaguchi, Sotaro Tsukizawa
    Paper
  •  13:50-14:00 
    e-SNLI-VE-2.0: Corrected Visual-Textual Entailment with Natural Language Explanations
    Virginie Do; Oana-Maria Camburu; Zeynep Akata; Thomas Lukasiewicz
    Paper
  •  14:40-14:50 
    Bias in Multimodal AI: Testbed for Fair Automatic Recruitment
    Alejandro Peña; Ignacio Serna; Aythami Morales; Julian Fierrez
    Paper
  •  14:50-15:00 
    Attribute Aware Filter-Drop for Bias Invariant Classification
    Shruti Nagpal; Maneet Singh; Richa Singh; Mayank Vatsa
    Paper
  •  15:00-15:05 
    Face Recognition: Too Bias, or Not Too Bias?
    Joseph P. Robinson; Yun Fu; Yann Henon; Gennady Livitz; Can Qin; Samson Timoner
    Paper
  •  15:30-16:30 
    Live Q & A 
    All Authors

About the Workshop

Computer vision and AI systems play a significant and growing role in every walk of life. They are employed for mundane day-to-day decisions, such as choosing healthy food or an outfit to match the occasion, as well as for mission-critical and life-changing decisions, such as diagnosing diseases, detecting financial fraud, and selecting new employees. Emerging applications ranging from autonomous driving to automated cancer-treatment recommendations have raised widespread concern about the level of trust that can be placed in today's vision systems. These concerns are genuine: adversarial attacks, bias, and a lack of explainability have exposed many weaknesses of modern, rapidly evolving vision systems. While these systems reap the benefits of novel learning methods, they are brittle to minor changes in their input data and lack the capability to explain their decisions to a human. Furthermore, they are unable to address bias in their training data and are often highly opaque about their lineage, that is, how they were trained and tested. It has been conjectured that the current use of AI is based on only about 20% of the data the world has access to; the remaining 80% of the data that could help AI systems is unavailable because of regulations and compliance requirements around security and privacy. Present AI systems have not demonstrated the ability to learn without compromising the privacy and security of data, nor can they assign appropriate credit to their data sources.
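
The brittleness to minor input changes mentioned above can be made concrete in a few lines of code. The sketch below is a minimal illustration only, assuming PyTorch and a hypothetical pretrained classifier called model; it implements the one-step fast gradient sign method (FGSM), in which a perturbation small enough to be imperceptible to a human is often sufficient to flip the model's prediction:

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, eps=0.03):
        # Take one step along the sign of the loss gradient (FGSM).
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # An eps-bounded change per pixel is usually invisible to a human,
        # yet is often enough to change the predicted class.
        return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()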

With the ever-increasing appetite for data in machine learning, we must face the reality that for many applications sufficient data may not be available. Even when raw data is plentiful, quality labeled data may be scarce, and even then, labeled data relevant to a particular objective function may be insufficient. The latter is often the case for tail-of-the-distribution problems, such as recognizing, in autonomous driving, that a baby stroller is rolling onto the street. Such an event is rare in training and testing data, yet highly critical to the objective of avoiding personal and property damage. Even evaluating performance in such situations is challenging: one may stage experiments geared toward particular scenarios, but there is no guarantee that the staging conforms to the natural distribution of events, and even if it does, high-dimensional distributions have many tails that are by nature hard to enumerate manually.
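
One modest way to surface such tail cases during evaluation is to report per-class recall together with class frequency, rather than a single aggregate score. The sketch below is illustrative only, with hypothetical preds and labels sequences; it flags classes that are rare in the evaluation data and returns their recall, where aggregate accuracy would tend to hide failures:

    from collections import Counter, defaultdict

    def tail_class_report(preds, labels, freq_threshold=0.01):
        # Count ground-truth occurrences and correct predictions per class.
        totals = Counter(labels)
        hits = defaultdict(int)
        for p, y in zip(preds, labels):
            hits[y] += int(p == y)
        n = len(labels)
        # Report recall only for classes rarer than the frequency threshold.
        return {c: hits[c] / totals[c]
                for c in totals if totals[c] / n < freq_threshold}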

Publicly available computer vision datasets are responsible for much of the progress in visual recognition and analytics. They serve both as sources of large amounts of training data and as benchmarks for comparing state-of-the-art algorithms. Performance saturation on such datasets has led the community to believe that many general visual recognition problems are close to solved, with various commercial offerings stemming from models trained on this data. However, these datasets carry significant biases in terms of both categories and image quality, creating a substantial gap between their distribution and the data coming from the real world. For example, many publicly available datasets underrepresent certain ethnic and cultural communities and overrepresent others. Many variations have been observed to impact visual recognition, including resolution, illumination, and simple cultural variations of similar objects. Systems trained on a skewed dataset are bound to produce skewed results. This mismatch is evidenced by the significant drop in performance of state-of-the-art models trained on such datasets when applied, for example, to face images of particular gender and/or ethnicity groups. Such biases can have serious impacts in challenging situations where the outcome is critical for the subject or for a community, yet research evaluations are often unaware of these issues while focusing on saturating performance on skewed datasets.
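
A simple first check for such skew is to break evaluation down by group rather than report one overall number. The sketch below is a minimal illustration, assuming hypothetical per-sample group annotations (e.g., gender or ethnicity labels); it computes per-group accuracy and the gap between the best- and worst-served groups:

    def accuracy_by_group(preds, labels, groups):
        # Accumulate correct/total counts separately for each group label.
        hits, totals = {}, {}
        for p, y, g in zip(preds, labels, groups):
            totals[g] = totals.get(g, 0) + 1
            hits[g] = hits.get(g, 0) + int(p == y)
        acc = {g: hits[g] / totals[g] for g in totals}
        # A large best-minus-worst gap is a simple signal of biased behavior.
        return acc, max(acc.values()) - min(acc.values())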

In order to make genuine progress toward fair visual recognition in the wild, we propose this workshop to examine the underlying issues of bias-free and culturally diverse visual recognition.

Under such circumstances, our workshop on Fair, Data Efficient and Trusted Computer Vision will address critical issues in enhancing user trust in AI and computer vision systems, namely: (i) fairness; (ii) data-efficient learning; and critical aspects of trust, including (iii) explainability, (iv) robust mitigation of adversarial attacks, and (v) privacy and security in model building, with the right level of credit assignment to data sources and transparency in lineage.


Submission


Submission Instructions

We solicit submissions of technical papers (5 to 8 pages). Please submit at the FA.DE.TR.CV@CVPR2020 CMT web site.

Submitted technical papers must follow the CVPR paper format and guidelines (see CVPR2020 Author Guidelines). All accepted submissions must be presented by one of the authors. 

The submission deadline for technical papers is April 3, 2020, 11:59 PM Pacific Time.

We invite submissions of original work. Accepted work will be presented as either an oral or a poster presentation. The review process will be double-blind.


Topics

We solicit original research papers covering the following areas:

  • Vision/AI and bias
  • Secure machine learning in vision and AI
  • Vision/AI model security using blockchain
  • Explainability in Vision/AI decisions
  • Analytics in encrypted domain
  • Secure Vision/AI computing and blockchain
  • Vision/AI provenance and lineage
  • Trust in Vision/AI
  • Privacy in Vision/AI
  • Robustness of Vision/AI models
  • Vision/AI forensics
  • Vision/AI models attribution
  • Work that spans across the many dimensions of trust
  • Algorithms and theories for learning computer vision models under bias and scarcity
  • Methods for exploiting prior knowledge to learn models under bias/scarcity
  • Optimization methods designed for learning models from side-channel/alternative/synthetic sources of data
  • Domain adaptation methods to bridge train/test data gap
  • Methods for studying generalization characteristics of vision models trained from alternative data sources
  • Methods of evaluating performance of models under bias/scarcity
  • Domain-specific methods designed for important computer vision applications
  • Performance characterization of vision algorithms and systems under bias and scarcity
  • Continuous refinement of vision models using active/online learning
  • Meta-learning models from various existing task-specific models
  • Brave new ideas to learn computer vision models under bias and scarcity
  • New algorithms and architectures explicitly designed to reduce bias in visual analytics
  • New techniques to balance/manipulate data to reduce bias in visual analytics
  • New datasets to improve and measure bias/diversity in visual analytics
  • New evaluation protocols to assess and measure bias/diversity in visual analytics
  • Generative methods to reduce bias in visual analytics
  • Evaluations of bias/diversity of state-of-the-art techniques in visual analytics
  • Transfer learning/domain adaptation techniques for fairer visual analytics
