Location

Hilton Hawaiian Village, Honolulu, Hawaii

Event Website

https://hicss.hawaii.edu/

Start Date

January 3, 2024, 12:00 AM

End Date

January 6, 2024, 12:00 AM

Description

As machine learning (ML) models are increasingly deployed in real-life applications, ensuring their trustworthiness has become a pressing concern. Previous research has extensively examined individual perspectives on trustworthiness, such as fairness, robustness, privacy, and explainability. Investigating their interrelations is a natural next step toward an improved understanding of the trustworthiness of ML models. Through experiments in the context of facial analysis, we explore the feasibility of quantifying multiple aspects of trustworthiness within a unified evaluation framework. Our results indicate the viability of such a framework, achieved by aggregating diverse metrics into holistic scores. The framework can serve as a practical tool to assess ML models along multiple aspects of trustworthiness, specifically enabling the quantification of their interactions and of the impact of training data. Finally, we discuss potential solutions to key technical challenges in developing the framework and opportunities for transferring it to other use cases.


Title

Towards a Quantitative Evaluation Framework for Trustworthy AI in Facial Analysis

Paper URL

https://aisel.aisnet.org/hicss-57/st/trustworthy_ai/2