Location
Hilton Hawaiian Village, Honolulu, Hawaii
Event Website
https://hicss.hawaii.edu/
Start Date
January 3, 2024, 12:00 AM
End Date
January 6, 2024, 12:00 AM
Description
Emerging Artificial Intelligence (AI) applications bring with them the potential for both significant societal benefit and significant harm. Moreover, vulnerabilities in AI source code can leave these applications susceptible to attacks ranging from theft of private data to theft of trained model parameters. With the recent adoption of open-source software (OSS) practices, the AI development community has also increased the potential for vulnerabilities to proliferate: new applications are built on top of existing ones and naturally inherit any vulnerabilities they contain. Because the AI OSS community is growing rapidly to a scale that requires automated analysis for vulnerability management, we compare three categories of unsupervised graph embedding methods for generating repository embeddings that can be used to rank existing applications by their functional similarity. The resulting embeddings can be used to suggest alternatives to AI developers for potentially insecure AI repositories.
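The paper itself is not reproduced on this page, but as a minimal sketch of the ranking step described in the abstract, the snippet below assumes each repository has already been reduced to an unsupervised graph embedding vector and ranks candidate repositories by cosine similarity to a query repository. All names and vectors (repo_embeddings, rank_alternatives, the repository keys) are hypothetical placeholders, not artifacts from the paper.

```python
import numpy as np

# Hypothetical pre-computed repository embeddings (e.g., produced by an
# unsupervised graph embedding method applied to each repository's
# file/dependency graph). Keys and vectors are illustrative only.
repo_embeddings = {
    "repo_a": np.array([0.12, 0.88, 0.35, 0.10]),
    "repo_b": np.array([0.15, 0.80, 0.40, 0.05]),
    "repo_c": np.array([0.90, 0.10, 0.05, 0.70]),
}

def rank_alternatives(query_repo, embeddings, top_k=5):
    """Rank other repositories by cosine similarity to the query repository."""
    q = embeddings[query_repo]
    scores = []
    for name, vec in embeddings.items():
        if name == query_repo:
            continue
        sim = float(np.dot(q, vec) / (np.linalg.norm(q) * np.linalg.norm(vec)))
        scores.append((name, sim))
    return sorted(scores, key=lambda item: item[1], reverse=True)[:top_k]

# Suggest functionally similar alternatives for a potentially insecure repository.
print(rank_alternatives("repo_a", repo_embeddings))
```

The ranked list would then be filtered or re-ordered by whatever vulnerability signal is available, so that more secure, functionally similar repositories surface first.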
Recommended Citation
Lazarine, Ben; Samtani, Sagar; Zhu, Hongyi; and Venkataraman, Ramesh, "Suggesting Alternatives for Potentially Insecure Artificial Intelligence Repositories: An Unsupervised Graph Embedding Approach" (2024). Hawaii International Conference on System Sciences 2024 (HICSS-57). 3.
https://aisel.aisnet.org/hicss-57/cl/security/3