Location
Grand Wailea, Hawaii
Event Website
https://hicss.hawaii.edu/
Start Date
January 7, 2020
End Date
January 10, 2020
Description
The use of convolutional neural networks has revolutionized data processing and its industrial applications over the last few years. Object detection in images in particular, a task that was historically hard to automate, is now available on every smartphone. Nonetheless, this technology has not yet spread through the car production industry, where many visual tests and quality checks are still performed manually. Even though the vision capabilities that convolutional neural networks can give machines are already respectable, they still need well-prepared training data, which is costly and time-consuming to produce. This paper describes our effort to test and improve a system that automatically synthesizes training images. The existing system renders computer-aided design (CAD) models into scenes and from these produces realistic images with corresponding labels. Two models, Single Shot Detector (SSD) and RetinaNet, are retrained with distractors and tested against each other. The better-performing RetinaNet is then trained on a variety of datasets from different domains in order to observe the model's strengths and weaknesses under domain shift. These domains are real photographs, rendered models, and images of objects cut and pasted onto different backgrounds. The results show that the model trained on a mixture of all domains performs best.
Advances in Automated Generation of Convolutional Neural Networks from Synthetic Data in Industrial Environments
https://aisel.aisnet.org/hicss-53/in/smart_production_systems/6
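The abstract's central idea, training a detector on a mixture of real photographs, rendered CAD images, and cut-and-paste composites, can be illustrated with a short sketch. The code below is not the authors' implementation: the PartsDataset class, the directory paths, the class count, and the training hyperparameters are all hypothetical placeholders, and torchvision's stock RetinaNet stands in for whatever variant the paper used.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader
from torchvision.models.detection import retinanet_resnet50_fpn

class PartsDataset(torch.utils.data.Dataset):
    """Hypothetical dataset: yields (image_tensor, target) pairs where
    target = {"boxes": FloatTensor[N, 4], "labels": Int64Tensor[N]}."""
    def __init__(self, root):
        self.samples = []  # would be populated from images + labels under `root`
    def __len__(self):
        return len(self.samples)
    def __getitem__(self, idx):
        return self.samples[idx]

# One dataset per domain, concatenated so every training batch can mix domains.
mixed = ConcatDataset([
    PartsDataset("data/real_photos"),      # real photographs
    PartsDataset("data/rendered_cad"),     # images rendered from CAD models
    PartsDataset("data/cut_and_paste"),    # objects pasted onto backgrounds
])
loader = DataLoader(mixed, batch_size=4, shuffle=True,
                    collate_fn=lambda batch: tuple(zip(*batch)))

model = retinanet_resnet50_fpn(num_classes=5)  # class count is an assumption
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

model.train()
for images, targets in loader:
    # In training mode, torchvision's RetinaNet returns a dict of losses
    # (classification and box regression) instead of detections.
    losses = model(list(images), list(targets))
    loss = sum(losses.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Concatenating the per-domain datasets and shuffling, rather than training on each domain sequentially, mirrors the paper's finding that a mixture of all domains performs best: each gradient step then sees a blend of real and synthetic appearance statistics.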