Location

Online

Event Website

https://hicss.hawaii.edu/

Start Date

3-1-2022 12:00 AM

End Date

7-1-2022 12:00 AM

Description

Online user-generated reviews provide a unique view into consumer perceptions of a business. Extant research has demonstrated that text mining provides insight from textual reviews. More recently, we have seen the adoption of image mining techniques to analyze visual content as well. With data comprising user-generated imagery (UGI) and textual reviews, we propose to perform a combination of text- and image-mining techniques to extract relevant attributes from both modalities. The analysis allows for a comparison between textual and visual content in online reviews. For the UGI analysis, we use a Deep Embedded Clustering model, and for the user-generated text analysis we use a TF-IDF-based mechanism to obtain attributes and polarities. The overall goal is to extract maximum information from text and images and compare the insights we gather from both. We analyze whether either modality is self-sufficient or better than the other, and whether the two modalities yield similar or contrasting insights.

Text vs. Image: An application of unsupervised multi-modal machine learning to online reviews

https://aisel.aisnet.org/hicss-55/in/electronic_marketing/4