Paper Type

Complete

Paper Number

1749

Description

As information network technology advances rapidly, public opinion warfare is intensifying alongside traditional battlefield combat. "Trolls" infiltrate public discourse and manipulate sentiment by spreading distorted factual content on online media. Governments worldwide recognize the impact of public opinion warfare on national security, ideology, and culture, and intervene in online social media content. Autonomous debate technology has the potential to build public trust more effectively than traditional interventions such as content deletion and search heat reduction, yet its adoption in public opinion governance remains limited. This paper develops a defensive AIGC debate agent to counter troll influence on online media platforms. Building on reinforcement learning-based autonomous debate models, a pro-and-con debate model is proposed that explores response strategies through counterfactual action generation. Experimental results demonstrate improved efficacy in mitigating troll influence, with a 12.5% improvement in causal decoupling accuracy and a reinforcement of trolls' semantic signals.
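As an illustrative aside (not the paper's implementation), the counterfactual action generation step could be sketched roughly as follows: the agent generates several candidate rebuttals, estimates the audience-stance outcome that each reply would have produced, and selects the reply with the largest advantage over not intervening. All names here (stance_shift, counterfactual_select, candidate_rebuttals) are hypothetical, and the audience-stance model is simulated with a toy heuristic.

```python
# Minimal sketch of counterfactual action selection for a defensive debate agent.
# Assumptions: a learned audience-stance model exists in the real system; here it
# is replaced by a keyword heuristic plus noise purely for illustration.
import random


def stance_shift(troll_post: str, reply: str) -> float:
    """Hypothetical stand-in for a learned audience-stance model.

    Returns the estimated shift in audience trust after `reply` counters
    `troll_post`. Simulated with a simple evidence-keyword heuristic.
    """
    evidence_terms = ("source", "data", "verified", "fact-check")
    score = sum(term in reply.lower() for term in evidence_terms)
    return score + random.uniform(-0.5, 0.5)


def counterfactual_select(troll_post: str, candidate_rebuttals: list[str]) -> str:
    """Treat each candidate reply as a counterfactual action.

    Estimate the outcome each reply *would have* produced, compare it with the
    no-intervention baseline, and return the reply with the largest advantage.
    """
    baseline = stance_shift(troll_post, "")  # outcome with no defensive reply
    advantages = [stance_shift(troll_post, r) - baseline for r in candidate_rebuttals]
    best = max(range(len(candidate_rebuttals)), key=lambda i: advantages[i])
    return candidate_rebuttals[best]


if __name__ == "__main__":
    post = "The new policy secretly removes all privacy protections!"
    candidates = [
        "That claim misquotes the bill; the verified text keeps existing protections.",
        "You are clearly a paid troll.",
        "Official data and the linked fact-check show the protections remain in place.",
    ]
    print(counterfactual_select(post, candidates))
```

In the paper's setting, the heuristic stance model would be replaced by the learned reward/stance estimator of the reinforcement learning debate framework; the sketch only shows the shape of the counterfactual comparison.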

Jul 2nd, 12:00 AM

A Defensive AIGC Debate Intelligence Agent based on Counterfactual Action Generation
