I checked 7 public opinion journals on Sunday, January 18, 2026, using the Crossref API. For the period January 11 to January 17, I found 3 new papers in 2 journals.

Journal of Elections, Public Opinion and Parties

The big ideological geographic sort? The role of ideological discrimination, social capital and social whispers in deciding where to live
Toni Rodon, Sofia Breitenstein, Guillem Riambau
Full text

Social Science Computer Review

The Dawn of Generative AI-Enabled Political Activism: How Kenyan Gen Z Used ChatGPT and Protest GPTs to Mobilize
John Maina Karanja, Macrina Mbaika Musili
Full text
In June 2024, youth-led protests in Kenya against a controversial Finance Bill demonstrated the connection between digital technologies and political activism in the Global South. This study examines how generative artificial intelligence (GAI) shapes political participation by focusing on Kenyan Gen Z activists who used ChatGPT to create custom models: Finance_Bill_GPT, Corrupt_Politicians_GPT, and MPs_Contribution_GPT (collectively called Protest_GPT_KE). These tools simplified complex laws, exposed corruption, and mobilized young people online, allowing them to bypass traditional sources such as media and elites. However, using GAI for activism raises ethical and political concerns, including surveillance, data rights, and state repression. The study surveyed 374 Kenyan Gen Z participants, primarily in Nairobi, and used Partial Least Squares Structural Equation Modeling (PLS-SEM) to analyze the connections among AI use, tool appropriation, and political participation. Results show that ChatGPT use alone did not directly increase offline activism; its effect appeared when combined with Protest_GPT_KE and online participation. This study is one of the first to document how youth in the Global South are creatively using GAI for grassroots mobilization, demonstrating that GAI’s political influence depends on user innovation and context.
Identifying Bots Through LLM-Generated Text in Open Narrative Responses: A Proof-of-Concept Study
Joshua Claassen, Jan Karem Höhne, Ruben Bach, Anna-Carolina Haensch
Full text
Online survey participants are frequently recruited through social media platforms, opt-in online access panels, and river sampling approaches. Such online surveys are threatened by bots that shift survey outcomes and exploit incentives. In this proof-of-concept study, we advance the identification of bots driven by Large Language Models (LLMs) through the prediction of LLM-generated text in open narrative responses. We conducted an online survey on same-gender partnerships, including three open narrative questions, and recruited 1512 participants through Facebook. In addition, we utilized two LLM-driven bots, each of which responded to the open narrative questions 400 times. Open narrative responses synthesized by our bots were labeled as containing LLM-generated text ("yes"). Facebook responses were assigned a proxy label ("unclear") as they may contain bots themselves. Using this binary label as ground truth, we fine-tuned prediction models relying on the "Bidirectional Encoder Representations from Transformers" (BERT) model, resulting in impressive prediction performance: the models accurately identified between 97% and 100% of bot responses. However, prediction performance decreased when the models made predictions for questions they were not fine-tuned on. Our study contributes to the ongoing discussion on bots and extends the methodological toolkit for protecting the quality and integrity of online survey data.
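The detection approach in this abstract is, at its core, a binary text classifier: label open narrative responses as LLM-generated or not, then train a model on those labels. The study fine-tunes BERT; the sketch below is a deliberately lightweight stand-in (plain bag-of-words features plus logistic regression, on invented toy data), meant only to illustrate the shape of such a pipeline. Nothing here is the authors' code or data.

```python
import math
from collections import Counter

# Toy labeled data, purely illustrative: 1 = LLM-generated ("yes"),
# 0 = human-proxy response. The real study used survey responses
# collected via Facebook plus 800 bot-generated responses.
TRAIN = [
    ("as an ai language model i can provide a balanced overview", 1),
    ("in conclusion it is important to consider multiple perspectives", 1),
    ("furthermore this topic encompasses several key considerations", 1),
    ("overall a nuanced and comprehensive discussion is warranted", 1),
    ("honestly me and my partner just want to be left alone lol", 0),
    ("my sister got married last year and the family was happy", 0),
    ("idk i never really thought about it that much tbh", 0),
    ("we met at work and have been together ten years now", 0),
]

def tokens(text):
    return text.split()

# Fixed vocabulary built from the training texts.
VOCAB = sorted({t for text, _ in TRAIN for t in tokens(text)})

def featurize(text):
    # Bag-of-words count vector over the training vocabulary.
    counts = Counter(tokens(text))
    return [counts[t] for t in VOCAB]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(data, epochs=200, lr=0.5):
    # Stochastic gradient descent on the logistic loss.
    w, b = [0.0] * len(VOCAB), 0.0
    for _ in range(epochs):
        for text, y in data:
            x = featurize(text)
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            g = p - y  # gradient of the loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, text):
    # Returns the estimated probability that the text is LLM-generated.
    x = featurize(text)
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

w, b = train_logreg(TRAIN)
```

The abstract's key caveat maps directly onto this sketch: a classifier trained on responses to one question relies on surface vocabulary, so performance drops on questions whose wording it never saw during training, which is why the authors report weaker cross-question generalization.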