I checked 7 public opinion journals on Friday, April 3, 2026 using the Crossref API. For the period March 27 to April 2, I found 8 new papers in 4 journals.
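For context, the kind of Crossref query behind a check like this can be sketched in a few lines. This is an illustrative sketch, not my actual script; the ISSN, date window, and row limit below are examples:

```python
# Minimal sketch of a per-journal Crossref check using the public REST API
# (https://api.crossref.org). The ISSN and dates are illustrative.
import json
import urllib.parse
import urllib.request

def build_works_url(issn, from_date, until_date, rows=20):
    """Build a /journals/{issn}/works URL filtered by record-creation date."""
    filters = f"from-created-date:{from_date},until-created-date:{until_date}"
    query = urllib.parse.urlencode({"filter": filters, "rows": rows})
    return f"https://api.crossref.org/journals/{issn}/works?{query}"

def fetch_new_papers(issn, from_date, until_date):
    """Return (title, author list) pairs for works created in the window."""
    with urllib.request.urlopen(build_works_url(issn, from_date, until_date)) as resp:
        items = json.load(resp)["message"]["items"]
    return [
        (item.get("title", [""])[0],
         [f"{a.get('given', '')} {a.get('family', '')}".strip()
          for a in item.get("author", [])])
        for item in items
    ]

# Example (0033-362X is Public Opinion Quarterly's print ISSN):
# fetch_new_papers("0033-362X", "2026-03-27", "2026-04-02")
```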

Journal of Elections, Public Opinion and Parties

Concurrent or not concurrent? Date selection of provincial elections in Argentina (1985–2023)
Andrés Lacher
Full text
Suiting up or speaking out: analyzing candidate appearance & message complexity as heuristics for voter evaluation
Steven Perry, Matt Lamb
Full text

Journal of Survey Statistics and Methodology

The Extended Crosswise Model Adjusted for Random Answering
Khadiga H A Sayed, Maarten J L F Cruyff, Andrea Petróczi, Peter G M Van der Heijden
Full text
The Extended Crosswise Model is a popular randomized response design that employs a sensitive and an innocuous statement, and asks respondents whether exactly one of these statements is true, or whether none or both are true. Although the model has a degree of freedom, it is unable to detect random answering. In this article, we propose a new method to detect and correct for random answering. This method makes use of a non-sensitive control statement and a quasi-randomized innocuous statement to which both answers are known, which allows for the detection of and correction for random answering. A simulation study shows that this method yields unbiased estimates of the prevalence of the sensitive attribute. For four surveys among elite athletes, we present prevalence estimates of doping use that are corrected for random answering.
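The crosswise design described in this abstract can be simulated in a few lines. The sketch below is illustrative only, using a simple moment estimator pooled over the two subsamples of the extended design rather than the authors' model; the prevalence and innocuous-statement probability are made-up values:

```python
# Illustrative simulation of the extended crosswise design: two subsamples
# receive innocuous statements with known truth probabilities p and 1 - p,
# and each respondent reports whether exactly one of the two statements
# (sensitive, innocuous) is true. Not the authors' estimator.
import random

def simulate_group(n, pi, p, rng):
    """Simulate the share answering 'exactly one is true'; pi = prevalence."""
    ones = 0
    for _ in range(n):
        sensitive = rng.random() < pi   # sensitive statement true?
        innocuous = rng.random() < p    # innocuous statement true?
        ones += sensitive != innocuous  # exactly one of the two is true
    return ones / n

def estimate_prevalence(n, pi, p, rng):
    """Moment estimator averaged over the two extended-crosswise subsamples."""
    lam1 = simulate_group(n, pi, p, rng)      # subsample 1: innocuous prob p
    lam2 = simulate_group(n, pi, 1 - p, rng)  # subsample 2: innocuous prob 1-p
    est1 = (lam1 - p) / (1 - 2 * p)
    est2 = (lam2 - (1 - p)) / (2 * p - 1)
    return (est1 + est2) / 2

rng = random.Random(7)
est = estimate_prevalence(50_000, pi=0.10, p=0.15, rng=rng)
# est should land close to the true prevalence of 0.10
```

Because the answer "exactly one is true" occurs with probability p + pi * (1 - 2p), neither answer reveals an individual's sensitive status, yet the prevalence is recoverable in aggregate.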

Public Opinion Quarterly

Thinking Ideologically: The Limited Role of Left and Right Labels as Policy Shortcuts
Sarah Lachance, Clareta Treger
Full text
How do voters use left-right ideological labels as shortcuts for policy positions in evaluating electoral candidates? We offer a distinction between maximal and minimal forms of ideological thinking. While maximal thinking implies that voters rely on ideological proximity as a proxy for policy congruence with candidates, minimal thinking requires only that voters use ideological labels to infer candidates’ positions—even if their own ideological identification is inconsistent with policy preferences. Drawing on original experimental data from Canada (N = 1,087)—a multiparty system with a fluid ideological landscape—we find that voters’ ideological self-placement is often misaligned with their policy positions, especially among right-leaning individuals. However, voters still use ideological proximity to infer candidates’ policy stances in the absence of policy information, supporting the Minimal Theory. These findings contribute to theories of political decision-making beyond the United States and have implications for substantive representation in systems with centrist or ideologically flexible parties.
Scaling Open-Ended Survey Responses Using LLM-Paired Comparisons
Matthew R DiGiuseppe, Michael E Flynn
Full text
Survey researchers rely heavily on closed-ended questions to measure latent respondent characteristics like knowledge, policy positions, emotions, ideology, and various other traits. Closed-ended questions are easy to analyze and collect, but necessarily limit the depth and variability of responses. Open-ended responses allow for greater depth and variability, but are labor intensive to code. Large language models (LLMs) may help with this problem, but existing approaches to using LLMs have a number of limitations. In this paper, we propose and test a pairwise comparison method to scale open-ended survey responses on a continuous scale. The approach relies on LLMs to make pairwise comparisons of statements that identify which statement “wins” and which “loses.” With this information, we employ a Bayesian Bradley-Terry model to recover a “score” on a latent dimension for each statement. This approach allows for finer discrimination between items, reduces anchoring bias, improves measurement of uncertainty, and is more flexible than methods relying on Maximum Likelihood Estimation techniques. We demonstrate the utility of this approach on an open-ended question probing knowledge of interest rates in the US economy. A comparison of six LLMs of various sizes reveals that pairwise comparisons show greater consistency than zero-shot 0–10 ratings across a variety of model sizes. Further, comparison of pairwise decisions is consistent with knowledgeable crowdsourced workers.
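The scaling step this abstract describes can be illustrated with a toy Bradley-Terry fit. The sketch below uses a simple maximum-likelihood MM update rather than the authors' Bayesian model, and the win counts are invented; in the paper's pipeline, each count would come from an LLM judging which of two open-ended responses shows more of the latent trait:

```python
# Toy Bradley-Terry fit from pairwise "wins" (MM update, Hunter 2004),
# standing in for the paper's Bayesian model. wins[i][j] = number of
# comparisons in which statement i beat statement j.
def bradley_terry(wins, iters=200):
    n = len(wins)
    gamma = [1.0] * n  # latent "scores", one per statement
    for _ in range(iters):
        new = []
        for i in range(n):
            w_i = sum(wins[i])  # total wins of statement i
            denom = sum((wins[i][j] + wins[j][i]) / (gamma[i] + gamma[j])
                        for j in range(n) if j != i)
            new.append(w_i / denom if denom else gamma[i])
        s = sum(new)
        gamma = [g * n / s for g in new]  # normalize scores to mean 1
    return gamma

# Three statements; 0 usually beats 1, and 1 usually beats 2.
wins = [[0, 8, 9],
        [2, 0, 7],
        [1, 3, 0]]
scores = bradley_terry(wins)
# scores should be ordered scores[0] > scores[1] > scores[2]
```

Under the Bradley-Terry model, statement i beats statement j with probability gamma_i / (gamma_i + gamma_j), so the recovered scores place all statements on one continuous latent dimension.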

Social Science Computer Review

Evolution of Deep Learning Models for Misinformation Detection in Social Media Textual Data: Background, Architectures, Datasets, and Emerging LLM Applications
Ziad Elgammal, Reda Alhajj
Full text
With the exponential growth in social media usage, the rapid spread of misinformation has become a critical global challenge. Recent advances in large language models (LLMs) have shown promising potential in automated misinformation detection. This survey provides a comprehensive review of LLM-based approaches for detecting misinformation in textual data on social media platforms. In this work, we analyze 70+ recent papers to examine the evolution, implementation, and effectiveness of various LLM architectures in this domain. Our analysis reveals that BERT-based models dominate the field, appearing in approximately 85% of studies, with domain-specific variants like CT-BERT demonstrating superior performance in specialized contexts such as COVID-19 misinformation detection. We provide detailed comparisons of model architectures, implementation strategies, and performance metrics across different domains. Additionally, seven major datasets commonly used in this field were analyzed, examining their characteristics, limitations, and suitability for different detection tasks. The survey also addresses key challenges, including linguistic nuances, model interpretability, and ethical considerations. Our findings indicate that while LLM-based approaches achieve impressive accuracy metrics, significant challenges remain in cross-domain generalization and real-time detection. This survey concludes by identifying promising research directions and providing recommendations for robust model evaluation frameworks.
How Much Data Should I Request? Balancing Richness and Compliance in Digital Trace Data Donations
Ernesto de León, Laura Boeschoten, Fabio Votta, Joris Mulder, Bella Struminskaya, Daniel Oberski, Theo Araujo, Claes de Vreese
Full text
Digital trace “data donation” studies offer researchers a unique opportunity to collect high-quality behavioral data, but decisions about the scope of requested data can impact both dataset richness and participant compliance. This paper examines the tradeoffs between requesting larger data packages, which include more extensive historical records, and participants’ willingness to donate. In a randomized experiment with Facebook and Instagram data donations, we compare a control condition where participants are asked to request the default 1-year data period to a treatment condition in which they are asked to request data for their entire account history. We analyze how different request sizes affect (1) participants’ compliance rates and (2) the characteristics of the data resulting from these different requests. We find that participants asked to request more data are less likely to complete the task. However, we propose that this is not primarily due to heightened privacy concerns, but rather because these data packages are significantly larger and therefore take longer for the platforms to deliver. This additional time to deliver data packages results in increased attrition. In terms of the effects on the data itself, we show that decisions about the time span of the data affect not only the volume of data requested but also measurement validity, as the temporal window fundamentally redefines what key constructs represent, potentially transforming intended static indicators into narrow snapshots of recent behavior. We provide guidance for researchers navigating these decisions, considering both the benefits of richer longitudinal data and the risks of reduced participation.
The Black Box, Animated Idols, and Racialization
Lamia Balafrej
Full text
This essay argues that the black box—both as cryptic device and as critique of illegibility—is not unique to modern technology and has deep roots in the medieval Mediterranean world. Technical opacity was frequently addressed in Latin and Arabic sources, often with a critical undertone. Then as now, technoskeptical writers saw the self-acting device as treacherous, due to its reliance on hidden labor and mechanisms. This critique arose especially in relation to unfamiliar or foreign devices, like animated idols; as such, it was often racializing, attributing opacity as well as deceit to the object and its makers. Modern critiques of technology that focus on invisible labor may reproduce similar biases by enforcing a privileged, first-world perspective. A transhistorical approach thus not only shows the enduring history of the black box; it also illuminates the religious genealogy of techno-skepticism, as well as the biases that inhere in the black box, especially when deployed as a critical discourse.