I checked 7 public opinion journals on Friday, May 08, 2026, using the Crossref API. For the period May 01 to May 07, I found 13 new papers in 5 journals.
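For the curious, a check like the one behind this digest can be sketched against the Crossref REST API's journal works route with a created-date filter. This is my reconstruction, not the exact script used for this digest; the ISSN below (Public Opinion Quarterly's print ISSN) is just an example.

```python
# Sketch of a Crossref query for works newly created in a date window.
# Reconstruction for illustration, not the digest's actual script.
from urllib.parse import urlencode

def crossref_new_works_url(issn, start, end, rows=50):
    """Build a Crossref REST API URL listing a journal's works created
    between `start` and `end` (YYYY-MM-DD)."""
    params = {
        "filter": f"from-created-date:{start},until-created-date:{end}",
        "rows": rows,
    }
    return f"https://api.crossref.org/journals/{issn}/works?{urlencode(params)}"

# Example: new Public Opinion Quarterly records for the digest's window.
url = crossref_new_works_url("0033-362X", "2026-05-01", "2026-05-07")
print(url)
```

Fetching the URL (e.g. with `urllib.request`) returns JSON whose `message.items` list holds one record per paper, including titles and author names.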

Journal of Elections, Public Opinion and Parties

Why survey turnout data misleads: probing, selection bias, and misinterpreted political participation
Kasper M. Hansen
Full text
First-time voter boost of turnout: new identification strategy
Maiko Shoji, Kentaro Fukumoto
Full text
Compulsory voting and youth representation in parliaments
Daniel Stockemer, Kamila Kolodziejczyk, Avery Chalmers, Sam Maher, Lauren Garcia
Full text
The conditional value of legislative effectiveness
Elizabeth N. Simas
Full text

Journal of Official Statistics

Comparison of Small Area Procedures Based on Gamma Distributions with Extension to Informative Sampling
Yanghyeon Cho, Emily Berg
Full text
The gamma distribution is a useful model for small area prediction of a skewed response variable. We study the use of the gamma distribution for small area prediction. We emphasize a model, called the gamma-gamma model, in which the area random effects have gamma distributions. We compare this model to a generalized linear mixed model. Each of these two models has been proposed independently in the literature, but the two models have not yet been formally compared. We evaluate the properties of two mean square error estimators for the gamma-gamma model, both of which incorporate corrections for the bias of the estimator of the leading term. Finally, we extend the gamma-gamma model to informative sampling. We conduct thorough simulation studies to assess the properties of the alternative predictors. We apply the proposed methods to data from an agricultural survey.
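The abstract does not spell out the model, but one classic conjugate gamma-gamma setup (my choice of parameterization, not necessarily the paper's) shows why gamma area effects are convenient for small area prediction: with a known response shape, the posterior for an area's rate parameter is available in closed form, giving a simple shrinkage predictor of the area mean.

```python
# Toy conjugate gamma-gamma small-area sketch (illustrative setup, not the
# paper's exact model): y_ij | th_i ~ Gamma(shape=a, rate=th_i), with area
# effect th_i ~ Gamma(shape=alpha, rate=beta). Conjugacy then gives
# th_i | y_i ~ Gamma(alpha + n_i * a, beta + sum(y_i)).
import random

def posterior(alpha, beta, a, ys):
    """Closed-form posterior (shape, rate) for an area's rate parameter."""
    return alpha + len(ys) * a, beta + sum(ys)

def area_mean_predictor(alpha, beta, a, ys):
    """Predict the area mean E[y | th_i] = a / th_i via the posterior mean
    E[1/th_i | y_i] = rate / (shape - 1) of the gamma posterior."""
    shape, rate = posterior(alpha, beta, a, ys)
    return a * rate / (shape - 1)

random.seed(3)
alpha, beta, a = 4.0, 3.0, 2.0
th = random.gammavariate(alpha, 1 / beta)             # true area effect (rate)
ys = [random.gammavariate(a, 1 / th) for _ in range(20)]
print(area_mean_predictor(alpha, beta, a, ys), a / th)
```

Algebraically, the predictor is a convex combination of the prior area mean and the sample mean, which is the shrinkage behavior small area methods rely on when area sample sizes are small.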
Geographical Rezoning as a Combinatorial Game: Case Study of the Australian Statistical Geography Standard
Filip Juricev-Martincev, Helen Thompson, Gentry White
Full text
Geographical rezoning is a combinatorial optimization problem of assigning basic spatial units into regions for statistical analysis and the publication of official government statistics, such as the Census. We propose a novel approach by reframing this problem as a single-player, deterministic, perfect-information, finite combinatorial game; or simply, a single-player puzzle. This definition allows effective and systematic exploration of the state space, offers perfect-information guarantees, and decouples the problem definition from a specific solver algorithm. We take an existing combinatorial algorithm called HeLP (Hierarchical Land Parcel Aggregation) and combine its rezoning heuristic with a game-playing algorithm, Monte Carlo tree search, to create three new game-playing solvers for the geographical rezoning problem. A case study was conducted on a real-world data set in Canberra, Australia, on the Australian Statistical Geography Standard (ASGS). Our Monte Carlo implementations, HeLP-MCTS, HeLP-RAVE, and HeLP-RHEA, were tested on simulated data and yielded a 50.25%, 28.77%, and 57.02% increase in the mean value of the partitioning heuristic function, respectively, compared to the combinatorial solver HeLP. We have also computationally improved the original HeLP algorithm, offering significant speed increases.
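The "rezoning as a single-player game" framing can be sketched on a toy problem. Everything here is invented for illustration: the state, moves, and contiguity score are mine, and I use flat Monte Carlo rollouts rather than the paper's full MCTS variants or the HeLP heuristic.

```python
# Toy single-player rezoning game (illustrative only): units on a line are
# assigned to regions one move at a time; each move is picked by flat Monte
# Carlo rollouts. The paper combines full MCTS with the HeLP heuristic; this
# simplified sketch only shows the game framing.
import random

N_UNITS, N_REGIONS = 8, 2

def score(assign):
    """Toy partitioning heuristic: +1 per adjacent pair in the same region."""
    return sum(assign[i] == assign[i + 1] for i in range(len(assign) - 1))

def rollout(assign):
    """Finish a partial assignment with uniformly random moves."""
    out = list(assign)
    while len(out) < N_UNITS:
        out.append(random.randrange(N_REGIONS))
    return score(out)

def flat_monte_carlo(n_rollouts=200):
    assign = []
    while len(assign) < N_UNITS:
        # Evaluate each legal move by the mean score of random completions.
        means = []
        for move in range(N_REGIONS):
            trials = [rollout(assign + [move]) for _ in range(n_rollouts)]
            means.append(sum(trials) / n_rollouts)
        assign.append(max(range(N_REGIONS), key=means.__getitem__))
    return assign

random.seed(0)
best = flat_monte_carlo()
print(best, score(best))
```

The game view's appeal is visible even here: the solver only needs a move generator and a terminal score, so swapping flat rollouts for MCTS, RAVE, or RHEA changes the search, not the problem definition.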

Journal of Survey Statistics and Methodology

Can Response Time Adjustments to Question Demands Together with Average Response Time Distinguish between Levels of Satisficing? Findings from Three Surveys in a Large U.S. Panel Study
Raymond Hernandez, Stefan Schneider, Erik Meijer, Titus Galama, Doerte U Junghaenel, Arie Kapteyn, Elizabeth Zelinski, Arthur A Stone
Full text
Satisficing is a behavior that often results in careless responding and has been defined as engagement in simplified question responding approaches to reduce mental effort, often at the cost of response quality. Our objective was to examine the possibility of using a new survey response time (RT) based metric, response time adjustment, in combination with average survey RT, to identify groups with different levels of satisficing behavior. The RT adjustment parameter reflects the extent to which individuals adjust the amount of time they spend on survey items as a function of how demanding an item is, with low adjustment along with a low average survey RT expected to be associated with a greater likelihood of satisficing. We estimated a mixture model with RT adjustment and average RT as inputs using three questionnaires (with sample sizes of 5,321, 1,616, and 4,093, respectively) from the Understanding America Study (UAS), a large US Internet-based longitudinal panel. Weak and non-satisficing groups were identified in all three studies, with the former making up 3 to 9 percent of the samples, but no strong satisficing group was detected. Evidence supporting a weak satisficing group was based on good accuracy on easy attention check items (indicating they likely were not careless) but low accuracy on more demanding attention check items involving carefully reading long blocks of instructional text. Additionally, consistent with prior literature on satisficing, this weak satisficing group generally had lower mean values on cognitive ability and motivation-related variables (e.g., conscientiousness) compared to other groups. In two of the three studies, the satisficing group produced notable bias in study results involving high time intensity items. RT regulation and average RT may be useful for identifying satisficing in surveys with a mix of high and low time intensity items such as some tests and surveys with vignette-based items.
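As a toy illustration of the kind of mixture modeling described here, one can fit a two-component Gaussian mixture to (log) response times by EM and read the small fast component as the "satisficing-like" group. The data, the one-dimensional setup, and all values below are invented; the paper's mixture uses two inputs (RT adjustment and average RT), not one.

```python
# Toy analogue of grouping respondents by response-time behavior: a
# two-component Gaussian mixture on simulated log mean RTs, fit by EM.
# Entirely illustrative; not the paper's model or data.
import math, random

def em_two_gaussians(xs, iters=100):
    mu = [min(xs), max(xs)]              # crude but adequate initialization
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of component 1 for each observation.
        resp = []
        for x in xs:
            d = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            resp.append(d[1] / (d[0] + d[1]))
        # M-step: update mixing weights, means, and variances.
        n1 = sum(resp); n0 = len(xs) - n1
        pi = [n0 / len(xs), n1 / len(xs)]
        mu = [sum((1 - r) * x for r, x in zip(resp, xs)) / n0,
              sum(r * x for r, x in zip(resp, xs)) / n1]
        var = [sum((1 - r) * (x - mu[0]) ** 2 for r, x in zip(resp, xs)) / n0 + 1e-9,
               sum(r * (x - mu[1]) ** 2 for r, x in zip(resp, xs)) / n1 + 1e-9]
    return pi, mu, var

random.seed(1)
# Simulated log-RTs: a fast group (10%, analogous to weak satisficers) and a
# normal-speed majority.
xs = [random.gauss(1.0, 0.3) for _ in range(50)] + \
     [random.gauss(3.0, 0.5) for _ in range(450)]
pi, mu, var = em_two_gaussians(xs)
print(pi, mu)
```

Here the recovered small component's weight plays the role of the 3 to 9 percent weak-satisficing share reported above, though the real classification also hinges on how RTs adjust to item demands.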
Improving the Measurement of Gender in Surveys: Effects of Select-One or Select-all-that-Apply Response Format on Measurement and Data Quality Among College Students
Dana Garbarski, Jennifer Dykema, James A Yonker, Rosie Eungyuhl Bae, Michael Topping
Full text
Recent guidelines on the measurement of gender in surveys recommend a two-step strategy that asks about sex assigned at birth followed by current gender identity using a small number of offered response categories (e.g., female, male, transgender) and an open response field to capture any residual categories (“not listed, please tell us”). However, more research on several fronts is needed to continue to refine our measurement strategies across contexts, and this work continues to be critically important with the dismantling of federally funded research regarding gender identity. In the current study, we examine the impact of response format on the measurement of gender identity. This study reports results from a between-subjects experiment embedded in a campus climate survey about inclusion and belonging at a large Midwestern university in Fall 2024. Over 16,000 students were asked, “What is your gender?” and subsequently randomly assigned to respond using one of two response formats. The first allowed respondents to “select-one” of the response options “woman, man, nonbinary, not listed (please tell us),” and was followed by the question “Are you transgender? Yes/No.” The second included the response options “woman, man, nonbinary, transgender, not listed (please tell us)” and instructed respondents to “select all that apply.” We examine the distribution of responses, item nonresponse, the number of gender categories reported, response times, and concurrent validity (in terms of the association between gender and survey outcomes about campus climate) across the two formats. While the results mainly show similarities in outcomes between the response formats, respondents are more likely to indicate they are transgender in the “select-one” format. By contrast, respondents are more likely to indicate gender expansiveness other than “transgender” or “nonbinary” with the “select-all” format. We discuss the implications of these findings for future research.
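Comparisons like "respondents are more likely to indicate they are transgender in the select-one format" typically come down to comparing proportions between randomized arms. A minimal sketch with a two-proportion z-test; all counts below are invented for illustration and are not the study's data.

```python
# Hypothetical analysis of a response-format experiment: compare the share
# selecting a category across two randomized arms with a two-proportion
# z-test. Counts are made up, not from the study described above.
import math

def two_prop_z(x1, n1, x2, n2):
    """z statistic for H0: p1 == p2, using the pooled proportion."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# e.g. 240 of 8,000 in arm A vs 180 of 8,000 in arm B select the category.
z = two_prop_z(240, 8000, 180, 8000)
print(round(z, 2))
```

Values of |z| above roughly 1.96 correspond to a conventional two-sided p < 0.05, though a real analysis would also report effect sizes and handle multiple outcomes.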

Politics, Groups, and Identities

Anti-Indigenous attitudes and divided support: the Indigenous voice and public opinion in Australia
Raymond Foxworth, Carew Boulding, Sarah Maddison, Edana Beauvais
Full text
The racial politics of student debt relief policy
Serena Laws, Mallory E. SoRelle
Full text
“Hidden” racist attitudes against athletes of color in the German population: findings from a list experiment
Michael Mutz, Sebastian Braun, Ulrike Burrmann
Full text

Public Opinion Quarterly

Computer-Assisted Mobile Phone Interviews in Low- and Middle-Income Countries Through a Total Survey Error Framework
Abigail R Greenleaf, Huguette Diakabana, Charles Lau
Full text
Researchers increasingly use computer-assisted telephone interviewing (CATI) via mobile phones in low- and middle-income countries (LMIC). A nascent methodological literature explores representation and measurement error in these surveys, but knowledge is disparate, siloed across disciplines, countries, and research designs. Using the total survey error framework, this research synthesis summarizes findings from peer-reviewed methodological research on CATI in LMIC. We used a scoping review methodology to identify and review 38 peer-reviewed journal articles to answer two research questions: (1) Which study designs, topic areas, and total survey error components have been examined in CATI mobile phone surveys conducted in LMIC? and (2) What does the research say about representation and measurement errors in CATI mobile phone surveys in LMIC? Based on these findings, this research synthesis highlights when, where, and how CATI surveys can be used across LMIC.
Bad Mood Rising? Assessing Scalar Invariance Violations with Comparative Democratic Support Data
Philip Warncke, Ryan E Carlin
Full text
The advent of nearly global estimates of democratic mood has caused genuine optimism for comparative investigations into the linkages between public opinion and democracy. Scholarly enthusiasm in this field has particularly been boosted by recent claims that measuring latent democratic support with hierarchical IRT models overcomes differential item functioning (DIF)—a well-known challenge that typically foils the comparability of latent constructs across time and space. Focusing specifically on DIF-induced violations to scalar measurement invariance, we show mathematically and with statistical simulations that no commonly used latent variable modeling framework, including hierarchical IRT, is immune to bias stemming from systematic DIF. While some models can fully accommodate measurement invariance violations that are completely random between nations and across items, they begin to falter as soon as such violations exhibit a directional bias, that is, if respondents from different countries interpret or appraise survey items systematically differently. Equipped with democratic mood data from Latin America, we present suggestive evidence that systematic, directional bias in DIF is far more prevalent than random measurement noninvariance. We conclude with a number of practical recommendations for public opinion researchers to mitigate measurement invariance violations in their own work.
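The core argument, that directional DIF biases naive cross-country comparisons while random DIF can wash out, can be illustrated with a tiny simulation. The setup is my own invention, not the authors' hierarchical IRT machinery: two countries share the same latent support distribution, but one country appraises every item more positively.

```python
# Toy illustration of directional DIF (invented setup, not the authors'
# models): countries A and B have identical latent support, but country B's
# items all carry a +0.5 intercept shift, so a naive mean score wrongly
# reports B as more supportive.
import random

random.seed(7)
N, ITEMS, SHIFT = 2000, 5, 0.5

def mean_score(shift):
    """Naive country-level mean of item responses under a common item shift."""
    total = 0.0
    for _ in range(N):
        latent = random.gauss(0.0, 1.0)      # same distribution in both countries
        for _ in range(ITEMS):
            total += latent + shift + random.gauss(0.0, 0.5)
    return total / (N * ITEMS)

score_a = mean_score(0.0)     # no DIF
score_b = mean_score(SHIFT)   # directional DIF on every item
print(score_a, score_b)
```

Because the shift has the same sign on every item, no amount of averaging across items removes it; that is the directional-bias failure mode the abstract describes, as opposed to item-level noise that is random across items and countries.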