I checked 7 public opinion journals on Wednesday, December 31, 2025, using the Crossref API. For the period December 24 to December 30, I found 4 new papers in 2 journals.
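For anyone who wants to reproduce this kind of check, a minimal sketch of the query is below. It uses the Crossref REST API's per-journal works endpoint with a publication-date filter; the ISSN, date window, and contact address are illustrative placeholders, not the exact script behind this digest.

```python
import requests

# Illustrative ISSN (Public Opinion Quarterly); substitute the journals you monitor.
JOURNAL_ISSNS = ["0033-362X"]

def new_works(issn, start="2025-12-24", end="2025-12-30"):
    """Return works published in [start, end] for one journal via Crossref."""
    url = f"https://api.crossref.org/journals/{issn}/works"
    params = {
        "filter": f"from-pub-date:{start},until-pub-date:{end}",
        "rows": 100,
    }
    # Crossref asks "polite" users to include a contact address.
    headers = {"User-Agent": "journal-digest/0.1 (mailto:you@example.org)"}
    resp = requests.get(url, params=params, headers=headers, timeout=30)
    resp.raise_for_status()
    return resp.json()["message"]["items"]

for issn in JOURNAL_ISSNS:
    for item in new_works(issn):
        print(issn, "|", item.get("title", ["(untitled)"])[0])
```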

Journal of Survey Statistics and Methodology

Repeated Attempt Models for Nonresponse Bias: Small Sample Performance, Optimization of Contact Attempts, and an Application to Deer Hunters
Matthew J Clement, Matthew Karam
Full text
Nonresponse bias is a pervasive issue in surveys of people, affecting data quality across the social sciences. Such bias arises when the propensity to respond to a survey varies with the outcome variable, resulting in data that are missing not at random (MNAR). Repeated attempt models (RAMs) leverage information in nonresponses by survey subjects to estimate an outcome variable when data are MNAR. Here, we discuss parallels between RAMs and ecological occupancy models, which also use repeat surveys to account for imperfect detection of animal species during surveys. We describe a few likelihoods for RAMs, and we use simulations to evaluate the small sample performance of a RAM with a binary outcome. We also identify the optimal number of repeat surveys to complete, given different probabilities of response. Finally, we apply a RAM to surveys of deer hunters in Arizona, USA. Simulations indicated that RAMs generated unbiased estimates of a binary outcome for a variety of parameter values, but performance broke down when response probabilities or sample sizes were too low. We found that the optimal number of repeat surveys varied from about 4 to 20, depending on parameter values, but RAMs are likely inappropriate when data are missing at random. Our analysis of deer hunter data using a RAM produced plausible estimates of deer harvest. Relative to the RAM, a propensity model overestimated deer harvest by 36–129 percent. A RAM could be extended to address unmodeled variation in response probabilities, inaccurate responses, and changes in outcome variables through time. We suggest that RAMs could be broadly useful for many natural resource and social science applications.
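The abstract does not reproduce the likelihoods themselves, but a standard way to set up a binary-outcome RAM is to let the per-attempt response probability depend on the (unobserved) outcome, so that MNAR nonresponse is identified by how quickly each outcome group responds across attempts. The sketch below is my own minimal illustration of that idea, with toy data and a parameterization chosen for convenience; it is not the authors' code or their exact specification.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # inverse logit; keeps parameters unconstrained

def neg_log_lik(theta, y, k, n_nonresp, K):
    """Negative log-likelihood for a simple binary-outcome RAM.

    theta     : logits of (pi, p1, p0): outcome prevalence and the
                per-attempt response probabilities when y = 1 and y = 0.
    y, k      : outcome and first-response attempt number per respondent.
    n_nonresp : number of sampled units with no response after K attempts.
    """
    pi, p1, p0 = expit(theta)
    p_y = np.where(y == 1, p1, p0)         # response prob. given outcome
    prev = np.where(y == 1, pi, 1.0 - pi)  # prevalence of observed outcome
    # Respondent with outcome y first answering at attempt k:
    #   Pr = prev * (1 - p_y)^(k - 1) * p_y
    ll = np.sum(np.log(prev) + (k - 1) * np.log(1.0 - p_y) + np.log(p_y))
    # Nonrespondent: no answer in K attempts, outcome marginalized out.
    ll += n_nonresp * np.log(pi * (1.0 - p1) ** K + (1.0 - pi) * (1.0 - p0) ** K)
    return -ll

# Toy data: K = 3 attempts, six respondents, forty nonrespondents.
y = np.array([1, 1, 0, 0, 1, 0])
k = np.array([1, 2, 1, 3, 3, 2])
fit = minimize(neg_log_lik, x0=np.zeros(3), args=(y, k, 40, 3))
print(expit(fit.x))  # estimated (pi, p1, p0)
```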

Social Science Computer Review

Who Are the Online Commenters? A Large-Scale Representative Survey to Explore the Identity and Motivation of Online News Commenters in Comparison to Non-Commenters
Liesje C. A. van der Linden, Cedric Waterschoot, Ernst van den Hemel, Florian A. Kunneman, Antal P. J. van den Bosch, Emiel J. Krahmer
Full text
To better understand the demographic composition of people participating in commenting sections beneath online news articles, we conducted a large-scale survey (n = 5,490) with a panel that is representative of the Dutch population, the LISS panel. We combined these data with demographic background variables and previously collected data on political views and values to provide a detailed description of the identity of online news commenters in comparison to non-commenters. Our results show that the group of commenters contains more men (55%), and the age group of 45–54 years old has the largest share of commenters (18% for men, 13% for women). Furthermore, we found little to no difference in education level, income, location, political preferences, and cultural background, suggesting that there is no striking overrepresentation of specific groups among online commenters in general. However, when looking at the profiles of online commenters as a function of the topic and platform of discussion, differences start to emerge in gender, age, and education level. We found no age- or gender-related differences among those with a higher commenting frequency, but a higher frequency does go hand in hand with more support for national populist and far-right political parties and lower confidence in political parties.
Generational Divide in AI Adoption for Academic Writing: Evidence From Serbian Social Scientists
Marko Galjak, Marina Budić
Full text
This cross-sectional study examines a generational divide in the adoption of AI for academic writing among academic researchers in Serbia. A survey of 823 social scientists analyzed usage patterns and measured age-related adoption rates through logistic regression analysis. The findings indicate that 27.2% of researchers employ AI for academic writing, with adoption rates varying significantly by age: 42.9% of researchers in their twenties use these tools, compared to 14.3% of those in their sixties. Researchers aged 23–34 were twice as likely to adopt AI writing tools as those aged 49–80. Each additional year of age reduced the odds of AI adoption by 3.8%, even after controlling for academic title, sex, and workplace type. This age effect persisted while gender and institutional context showed no significant association with adoption. The significant variation in AI adoption across age groups suggests potential shifts in academia. Senior faculty who avoid AI writing tools cannot effectively mentor graduate students who rely on them. Manuscripts now face inconsistent peer review standards; reviewers familiar with AI-assisted writing apply different criteria than those who reject it entirely. Universities face competing demands: junior researchers insist AI tools help them publish enough to secure tenure, yet senior faculty argue that students who depend on these tools never learn to construct arguments or evaluate evidence independently.
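As a rough consistency check on those figures (the roughly 40-year gap between researchers in their twenties and those in their sixties is my assumption), compounding the reported per-year odds ratio yields about the same contrast as the reported adoption rates:

```python
def odds(p):
    return p / (1 - p)

# Reported per-year effect: each extra year multiplies the odds of adoption
# by about 1 - 0.038 = 0.962.
or_per_year = 1 - 0.038

# Reported adoption rates for researchers in their twenties vs. sixties.
p_20s, p_60s = 0.429, 0.143

print(odds(p_20s) / odds(p_60s))  # ~4.5: odds ratio implied by the two rates
print(or_per_year ** -40)         # ~4.7: per-year effect over a ~40-year gap
```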
Generating the Past: How Artificial Intelligence Summaries of Historical Events Affect Knowledge
Daniel Karell, Matthew Shu, Thomas Davidson, Keitaro Okura
Full text
Many people now use AI chatbots to obtain summaries of complex topics, yet we know little about how this affects knowledge acquisition, including how the effects might vary across different groups of people. We conducted two experiments comparing how well people recalled factual information after reading AI-generated or human-written historical summaries. Participants who read AI-generated summaries scored significantly higher on knowledge tests than those who read expert-written blog posts (Study 1) or Wikipedia articles (Study 2). These improvements were present regardless of whether readers knew the content was AI-generated or whether the AI summaries were politically biased. Moreover, AI summaries improved recall across various demographic groups, including gender, race, income, education, and digital literacy levels. This suggests that using AI tools for everyday factual queries does not create new knowledge inequalities but could still amplify existing ones through differential access. Our findings indicate that the increasingly routine use of AI for information-seeking could enhance factual learning, with implications for education policy and addressing inequality.