I checked 15 psychology journals on Tuesday, February 24, 2026 using the Crossref API. For the period February 17 to February 23, I found 36 new papers in 11 journals.

Advances in Methods and Practices in Psychological Science

Using Heteroskedasticity-Consistent Standard Errors and the Bootstrap for Linear Regression Analysis Available in SPSS: A Tutorial
Hanna Rajh-Weber, Stefan Ernest Huber, Martin Arendasy
Full text
In the landscape of statistical software, from customizable programming-language-based to point-and-click systems, SPSS remains a popular choice among researchers. In SPSS, analyses with conventional methods, such as ordinary least squares regression, can be easily performed. However, violated assumptions, such as homoskedasticity or normality of the errors, can lead to altered Type I error rates or a reduction in statistical power. SPSS provides a multitude of alternative inference methods associated with linear regression, but accessing them is not always straightforward. To facilitate data analysis when assumptions for conventional inference methods are not met, in this tutorial, we aim to provide applied researchers, particularly SPSS users, with a guide for performing linear regression analyses using heteroskedasticity-consistent (HC) standard errors (HC3 and HC4) and two different bootstrap resampling methods (pairs bootstrap and wild bootstrap). Each bootstrap method can further be combined with a bootstrap p value, a percentile confidence interval, or a bias-corrected and accelerated confidence interval. For illustration, the methods are then compared using a computer-generated data set. Although the focus of this article is on applied researchers who use mainly SPSS for their analyses, a tutorial on how to do everything shown here in R (with custom functions) is included in the supplementary materials.
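The tutorial itself is written for SPSS (with R code in the supplement). As a language-neutral illustration of one of the methods it covers, the pairs bootstrap with a percentile confidence interval, here is a minimal sketch in Python; the data are invented for demonstration and the method shown is the generic textbook procedure, not the authors' exact implementation.

```python
import random

def ols_slope(xs, ys):
    """Slope of a simple least-squares regression of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

def pairs_bootstrap_ci(xs, ys, n_boot=2000, alpha=0.05, seed=1):
    """Percentile CI for the slope via the pairs (case-resampling) bootstrap."""
    rng = random.Random(seed)
    n = len(xs)
    slopes = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        bx = [xs[i] for i in idx]
        if len(set(bx)) < 2:  # degenerate resample: slope undefined
            continue
        slopes.append(ols_slope(bx, [ys[i] for i in idx]))
    slopes.sort()
    m = len(slopes)
    return slopes[int(alpha / 2 * m)], slopes[int((1 - alpha / 2) * m) - 1]

xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2, 13.9, 16.1]
lo, hi = pairs_bootstrap_ci(xs, ys)
print(ols_slope(xs, ys), (lo, hi))  # point estimate near 2, CI around it
```

Because resampling is done on (x, y) pairs rather than on residuals, the interval remains valid under heteroskedasticity, which is the main appeal of the method the tutorial describes.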
Handling Item-Level Missing Data in Linear Regression: A Tutorial
Guyin Zhang, Lihan Chen, Dexin Shi
Full text
With advances in methodology and statistical software, modern methods for handling missing data have become more accessible and straightforward to apply. In psychological studies, researchers often use questionnaires or scales composed of multiple items to measure constructs of interest. As a result, missing values frequently occur at the item level, whereas data analyses are typically conducted at the scale (composite) level. However, properly addressing item-level missing data remains a common challenge for many applied psychologists, including researchers who are otherwise well experienced in handling missing data at the scale level. In this tutorial, we introduce six approaches for handling item-level missing data: listwise deletion, hybrid methods that include proration with listwise deletion and proration with full-information maximum likelihood, item-level full-information maximum likelihood, item-level multiple imputation, two-stage maximum likelihood, and composite score factored regression. Using a published empirical data set, we provide step-by-step guidance on applying these methods in linear regression models. We include R code for each method and corresponding Mplus syntax if applicable. Finally, we summarize the key assumptions, advantages, and limitations of each approach and offer practical recommendations for researchers.
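To anchor the simplest of the listed approaches: proration replaces the composite with the mean of a respondent's observed items, rescaled to the full scale length, usually only when enough items were answered. A minimal sketch, where the 80% cutoff is an illustrative convention rather than a recommendation from the paper:

```python
def prorated_score(items, min_valid=0.8):
    """Prorated composite: mean of observed items scaled to the full
    scale length, computed only when enough items are observed.
    Missing items are represented as None."""
    observed = [v for v in items if v is not None]
    if len(observed) / len(items) < min_valid:
        return None  # too much missingness: leave the case to FIML/deletion
    return sum(observed) / len(observed) * len(items)

print(prorated_score([4, 5, None, 4, 5]))        # 4.5 * 5 = 22.5
print(prorated_score([4, None, None, None, 5]))  # None: only 2 of 5 observed
```

The hybrid methods in the tutorial combine exactly this kind of rule with listwise deletion or full-information maximum likelihood for the cases that fall below the cutoff.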
When Do Interaction/Moderation Effects Stabilize in Linear Regression?
Andrew Castillo, Joshua D. Miller, Colin Vize, David A. A. Baranger, Donald R. Lynam
Full text
Two-way interaction effects in linear regression occur when the relation between two variables changes depending on the level of a third. Despite their frequent use, interactions are notoriously difficult to estimate accurately and test for statistical significance because of small effect sizes and low reliability. In this study, we used Monte Carlo simulations to establish stability thresholds for two-way interactions between continuous variables across combinations of reliability (0.7–1.0), main effect size (0.1–0.5), collinearity (0.1–0.5), and interaction effect size (0.05–0.2). Stability was defined as the consistency of estimated effect sizes across repeated samples of the same size from the same population and operationalized using modified definitions of the corridor of stability and point of stability from Schönbrodt and Perugini. Results show that the stability of interaction estimates is primarily determined by sample size and predictor reliability. The case representing a realistic psychology field study, in which researchers have limited control over variables, stabilized at n = 3,800, requiring 72% statistical power. At n ≤ 100, 11% to 45% of the estimates were incorrectly signed (i.e., negative when the true effect was positive). Most psychology studies enroll far fewer than 500 participants, and our results indicate many published interactions may be unstable. Analyses involving highly reliable predictors, such as group assignment in experimental designs, may stabilize at lower sample sizes because they attenuate the expected effect size less than variables with more measurement error. Researchers are encouraged to avoid routine tests of two-way interactions unless sample size and reliability are adequate and hypotheses are specified a priori.
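The reliability effect described here can be seen with back-of-envelope arithmetic. Under the commonly cited approximation that the reliability of a product of two centered, uncorrelated predictors is roughly the product of their individual reliabilities (exact only under simplifying assumptions), measurement error shrinks the expected interaction slope multiplicatively. A hypothetical sketch, not taken from the paper:

```python
def attenuated_interaction(true_beta, rel_x1, rel_x2):
    """Approximate observed interaction slope given predictor reliabilities;
    assumes centered, uncorrelated predictors, for which the reliability of
    the product term is roughly rel_x1 * rel_x2."""
    return true_beta * rel_x1 * rel_x2

# two self-report predictors at reliability .7 each: effect roughly halved
print(attenuated_interaction(0.2, 0.7, 0.7))
# experimental group assignment (reliability 1.0) crossed with a .7 measure
print(attenuated_interaction(0.2, 1.0, 0.7))
```

This is why the authors note that designs with a perfectly reliable predictor, such as randomized group assignment, stabilize at smaller samples.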

Behavior Research Methods

LeCoder: A large-scale automated coder for coding errors in word-production tasks
Shanhua Hu, Delaney DuVal, Brielle C. Stark, Nazbanou Nozari
Full text
Speech errors have been instrumental in advancing our understanding of the architecture of the language production system, the nature of its representations, and its disorders. To be most informative, researchers usually need large amounts of data. Hand-coding such data can be both cumbersome and subjective. This paper presents LeCoder, the first open-source, automated error coder for English word and naming data, which uses a data-driven approach grounded in large-scale corpora to quantify the target–response relationship, allowing it to be flexible, scalable, and generalizable across new datasets. By testing the coder on two datasets from two aphasia labs that have been carefully coded by trained research assistants, we first establish that LeCoder has high accuracy when compared to expert coders, and in certain cases, offers a more logical categorization than human coders. We then show, using robust machine-learning approaches, that LeCoder’s performance generalizes to new participants and items it has never encountered before. Collectively, these findings encourage the use of LeCoder across labs for more objective coding of speech errors, which will, in turn, increase replicability of findings in all subfields of research that use speech error analysis, including neuropsychological research.
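LeCoder's actual pipeline is corpus-driven and not reproduced here. As a generic illustration of the underlying idea, scoring a response against its target by formal similarity and binning it into an error category, here is a toy coder based on normalized Levenshtein distance; the categories and threshold are made up for demonstration:

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def code_response(target, response, threshold=0.5):
    """Toy coder: 'correct', 'formal' (phonologically related), or 'other'."""
    if response == target:
        return "correct"
    dist = edit_distance(target, response) / max(len(target), len(response))
    return "formal" if dist <= threshold else "other"

print(code_response("cat", "cat"))  # correct
print(code_response("cat", "cap"))  # formal: one substitution
print(code_response("cat", "dog"))  # other: no overlap
```

A real coder like LeCoder additionally quantifies semantic relatedness from large corpora, which a pure string metric cannot capture.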
How plausible is my model? Assessing model plausibility of structural equation models using Bayesian posterior probabilities (BPP)
Ivan Jacob Agaloos Pesigan, Shu Fai Cheung, Huiping Wu, Florbela Chang, Shing On Leung
Full text
In structural equation modeling (SEM), one method to select the most plausible model from several candidates, or to compare one or more hypothesized models with similar alternatives on plausibility, is to compare the models using Bayesian posterior probability (BPP). BPP can be computed from Bayesian information criterion (BIC) scores (Wu et al., 2020, Multivariate Behavioral Research, 55(1), 1–16). This approach complements conventional goodness-of-fit indices such as the Comparative Fit Index (CFI), the root mean square error of approximation (RMSEA), and the standardized root mean square residual (SRMR) by giving a concise BPP for assessing uncertainty among all models considered. It can also reveal evidence against a model otherwise hidden by these indices. However, Wu et al. (2020) did not provide guidelines on deciding which models should be considered. To facilitate the use of BPP, we propose a novel method for selecting this set of models, called neighboring models, to help researchers decide on the initial set. This method integrates seamlessly into the typical workflow for SEM analysis. Researchers can fit a model as usual and then use this method to assess whether it is the most plausible model compared with the neighboring models. We believe the proposed method will make it easier for researchers to make better-informed decisions when evaluating their models. We developed a user-friendly R package to automate all the steps: generating the set of neighboring models, fitting them, and computing the BPPs, all in a single function.
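The BIC-to-BPP conversion the authors build on is a standard approximation: with equal prior model probabilities, each model's posterior probability is proportional to exp(-BIC/2). A minimal sketch with hypothetical BIC values:

```python
import math

def bpp_from_bic(bics):
    """Approximate Bayesian posterior probabilities from BIC scores,
    assuming equal prior model probabilities: BPP_i is proportional
    to exp(-BIC_i / 2)."""
    b0 = min(bics)  # shift by the best BIC for numerical stability
    weights = [math.exp(-(b - b0) / 2) for b in bics]
    total = sum(weights)
    return [w / total for w in weights]

# hypothetical BIC scores for three candidate SEMs
probs = bpp_from_bic([3204.2, 3206.2, 3215.0])
print([round(p, 3) for p in probs])  # most mass on the lowest-BIC model
```

Note how a BIC difference of only 2 still leaves substantial posterior mass on the runner-up, which is the kind of uncertainty the BPP makes explicit and fit indices alone do not.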
A tutorial for software options to aid in assessing functional relations in single-case experimental designs
Rumen Manolov
Full text
Single-case experimental designs (SCEDs) can be used for identifying effective interventions via the intensive study of one or a few individuals in different conditions, actively manipulated by the researcher. The assessment of SCED data entails both judging whether there is sufficient evidence of a functional relation (i.e., a causal effect of the intervention on the target behavior) and quantifying the magnitude of the effect. In the current text, the focus is on assessing the presence of a functional relation, considering all the attempts to demonstrate an effect that SCEDs include. Specifically, the aim is to review several freely available websites, which require no additional software to be installed, and offer graphical representations of the data, visual aids, and quantifications. Several data analytical steps are outlined for performing the assessment, both dealing with each basic effect separately and evaluating the consistency of effects. Software that is useful for carrying out these steps is reviewed, including the way in which the data files should be specified and the few clicks required by applied researchers to achieve the desired output. The interpretations of the output are illustrated with real data.
Generalized least squares transformation for single-case experimental design: Introducing the R package lmeSCED
Chendong Li, Eunkyeng Baek, Wen Luo
Full text
Comparing effect latencies in the visual world paradigm: Monte Carlo simulations to assess resampling-based procedures
Serge Minor
Full text
In a series of Monte Carlo simulation studies, we evaluated the power and Type I error rates of resampling-based procedures for comparing effect latencies between groups in the visual world paradigm (VWP). Resampling-based methods, while versatile, are known to fail in certain cases. Therefore, validation of such methods through simulation is crucial. We compared permutation- and bootstrapping-based tests combined with different methods for measuring effect latency while manipulating sample size and true effect size. Alongside previously used latency measures, we tested new measures involving the application of an effect size threshold. Simulations were based on existing VWP datasets representing different effect types (preferential looks triggered by lexical vs. grammatical cues, cohort competitor effects in word recognition) and data collection methods (infrared- vs. webcam-based eye tracking). A total of 156,000 simulations were conducted across five studies, involving 548 million resampled datasets. The main findings are as follows: (1) With sufficient sample sizes, tests were effective in detecting latency differences of 200–300 ms in sentence processing tasks, and as small as 100 ms in word recognition. (2) The permutation test and bootstrapped percentile CIs exhibited the highest overall power without inflation of Type I error rates. (3) Applying an effect size threshold in latency estimation led to consistent increases in statistical power. (4) Resampling by participant was robust to increases in cross-subject variability; in contrast, bootstrapping within participants and time bins led to elevated Type I error rates. Based on these results, we offer recommendations for using non-parametric resampling-based procedures to compare group latencies in VWP experiments.
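As a minimal illustration of the permutation-test family evaluated here (not the paper's exact procedure, which resamples fixation time courses), a sketch comparing mean onset latencies between two hypothetical listener groups:

```python
import random

def permutation_test(group_a, group_b, n_perm=5000, seed=0):
    """Two-sided permutation test for a difference in mean latency:
    shuffle group labels and count how often the shuffled difference
    is at least as large as the observed one."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = group_a + group_b
    n_a = len(group_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:n_a]) / n_a
                   - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one correction

# hypothetical effect onset latencies (ms) for two groups
adults = [420, 450, 410, 430, 440, 415, 425, 435]
children = [560, 590, 540, 610, 575, 555, 580, 565]
p = permutation_test(adults, children)
print(p)  # very small: a clear ~140 ms group difference
```

The permutation approach makes no distributional assumptions about the latencies, which is why, per the paper's findings, it can keep Type I error at the nominal rate where parametric alternatives may not.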
The fundamentals of eye tracking part 6: Working with areas of interest
Ignace T. C. Hooge, Marcus Nyström, Diederick C. Niehorster, Richard Andersson, Tom Foulsham, Antje Nuthmann, Roy S. Hessels
Full text
Researchers use area of interest (AOI) analyses to interpret eye-tracking data. This article addresses four key aspects of AOI use: 1) how to report AOIs to support replicable analyses, 2) how to interpret AOI-related statistics, 3) methods for generating both static and dynamic AOIs, and 4) recent developments and future directions in AOI use. The article underscores the importance of aligning AOI design with the study’s conceptual and methodological foundations. It argues that critical decisions, such as the size, shape, and placement of AOIs, should be made early in the experimental design process and should take into account eye-tracking data quality, the research question, participant tasks, and the nature of the visual stimulus. It also evaluates recent advances in AOI automation, outlining both their benefits and limitations. The article’s main message is that researchers should plan AOIs carefully and explain their choices openly so others can replicate the work.
Validating explicit rating tasks for measuring pronunciation biases: A case study of ING variation
Aini Li, Meredith Tamminga
Full text
Spoken language is highly variable, as words can have different pronunciation variants. A growing body of psycholinguistic research has employed experimental methods such as explicit rating tasks to obtain user biases toward different pronunciation variants. However, no prior work has empirically validated whether experimentally elicited user estimates accurately reflect real-world usage patterns. By correlating user estimates and conversational speech data for English variable ING pronunciations under different experimental prompts, we found that while rating tasks can provide word biases that do correlate significantly with corpus word biases, the correlations are only modest and there are asymmetries in the relationship between elicited word biases and corpus word biases. These findings call for future research to incorporate word biases into the study of sociolinguistic variation and language processing.
ConversationAlign: Open-source software for analyzing patterns of lexical use and alignment in conversation transcripts
Benjamin Sacks, Virginia Ulichney, Anna Duncan, Chelsea Helion, Sarah M. Weinstein, Tania Giovannetti, Gus Cooney, Jamie Reilly
Full text
Much of our scientific understanding of language processing has been informed by controlled experiments divorced from the real-world demands of naturalistic communication. Conversation requires synchronization of rate, amplitude, lexical complexity, affective coloring, shared reference, and countless other verbal and nonverbal dimensions. Conversation is not merely a vector for information transfer but also serves as a mechanism for establishing or maintaining social relationships. This process of language calibration between interlocutors is known as linguistic alignment. We developed an open-source R package, ConversationAlign, capable of computing novel indices of linguistic alignment and main effects of language use between interlocutors by evaluating word choice across numerous semantic, affective, and lexical dimensions (e.g., valence, concreteness, frequency, word length). We describe the operations of ConversationAlign, including its primary functions of cleaning and transforming raw language data into simultaneous time series objects aggregated by interlocutor, turn, and conversation. We then outline mathematical operations involved in computing complementary indices of linguistic alignment that capture both local (synchrony in turn-by-turn scores) and global relations (overall proximity) between interlocutors. We present a use case of ConversationAlign applied to interview transcripts from American radio legend Terry Gross and her many guests spanning 15 years. We identify caveats for use and potential sources of bias (e.g., polysemy, missing data, robustness to brief language samples) and close with a discussion of potential applications to other populations. ConversationAlign (v 0.4.0) is freely available for download and use via CRAN or GitHub. For technical instructions and download, visit https://github.com/Reilly-ConceptsCognitionLab/ConversationAlign.
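The package's local alignment indices are richer than this, but the core intuition, synchrony between interlocutors' turn-by-turn scores on some lexical dimension, can be sketched as a correlation between two turn series; the concreteness values below are hypothetical:

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# hypothetical turn-by-turn mean concreteness scores per speaker
speaker_a = [3.1, 3.4, 2.9, 3.8, 3.5, 3.0]
speaker_b = [3.0, 3.5, 3.1, 3.9, 3.4, 3.2]
print(round(pearson(speaker_a, speaker_b), 2))  # strong positive synchrony
```

A global (proximity-style) index would instead compare the overall distance between the two speakers' mean scores, which captures similarity without requiring turn-by-turn tracking.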
Quantifying the stability landscapes of psychological networks
Jingmeng Cui, Gabriela Lunansky, Anna Lichtwarck-Aschoff, Norman B. Mendoza, Fred Hasselman
Full text
The network theory of psychopathology proposes that mental disorders can be represented as networks of interacting psychiatric symptoms. These direct symptom–symptom interactions can create a vicious cycle of symptom activation, pushing the network to a self-sustaining, dysfunctional phase of psychopathology: a mental disorder. Symptom network models can be estimated from empirical data through statistical models. Although simulation studies have established a relation between the structure of these symptom network models and the probability they end up in a self-sustaining dysfunctional phase, the general stability of the system is left implicit. The general stability includes both the stability of the dysfunctional phase and the stability of the healthy phase. In this paper, we present a novel method to quantify the stability of network models through stability landscapes. Our method is based on the Hamiltonian of the microstates of Ising models and can be used to show the stability of estimated Ising network models. Compared to simulation-based methods, our approach is computationally more efficient and quantifies the stability of all possible system states. Furthermore, we propose a set of stability metrics to quantify the stability of the healthy and dysfunctional phases and a bootstrapping method for range estimation of the stability metrics. To demonstrate the method’s utility, we apply it to an empirical data set and show how it can be used to compare the stability of phases between groups. The presented method is implemented in a freely available R package, Isinglandr.
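The Hamiltonian-based approach can be illustrated on a toy network: for a handful of binary symptoms, every microstate's energy, and hence its Boltzmann probability, can be enumerated directly. The thresholds and edge weights below are invented; the paper's method works with weights estimated from data:

```python
import itertools
import math

def hamiltonian(state, thresholds, weights):
    """Ising Hamiltonian: H(s) = -sum_i tau_i s_i - sum_{i<j} w_ij s_i s_j."""
    h = -sum(t * s for t, s in zip(thresholds, state))
    for (i, j), w in weights.items():
        h -= w * state[i] * state[j]
    return h

def state_distribution(thresholds, weights, beta=1.0):
    """Boltzmann probability of every microstate of a small symptom network."""
    n = len(thresholds)
    states = list(itertools.product([-1, 1], repeat=n))
    unnorm = [math.exp(-beta * hamiltonian(s, thresholds, weights))
              for s in states]
    z = sum(unnorm)
    return {s: u / z for s, u in zip(states, unnorm)}

# toy 3-symptom network with mutually reinforcing symptoms
dist = state_distribution(
    thresholds=[-0.5, -0.5, -0.5],
    weights={(0, 1): 1.0, (0, 2): 1.0, (1, 2): 1.0},
)
healthy = dist[(-1, -1, -1)]   # all symptoms inactive
disordered = dist[(1, 1, 1)]   # all symptoms active
print(healthy, disordered)     # two modes separated by less likely mixed states
```

Even in this toy case the distribution is bimodal: the all-inactive and all-active states are both more probable than any mixed state, which is exactly the two-phase landscape the paper's stability metrics quantify.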

Computers in Human Behavior

Offline Friendship Conflict and Adolescent Internet Addiction: Indirect Associations via Self-Esteem and the Moderating Role of Clique-Level Norms
Yanli Hou, Ruonan Guo, Shengcheng Song, Caina Li
Full text
Virtual peers reduce gambling symptoms and related problems of moderate-risk gamblers: A randomized controlled trial
Kenji Yokotani, Yosuke Seki, Nobuhito Abe, Masahiro Takamura, Tetsuya Yamamoto, Hideyuki Takahashi
Full text
AI Chatbots in Mental Health: How Emojis, Prompt Type, and Interactivity Shape User Perceptions in the United States and China
Jihye Lee, Zinan Darren Yang, Weijia Shi, Yan Liu
Full text
Evaluating user performance with RAG-based generative AI: A scenario-based experiment on AI-assisted information retrieval
Aktilek Sagynbayeva, Ajin Pyo, Sang-Hyeak Yoon, Sung-Byung Yang
Full text
Interdisciplinary Perspectives and Current Findings on the Role of Trust as a Psychological Mediator in Human Interaction with Artificial Intelligence: Editorial Overview
Irene Valori, Johannes Kraus, Merle T. Fairhurst
Full text
Blissful (A)Ignorance: Despite the widespread adoption of AI in communication, people do not suspect AI use in realistic contexts
Jiaqi Zhu, Andras Molnar
Full text
Official onsite event versus unofficial streaming: Understanding the wellbeing formation in esports spectatorship
Sungkyung Kim, Hee Jung Hong
Full text

Group Processes & Intergroup Relations

New kinds of group complexity in intergroup relations: An analysis of gender and sexuality
Valentina Palacio Posada, Daniel J. Chiacchia, Geoffrey J. Leonardelli
Full text
Recognizing social identity complexity as one form of group complexity, we introduce two new kinds. Intergroup complexity encompasses perceived overlap between the ingroup and outgroup(s), whereas outgroup complexity entails overlap among outgroups (greater overlap yields simpler perceptions). Both apply to domains with at least one ingroup and two outgroups. Testing ideas from the social identity, gender, and sexuality relations literatures, we collected people’s perceptions of intergroup and outgroup complexity among gender and sexuality categories separately, using an online survey with a convenience sample of American adults (N = 287). Results revealed that people perceived greater intergroup than outgroup complexity, less sexual than gender complexity (especially so among sexual outgroups), and were more likely to report greater intergroup complexity as their ingroup’s status increased. Moreover, social dominance orientation moderated status effects. Implications focus on the applicability of these new forms of group complexity and their consequences.
The moderating role of collective narcissism in White Americans’ psychological defensiveness to the history of racism
H. Annie Vu, Luis M. Rivera
Full text
Since teaching about past racism in the United States often necessitates deliberations over White Americans’ ingroup transgressions, it can elicit historical defensiveness. We tested this hypothesis across three experiments (Ns = 109, 263, and 601) and further investigated whether this effect was moderated by White collective narcissism. White American participants were randomly assigned to an ingroup transgression (presented with the history of racism) or an ingroup nontransgression (presented with the history of general events) condition. Across all experiments, (a) facing ingroup transgressions increased perceived ingroup responsibility among participants with low collective narcissism but not among those with high collective narcissism, and (b) among participants facing ingroup transgressions only, strong collective narcissism was consistently associated with less perceived ingroup responsibility. This research highlights the potential dangers of collective narcissism in erasing the history of racism.
Perceptions of an ally who confronts racism: The role of displayed emotion and response type
Adriana Lopez, Cheryl L. Dickter
Full text
Previous research has demonstrated that when White allies confront racist comments, it reduces future prejudicial behavior in perpetrators, establishes egalitarian norms, and has positive effects for confronters. The current studies sought to examine whether the delivery of the confrontation affects perceptions of these allies. Three studies examined whether emotional expression (angry vs. control, Study 1), response type (direct vs. indirect, Study 2), and the interaction of these factors (Study 3) affected perceptions of White allies. Participants (N = 740) evaluated a White person in a vignette who confronted a racist comment. Results indicated that confronters who expressed anger were viewed more negatively than those who did not, due to perceptions that they were motivated to make the perpetrator look bad. Direct responses also elicited more positive perceptions of the confronter than indirect responses. These results may inform educational strategies that encourage allies to confront racist remarks.
Ignorance of history, political ideology, and attitudes toward Confederate symbols in the United States
Tyler J. Robinson, Sydney M. Rivera, Ethan Zell
Full text
Conservatives report much more favorable attitudes toward Confederate symbols (e.g., flags, monuments) than liberals. However, little work has examined factors that mediate or explain this robust political difference. Across two studies, we explored whether knowledge of historical racism mediates political differences in Confederate symbol attitudes. In a predominantly White internet sample (N = 227, U.S. South), Study 1 found that the association between political conservatism and attitudes toward Confederate symbols was mediated by historical knowledge. Further, this mediation effect remained after adjusting for Southern identity. Study 2 replicated the predicted mediation effect in a racially diverse university sample and found that it obtains across racial-ethnic groups (N = 557, U.S. Southeast). These results suggest that ignorance of historical racism helps to explain political differences in Confederate symbol attitudes. We discuss implications of these findings for research on the connection between historical knowledge and racial attitudes (i.e., the Marley hypothesis).

Journal of Experimental Social Psychology

Navigating ideological divides in digital spaces: How political ideology and moral rhetoric shape the promotion of causes online
Monica Gamez-Djokic, Marlon Mooijman, Matthew D. Rocklage, Maryam Kouchaki
Full text

Journal of Personality and Social Psychology

Are the metatraits fact or artifact? Ruling out alternative explanations for the higher-order factors of the Big Five.
Colin G. DeYoung, Ming Him Tai, Edward Chou, Boris Mlačić
Full text
Femininity culture: Theory and workplace implications.
Andrea C. Vial, Marta Beneda
Full text

Multivariate Behavioral Research

Penalized Subgrouping of Heterogeneous Time Series
Christopher M Crawford, Jonathan J Park, Sy-Miin Chow, Anja F Ernst, Vladas Pipiras, Zachary F Fisher
Full text

Personality and Social Psychology Bulletin

The Gendered Benefits of Communication Strategies: Women Leaders Are Less Effective but More Liked When They Use Prevention-Focused Language
M. Asher Lawson, Sandra C. Matz, Friedrich M. Götz, Ashley E. Martin
Full text
Research has identified a double-bind for female leaders: When acting in line with gender stereotypes, they are viewed as more likeable but less competent. Here, we test the impact of using gender stereotypical language—characterized by more prevention-focused language (e.g., avoiding risks) and less promotion-focused language (e.g., seeking gains)—on U.S. governors’ approval ratings during COVID-19 and their ability to promote effective social distancing behaviors. With a final dataset of 3,759 documents capturing governors’ communication, a 13-week panel of Google mobility data containing 6,534 observations (Study 1), U.S. nationally representative survey data from 57,532 participants (Study 2), and 24,247 tweets (Study 3), we find that female governors who use less prevention-focused, stereotypical language in their communications are more effective at increasing compliance with social distancing measures but receive lower approval ratings. As such, women leaders’ necessary approaches in crisis situations may undermine their sustainability in positions of power.
How Does Rejection Feel? Explaining Victims’ Reactions to Social Rejection From the Perspective of Self-Conscious Emotions
Irene Castro, Saulo Fernández
Full text
Research has characterized the emotional response to social rejection as a generalized negative affect, overlooking the diverse reactions of rejected individuals. We explored how humiliation and related emotions (anger, shame, and guilt) are linked to post-rejection behavior, and how two key appraisals (unfairness and internalization of devaluation) evoke specific emotions. In two studies—an experimental Cyberball study (N = 186) and a large-scale correlational study (N = 1,200)—we found that humiliation was associated with both unfairness and internalization, anger only with unfairness, shame with internalization, and guilt with internalization and negatively with unfairness. Humiliation was correlated with aggressive confrontation and avoidance, anger with aggressive and non-aggressive confrontation, shame with avoidance and negatively with non-aggressive confrontation, and guilt with reparation and non-aggressive confrontation. We discuss the relevance of these emotional pathways for understanding social rejection and informing targeted interventions to mitigate harmful responses.
Lay Attributions of Conspiracy Beliefs Predict Intentions to Correct Conspiracy Believers
Valentin Mang, Kai Epstude, Bob M. Fennis
Full text
What do laypeople think causes conspiracy beliefs? In six correlational studies (N = 2,024) and a qualitative study (N = 190), we examined laypeople’s attributions of others’ conspiracy beliefs and how these attributions predict their intentions to correct conspiracy believers. Attribution research suggests that dispositional (vs. situational) attributions should dominate lay beliefs and negatively predict correction intentions. Dispositional attributions of conspiracy beliefs were indeed more prevalent than attributions to situational causes, with two exceptions: Conspiracy beliefs were attributed most strongly to influence from social media and misinformation. Attributing another person’s conspiracy beliefs more strongly to social media or misinformation also predicted intentions to correct this person, more so than other attributions. Our results suggest that (a) assessing attributions at a more detailed level than is often done can help uncover yet unobserved nuance in laypeople’s attributions and (b) encouraging certain attributions of conspiracy beliefs could help foster their interpersonal correction.

Psychological Methods

Nested model comparisons between common factors and composites.
Danielle Siegel, Victoria Savalei, Mijke Rhemtulla
Full text
Timing of a just-in-time intervention to reduce alcohol consumption: A simulation approach to optimize decision rules.
Matthias Haucke, Dominic Reichert, Iris Reinhard, Rika Groß, Abhijit Sreepada, Ali Ghadami, Marvin Ganz, Christine Heim, Heike Tost, Ulrich W. Ebner-Priemer, Shuyan Liu, Markus Reichert
Full text

Psychological Science

To Believe or Not to Believe in Conspiracy Claims? That Is a Question for Signal Detection Theory
Maude Tagand, Dominique Muller, Cécile Nurra, Olivier Klein, Benjamin Aubert-Teillaud, Kenzo Nera
Full text
Conspiracy mentality is conceptualized as a continuum. Research on this topic has focused on unwarranted conspiracy claims and the upper end of the conspiracy-mentality continuum—people seeing conspiracies everywhere. This focus neglects warranted conspiracy claims and the lower end of the continuum. To better understand conspiracy mentality, we aimed to clarify both ends of the continuum using signal detection theory. We examined how people evaluate warranted and unwarranted conspiracy claims across levels of conspiracy mentality in two studies with 331 French-speaking adult participants from France, Switzerland, and Belgium (Study 1) and 576 English-speaking adult participants from the United States and the United Kingdom (Study 2), both groups recruited via Prolific. Compared with participants high in conspiracy mentality, those low in conspiracy mentality not only believed less in conspiracies but also underestimated their prevalence. However, participants low in conspiracy mentality were more accurate at distinguishing warranted from unwarranted conspiracy claims. These results provide a better understanding of conspiracy mentality and its relationship with the perceived truthfulness of conspiracies.
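Signal detection theory separates how well raters discriminate warranted from unwarranted claims (sensitivity, d′) from their overall bias to endorse claims (criterion, c). A minimal sketch using Python's standard library; the endorsement rates are hypothetical, not values from the studies:

```python
from statistics import NormalDist

def sdt_indices(hit_rate, false_alarm_rate):
    """d' (discrimination) and criterion c from hit and false-alarm rates.
    Here a 'hit' is endorsing a warranted conspiracy claim and a 'false
    alarm' is endorsing an unwarranted one."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion

# hypothetical rater who endorses 60% of warranted and 10% of unwarranted claims
d, c = sdt_indices(hit_rate=0.60, false_alarm_rate=0.10)
print(round(d, 2), round(c, 2))
```

A positive criterion marks a conservative rater who endorses few claims of either kind, which corresponds to the low end of the conspiracy-mentality continuum examined here; d′ captures the accuracy difference the studies report independently of that bias.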

Psychology of Popular Media

Audiences on the dark side: Do antisocial personality traits predict motives for true crime listening?
Sofia V. Rhea, Laramie D. Taylor
Full text
Effects of avatar behavior on aggression: Mediation of moral self-perception and moderation of avatar identification.
Shupeng Heng, Ziwan Zhang, Danfeng Zheng
Full text
Video games are awesome: Understanding awe experiences in video games.
Ursula Thomson, Kongmeng Liew
Full text