In typing proficiency tests, such as those used in job recruitment or research studies, individuals are evaluated on their speed and accuracy. However, the difficulty of the typed text, its "typability", can affect typing performance, introducing variability that is unrelated to skill. To ensure valid comparisons across individuals, time, and conditions, it is crucial to control for this variation in text difficulty. To address this issue, we develop the Typability Index, a model that predicts the relative typing speed of a text. Building on earlier attempts to quantify typing difficulty dating back to the 1940s, we create a more advanced typability model using the 136 Million (136M) Keystrokes Dataset (Dhakal et al., Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 1–12, 2018), in which over 168,000 participants each typed 15 sentences from a pool of 1,525 items. Through random forest regression, we identify eight key predictors from 30 candidate variables, including the proportion of lowercase letters, word frequency, and syllables per word. Trained on 80% of the dataset and validated on the remaining 20% and on a novel dataset, the Typability Index explained 68–88% of the variance in typability, compared to the 34% explained by an earlier leading model (Bell, Unpublished Doctor's Dissertation, University of Oklahoma, 1949). To enable greater control in typing research and assessments, we introduce a web-based tool that facilitates accurate measurement and fair comparison of text typability.
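
The abstract summarizes the modeling pipeline without code, but a minimal sketch of the approach it describes (random forest regression over candidate text features, an 80/20 train/validation split, and ranking predictors to select the top eight) might look as follows. All feature values and data here are synthetic placeholders, and permutation importance is only one plausible way to rank predictors; this is an illustrative sketch, not the authors' implementation.

```python
# Illustrative sketch of the pipeline described in the abstract:
# 30 candidate predictors, an 80/20 split, and predictor ranking.
# The data below are synthetic placeholders, not the study's data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n_sentences, n_features = 1525, 30  # pool size and candidate count from the abstract
X = rng.normal(size=(n_sentences, n_features))  # e.g., lowercase ratio, word frequency, ...
# Toy "typability" target driven by eight of the thirty features
y = X[:, :8] @ rng.normal(size=8) + rng.normal(scale=0.5, size=n_sentences)

# 80% training / 20% validation split, as in the abstract
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X_train, y_train)
print(f"validation R^2: {model.score(X_val, y_val):.2f}")  # variance explained

# Rank candidate predictors and keep the top eight, mirroring the
# feature-selection step; the paper may have used a different criterion.
imp = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=0)
top8 = np.argsort(imp.importances_mean)[::-1][:8]
print("top predictors (feature indices):", top8)
```

In a production analysis the predictor ranking would typically be done on training folds only, with the held-out 20% reserved purely for reporting variance explained; the sketch conflates the two for brevity.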