Valentin Hofmann
Allen Institute for AI
Verified email at allenai.org
Title
Cited by
Year
Dynamic Contextualized Word Embeddings
V Hofmann, JB Pierrehumbert, H Schütze
ACL, 2021
65 · 2021
Dolma: An Open Corpus of Three Trillion Tokens for Language Model Pretraining Research
L Soldaini, R Kinney, A Bhagia, D Schwenk, D Atkinson, R Authur, ...
ACL, 2024
63* · 2024
Superbizarre Is Not Superb: Derivational Morphology Improves BERT’s Interpretation of Complex Words
V Hofmann, JB Pierrehumbert, H Schütze
ACL, 2021
62* · 2021
DagoBERT: Generating Derivational Morphology with a Pretrained Language Model
V Hofmann, JB Pierrehumbert, H Schütze
EMNLP, 2020
31 · 2020
An Embarrassingly Simple Method to Mitigate Undesirable Properties of Pretrained Language Model Tokenizers
V Hofmann, H Schütze, J Pierrehumbert
ACL, 2022
29 · 2022
The Better Your Syntax, the Better Your Semantics? Probing Pretrained Language Models for the English Comparative Correlative
L Weissweiler, V Hofmann, A Köksal, H Schütze
EMNLP, 2022
24 · 2022
Political Compass or Spinning Arrow? Towards More Meaningful Evaluations for Values and Opinions in Large Language Models
P Röttger*, V Hofmann*, V Pyatkin, M Hinck, HR Kirk, H Schütze, D Hovy
ACL, 2024
23 · 2024
AI generates covertly racist decisions about people based on their dialect
V Hofmann, PR Kalluri, D Jurafsky, S King
Nature, 2024
22* · 2024
Modeling Ideological Salience and Framing in Polarized Online Groups with Graph Neural Networks and Structured Sparsity
V Hofmann, X Dong, J Pierrehumbert, H Schütze
NAACL Findings, 2022
19* · 2022
The Reddit Politosphere: A Large-Scale Text and Network Resource of Online Political Discourse
V Hofmann, H Schütze, JB Pierrehumbert
ICWSM, 2022
14 · 2022
Geographic Adaptation of Pretrained Language Models
V Hofmann, G Glavaš, N Ljubešić, JB Pierrehumbert, H Schütze
TACL, 2024
11 · 2024
Paloma: A Benchmark for Evaluating Language Model Fit
I Magnusson, A Bhagia, V Hofmann, L Soldaini, AH Jha, O Tafjord, ...
arXiv:2312.10523, 2023
11* · 2023
Counting the Bugs in ChatGPT's Wugs: A Multilingual Investigation into the Morphological Capabilities of a Large Language Model
L Weissweiler*, V Hofmann*, A Kantharuban, A Cai, R Dutt, A Hengle, ...
EMNLP, 2023
11 · 2023
A Graph Auto-encoder Model of Derivational Morphology
V Hofmann, H Schütze, JB Pierrehumbert
ACL, 2020
11 · 2020
Predicting the Growth of Morphological Families from Social and Linguistic Factors
V Hofmann, JB Pierrehumbert, H Schütze
ACL, 2020
11 · 2020
Graph-enhanced Large Language Models in Asynchronous Plan Reasoning
F Lin, E La Malfa, V Hofmann, EM Yang, A Cohn, JB Pierrehumbert
ICML, 2024
4 · 2024
Explaining Pretrained Language Models' Understanding of Linguistic Structures Using Construction Grammar
L Weissweiler, V Hofmann, A Köksal, H Schütze
Frontiers in Artificial Intelligence, 2023
2 · 2023
CaMEL: Case Marker Extraction without Labels
L Weissweiler, V Hofmann, MJ Sabet, H Schütze
ACL, 2022
2 · 2022
Computational investigations of derivational morphology
V Hofmann
University of Oxford, 2023
2023
Unsupervised Detection of Contextualized Embedding Bias with Application to Ideology
V Hofmann, J Pierrehumbert, H Schütze
ICML, 2022
2022