Jos Rozen
Senior Scientist, Naver Labs Europe
Verified email address at naverlabs.com
Title
Cited by
Year
Multitask prompted training enables zero-shot task generalization
V Sanh, A Webson, C Raffel, SH Bach, L Sutawika, Z Alyafeai, A Chaffin, ...
arXiv preprint arXiv:2110.08207, 2021
1569 · 2021
Bloom: A 176b-parameter open-access multilingual language model
T Le Scao, A Fan, C Akiki, E Pavlick, S Ilić, D Hesslow, R Castagné, ...
1506 · 2023
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
arXiv preprint arXiv:2206.04615, 2022
1028 · 2022
Aligning language models with preferences through f-divergence minimization
D Go, T Korbak, G Kruszewski, J Rozen, N Ryu, M Dymetman
arXiv preprint arXiv:2302.08215, 2023
53 · 2023
Self-supervised and controlled multi-document opinion summarization
H Elsahar, M Coavoux, M Gallé, J Rozen
arXiv preprint arXiv:2004.14754, 2020
51 · 2020
Unsupervised and distributional detection of machine-generated text
M Gallé, J Rozen, G Kruszewski, H Elsahar
arXiv preprint arXiv:2111.02878, 2021
26 · 2021
Multitask prompted training enables zero-shot task generalization
V Sanh, A Webson, C Raffel, SH Bach, L Sutawika, Z Alyafeai, A Chaffin, ...
arXiv preprint arXiv:2110.08207, 2021
16 · 2021
Compositional preference models for aligning LMs
D Go, T Korbak, G Kruszewski, J Rozen, M Dymetman
arXiv preprint arXiv:2310.13011, 2023
7 · 2023
Should you marginalize over possible tokenizations?
N Chirkova, G Kruszewski, J Rozen, M Dymetman
arXiv preprint arXiv:2306.17757, 2023
7 · 2023
Aligning Foundation Models for Language with Preferences through f-divergence Minimization
D Go, T Korbak, G Kruszewski, J Rozen, N Ryu, M Dymetman
ICLR 2023 Workshop on Mathematical and Empirical Understanding of Foundation …, 2023
2 · 2023
Insights of human-transportation system interactions inferred from public transit operational data
F Roulland, J Rozen, JC Handley
Transportation Research Procedia 32, 24-33, 2018
2 · 2018
Method for applying in-context learning for self-healing of language models
J Rozen, H Elsahar
US Patent App. 17/970,792, 2023
1 · 2023
Guaranteed Generation from Large Language Models
M Kim, T Thonet, J Rozen, H Lee, K Jung, M Dymetman
arXiv preprint arXiv:2410.06716, 2024
2024
ELITR-Bench: A Meeting Assistant Benchmark for Long-Context Language Models
T Thonet, J Rozen, L Besacier
arXiv preprint arXiv:2403.20262, 2024
2024
disco: a toolkit for Distributional Control of Generative Models
G Kruszewski, J Rozen, M Dymetman
arXiv preprint arXiv:2303.05431, 2023
2023
List of abbreviations and acronyms
M Gallé, JMR Eurecat, H Elsahar, J Rozen, WP Leader
2019
Compositional preference models for alignment with scalable oversight
D Go, T Korbak, G Kruszewski, J Rozen, M Dymetman
Socially Responsible Language Modelling Research
Articles 1–17