Xisen Jin
Title
Cited by
Year
Sequicity: Simplifying task-oriented dialogue systems with single sequence-to-sequence architectures
W Lei, X Jin, MY Kan, Z Ren, X He, D Yin
ACL 2018, 1437-1447, 2018
376 · 2018
Recurrent event network: Autoregressive structure inference over temporal knowledge graphs
W Jin, M Qu, X Jin, X Ren
arXiv preprint arXiv:1904.05530, 2019
301 · 2019
Contextualizing hate speech classifiers with post-hoc explanation
B Kennedy, X Jin, AM Davani, M Dehghani, X Ren
ACL 2020, 2020
154 · 2020
Gradient-based editing of memory examples for online task-free continual learning
X Jin, A Sadhu, J Du, X Ren
NeurIPS 2021, 2021
124* · 2021
Towards Hierarchical Importance Attribution: Explaining Compositional Semantics for Neural Sequence Models
X Jin, Z Wei, J Du, X Xue, X Ren
ICLR 2020, 2019
120 · 2019
Lifelong pretraining: Continually adapting language models to emerging corpora
X Jin, D Zhang, H Zhu, W Xiao, SW Li, X Wei, A Arnold, X Ren
NAACL 2022, 2021
93 · 2021
Dataless knowledge fusion by merging weights of language models
X Jin, X Ren, D Preotiuc-Pietro, P Cheng
ICLR 2023, 2022
78 · 2022
On transferability of bias mitigation effects in language model fine-tuning
X Jin, F Barbieri, B Kennedy, AM Davani, L Neves, X Ren
NAACL 2021, 2020
68* · 2020
Explicit State Tracking with Semi-Supervision for Neural Dialogue Generation
X Jin, W Lei, Z Ren, H Chen, S Liang, Y Zhao, D Yin
CIKM 2018, 1403-1412, 2018
57* · 2018
Learn continually, generalize rapidly: Lifelong knowledge accumulation for few-shot learning
X Jin, BY Lin, M Rostami, X Ren
EMNLP 2021 Findings, 2021
40 · 2021
Refining language models with compositional explanations
H Yao, Y Chen, Q Ye, X Jin, X Ren
NeurIPS 2021, 8954-8967, 2021
33 · 2021
Visually grounded continual learning of compositional phrases
X Jin, J Du, A Sadhu, R Nevatia, X Ren
EMNLP 2020, 2020
17* · 2020
Overcoming catastrophic forgetting in massively multilingual continual learning
GI Winata, L Xie, K Radhakrishnan, S Wu, X Jin, P Cheng, M Kulkarni, ...
ACL 2023 Findings, 2023
10 · 2023
Refining neural networks with compositional explanations
H Yao, Y Chen, Q Ye, X Jin, X Ren
NeurIPS 2021, 2021
10 · 2021
What Will My Model Forget? Forecasting Forgotten Examples in Language Model Refinement
X Jin, X Ren
ICML 2024 Spotlight, 2024
2 · 2024
Demystifying Forgetting in Language Model Fine-Tuning with Statistical Analysis of Example Associations
X Jin, X Ren
arXiv preprint arXiv:2406.14026, 2024
· 2024
Articles 1–16