Xinyu Ma
Ph.D., Institute of Computing Technology, CAS & Baidu
Verified email at baidu.com - Homepage
Title · Cited by · Year
Is ChatGPT good at search? investigating large language models as re-ranking agents
W Sun, L Yan, X Ma, S Wang, P Ren, Z Chen, D Yin, Z Ren
arXiv preprint arXiv:2304.09542, 2023
Cited by 187 · 2023
Prop: Pre-training with representative words prediction for ad-hoc retrieval
X Ma, J Guo, R Zhang, Y Fan, X Ji, X Cheng
Proceedings of the 14th ACM international conference on web search and data …, 2021
Cited by 99 · 2021
Pre-training methods in information retrieval
Y Fan, X Xie, Y Cai, J Chen, X Ma, X Li, R Zhang, J Guo
Foundations and Trends® in Information Retrieval 16 (3), 178-317, 2022
Cited by 67 · 2022
B-PROP: bootstrapped pre-training with representative words prediction for ad-hoc retrieval
X Ma, J Guo, R Zhang, Y Fan, Y Li, X Cheng
Proceedings of the 44th International ACM SIGIR Conference on Research and …, 2021
Cited by 58 · 2021
Pre-train a discriminative text encoder for dense retrieval via contrastive span prediction
X Ma, J Guo, R Zhang, Y Fan, X Cheng
Proceedings of the 45th International ACM SIGIR Conference on Research and …, 2022
Cited by 42 · 2022
Is ChatGPT good at search? Investigating Large Language Models as Re-Ranking Agent
W Sun, L Yan, X Ma, P Ren, D Yin, Z Ren
ArXiv abs/2304.09542, 2023
Cited by 22 · 2023
A linguistic study on relevance modeling in information retrieval
Y Fan, J Guo, X Ma, R Zhang, Y Lan, X Cheng
Proceedings of the Web Conference 2021, 1053-1064, 2021
Cited by 14 · 2021
Scattered or connected? an optimized parameter-efficient tuning approach for information retrieval
X Ma, J Guo, R Zhang, Y Fan, X Cheng
Proceedings of the 31st ACM International Conference on Information …, 2022
Cited by 9 · 2022
Instruction distillation makes large language models efficient zero-shot rankers
W Sun, Z Chen, X Ma, L Yan, S Wang, P Ren, Z Chen, D Yin, Z Ren
arXiv preprint arXiv:2311.01555, 2023
Cited by 8 · 2023
A contrastive pre-training approach to discriminative autoencoder for dense retrieval
X Ma, R Zhang, J Guo, Y Fan, X Cheng
Proceedings of the 31st ACM International Conference on Information …, 2022
Cited by 8 · 2022
Pre-training Methods in Information Retrieval
Y Fan, X Xie, Y Cai, J Chen, X Ma, X Li, R Zhang, J Guo, Y Liu
arXiv preprint arXiv:2111.13853, 2021
Cited by 4 · 2021
Pre-training with aspect-content text mutual prediction for multi-aspect dense retrieval
X Sun, K Bi, J Guo, X Ma, Y Fan, H Shan, Q Zhang, Z Liu
Proceedings of the 32nd ACM International Conference on Information and …, 2023
Cited by 3 · 2023
The butterfly effect of model editing: Few edits can trigger large language models collapse
W Yang, F Sun, X Ma, X Liu, D Yin, X Cheng
arXiv preprint arXiv:2402.09656, 2024
Cited by 2 · 2024
The Fall of ROME: Understanding the Collapse of LLMs in Model Editing
W Yang, F Sun, J Tan, X Ma, D Su, D Yin, H Shen
arXiv preprint arXiv:2406.11263, 2024
2024