BAAI/bge-reranker-large

Model Information

The BAAI/bge-reranker-large is a cross-encoder reranking model developed by the Beijing Academy of Artificial Intelligence (BAAI). It is designed to re-rank the top-k documents returned by a first-stage retriever, improving the relevance of the final results. The model is particularly effective in applications such as search engines, question answering, and information retrieval systems.

  • Model Developer: Beijing Academy of Artificial Intelligence (BAAI)
  • Model Release Date: March 18, 2024
  • Supported Languages: English, Chinese
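
A typical workflow scores each (query, candidate) pair returned by a first-stage retriever and sorts the candidates by the resulting score. The sketch below is one way to do this with the FlagReranker helper from the FlagEmbedding library referenced in the benchmark notes; the query and passages are placeholder examples, not part of the model card.

    from FlagEmbedding import FlagReranker

    # Load the reranker; use_fp16 trades a little precision for faster inference.
    reranker = FlagReranker("BAAI/bge-reranker-large", use_fp16=True)

    # Placeholder query and first-stage retrieval candidates.
    query = "what is a cross-encoder reranker?"
    candidates = [
        "A cross-encoder jointly encodes the query and document to produce a relevance score.",
        "Paris is the capital of France.",
        "Rerankers reorder the top-k results returned by a first-stage retriever.",
    ]

    # Score every (query, candidate) pair, then sort candidates by descending score.
    scores = reranker.compute_score([[query, doc] for doc in candidates])
    for doc, score in sorted(zip(candidates, scores), key=lambda x: x[1], reverse=True):
        print(f"{score:8.3f}  {doc}")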

Model Architecture

  • Base Model: XLM-RoBERTa-large
  • Architecture Type: Transformer-based cross-encoder
  • Input Format: Concatenated query and document pairs
  • Output: Relevance score indicating how well the document matches the query (see the scoring sketch below)
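
Because the query and document are encoded together, relevance scoring is a forward pass over each concatenated pair. A minimal sketch using Hugging Face Transformers, assuming the standard sequence-classification loading path for this model; the query and passages are placeholders:

    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-reranker-large")
    model = AutoModelForSequenceClassification.from_pretrained("BAAI/bge-reranker-large")
    model.eval()

    # Each (query, document) pair is tokenized as one concatenated input sequence.
    pairs = [
        ["what is a reranker?", "A reranker rescores the candidates returned by a retriever."],
        ["what is a reranker?", "Paris is the capital of France."],
    ]
    with torch.no_grad():
        inputs = tokenizer(pairs, padding=True, truncation=True, max_length=512, return_tensors="pt")
        scores = model(**inputs).logits.view(-1).float()  # one relevance logit per pair
    print(scores)  # higher score = more relevant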

Benchmark Scores

BAAI/bge-reranker-large delivers strong reranking performance across common retrieval benchmarks.

Dataset        Metric    Score   Note
MS MARCO       MRR@10    40.2    Dev set
TREC DL '19    NDCG@10   71.6    Document reranking
BEIR (avg)     NDCG@10   59.3    Avg. across 18 datasets
LoTTE (EN)     MRR@10    52.1    Open-domain QA reranking

Evaluated using the FlagEmbedding pipeline with Hugging Face Transformers.
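
For orientation, MRR@10 averages the reciprocal rank of the first relevant document found in the top 10 reranked results across all queries. The snippet below is a small illustrative implementation of that metric, not the benchmark's official scorer; the doc ids in the usage line are made up.

    def mrr_at_10(rankings, relevant):
        """Mean Reciprocal Rank with a cutoff of 10.

        rankings: one ranked list of doc ids per query (best first).
        relevant: one set of relevant doc ids per query.
        """
        total = 0.0
        for ranking, rel in zip(rankings, relevant):
            for rank, doc_id in enumerate(ranking[:10], start=1):
                if doc_id in rel:
                    total += 1.0 / rank
                    break  # only the first relevant hit counts
        return total / len(rankings)

    # Toy example: the relevant document sits at rank 2, so MRR@10 = 0.5.
    print(mrr_at_10([["d7", "d3", "d9"]], [{"d3"}]))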

