Press release
February 28, 2024

Revolutionizing Bilingual Text Embeddings with Multi-Task Contrastive Learning

Our new paper explores how our Spanish-English and German-English models use multi-task contrastive learning and a sophisticated data pipeline to master target-language understanding and cross-lingual tasks for texts of up to 8192 tokens.
[Figure: composite image of four colorful, stylized landmarks: Brandenburg Gate, St. Peter's Basilica, Tiananmen, and the Golden Gate Bridge]
Jina AI

In our recent paper, Multi-Task Contrastive Learning for 8192-Token Bilingual Text Embeddings, we detailed our development of German-English and Spanish-English bilingual text embedding models.

Multi-Task Contrastive Learning for 8192-Token Bilingual Text Embeddings
We introduce a novel suite of state-of-the-art bilingual text embedding models that are designed to support English and another target language. These models are capable of processing lengthy text inputs with up to 8192 tokens, making them highly versatile for a range of natural language processing tasks such as text retrieval, clustering, and semantic textual similarity (STS) calculations. By focusing on bilingual models and introducing a unique multi-task learning objective, we have significantly improved the model performance on STS tasks, which outperforms the capabilities of existing multilingual models in both target language understanding and cross-lingual evaluation tasks. Moreover, our bilingual models are more efficient, requiring fewer parameters and less memory due to their smaller vocabulary needs. Furthermore, we have expanded the Massive Text Embedding Benchmark (MTEB) to include benchmarks for German and Spanish embedding models. This integration aims to stimulate further research and advancement in text embedding technologies for these languages.
arXiv.org · Isabelle Mohr

Our approach combines multi-task contrastive learning with an advanced data-curation pipeline, focusing on bilingual capabilities while extending input support to 8192 tokens. This allows our models to excel both at understanding the target language and at cross-lingual evaluation tasks.
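As a rough illustration of this training signal, here is a minimal NumPy sketch of an InfoNCE-style in-batch contrastive term for retrieval pairs alongside a regression-style STS term. The temperature value, the cosine-MSE form of the STS loss, and how the two terms are combined across batches are assumptions for illustration, not the paper's exact recipe.

```python
import numpy as np

def log_softmax(x):
    # Numerically stable row-wise log-softmax.
    x = x - x.max(axis=1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=1, keepdims=True))

def info_nce_loss(queries, docs, temperature=0.05):
    """In-batch contrastive loss: the i-th doc is the positive for the
    i-th query; every other doc in the batch acts as a negative."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    d = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    logits = (q @ d.T) / temperature          # (batch, batch) scaled cosines
    idx = np.arange(len(q))
    return -log_softmax(logits)[idx, idx].mean()

def sts_loss(emb_a, emb_b, gold_sim):
    """STS term: push cosine similarity of a sentence pair toward its
    gold similarity score (assumed rescaled to [-1, 1])."""
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    return (((a * b).sum(axis=1) - gold_sim) ** 2).mean()

rng = np.random.default_rng(0)
q = rng.normal(size=(8, 32))
print(info_nce_loss(q, q))                          # aligned pairs: near zero
print(info_nce_loss(q, rng.normal(size=(8, 32))))   # random pairs: much higher
```

The contrastive term rewards embeddings where each query is closer to its own document than to any other document in the batch, while the STS term directly calibrates cosine similarity, which is why the combination helps on STS benchmarks.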

Aquí Se Habla Español: Top-Quality Spanish-English Embeddings and 8k Context
Jina AI’s new bilingual Spanish-English embedding model brings the state-of-the-art in AI to half a billion Spanish speakers.
Ich bin ein Berliner: German-English Bilingual Embeddings with 8K Token Length
Jina AI introduces a German/English bilingual embedding model, featuring an extensive 8,192-token length, specifically designed to support German businesses thriving in the U.S. market.

In addition to the bilingual models covered in the paper, we have also developed bilingual Chinese-English and monolingual English models. These additions showcase our commitment to covering a broad spectrum of linguistic needs and furthering our capabilities in language processing.

8K Token-Length Bilingual Embeddings Break Language Barriers in Chinese and English
The first bilingual Chinese-English embedding model with 8192 token-length.
Jina AI Launches World’s First Open-Source 8K Text Embedding, Rivaling OpenAI
Jina AI introduces jina-embeddings-v2, the world’s first open-source model boasting an 8K context length. Matching the prowess of OpenAI’s proprietary models, this innovation is now publicly accessible on Huggingface, signaling a significant milestone in the landscape of text embeddings.
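The 8K context window in the jina-embeddings-v2 family comes from replacing learned positional embeddings with ALiBi-style linear attention biases, which extrapolate to sequences longer than those seen during training. The sketch below builds the symmetric (encoder-style) bias matrix following the original ALiBi slope schedule; it is illustrative and not the models' exact configuration.

```python
import numpy as np

def alibi_slopes(n_heads):
    # Geometric slope schedule from the ALiBi recipe
    # (exact for power-of-two head counts).
    start = 2.0 ** (-8.0 / n_heads)
    return np.array([start ** (i + 1) for i in range(n_heads)])

def alibi_bias(seq_len, n_heads):
    # Bias added to attention logits: -slope * |i - j| per head.
    # Symmetric distance suits a bidirectional encoder; no learned
    # position embeddings are needed, so sequence length can grow.
    pos = np.arange(seq_len)
    dist = np.abs(pos[:, None] - pos[None, :])        # (seq, seq)
    return -alibi_slopes(n_heads)[:, None, None] * dist

print(alibi_bias(4, 8)[0])  # head 0: zero on the diagonal, linear decay off it
```

Because the bias is a fixed linear function of token distance, the same matrix formula works at 512 or 8192 tokens, which is what lets these models accept much longer inputs than classic BERT.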

Our bilingual models are also notably efficient: their smaller, two-language vocabularies mean fewer parameters and a lower memory footprint. This efficiency underscores our dedication to creating powerful yet resource-efficient tools for language processing.
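The parameter savings from a smaller vocabulary are easy to quantify, since the token-embedding table is simply vocabulary size times hidden dimension. The figures below are illustrative assumptions (a multilingual tokenizer in the 250k-token range, as in XLM-R, versus a bilingual one around 60k, at a BERT-base hidden size of 768), not the models' exact configurations.

```python
HIDDEN = 768                   # BERT-base hidden size (assumption)
MULTILINGUAL_VOCAB = 250_000   # XLM-R-scale tokenizer (illustrative)
BILINGUAL_VOCAB = 60_000       # two-language tokenizer (illustrative)

def embedding_params(vocab_size, hidden=HIDDEN):
    # One vocab_size x hidden weight matrix (counted once when input
    # and output embeddings are tied).
    return vocab_size * hidden

saved = embedding_params(MULTILINGUAL_VOCAB) - embedding_params(BILINGUAL_VOCAB)
print(f"multilingual table: {embedding_params(MULTILINGUAL_VOCAB) / 1e6:.1f}M params")
print(f"bilingual table:    {embedding_params(BILINGUAL_VOCAB) / 1e6:.1f}M params")
print(f"saved:              {saved / 1e6:.1f}M params "
      f"(~{4 * saved / 1e6:.0f} MB at fp32)")
```

Under these assumptions the embedding table alone shrinks by roughly 146M parameters, which is where most of the memory saving comes from.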

Following the release of our paper, we expanded the Massive Text Embedding Benchmark (MTEB) to include benchmarks for our English-German and English-Spanish embedding models. This expansion is part of our effort to stimulate further research and advancements in text embedding technologies for non-English languages.

At Jina AI, our aim is to enhance the processing and understanding of multiple languages, contributing to the NLP field with our developments in bilingual and monolingual text embedding models.
