Embeddings

An embedding model converts text into a vector representation. The quality of the embedding model directly determines the quality of the search results. You can configure multiple embedding models in your Django settings and use them for different fields in your documents.
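Semantic search compares these vectors rather than the raw strings: documents whose vectors point in a similar direction to the query vector are considered related. A minimal sketch of that comparison, using made-up 3-dimensional vectors (real models produce hundreds of dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real embeddings.
query_vec = [0.1, 0.9, 0.2]
doc_close = [0.15, 0.85, 0.25]  # semantically similar document
doc_far = [0.9, 0.1, 0.0]       # unrelated document

# The similar document scores higher against the query.
assert cosine_similarity(query_vec, doc_close) > cosine_similarity(query_vec, doc_far)
```

Vector databases perform this kind of similarity ranking at scale; the embedding model's only job is producing the vectors.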

Configuration

Default Embedding Model

Configure the default embedding model that will be used when no specific model is specified:

settings.py
SEMANTIC_SEARCH = {
    "default_embeddings": {
        "model": "django_semantic_search.embeddings.SentenceTransformerModel",
        "configuration": {
            "model_name": "sentence-transformers/all-MiniLM-L6-v2",
        },
    },
}

Named Embedding Models

You can define multiple named embedding models to use for different fields:

settings.py
SEMANTIC_SEARCH = {
    "embedding_models": {
        "title_model": {
            "model": "django_semantic_search.embeddings.SentenceTransformerModel",
            "configuration": {
                "model_name": "sentence-transformers/all-mpnet-base-v2",
                "document_prompt": "Title: ",
            },
        },
        "content_model": {
            "model": "django_semantic_search.embeddings.OpenAIEmbeddingModel",
            "configuration": {
                "model": "text-embedding-3-small",
            },
        },
    },
    ...
}

Then reference these models in your document definitions:

documents.py
@register_document
class BookDocument(Document):
    class Meta:
        model = Book
        indexes = [
            VectorIndex("title", embedding_model="title_model"),
            VectorIndex("content", embedding_model="content_model"),
            VectorIndex("summary"),  # Will use default_embeddings
        ]

Note: Fields without a specified embedding_model will use the model defined in default_embeddings.
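The fallback can be pictured as a simple lookup: try the named model first, otherwise use the default. A rough sketch of that resolution logic (`resolve_embedding_config` is a hypothetical helper for illustration, not part of the library's API):

```python
from typing import Optional

# Hypothetical settings dict mirroring the SEMANTIC_SEARCH structure above.
SEMANTIC_SEARCH = {
    "default_embeddings": {
        "model": "django_semantic_search.embeddings.SentenceTransformerModel",
    },
    "embedding_models": {
        "title_model": {"model": "django_semantic_search.embeddings.SentenceTransformerModel"},
        "content_model": {"model": "django_semantic_search.embeddings.OpenAIEmbeddingModel"},
    },
}

def resolve_embedding_config(name: Optional[str]) -> dict:
    """Return the named model's config, or the default when no name is given."""
    if name is None:
        # VectorIndex("summary") passes no name, so the default is used.
        return SEMANTIC_SEARCH["default_embeddings"]
    try:
        return SEMANTIC_SEARCH["embedding_models"][name]
    except KeyError:
        raise ValueError(f"Unknown embedding model: {name!r}")
```

The actual library may resolve models differently; the sketch only captures the documented fallback behavior.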

Supported Models

Currently, django-semantic-search supports the following embedding models:

Sentence Transformers

The Sentence Transformers library converts text into vector representations. There are over 5,000 pre-trained models available, so you can choose the one that best fits your needs.

One of the available models is all-MiniLM-L6-v2, a lightweight model that offers a good balance between search quality and resource consumption.

django_semantic_search.embeddings.SentenceTransformerModel

Bases: DenseTextEmbeddingModel

Sentence-transformers model for embedding text.

It is a wrapper around the sentence-transformers library. Users rarely need to use this class directly; instead, they reference it in the Django settings.

Requirements:

pip install django-semantic-search[sentence-transformers]

Usage:

settings.py
SEMANTIC_SEARCH = {
    "default_embeddings": {
        "model": "django_semantic_search.embeddings.SentenceTransformerModel",
        "configuration": {
            "model_name": "sentence-transformers/all-MiniLM-L6-v2",
        },
    },
    ...
}

Some models accept prompts to be used for the document and query. These prompts are used as additional instructions for the model to generate embeddings. For example, if the document_prompt is set to "Doc: ", the model will generate embeddings with the prompt "Doc: " followed by the document text. Similarly, the query_prompt is used for the query, if set.

settings.py
SEMANTIC_SEARCH = {
    "default_embeddings": {
        "model": "django_semantic_search.embeddings.SentenceTransformerModel",
        "configuration": {
            "model_name": "sentence-transformers/all-MiniLM-L6-v2",
            "document_prompt": "Doc: ",
            "query_prompt": "Query: ",
        },
    },
    ...
}
Source code in src/django_semantic_search/embeddings/sentence_transformers.py
class SentenceTransformerModel(DenseTextEmbeddingModel):
    """
    Sentence-transformers model for embedding text.

    It is a wrapper around the sentence-transformers library. Users would rarely need to use this class directly, but
    rather specify it in the Django settings.

    **Requirements:**

    ```shell
    pip install django-semantic-search[sentence-transformers]
    ```

    **Usage:**

    ```python title="settings.py"
    SEMANTIC_SEARCH = {
        "default_embeddings": {
            "model": "django_semantic_search.embeddings.SentenceTransformerModel",
            "configuration": {
                "model_name": "sentence-transformers/all-MiniLM-L6-v2",
            },
        },
        ...
    }
    ```

    Some models accept prompts to be used for the document and query. These prompts are used as additional
    instructions for the model to generate embeddings. For example, if the `document_prompt` is set to `"Doc: "`, the
    model will generate embeddings with the prompt `"Doc: "` followed by the document text. Similarly, the
    `query_prompt` is used for the query, if set.

    ```python title="settings.py"
    SEMANTIC_SEARCH = {
        "default_embeddings": {
            "model": "django_semantic_search.embeddings.SentenceTransformerModel",
            "configuration": {
                "model_name": "sentence-transformers/all-MiniLM-L6-v2",
                "document_prompt": "Doc: ",
                "query_prompt": "Query: ",
            },
        },
        ...
    }
    ```
    """

    def __init__(
        self,
        model_name: str,
        document_prompt: Optional[str] = None,
        query_prompt: Optional[str] = None,
    ):
        """
        Initialize the sentence-transformers model.

        Some models accept prompts to be used for the document and query. These prompts are used as additional
        instructions for the model to generate embeddings. For example, if the `document_prompt` is set to "Doc: ", the
        model will generate embeddings with the prompt "Doc: " followed by the document text.

        :param model_name: name of the model to use.
        :param document_prompt: prompt to use for the document, defaults to None.
        :param query_prompt: prompt to use for the query, defaults to None.
        """
        from sentence_transformers import SentenceTransformer

        self._model = SentenceTransformer(model_name)
        self._document_prompt = document_prompt
        self._query_prompt = query_prompt

    def vector_size(self) -> int:
        """
        Return the size of the individual embedding.
        :return: size of the embedding.
        """
        return self._model.get_sentence_embedding_dimension()

    def embed_document(self, document: str) -> DenseVector:
        """
        Embed a document into a vector.
        :param document: document to embed.
        :return: document embedding.
        """
        return self._model.encode(document, prompt=self._document_prompt).tolist()

    def embed_query(self, query: str) -> DenseVector:
        """
        Embed a query into a vector.
        :param query: query to embed.
        :return: query embedding.
        """
        return self._model.encode(query, prompt=self._query_prompt).tolist()

__init__(model_name, document_prompt=None, query_prompt=None)

Initialize the sentence-transformers model.

Some models accept prompts to be used for the document and query. These prompts are used as additional instructions for the model to generate embeddings. For example, if the document_prompt is set to "Doc: ", the model will generate embeddings with the prompt "Doc: " followed by the document text.

Parameters:

    model_name (str, required): name of the model to use.
    document_prompt (Optional[str], default None): prompt to use for the document.
    query_prompt (Optional[str], default None): prompt to use for the query.
Source code in src/django_semantic_search/embeddings/sentence_transformers.py
def __init__(
    self,
    model_name: str,
    document_prompt: Optional[str] = None,
    query_prompt: Optional[str] = None,
):
    """
    Initialize the sentence-transformers model.

    Some models accept prompts to be used for the document and query. These prompts are used as additional
    instructions for the model to generate embeddings. For example, if the `document_prompt` is set to "Doc: ", the
    model will generate embeddings with the prompt "Doc: " followed by the document text.

    :param model_name: name of the model to use.
    :param document_prompt: prompt to use for the document, defaults to None.
    :param query_prompt: prompt to use for the query, defaults to None.
    """
    from sentence_transformers import SentenceTransformer

    self._model = SentenceTransformer(model_name)
    self._document_prompt = document_prompt
    self._query_prompt = query_prompt

embed_document(document)

Embed a document into a vector.

Parameters:

    document (str, required): document to embed.

Returns:

    DenseVector: document embedding.

Source code in src/django_semantic_search/embeddings/sentence_transformers.py
def embed_document(self, document: str) -> DenseVector:
    """
    Embed a document into a vector.
    :param document: document to embed.
    :return: document embedding.
    """
    return self._model.encode(document, prompt=self._document_prompt).tolist()

embed_query(query)

Embed a query into a vector.

Parameters:

    query (str, required): query to embed.

Returns:

    DenseVector: query embedding.

Source code in src/django_semantic_search/embeddings/sentence_transformers.py
def embed_query(self, query: str) -> DenseVector:
    """
    Embed a query into a vector.
    :param query: query to embed.
    :return: query embedding.
    """
    return self._model.encode(query, prompt=self._query_prompt).tolist()

vector_size()

Return the size of the individual embedding.

Returns:

    int: size of the embedding.

Source code in src/django_semantic_search/embeddings/sentence_transformers.py
def vector_size(self) -> int:
    """
    Return the size of the individual embedding.
    :return: size of the embedding.
    """
    return self._model.get_sentence_embedding_dimension()

OpenAI

OpenAI provides powerful embedding models through their API. The default model is text-embedding-3-small, which offers a good balance between quality and cost.

To use OpenAI embeddings, first install the required dependencies:

pip install django-semantic-search[openai]

Then configure it in your Django settings:

settings.py
SEMANTIC_SEARCH = {
    "default_embeddings": {
        "model": "django_semantic_search.embeddings.OpenAIEmbeddingModel",
        "configuration": {
            "model": "text-embedding-3-small",
            "api_key": "your-api-key",  # Optional if set in env
        },
    },
    ...
}

The API key can also be provided through the OPENAI_API_KEY environment variable.
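For example, exporting the variable before starting Django keeps the key out of settings.py entirely (the key value below is a placeholder):

```shell
# Set the key for the current shell session; Django processes started from
# this shell inherit it, so no api_key entry is needed in the configuration.
export OPENAI_API_KEY="sk-example-placeholder"
```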

django_semantic_search.embeddings.OpenAIEmbeddingModel

Bases: DenseTextEmbeddingModel

OpenAI text embedding model that uses the OpenAI API to generate dense embeddings.

Requirements:

pip install django-semantic-search[openai]

Usage:

settings.py
SEMANTIC_SEARCH = {
    "default_embeddings": {
        "model": "django_semantic_search.embeddings.OpenAIEmbeddingModel",
        "configuration": {
            "model": "text-embedding-3-small",
            "api_key": "your-api-key",  # Optional if set in env
        },
    },
    ...
}
Source code in src/django_semantic_search/embeddings/openai.py
class OpenAIEmbeddingModel(DenseTextEmbeddingModel):
    """
    OpenAI text embedding model that uses the OpenAI API to generate dense embeddings.

    **Requirements**:

    ```bash
    pip install django-semantic-search[openai]
    ```

    **Usage**:

    ```python title="settings.py"
    SEMANTIC_SEARCH = {
        "default_embeddings": {
            "model": "django_semantic_search.embeddings.OpenAIEmbeddingModel",
            "configuration": {
                "model": "text-embedding-3-small",
                "api_key": "your-api-key",  # Optional if set in env
            },
        },
        ...
    }
    ```
    """

    def __init__(
        self,
        model: str = "text-embedding-3-small",
        api_key: Optional[str] = None,
        **kwargs,
    ):
        """
        Initialize the OpenAI embedding model.

        :param model: OpenAI model to use for embeddings
        :param api_key: OpenAI API key. If not provided, will look for OPENAI_API_KEY env variable
        :param kwargs: Additional kwargs passed to OpenAI client
        """
        self._model = model
        api_key = api_key or os.getenv("OPENAI_API_KEY")
        if not api_key:
            raise ValueError(
                "OpenAI API key must be provided either through api_key parameter or OPENAI_API_KEY environment variable"
            )
        self._client = OpenAI(api_key=api_key, **kwargs)
        # Cache the vector size after first call
        self._vector_size: Optional[int] = None

    def vector_size(self) -> int:
        if self._vector_size is None:
            response = self._client.embeddings.create(
                model=self._model,
                input="test",
            )
            self._vector_size = len(response.data[0].embedding)
        return self._vector_size

    def embed_document(self, document: str) -> DenseVector:
        response = self._client.embeddings.create(
            model=self._model,
            input=document,
        )
        return response.data[0].embedding

    def embed_query(self, query: str) -> DenseVector:
        return self.embed_document(query)

__init__(model='text-embedding-3-small', api_key=None, **kwargs)

Initialize the OpenAI embedding model.

Parameters:

    model (str, default 'text-embedding-3-small'): OpenAI model to use for embeddings.
    api_key (Optional[str], default None): OpenAI API key; if not provided, the OPENAI_API_KEY environment variable is used.
    kwargs: additional kwargs passed to the OpenAI client.
Source code in src/django_semantic_search/embeddings/openai.py
def __init__(
    self,
    model: str = "text-embedding-3-small",
    api_key: Optional[str] = None,
    **kwargs,
):
    """
    Initialize the OpenAI embedding model.

    :param model: OpenAI model to use for embeddings
    :param api_key: OpenAI API key. If not provided, will look for OPENAI_API_KEY env variable
    :param kwargs: Additional kwargs passed to OpenAI client
    """
    self._model = model
    api_key = api_key or os.getenv("OPENAI_API_KEY")
    if not api_key:
        raise ValueError(
            "OpenAI API key must be provided either through api_key parameter or OPENAI_API_KEY environment variable"
        )
    self._client = OpenAI(api_key=api_key, **kwargs)
    # Cache the vector size after first call
    self._vector_size: Optional[int] = None

embed_document(document)

Source code in src/django_semantic_search/embeddings/openai.py
def embed_document(self, document: str) -> DenseVector:
    response = self._client.embeddings.create(
        model=self._model,
        input=document,
    )
    return response.data[0].embedding

embed_query(query)

Source code in src/django_semantic_search/embeddings/openai.py
def embed_query(self, query: str) -> DenseVector:
    return self.embed_document(query)

vector_size()

Source code in src/django_semantic_search/embeddings/openai.py
def vector_size(self) -> int:
    if self._vector_size is None:
        response = self._client.embeddings.create(
            model=self._model,
            input="test",
        )
        self._vector_size = len(response.data[0].embedding)
    return self._vector_size

FastEmbed

FastEmbed is a lightweight and efficient embedding library that supports both dense and sparse embeddings. It provides fast, accurate embeddings suitable for production use.

Installation

To use FastEmbed embeddings, install the required dependencies:

pip install django-semantic-search[fastembed]

Dense Embeddings

For dense embeddings, configure FastEmbed in your Django settings:

settings.py
SEMANTIC_SEARCH = {
    "default_embeddings": {
        "model": "django_semantic_search.embeddings.FastEmbedDenseModel",
        "configuration": {
            "model_name": "BAAI/bge-small-en-v1.5",
        },
    },
    ...
}

django_semantic_search.embeddings.FastEmbedDenseModel

Bases: DenseTextEmbeddingModel

FastEmbed dense embedding model that uses the FastEmbed library to generate dense embeddings.

Requirements:

pip install django-semantic-search[fastembed]

Usage:

settings.py
SEMANTIC_SEARCH = {
    "default_embeddings": {
        "model": "django_semantic_search.embeddings.FastEmbedDenseModel",
        "configuration": {
            "model_name": "BAAI/bge-small-en-v1.5",
        },
    },
    ...
}
Source code in src/django_semantic_search/embeddings/fastembed.py
class FastEmbedDenseModel(DenseTextEmbeddingModel):
    """
    FastEmbed dense embedding model that uses the FastEmbed library to generate dense embeddings.

    **Requirements:**

    ```shell
    pip install django-semantic-search[fastembed]
    ```

    **Usage:**

    ```python title="settings.py"
    SEMANTIC_SEARCH = {
        "default_embeddings": {
            "model": "django_semantic_search.embeddings.FastEmbedDenseModel",
            "configuration": {
                "model_name": "BAAI/bge-small-en-v1.5",
            },
        },
        ...
    }
    ```
    """

    def __init__(
        self,
        model_name: str,
        **kwargs,
    ):
        """
        Initialize the FastEmbed dense model.

        :param model_name: name of the model to use
        :param kwargs: additional kwargs passed to FastEmbed
        """
        from fastembed import TextEmbedding

        self._model = TextEmbedding(
            model_name=model_name,
            **kwargs,
        )
        # Cache the vector size after first call
        self._vector_size: Optional[int] = None

    def vector_size(self) -> int:
        """
        Return the size of the individual embedding.
        :return: size of the embedding.
        """
        if self._vector_size is None:
            # Get vector size by embedding a test string
            vector = next(self._model.embed(["test"]))
            self._vector_size = len(vector)
        return self._vector_size

    def embed_document(self, document: str) -> DenseVector:
        """
        Embed a document into a vector.
        :param document: document to embed.
        :return: document embedding.
        """
        vector = next(self._model.passage_embed([document]))
        return vector.tolist()

    def embed_query(self, query: str) -> DenseVector:
        """
        Embed a query into a vector.
        :param query: query to embed.
        :return: query embedding.
        """
        vector = next(self._model.query_embed([query]))
        return vector.tolist()

__init__(model_name, **kwargs)

Initialize the FastEmbed dense model.

Parameters:

    model_name (str, required): name of the model to use.
    kwargs: additional kwargs passed to FastEmbed.
Source code in src/django_semantic_search/embeddings/fastembed.py
def __init__(
    self,
    model_name: str,
    **kwargs,
):
    """
    Initialize the FastEmbed dense model.

    :param model_name: name of the model to use
    :param kwargs: additional kwargs passed to FastEmbed
    """
    from fastembed import TextEmbedding

    self._model = TextEmbedding(
        model_name=model_name,
        **kwargs,
    )
    # Cache the vector size after first call
    self._vector_size: Optional[int] = None

embed_document(document)

Embed a document into a vector.

Parameters:

    document (str, required): document to embed.

Returns:

    DenseVector: document embedding.

Source code in src/django_semantic_search/embeddings/fastembed.py
def embed_document(self, document: str) -> DenseVector:
    """
    Embed a document into a vector.
    :param document: document to embed.
    :return: document embedding.
    """
    vector = next(self._model.passage_embed([document]))
    return vector.tolist()

embed_query(query)

Embed a query into a vector.

Parameters:

    query (str, required): query to embed.

Returns:

    DenseVector: query embedding.

Source code in src/django_semantic_search/embeddings/fastembed.py
def embed_query(self, query: str) -> DenseVector:
    """
    Embed a query into a vector.
    :param query: query to embed.
    :return: query embedding.
    """
    vector = next(self._model.query_embed([query]))
    return vector.tolist()

vector_size()

Return the size of the individual embedding.

Returns:

    int: size of the embedding.

Source code in src/django_semantic_search/embeddings/fastembed.py
def vector_size(self) -> int:
    """
    Return the size of the individual embedding.
    :return: size of the embedding.
    """
    if self._vector_size is None:
        # Get vector size by embedding a test string
        vector = next(self._model.embed(["test"]))
        self._vector_size = len(vector)
    return self._vector_size

Sparse Embeddings (Coming Soon)

Note: Sparse embedding support is currently under development and not yet available in django-semantic-search. This feature will ship in a future release.

While FastEmbed supports sparse embeddings (like BM25), the integration with django-semantic-search is still in progress.