
From Pinecone to pgVector: Our Embedding Storage Shift

Reading Time: 2 minutes

What Are Text Embeddings?

Every document consists of words, sentences, and paragraphs. At VoiceSphere, we transform these documents into what we term ‘vectors’ or ‘embeddings’. Think of it as converting your document into a unique fingerprint that represents its content. These embeddings are then stored in our vector database.

When you ask a question, we scan these stored embeddings for matches. They help pinpoint the most relevant content, delivering accurate results swiftly.
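To make this concrete, here is a minimal sketch of the idea in Python, with made-up four-dimensional vectors standing in for real embeddings (an actual embedding model produces vectors with hundreds or thousands of dimensions); the names, values, and similarity measure are purely illustrative:

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Higher values mean the two vectors point in more similar directions."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Made-up embeddings: each stored document chunk and the incoming question
# are represented as fixed-length vectors.
doc_embeddings = {
    "refund-policy":  np.array([0.12, 0.80, 0.05, 0.31]),
    "shipping-times": np.array([0.75, 0.10, 0.44, 0.02]),
}
question = np.array([0.10, 0.78, 0.07, 0.29])

# The chunk whose embedding is most similar to the question is the best match.
best_match = max(doc_embeddings, key=lambda name: cosine_similarity(doc_embeddings[name], question))
print(best_match)  # -> refund-policy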

Pinecone: The Initial Phase

In the initial stages of our product development, we relied on Pinecone, a specialized vector database tailored for storing embeddings. Leveraging Pinecone offered us the following advantages:

Hosted Embeddings: Pinecone’s architecture took away the hassle of hosting, managing, and maintaining the embeddings on our side.

API Operations: All interactions with these embeddings were streamlined through Pinecone’s API, ensuring a consistent interface.
However, we noticed significant latency on these calls, primarily because Pinecone hosted the embeddings externally to our primary database servers. Our core data was managed in Postgres, so every search had to hop between the two systems, and that round trip introduced the latency.
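For illustration, working with a hosted index looks roughly like the sketch below. This is not our production code; the API key, environment, index name, and vectors are placeholders, and the calls follow the pinecone-client 2.x Python API:

import pinecone

# Placeholders throughout: API key, environment, index name, and vectors.
pinecone.init(api_key="YOUR_API_KEY", environment="us-east-1-aws")
index = pinecone.Index("document-embeddings")

# Upsert: each record is (id, vector, metadata).
index.upsert(vectors=[
    ("doc-42-chunk-0", [0.12, 0.80, 0.05, 0.31], {"document_id": 42}),
])

# Query: every lookup is a network round trip to Pinecone's servers,
# separate from the Postgres instance that holds our core data.
results = index.query(vector=[0.10, 0.78, 0.07, 0.29], top_k=5, include_metadata=True)

That round trip to an external service is exactly where the latency described above came from.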

The Shift to pgVector

About a year ago, the open-source pgVector extension arrived, dedicated to storing embeddings within the PostgreSQL database itself. This means the embeddings could be co-located with our core data.

When AWS announced support for pgVector on its Aurora instances, we enabled this extension and began storing embeddings alongside the primary data.
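As a rough sketch (not our actual schema), enabling the extension and storing embeddings next to document rows looks something like this; the connection string, table layout, and tiny vector dimension are assumptions for illustration:

import psycopg2

# Illustrative only: connection string, table, and vector dimension are assumptions.
conn = psycopg2.connect("dbname=voicesphere user=app")
cur = conn.cursor()

# Enable the extension once per database.
cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")

# Embedding rows live in the same database as the core document data.
cur.execute("""
    CREATE TABLE IF NOT EXISTS document_chunks (
        id          bigserial PRIMARY KEY,
        document_id bigint NOT NULL,
        content     text   NOT NULL,
        embedding   vector(3)  -- real embeddings have far more dimensions
    );
""")

# pgVector accepts vectors written as bracketed text, e.g. '[0.12,0.80,0.05]'.
cur.execute(
    "INSERT INTO document_chunks (document_id, content, embedding) VALUES (%s, %s, %s)",
    (42, "Refunds are processed within 5 business days.", "[0.12,0.80,0.05]"),
)
conn.commit()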

This transition brought several immediate advantages:

Enhanced Query Speed: With co-located embeddings and core data, our search queries experienced a noticeable boost in performance.

Seamless Data Integration: The ability to perform JOIN operations between the embeddings and other table data streamlines data management and querying, as shown in the sketch after this list.

Unified Data Storage: Consolidating our data sources promotes customer trust and satisfaction, as all their data remains in a single, secure location.
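Here is a hedged sketch of the kind of query this unlocks, continuing the illustrative schema above; the documents table and its title column are assumptions, not our real schema:

import psycopg2

# Illustrative only: the documents table and its title column are assumptions.
conn = psycopg2.connect("dbname=voicesphere user=app")
cur = conn.cursor()

question_embedding = "[0.10,0.78,0.07]"  # placeholder query vector

# One statement both ranks chunks by vector distance (pgVector's <-> operator)
# and JOINs in core document data, with no extra network hop to a second system.
cur.execute(
    """
    SELECT d.title, c.content, c.embedding <-> %s::vector AS distance
    FROM document_chunks AS c
    JOIN documents AS d ON d.id = c.document_id
    ORDER BY c.embedding <-> %s::vector
    LIMIT 5;
    """,
    (question_embedding, question_embedding),
)
for title, content, distance in cur.fetchall():
    print(title, distance)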

So far, the migration has proven immensely beneficial. However, as with all technical solutions, we are vigilant about potential challenges. As our embeddings dataset grows, we anticipate the need to manage that growth and maintain performance, especially when querying tables that hold a very large number of embeddings.
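pgVector offers approximate indexes for exactly this situation; for example, an IVFFlat index can be created roughly as sketched below (the index name and the lists parameter are placeholders that would need tuning against real data):

import psycopg2

# Illustrative only: index name and parameters are placeholders.
conn = psycopg2.connect("dbname=voicesphere user=app")
cur = conn.cursor()

# IVFFlat trades a little recall for much faster approximate search on large
# embedding tables; 'lists' controls how many clusters the index partitions into.
cur.execute("""
    CREATE INDEX IF NOT EXISTS document_chunks_embedding_idx
    ON document_chunks
    USING ivfflat (embedding vector_l2_ops)
    WITH (lists = 100);
""")
conn.commit()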

Despite these anticipated challenges, the migration has been beneficial for VoiceSphere.
