Sync API¶
Reproducibility is critical for AI. For code, tracking changes is easy with GitHub or GitLab. For data, it's not: most of the time we end up hand-writing complicated data-tracking code, wrestling with an external tool, or paying for expensive, coarse-grained duplicate snapshots.
With most other vector databases, if we load the wrong data (or make any other such mistake), we have to blow away the index, correct the mistake, and completely rebuild it. Rolling back to an earlier state is difficult, and any such corrective action destroys historical data and evidence, which may be useful down the line to debug and diagnose issues.
To our knowledge, LanceDB is the first and only vector database that supports full reproducibility and rollbacks natively. Taking advantage of the Lance columnar data format, LanceDB supports:
- Automatic versioning
- Instant rollback
- Appends, updates, deletions
- Schema evolution
This makes auditing, tracking, and reproducibility a breeze!
Let's see how this all works.
Pickle Rick!¶
Let's first prepare the data. We will be using a CSV file with a bunch of quotes from Rick and Morty.
!wget http://vectordb-recipes.s3.us-west-2.amazonaws.com/rick_and_morty_quotes.csv
!head rick_and_morty_quotes.csv
--2024-12-17 11:54:43--  http://vectordb-recipes.s3.us-west-2.amazonaws.com/rick_and_morty_quotes.csv
Resolving vectordb-recipes.s3.us-west-2.amazonaws.com (vectordb-recipes.s3.us-west-2.amazonaws.com)... 52.92.138.34, 3.5.82.160, 52.218.236.161, ...
Connecting to vectordb-recipes.s3.us-west-2.amazonaws.com (vectordb-recipes.s3.us-west-2.amazonaws.com)|52.92.138.34|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 8236 (8.0K) [text/csv]
Saving to: 'rick_and_morty_quotes.csv.1'

rick_and_morty_quot 100%[===================>]   8.04K  --.-KB/s    in 0s

2024-12-17 11:54:43 (77.8 MB/s) - 'rick_and_morty_quotes.csv.1' saved [8236/8236]

id,author,quote
1,Rick," Morty, you got to come on. You got to come with me."
2,Morty," Rick, what's going on?"
3,Rick," I got a surprise for you, Morty."
4,Morty," It's the middle of the night. What are you talking about?"
5,Rick," I got a surprise for you."
6,Morty," Ow! Ow! You're tugging me too hard."
7,Rick," I got a surprise for you, Morty."
8,Rick," What do you think of this flying vehicle, Morty? I built it out of stuff I found in the garage."
9,Morty," Yeah, Rick, it's great. Is this the surprise?"
Let's load this into a pandas dataframe.
It has three columns: a quote id, the first name of the quote's author, and the quote string:
import pandas as pd
df = pd.read_csv("rick_and_morty_quotes.csv")
df.head()
| | id | author | quote |
|---|---|---|---|
| 0 | 1 | Rick | Morty, you got to come on. You got to come wi... |
| 1 | 2 | Morty | Rick, what's going on? |
| 2 | 3 | Rick | I got a surprise for you, Morty. |
| 3 | 4 | Morty | It's the middle of the night. What are you ta... |
| 4 | 5 | Rick | I got a surprise for you. |
We'll start with a local LanceDB connection:
!pip install lancedb -q
import lancedb
db = lancedb.connect("~/.lancedb")
Creating a LanceDB table from a pandas dataframe is straightforward using `create_table`:
db.drop_table("rick_and_morty", ignore_missing=True)
table = db.create_table("rick_and_morty", df)
table.head().to_pandas()
| | id | author | quote |
|---|---|---|---|
| 0 | 1 | Rick | Morty, you got to come on. You got to come wi... |
| 1 | 2 | Morty | Rick, what's going on? |
| 2 | 3 | Rick | I got a surprise for you, Morty. |
| 3 | 4 | Morty | It's the middle of the night. What are you ta... |
| 4 | 5 | Rick | I got a surprise for you. |
Updates¶
Now, since Rick is the smartest man in the multiverse, he deserves to have his quotes attributed to his full name: Richard Daniel Sanchez.
This can be done via `LanceTable.update`. It needs two arguments:

- A `where` string filter (SQL syntax) to determine the rows to update
- A dict of `values`, where the keys are the column names to update and the values are the new values
table.update(where="author='Rick'", values={"author": "Richard Daniel Sanchez"})
table.to_pandas()
| | id | author | quote |
|---|---|---|---|
| 0 | 2 | Morty | Rick, what's going on? |
| 1 | 4 | Morty | It's the middle of the night. What are you ta... |
| 2 | 6 | Morty | Ow! Ow! You're tugging me too hard. |
| 3 | 9 | Morty | Yeah, Rick, it's great. Is this the surprise? |
| 4 | 11 | Morty | What?! A bomb?! |
| ... | ... | ... | ... |
| 94 | 80 | Richard Daniel Sanchez | There you are, Morty. Listen to me. I got an ... |
| 95 | 82 | Richard Daniel Sanchez | It's pretty obvious, Morty. I froze him. Now ... |
| 96 | 84 | Richard Daniel Sanchez | Do you have any concept of how much higher th... |
| 97 | 86 | Richard Daniel Sanchez | I'll do it later, Morty. He'll be fine. Let's... |
| 98 | 97 | Richard Daniel Sanchez | There she is. All right. Come on, Morty. Let'... |

99 rows × 3 columns
Schema evolution¶
Since this is a vector database, we need actual vectors. We'll use sentence-transformers here to avoid having to deal with API keys.
Let's load the `all-MiniLM-L6-v2` model and embed the quotes:
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("all-MiniLM-L6-v2", device="cpu")
vectors = model.encode(df.quote.values.tolist(),
convert_to_numpy=True,
normalize_embeddings=True).tolist()
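The `normalize_embeddings=True` flag scales each vector to unit length, which makes cosine similarity between two quotes reduce to a plain dot product. Here is a minimal NumPy sketch of that normalization on toy vectors (illustrative only; sentence-transformers does this internally):

```python
import numpy as np

# Toy "embeddings": three 4-dimensional vectors standing in for model output.
raw = np.array([
    [3.0, 4.0, 0.0, 0.0],
    [1.0, 1.0, 1.0, 1.0],
    [0.0, 0.0, 2.0, 0.0],
])

# Divide each row by its L2 norm -- the effect of normalize_embeddings=True.
norms = np.linalg.norm(raw, axis=1, keepdims=True)
unit = raw / norms

# Every row now has unit length, so cosine(a, b) is simply a @ b.
print(np.linalg.norm(unit, axis=1))  # [1. 1. 1.]
```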
We can then convert the vectors into a pyarrow Table and merge it into the LanceDB Table.
For the merge to work, we need an overlapping column. Here the natural choice is the `id` column:
from lance.vector import vec_to_table
import numpy as np
import pyarrow as pa
embeddings = vec_to_table(vectors)
embeddings = embeddings.append_column("id", pa.array(np.arange(len(table))+1))
embeddings.to_pandas().head()
| | vector | id |
|---|---|---|
| 0 | [-0.10369808, -0.038807657, -0.07471153, -0.05... | 1 |
| 1 | [-0.11813704, -0.0533092, 0.025554786, -0.0242... | 2 |
| 2 | [-0.09807682, -0.035231438, -0.04206024, -0.06... | 3 |
| 3 | [0.032292824, 0.038136397, 0.013615396, 0.0335... | 4 |
| 4 | [-0.050369408, -0.0043397923, 0.013419108, -0.... | 5 |
And now we'll use the `LanceTable.merge` function to add the vector column to the LanceTable:
table.merge(embeddings, left_on="id")
table.head().to_pandas()
| | id | author | quote | vector |
|---|---|---|---|---|
| 0 | 2 | Morty | Rick, what's going on? | [-0.11813704, -0.0533092, 0.025554786, -0.0242... |
| 1 | 4 | Morty | It's the middle of the night. What are you ta... | [0.032292824, 0.038136397, 0.013615396, 0.0335... |
| 2 | 6 | Morty | Ow! Ow! You're tugging me too hard. | [-0.035019904, -0.070963725, 0.003859435, -0.0... |
| 3 | 9 | Morty | Yeah, Rick, it's great. Is this the surprise? | [-0.12578955, -0.019364933, 0.01606114, -0.082... |
| 4 | 11 | Morty | What?! A bomb?! | [0.0018287548, 0.07033146, -0.023754105, 0.047... |
If we look at the schema, we see that `all-MiniLM-L6-v2` produces 384-dimensional vectors:
table.schema
id: int64
author: string
quote: string
vector: fixed_size_list<item: float>[384]
  child 0, item: float
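With the vector column in place, the table supports nearest-neighbor search (in LanceDB, roughly `table.search(query_vector).limit(k)`; the exact API may vary by version). Conceptually, searching over normalized embeddings just ranks rows by dot product with the query vector. A self-contained NumPy sketch with toy 3-dimensional vectors, not the actual LanceDB search path:

```python
import numpy as np

# Toy unit-length "quote" embeddings, one row per quote.
quotes = np.array([
    [1.0, 0.0, 0.0],   # row 0
    [0.0, 1.0, 0.0],   # row 1
    [0.6, 0.8, 0.0],   # row 2
])

# Normalize the query so cosine similarity is a dot product.
query = np.array([0.0, 0.9, 0.1])
query = query / np.linalg.norm(query)

# Score every row and rank in descending order of similarity.
scores = quotes @ query
ranking = np.argsort(-scores)
print(ranking)  # [1 2 0] -- row 1 is the closest match
```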
Rollback¶
Suppose we used the table and found that the `all-MiniLM-L6-v2` model doesn't produce ideal results, and we want to try a larger model instead. How do we use the new embeddings without losing the change history?
First, major operations are automatically versioned in LanceDB. Version 1 is the table creation, with the initial insertion of data. Versions 2 and 3 represent the update (a deletion plus an append), and version 4 is the merge that added the vector column.
table.list_versions()
[{'version': 1, 'timestamp': datetime.datetime(2024, 12, 17, 11, 57, 21, 613932), 'metadata': {}}, {'version': 2, 'timestamp': datetime.datetime(2024, 12, 17, 11, 57, 21, 626525), 'metadata': {}}, {'version': 3, 'timestamp': datetime.datetime(2024, 12, 17, 11, 57, 27, 91378), 'metadata': {}}, {'version': 4, 'timestamp': datetime.datetime(2024, 12, 17, 11, 58, 4, 513085), 'metadata': {}}]
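Each entry returned by `list_versions()` is a dict with `version`, `timestamp`, and `metadata` keys. As a small illustration, using the timestamps printed above as literal data, we can compute how long each step took:

```python
from datetime import datetime

# The version records printed above, copied here as literal data.
versions = [
    {"version": 1, "timestamp": datetime(2024, 12, 17, 11, 57, 21, 613932)},
    {"version": 2, "timestamp": datetime(2024, 12, 17, 11, 57, 21, 626525)},
    {"version": 3, "timestamp": datetime(2024, 12, 17, 11, 57, 27, 91378)},
    {"version": 4, "timestamp": datetime(2024, 12, 17, 11, 58, 4, 513085)},
]

# Seconds elapsed between consecutive versions.
gaps = [
    (b["timestamp"] - a["timestamp"]).total_seconds()
    for a, b in zip(versions, versions[1:])
]
print(gaps)  # the embedding merge (version 3 -> 4) dominates
```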
We can restore version 3, from before we added the vector column:
table.restore(3)
table.head().to_pandas()
| | id | author | quote |
|---|---|---|---|
| 0 | 2 | Morty | Rick, what's going on? |
| 1 | 4 | Morty | It's the middle of the night. What are you ta... |
| 2 | 6 | Morty | Ow! Ow! You're tugging me too hard. |
| 3 | 9 | Morty | Yeah, Rick, it's great. Is this the surprise? |
| 4 | 11 | Morty | What?! A bomb?! |
Notice that we now have one more version, not one fewer. When we restore an old version, we're not deleting the version history; we're creating a new version whose schema and data are equivalent to the restored one. This way we keep track of all of the changes and can always roll back to a previous state.
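This append-only restore semantics can be sketched in a few lines of pure Python (a toy model for illustration, not how LanceDB is actually implemented):

```python
# Toy append-only version history: restoring never deletes, it appends.
class ToyHistory:
    def __init__(self, initial_data):
        self.versions = [initial_data]  # version numbers are 1-based indices

    def commit(self, data):
        self.versions.append(data)

    def restore(self, version):
        # Copy the old state forward as a brand-new version.
        self.versions.append(self.versions[version - 1])

    @property
    def latest(self):
        return self.versions[-1]


h = ToyHistory({"rows": 99})
h.commit({"rows": 99, "vector_dim": 384})  # version 2: merge a vector column
h.restore(1)                               # version 3: back to the pre-vector state
print(len(h.versions), h.latest)           # 3 {'rows': 99}
```

The key point the sketch captures: after `restore(1)` the history still contains all three states, so nothing is ever lost.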
table.list_versions()
[{'version': 1, 'timestamp': datetime.datetime(2024, 12, 17, 11, 57, 21, 613932), 'metadata': {}}, {'version': 2, 'timestamp': datetime.datetime(2024, 12, 17, 11, 57, 21, 626525), 'metadata': {}}, {'version': 3, 'timestamp': datetime.datetime(2024, 12, 17, 11, 57, 27, 91378), 'metadata': {}}, {'version': 4, 'timestamp': datetime.datetime(2024, 12, 17, 11, 58, 4, 513085), 'metadata': {}}, {'version': 5, 'timestamp': datetime.datetime(2024, 12, 17, 11, 58, 27, 153807), 'metadata': {}}]
Switching Models¶
Now we'll switch to the `all-mpnet-base-v2` model and add the vectors to the restored dataset again. Note that this step can take a couple of minutes.
model = SentenceTransformer("all-mpnet-base-v2", device="cpu")
vectors = model.encode(df.quote.values.tolist(),
convert_to_numpy=True,
normalize_embeddings=True).tolist()
embeddings = vec_to_table(vectors)
embeddings = embeddings.append_column("id", pa.array(np.arange(len(table))+1))
table.merge(embeddings, left_on="id")
table.schema
id: int64
author: string
quote: string
vector: fixed_size_list<item: float>[768]
  child 0, item: float
Deletion¶
What if the whole show was just Rick-isms? Let's delete any quote not said by Rick:
table.delete("author != 'Richard Daniel Sanchez'")
We can see that the number of rows has been reduced to 28:
len(table)
28
Ok, we've had our fun; let's get back to the full quote set:
table.restore(6)
len(table)
99
History¶
We now have 8 versions in the data. We can review the operation that corresponds to each version below:
table.version
8
Versions:

- 1 - Create and append
- 2 - Update (deletion)
- 3 - Update (append)
- 4 - Merge (vector column)
- 5 - Restore (version 3)
- 6 - Merge (new vector column)
- 7 - Deletion
- 8 - Restore (version 6)
Summary¶
We never had to explicitly manage the versioning. And we never had to create expensive and slow snapshots. LanceDB automatically tracks the full history of operations and supports fast rollbacks. In production this is critical for debugging issues and minimizing downtime by rolling back to a previously successful state in seconds.