docs/tutorials/retrievers/ #28864
Replies: 5 comments 5 replies
-
I get an error when setting the Mistral API key like this (as suggested in the code). The correct way is to set it differently.
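The exact snippets from this comment were not preserved above; as a hedged sketch, one common pattern for supplying the key (assuming the `MISTRAL_API_KEY` environment variable and the `langchain-mistralai` package, not necessarily what the tutorial shows) looks like this:

```python
import getpass
import os

# Sketch only: export the key as an environment variable before constructing
# the embeddings object (env var name assumed here).
if not os.environ.get("MISTRAL_API_KEY"):
    os.environ["MISTRAL_API_KEY"] = getpass.getpass("Enter your Mistral API key: ")

from langchain_mistralai import MistralAIEmbeddings

embeddings = MistralAIEmbeddings(model="mistral-embed")
```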
-
I am trying out a simple vector similarity search and I am seeing poor results with both Nomic embeddings and Llama 3.2 1B embeddings. Any help is appreciated.
Code to reproduce:
Output:
^ The above output is not relevant at all to the question prompt.
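Since the reproduction code above was not preserved, here is a minimal sketch of the kind of setup being described (the model name, the in-memory store, and the toy documents are assumptions, not the commenter's actual code):

```python
# Minimal sketch: embed a couple of documents with a local Ollama model and run
# a similarity search over an in-memory vector store.
from langchain_core.documents import Document
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_ollama import OllamaEmbeddings

docs = [
    Document(page_content="Dogs are great companions, known for their loyalty."),
    Document(page_content="Cats are independent pets that often enjoy their own space."),
]

embeddings = OllamaEmbeddings(model="llama3.2:1b")
vector_store = InMemoryVectorStore(embeddings)
vector_store.add_documents(docs)

# If the top hit here is unrelated to the query, the embedding model (rather
# than the store) is the first thing to inspect.
results = vector_store.similarity_search("Tell me about cats", k=1)
print(results[0].page_content)
```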
-
The Voyage AI import is wrong; the one currently shown doesn't work. Just a helpful tidbit, maybe the authors can quickly fix it as well!
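For reference, a hedged guess at what the corrected import probably looks like (the broken snippet itself was not preserved above, and the model name is only an illustration):

```python
# Assumed import path for the langchain-voyageai partner package; the model
# name below is illustrative, not necessarily what the tutorial uses.
from langchain_voyageai import VoyageAIEmbeddings

embeddings = VoyageAIEmbeddings(model="voyage-3")
vector = embeddings.embed_query("hello world")
print(len(vector))
```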
-
For the embeddings example, the model "text-embedding-3-large" by default returns a vector of length 3072, not 1536.
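A quick way to verify this (sketch; assumes an OpenAI API key is available in the environment). Note that `langchain-openai`'s `OpenAIEmbeddings` exposes a `dimensions` parameter if 1536-length vectors are specifically wanted from this model:

```python
from langchain_openai import OpenAIEmbeddings

# Default behaviour: text-embedding-3-large yields 3072-dimensional vectors.
embeddings = OpenAIEmbeddings(model="text-embedding-3-large")
print(len(embeddings.embed_query("hello world")))  # 3072

# The dimension can be reduced explicitly if 1536 is what you want.
embeddings_1536 = OpenAIEmbeddings(model="text-embedding-3-large", dimensions=1536)
print(len(embeddings_1536.embed_query("hello world")))  # 1536
```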
-
Hi, I have tried running sravan953's code and I also get the same poor results ("the Maldives; its Andaman and Nicobar Islands share a maritime border..."), but unlike sravan, even when I switch to FAISS (I did not try chromadb) it still gives poor results (the exact same results). The top 3 look like this:

RESULT:
RESULT:

I followed the tutorial code and it also has poor results:

```
file_path = "./nke-10k-2023.pdf"
text_splitter = RecursiveCharacterTextSplitter(
embeddings = OllamaEmbeddings(model="llama3.2")
assert len(vector_1) == len(vector_2)
embedding_dim = len(vector_1)
vector_store = FAISS(
ids = vector_store.add_documents(documents=all_splits)
results = vector_store.similarity_search(
print("How many distribution centers does Nike have in the US?")
```

I tried a bunch of other prompts. Zero-shot was bad to terrible, and one-shot did not seem to improve it noticeably. Changing the chunk size and overlap down (250/100 and 100/40) also did not help. Is it just that Ollama is no good? How did sravan get it to be functional? Any tips are greatly appreciated; I'm sure I'm doing something obvious wrong!
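For context, here is a hedged reconstruction of the full pipeline those fragments appear to come from (parameter values such as `chunk_size` and `chunk_overlap`, and the final print, are assumptions rather than what was actually run):

```python
# Hedged reconstruction of a tutorial-style pipeline: load the Nike 10-K PDF,
# split it, embed the chunks with a local Ollama model, index them in FAISS,
# and run a similarity search.
import faiss
from langchain_community.docstore.in_memory import InMemoryDocstore
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.vectorstores import FAISS
from langchain_ollama import OllamaEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

file_path = "./nke-10k-2023.pdf"
docs = PyPDFLoader(file_path).load()

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=200, add_start_index=True
)
all_splits = text_splitter.split_documents(docs)

embeddings = OllamaEmbeddings(model="llama3.2")
vector_1 = embeddings.embed_query(all_splits[0].page_content)
vector_2 = embeddings.embed_query(all_splits[1].page_content)
assert len(vector_1) == len(vector_2)
embedding_dim = len(vector_1)

vector_store = FAISS(
    embedding_function=embeddings,
    index=faiss.IndexFlatL2(embedding_dim),
    docstore=InMemoryDocstore(),
    index_to_docstore_id={},
)
ids = vector_store.add_documents(documents=all_splits)

results = vector_store.similarity_search(
    "How many distribution centers does Nike have in the US?"
)
print(results[0].page_content)
```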
-
docs/tutorials/retrievers/
This tutorial will familiarize you with LangChain's document loader, embedding, and vector store abstractions. These abstractions are designed to support retrieval of data -- from (vector) databases and other sources -- for integration with LLM workflows. They are important for applications that fetch data to be reasoned over as part of model inference, as in the case of retrieval-augmented generation, or RAG (see our RAG tutorial here).
https://python.langchain.com/docs/tutorials/retrievers/
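As a tiny illustration of those three abstractions working together (the embedding model and the sample document below are assumptions, not part of the tutorial text above):

```python
from langchain_core.documents import Document
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAIEmbeddings

# One Document (document loaders produce lists of these), one embedding model,
# and one vector store that can be queried at inference time.
documents = [
    Document(
        page_content="LangChain provides abstractions for loading, embedding, and retrieving documents.",
        metadata={"source": "example"},
    )
]

vector_store = InMemoryVectorStore(OpenAIEmbeddings(model="text-embedding-3-small"))
vector_store.add_documents(documents)
print(vector_store.similarity_search("What does LangChain provide?", k=1))
```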