Conversation

doomgrave

No description provided.

@slundberg
Collaborator

Hi @doomgrave , thanks for the PR. Happy to help facilitate langChain interop...but can you give a description of which scenarios this helps support? Thanks!

@doomgrave
Author

doomgrave commented Mar 15, 2024

Hi @doomgrave , thanks for the PR. Happy to help facilitate langChain interop...but can you give a description of which scenarios this helps support? Thanks!

Sorry Scott, I pushed to main by mistake; I'm not really skilled with git!
Anyway, I made a simple interface so you can use the llama.cpp model loaded with Guidance and also use LangChain embeddings and LangChain generations/chains with it.

The advantage is simply that you load the model once, without loading/unloading it from memory. I've put a usage description in the class. Feel free to inspect the idea.

    # llama.cpp embedding models using Guidance.
    # To use, you should have the llama-cpp-python and langchain libraries installed.
    # The LlamaCpp instance must have embedding=True.
    # USAGE EXAMPLE (using a Chroma database):

    llama2 = guidance.models.LlamaCpp(model=modelPath, n_gpu_layers=-1, n_ctx=4096, embedding=True)
    embeddings = GuidanceLlamaCppEmbeddings(model=llama2)
    vectordb = Chroma(persist_directory={path_to_chromadb}, embedding_function=embeddings)
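The wrapper itself isn't shown in this comment, but its general shape can be sketched as follows. This is a minimal sketch, not the PR's actual code: the class name `GuidanceLlamaCppEmbeddings` is taken from the usage example above, and the sketch assumes the wrapped object exposes an `embed(text)` method, as `llama_cpp.Llama` does when created with `embedding=True`. How the raw `llama_cpp.Llama` object is extracted from a `guidance.models.LlamaCpp` instance is left out here, since that depends on Guidance internals; a real implementation would also subclass `langchain_core.embeddings.Embeddings` rather than duck-typing its interface.

```python
class GuidanceLlamaCppEmbeddings:
    """Sketch of a LangChain-style embeddings wrapper around a llama.cpp
    model that is already loaded in memory (e.g. by Guidance), so the same
    weights serve both Guidance generation and LangChain retrieval.

    Assumes `model` has an `embed(text) -> list[float]` method, like
    llama_cpp.Llama created with embedding=True.
    """

    def __init__(self, model):
        self.model = model  # the already-loaded model object, not a path

    def embed_documents(self, texts):
        # Called by LangChain when indexing documents into a vector store.
        return [self.model.embed(t) for t in texts]

    def embed_query(self, text):
        # Called by LangChain for the search query at retrieval time.
        return self.model.embed(text)
```

Because both methods delegate to the one shared model, nothing is loaded or unloaded between Guidance calls and vector-store operations, which is the point of the PR.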
