docs/tutorials/llm_chain/ #27593
Replies: 19 comments · 11 replies
- Very clear and straightforward tutorial! Thanks for your brilliant work!
- Hi
- Congratulations, simple, objective, and functional tutorial! Thank you!
- cool man
- awesome!
- cool
- awesome!!!
- short and sweet!
- Easy to understand, short and crisp.
- In version 0.3,
- I have configured AWS on my system. How do I configure AWS credentials in a Jupyter notebook?
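A minimal sketch of one way to do that inside a notebook, assuming you want to set the standard AWS environment variables that boto3 (and langchain-aws) read; the region value is a placeholder. If you have already run `aws configure` on the machine, boto3 normally picks up `~/.aws/credentials` without any extra setup.

```python
import getpass
import os

# Set the standard AWS environment variables for this notebook session only
# (values entered here are not written to disk).
os.environ["AWS_ACCESS_KEY_ID"] = getpass.getpass("AWS access key ID: ")
os.environ["AWS_SECRET_ACCESS_KEY"] = getpass.getpass("AWS secret access key: ")
os.environ["AWS_DEFAULT_REGION"] = "us-east-1"  # placeholder region
```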
- Great tutorial! FYI I did have issues with the API key limits for OpenAI and Anthropic, but was successful with Groq.
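For anyone who wants to try the same route, a minimal sketch of swapping Groq into the tutorial's model step (assumptions: `langchain-groq` is installed, a `GROQ_API_KEY` is available, and the model name is only an example that may need updating):

```python
import getpass
import os

from langchain_groq import ChatGroq

# Prompt for the key only if it is not already set in the environment.
if not os.environ.get("GROQ_API_KEY"):
    os.environ["GROQ_API_KEY"] = getpass.getpass("Groq API key: ")

# Any Groq-hosted chat model can be substituted here.
model = ChatGroq(model="llama-3.1-8b-instant")
print(model.invoke("Translate 'hello' into Italian.").content)
```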
- I have a question about LCEL: what is LCEL's position in LangChain these days?
- What a simple and great way to get started. A scratch on the surface indeed!
- I see that older tutorials use export LANGCHAIN_API_KEY="..." but the newer documents replace it with export LANGSMITH_API_KEY="...". Which should be used?
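As far as I can tell, the newer docs use the `LANGSMITH_*` names and the older `LANGCHAIN_*` names are kept for backward compatibility, but that is an assumption worth checking against the LangSmith docs for your installed version. A sketch that sets both conventions so either is covered:

```python
import getpass
import os

# Newer variable names used by the current docs.
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_API_KEY"] = getpass.getpass("LangSmith API key: ")

# Older names from earlier tutorials, mirrored for backward compatibility.
os.environ["LANGCHAIN_TRACING_V2"] = os.environ["LANGSMITH_TRACING"]
os.environ["LANGCHAIN_API_KEY"] = os.environ["LANGSMITH_API_KEY"]
```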
- No mention of LCEL? Isn't this the preferred way to build chains?
- For a free API key, use Gemini Flash. I used the sample code with a free Gemini API key.
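A minimal sketch of that setup (assumptions: `langchain-google-genai` is installed, a key from Google AI Studio is available as `GOOGLE_API_KEY`, and the model name is an example that may change over time):

```python
import getpass
import os

from langchain_google_genai import ChatGoogleGenerativeAI

if not os.environ.get("GOOGLE_API_KEY"):
    os.environ["GOOGLE_API_KEY"] = getpass.getpass("Google API key: ")

# Gemini Flash has a free tier suitable for trying the tutorial code.
model = ChatGoogleGenerativeAI(model="gemini-1.5-flash")
print(model.invoke("Translate 'good morning' into French.").content)
```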
- Is it a tutorial or an advertisement for LangSmith?!
- I noticed an inconsistency in how ChatPromptTemplate handles variable interpolation depending on how the messages are defined. Two approaches work as expected, with variables correctly interpolated: ✅ building the template with ChatPromptTemplate.from_messages and (role, template) tuples, and ✅ passing message dictionaries to the ChatPromptTemplate constructor. However, a version that passes SystemMessage and HumanMessage objects directly 🚫 does not interpolate the variables: {language} and {user_input} remain as-is in the final output. It seems that ChatPromptTemplate only interpolates variables when the messages are given as strings or dicts, not when message objects are passed. This could lead to confusion, especially for developers expecting uniform behavior across all formats. It would be great if this could be unified, or at least mentioned in the docs for clarity 🙏
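A sketch reconstructing the three variants described above (the exact message text is an assumption; the variable names `{language}` and `{user_input}` are taken from the comment):

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_core.prompts import ChatPromptTemplate

inputs = {"language": "Italian", "user_input": "hi!"}

# 1) (role, template) tuples via from_messages: variables are interpolated.
tuple_template = ChatPromptTemplate.from_messages([
    ("system", "Translate the following into {language}."),
    ("user", "{user_input}"),
])
print(tuple_template.invoke(inputs))

# 2) Message dictionaries passed to the constructor: also interpolated.
dict_template = ChatPromptTemplate([
    {"role": "system", "content": "Translate the following into {language}."},
    {"role": "user", "content": "{user_input}"},
])
print(dict_template.invoke(inputs))

# 3) Message objects: treated as literal, already-formatted messages, so
#    {language} and {user_input} appear verbatim in the output.
object_template = ChatPromptTemplate([
    SystemMessage(content="Translate the following into {language}."),
    HumanMessage(content="{user_input}"),
])
print(object_template.invoke(inputs))
```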
- docs/tutorials/llm_chain/
In this quickstart we'll show you how to build a simple LLM application with LangChain. This application will translate text from English into another language. This is a relatively simple LLM application - it's just a single LLM call plus some prompting. Still, this is a great way to get started with LangChain - a lot of features can be built with just some prompting and an LLM call!
https://python.langchain.com/docs/tutorials/llm_chain/
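For quick reference, a minimal sketch of the application the tutorial builds: one prompt template plus one chat-model call (assumptions: `langchain-openai` is installed, an `OPENAI_API_KEY` is set, and the model name is only an example; any chat-model provider can be swapped in, as several comments above note):

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Prompt template with two variables: the target language and the text.
prompt_template = ChatPromptTemplate.from_messages([
    ("system", "Translate the following from English into {language}"),
    ("user", "{text}"),
])

model = ChatOpenAI(model="gpt-4o-mini")  # example model name

prompt = prompt_template.invoke({"language": "Italian", "text": "hi!"})
response = model.invoke(prompt)
print(response.content)
```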