diff --git a/getting-started/llama-tools/llama-prompt-ops_101.ipynb b/getting-started/llama-tools/prompt-ops_101.ipynb
similarity index 92%
rename from getting-started/llama-tools/llama-prompt-ops_101.ipynb
rename to getting-started/llama-tools/prompt-ops_101.ipynb
index 7cec4fff7..998dd4994 100644
--- a/getting-started/llama-tools/llama-prompt-ops_101.ipynb
+++ b/getting-started/llama-tools/prompt-ops_101.ipynb
@@ -12,14 +12,14 @@
"\n",
"\n",
"\n",
- "
\n",
+ "
\n",
"\n",
"\n",
- "# Getting Started with [llama-prompt-ops](https://github.com/meta-llama/llama-prompt-ops)\n",
+ "# Getting Started with [prompt-ops](https://github.com/meta-llama/prompt-ops)\n",
"\n",
- "This notebook will guide you through the process of using [llama-prompt-ops](https://github.com/meta-llama/llama-prompt-ops) to optimize your prompts for Llama models. We'll cover:\n",
+ "This notebook will guide you through the process of using [prompt-ops](https://github.com/meta-llama/prompt-ops) to optimize your prompts for Llama models. We'll cover:\n",
"\n",
- "1. Introduction to llama-prompt-ops\n",
+ "1. Introduction to prompt-ops\n",
"2. Setting up your environment\n",
"3. Creating a sample project\n",
"4. Running prompt optimization\n",
@@ -31,15 +31,15 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "## 1. Introduction to llama-prompt-ops\n",
+ "## 1. Introduction to prompt-ops\n",
"\n",
- "### What is llama-prompt-ops?\n",
+ "### What is prompt-ops?\n",
"\n",
- "llama-prompt-ops is a Python package that **automatically optimizes prompts** for Llama models. It transforms prompts that work well with other LLMs into prompts that are optimized for Llama models, improving performance and reliability.\n",
+ "prompt-ops is a Python package that **automatically optimizes prompts** for Llama models. It transforms prompts that work well with other LLMs into prompts that are optimized for Llama models, improving performance and reliability.\n",
"\n",
"### How It Works\n",
"\n",
- "llama-prompt-ops takes three key inputs:\n",
+ "prompt-ops takes three key inputs:\n",
"1. Your existing system prompt\n",
"2. A dataset of query-response pairs for evaluation and optimization\n",
"3. A configuration file specifying model parameters and optimization details\n",
@@ -53,7 +53,7 @@
"source": [
"## 2. Setting up your environment\n",
"\n",
- "Let's start by installing the Llama Prompt Ops package and setting up our environment. You can install it either from PyPI or directly from the source code."
+ "Let's start by installing the Prompt ops package and setting up our environment. You can install it either from PyPI or directly from the source code."
]
},
{
@@ -63,7 +63,7 @@
"outputs": [],
"source": [
"# Install from PyPI\n",
- "!pip install llama-prompt-ops"
+ "!pip install prompt-ops"
]
},
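+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Or install from source (a sketch; the repository URL follows the rename\n",
+ "# in this PR and may differ)\n",
+ "!git clone https://github.com/meta-llama/prompt-ops.git\n",
+ "!pip install -e prompt-ops"
+ ]
+ },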
{
@@ -72,7 +72,7 @@
"source": [
"### Setting up your API key\n",
"\n",
- "Llama Prompt Ops requires an API key to access LLM services. You can use OpenRouter, which provides access to various models including Llama models.\n",
+ "Prompt ops requires an API key to access LLM services. You can use OpenRouter, which provides access to various models including Llama models.\n",
"\n",
"Create a `.env` file in your project directory with your API key:"
]
@@ -123,7 +123,7 @@
"source": [
"## 3. Creating a Sample Project\n",
"\n",
- "Llama Prompt Ops provides a convenient way to create a sample project with all the necessary files. Let's create a sample project to get started."
+ "Prompt ops provides a convenient way to create a sample project with all the necessary files. Let's create a sample project to get started."
]
},
{
@@ -133,7 +133,7 @@
"outputs": [],
"source": [
"# Create a sample project\n",
- "!llama-prompt-ops create my-notebook-project"
+ "!prompt-ops create my-notebook-project"
]
},
{
@@ -223,7 +223,7 @@
"outputs": [],
"source": [
"# Run prompt optimization\n",
- "!cd my-notebook-project && llama-prompt-ops migrate"
+ "!cd my-notebook-project && prompt-ops migrate"
]
},
{
@@ -346,13 +346,13 @@
"\n",
"### Using Your Own Data\n",
"\n",
- "To use your own data with Llama Prompt Ops, you'll need to:\n",
+ "To use your own data with Prompt ops, you'll need to:\n",
"\n",
"1. Prepare your dataset in JSON format\n",
"2. Create a system prompt file\n",
"3. Create a configuration file\n",
"\n",
- "Check out the comprehensive guide [here](https://github.com/meta-llama/llama-prompt-ops/tree/main/docs) to learn more.\n",
+ "Check out the comprehensive guide [here](https://github.com/meta-llama/prompt-ops/tree/main/docs) to learn more.\n",
"\n",
"Now, let's see how to create a custom configuration file:\n",
"\n"
@@ -391,7 +391,7 @@
"\n",
"# Metric configuration\n",
"metric:\n",
- " class: \"llama_prompt_ops.core.metrics.StandardJSONMetric\"\n",
+ " class: \"prompt_ops.core.metrics.StandardJSONMetric\"\n",
" strict_json: false\n",
" output_field: \"answer\"\n",
"\n",
@@ -406,7 +406,7 @@
"source": [
"### Using Different Metrics\n",
"\n",
- "Llama Prompt Ops supports different metrics for evaluating prompt performance. The default is `StandardJSONMetric`, but you can use other metrics like `FacilityMetric` for specific use cases.\n",
+ "Prompt ops supports different metrics for evaluating prompt performance. The default is `StandardJSONMetric`, but you can use other metrics like `FacilityMetric` for specific use cases.\n",
"\n",
"Here's an example of using the `FacilityMetric` for the facility support analyzer use case:"
]
@@ -437,7 +437,7 @@
"\n",
"# Metric configuration\n",
"metric:\n",
- " class: \"llama_prompt_ops.core.metrics.FacilityMetric\"\n",
+ " class: \"prompt_ops.core.metrics.FacilityMetric\"\n",
" strict_json: false\n",
" output_field: \"answer\"\n",
"\n",
@@ -452,7 +452,7 @@
"source": [
"### Using Different Models\n",
"\n",
- "Llama Prompt Ops supports different models through various inference providers. You can use OpenRouter, vLLM, or NVIDIA NIMs depending on your infrastructure needs.\n",
+ "Prompt ops supports different models through various inference providers. You can use OpenRouter, vLLM, or NVIDIA NIMs depending on your infrastructure needs.\n",
"\n",
"Here's an example of using a different model through OpenRouter:"
]
@@ -483,7 +483,7 @@
"\n",
"# Metric configuration\n",
"metric:\n",
- " class: \"llama_prompt_ops.core.metrics.StandardJSONMetric\"\n",
+ " class: \"prompt_ops.core.metrics.StandardJSONMetric\"\n",
" strict_json: false\n",
" output_field: \"answer\"\n",
"\n",
@@ -500,16 +500,16 @@
"\n",
"In this notebook, we've covered:\n",
"\n",
- "1. Introduction to Llama Prompt Ops and its benefits\n",
+ "1. Introduction to Prompt ops and its benefits\n",
"2. Creating a sample project\n",
"3. Setting up your environment and API key\n",
"4. Running prompt optimization\n",
"5. Analyzing the results\n",
"6. Advanced usage and customization options\n",
"\n",
- "Llama Prompt Ops provides a powerful way to optimize your prompts for Llama models, improving performance and reliability. By following the steps in this notebook, you can start optimizing your own prompts and building more effective LLM applications.\n",
+ "Prompt ops provides a powerful way to optimize your prompts for Llama models, improving performance and reliability. By following the steps in this notebook, you can start optimizing your own prompts and building more effective LLM applications.\n",
"\n",
- "For more information, check out the [llama-prompt-ops documentation](https://github.com/meta-llama/llama-prompt-ops/tree/main/docs) and explore the example use cases in the repository."
+ "For more information, check out the [prompt-ops documentation](https://github.com/meta-llama/prompt-ops/tree/main/docs) and explore the example use cases in the repository."
]
}
],