feat: add providers #2

Open
wants to merge 30 commits into
base: main
Commits (30)
cd37d92
Update pre-commit configuration
SevenFo Mar 30, 2025
bfd8295
feat: migrate Anthropic provider to use metagpt-core dependency
SevenFo Mar 29, 2025
556bda6
update tests and setup.py
SevenFo Mar 30, 2025
3f42de8
Feature: Migrate Ark provider from main repo to MetaGPT-Ext
SevenFo Mar 29, 2025
5f4afa8
feat: migrate AWS Bedrock provider from MetaGPT to MetaGPT-Ext
SevenFo Mar 30, 2025
9a6e3b5
Update unit tests
SevenFo Mar 30, 2025
b16f358
Update setup.py to exclude packages under the tests directory
SevenFo Mar 30, 2025
4170416
Add Azure OpenAI provider implementation
SevenFo Mar 30, 2025
731cf04
feat: migrate AWS Bedrock provider from MetaGPT to MetaGPT-Ext
SevenFo Mar 30, 2025
78a7f14
feat: migrate DashScope provider from MetaGPT main repository
SevenFo Mar 30, 2025
e14fa0e
feat: migrate google-gemini-provider from MetaGPT to MetaGPT-Ext
SevenFo Mar 30, 2025
bf7a2bf
run pre-commit
SevenFo Mar 30, 2025
e85c4fa
run pre-commit
SevenFo Mar 30, 2025
7b41f43
run pre-commit
SevenFo Mar 30, 2025
bb41eee
run pre-commit
SevenFo Mar 30, 2025
fc0cf44
run pre-commit
SevenFo Mar 30, 2025
0a117af
run pre-commit
SevenFo Mar 30, 2025
5cabf4c
Merge anthropic provider split
SevenFo Mar 30, 2025
637ccca
Merge ark provider split
SevenFo Mar 30, 2025
f87dfc4
Merge azure openai provider split
SevenFo Mar 30, 2025
a6a302d
Merge bedrock provider split
SevenFo Mar 30, 2025
abcbed0
Merge dashscope provider split
SevenFo Mar 30, 2025
7182b4f
Merge google gemini provider split
SevenFo Mar 30, 2025
6af9c0d
remove redundant directory
SevenFo Mar 30, 2025
4d8cade
Merge bedrock provider split
SevenFo Mar 30, 2025
290bc9f
refactor: remove redundant UTF-8 encoding declarations
SevenFo Apr 6, 2025
ff91ed8
feat: add asyncio.run to all provider example code
SevenFo Apr 6, 2025
3279e53
Unify requirements-test.txt
SevenFo Apr 6, 2025
8004440
Fix example code dependencies for the anthropic provider
SevenFo Apr 6, 2025
9cb6ec5
update requirements.txt of azure-openai provider
SevenFo Apr 6, 2025
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -1,4 +1,4 @@
-default_stages: [ commit ]
+default_stages: [ pre-commit ]

# Install
# 1. pip install metagpt[dev]
59 changes: 59 additions & 0 deletions provider/metagpt-provider-anthropic/.gitignore
@@ -0,0 +1,59 @@
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/

# Test logs
test_logs/

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# VS Code
.vscode/
*.code-workspace

# PyCharm
.idea/
*.iml
*.iws
*.ipr

# Jupyter Notebook
.ipynb_checkpoints
44 changes: 44 additions & 0 deletions provider/metagpt-provider-anthropic/README.md
@@ -0,0 +1,44 @@
# MetaGPT Provider Anthropic

This package provides Anthropic (Claude) integration for MetaGPT.

## Installation

```bash
pip install metagpt-provider-anthropic
```

## Usage

```python
import asyncio
from metagpt.provider.anthropic import AnthropicLLM
from metagpt.core.configs.llm_config import LLMConfig

async def main():
    config = LLMConfig(
        api_type="anthropic",
        api_key="your-api-key",
        model="claude-3-opus-20240229"
    )

    # Initialize the Anthropic LLM
    llm = AnthropicLLM(config)

    # Ask a question
    response = await llm.aask("What is artificial intelligence?")
    print(response)

# Run the async function
if __name__ == "__main__":
    asyncio.run(main())
```

## Configuration

The following configuration parameters are supported:

- `api_type`: Must be set to "anthropic" to use this provider
- `api_key`: Your Anthropic API key
- `model`: The Claude model to use (default: "claude-3-opus-20240229")
- `base_url`: Optional base URL for the Anthropic API
@@ -0,0 +1,5 @@
#!/usr/bin/env python

from metagpt.provider.anthropic.base import AnthropicLLM

__all__ = ["AnthropicLLM"]
@@ -0,0 +1,89 @@
#!/usr/bin/env python

from anthropic import AsyncAnthropic
from anthropic.types import Message, Usage
from metagpt.core.configs.llm_config import LLMConfig, LLMType
Contributor: Is pre-commit not installed? The formatting looks off.

Author (SevenFo): This was run through `pre-commit run --all-files`, so it should be fine, right?

from metagpt.core.const import USE_CONFIG_TIMEOUT
from metagpt.core.logs import log_llm_stream
from metagpt.core.provider.base_llm import BaseLLM
from metagpt.core.provider.llm_provider_registry import register_provider


@register_provider([LLMType.ANTHROPIC, LLMType.CLAUDE])
class AnthropicLLM(BaseLLM):
    def __init__(self, config: LLMConfig):
        self.config = config
        self.__init_anthropic()

    def __init_anthropic(self):
        self.model = self.config.model
        self.aclient: AsyncAnthropic = AsyncAnthropic(api_key=self.config.api_key, base_url=self.config.base_url)

    def _const_kwargs(self, messages: list[dict], stream: bool = False) -> dict:
        kwargs = {
            "model": self.model,
            "messages": messages,
            "max_tokens": self.config.max_token,
            "stream": stream,
        }

        if self.use_system_prompt:
            # if the model supports a system prompt, extract and pass it separately
            if messages[0]["role"] == "system":
                kwargs["messages"] = messages[1:]
                kwargs["system"] = messages[0]["content"]  # set system prompt here

        if self.config.reasoning:
            kwargs["thinking"] = {"type": "enabled", "budget_tokens": self.config.reasoning_max_token}

        return kwargs

    def _update_costs(self, usage: Usage, model: str = None, local_calc_usage: bool = True):
        usage = {"prompt_tokens": usage.input_tokens, "completion_tokens": usage.output_tokens}
        super()._update_costs(usage, model)

    def get_choice_text(self, resp: Message) -> str:
        if len(resp.content) > 1:
            self.reasoning_content = resp.content[0].thinking
            text = resp.content[1].text
        else:
            text = resp.content[0].text
        return text

    async def _achat_completion(self, messages: list[dict], timeout: int = USE_CONFIG_TIMEOUT) -> Message:
        resp: Message = await self.aclient.messages.create(**self._const_kwargs(messages))
        self._update_costs(resp.usage, self.model)
        return resp

    async def acompletion(self, messages: list[dict], timeout: int = USE_CONFIG_TIMEOUT) -> Message:
        return await self._achat_completion(messages, timeout=self.get_timeout(timeout))

    async def _achat_completion_stream(self, messages: list[dict], timeout: int = USE_CONFIG_TIMEOUT) -> str:
        stream = await self.aclient.messages.create(**self._const_kwargs(messages, stream=True))
        collected_content = []
        collected_reasoning_content = []
        usage = Usage(input_tokens=0, output_tokens=0)

        async for event in stream:
            event_type = event.type
            if event_type == "message_start":
                usage.input_tokens = event.message.usage.input_tokens
                usage.output_tokens = event.message.usage.output_tokens
            elif event_type == "content_block_delta":
                delta_type = event.delta.type
                if delta_type == "thinking_delta":
                    collected_reasoning_content.append(event.delta.thinking)
                elif delta_type == "text_delta":
                    content = event.delta.text
                    log_llm_stream(content)
                    collected_content.append(content)
            elif event_type == "message_delta":
                usage.output_tokens = event.usage.output_tokens  # update final output_tokens

        log_llm_stream("\n")
        self._update_costs(usage)
        full_content = "".join(collected_content)
        if collected_reasoning_content:
            self.reasoning_content = "".join(collected_reasoning_content)

        return full_content
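
The streaming handler above folds `thinking_delta` and `text_delta` events into separate buffers before joining them. A minimal self-contained sketch of that accumulation logic, using hypothetical stand-in event classes rather than the real `anthropic.types` objects:

```python
from dataclasses import dataclass

# Stand-in event shapes; hypothetical simplifications of the Anthropic
# stream events handled in _achat_completion_stream above.
@dataclass
class Delta:
    type: str
    text: str = ""
    thinking: str = ""

@dataclass
class Event:
    type: str
    delta: Delta = None

def accumulate_stream(events):
    """Collect text and reasoning deltas the way the provider's loop does."""
    collected_content, collected_reasoning = [], []
    for event in events:
        if event.type == "content_block_delta":
            if event.delta.type == "thinking_delta":
                collected_reasoning.append(event.delta.thinking)
            elif event.delta.type == "text_delta":
                collected_content.append(event.delta.text)
    return "".join(collected_content), "".join(collected_reasoning)

events = [
    Event("content_block_delta", Delta("thinking_delta", thinking="Let me think. ")),
    Event("content_block_delta", Delta("text_delta", text="I'm ")),
    Event("content_block_delta", Delta("text_delta", text="Claude")),
]
full_content, reasoning = accumulate_stream(events)
print(full_content)  # I'm Claude
```

This mirrors only the `content_block_delta` branch; the real handler additionally tracks token usage from the `message_start` and `message_delta` events.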
@@ -0,0 +1,8 @@
#!/usr/bin/env python

"""Utility functions for the Anthropic provider."""


def get_default_anthropic_model():
    """Return the default Anthropic model."""
    return "claude-3-opus-20240229"
4 changes: 4 additions & 0 deletions provider/metagpt-provider-anthropic/requirements-test.txt
@@ -0,0 +1,4 @@
pytest>=7.3.1
pytest-asyncio>=0.21.0
pytest-mock>=3.10.0
pytest-cov>=4.1.0
2 changes: 2 additions & 0 deletions provider/metagpt-provider-anthropic/requirements.txt
@@ -0,0 +1,2 @@
anthropic>=0.15.0
metagpt-core>=1.0.0
32 changes: 32 additions & 0 deletions provider/metagpt-provider-anthropic/setup.py
@@ -0,0 +1,32 @@
#!/usr/bin/env python

from setuptools import find_namespace_packages, setup

with open("README.md", "r", encoding="utf-8") as fh:
    long_description = fh.read()

with open("requirements.txt", "r", encoding="utf-8") as f:
    required = f.read().splitlines()

setup(
    name="metagpt-provider-anthropic",
    version="0.1.0",
    description="Anthropic (Claude) provider for MetaGPT",
    long_description=long_description,
    long_description_content_type="text/markdown",
    author="MetaGPT Team",
    author_email="[email protected]",
    url="https://github.com/geekan/MetaGPT-Ext",
    packages=find_namespace_packages(include=["metagpt.*"], exclude=["*.tests", "*.tests.*", "tests.*", "tests"]),
    install_requires=required,
    python_requires=">=3.9",
    classifiers=[
        "Development Status :: 3 - Alpha",
        "Intended Audience :: Developers",
        "License :: OSI Approved :: MIT License",
        "Programming Language :: Python :: 3",
        "Programming Language :: Python :: 3.9",
        "Programming Language :: Python :: 3.10",
        "Programming Language :: Python :: 3.11",
    ],
)
Empty file.
10 changes: 10 additions & 0 deletions provider/metagpt-provider-anthropic/tests/mock_llm_config.py
@@ -0,0 +1,10 @@
#!/usr/bin/env python
"""
Mock LLM configurations for testing
"""

from metagpt.core.configs.llm_config import LLMConfig

mock_llm_config_anthropic = LLMConfig(
    api_type="anthropic", api_key="xxx", base_url="https://api.anthropic.com", model="claude-3-opus-20240229"
)
10 changes: 10 additions & 0 deletions provider/metagpt-provider-anthropic/tests/pytest.ini
@@ -0,0 +1,10 @@
[pytest]
testpaths = tests
python_files = test_*.py
python_classes = Test*
python_functions = test_*
addopts = -xvs
log_file = test_logs/pytest.log
log_file_level = DEBUG
log_file_format = %(asctime)s [%(levelname)8s] %(message)s (%(filename)s:%(lineno)s)
log_file_date_format = %Y-%m-%d %H:%M:%S
67 changes: 67 additions & 0 deletions provider/metagpt-provider-anthropic/tests/req_resp_const.py
@@ -0,0 +1,67 @@
#!/usr/bin/env python
"""
Default request & response data for provider unittest
"""

from anthropic.types import (
ContentBlockDeltaEvent,
Message,
MessageStartEvent,
TextBlock,
TextDelta,
)
from anthropic.types import Usage as AnthropicUsage
from metagpt.core.provider.base_llm import BaseLLM

# Common test data
prompt = "who are you?"
messages = [{"role": "user", "content": prompt}]
resp_cont_tmpl = "I'm {name}"
default_resp_cont = resp_cont_tmpl.format(name="GPT")


# For Anthropic
def get_anthropic_response(name: str, stream: bool = False) -> Message:
    if stream:
        return [
            MessageStartEvent(
                message=Message(
                    id="xxx",
                    model=name,
                    role="assistant",
                    type="message",
                    content=[TextBlock(text="", type="text")],
                    usage=AnthropicUsage(input_tokens=10, output_tokens=10),
                ),
                type="message_start",
            ),
            ContentBlockDeltaEvent(
                index=0,
                delta=TextDelta(text=resp_cont_tmpl.format(name=name), type="text_delta"),
                type="content_block_delta",
            ),
        ]
    else:
        return Message(
            id="xxx",
            model=name,
            role="assistant",
            type="message",
            content=[TextBlock(text=resp_cont_tmpl.format(name=name), type="text")],
            usage=AnthropicUsage(input_tokens=10, output_tokens=10),
        )


# For llm general chat functions call
async def llm_general_chat_funcs_test(llm: BaseLLM, prompt: str, messages: list[dict], resp_cont: str):
    resp = await llm.aask(prompt, stream=False)
    assert resp == resp_cont

    resp = await llm.aask(prompt)
    assert resp == resp_cont

    resp = await llm.acompletion_text(messages, stream=False)
    assert resp == resp_cont

    resp = await llm.acompletion_text(messages, stream=True)
    assert resp == resp_cont
45 changes: 45 additions & 0 deletions provider/metagpt-provider-anthropic/tests/test_anthropic_api.py
@@ -0,0 +1,45 @@
#!/usr/bin/env python
"""
Test for the Anthropic (Claude) provider
"""

import pytest
from anthropic.resources.completions import Completion
from metagpt.provider.anthropic import AnthropicLLM
from tests.mock_llm_config import mock_llm_config_anthropic
from tests.req_resp_const import (
get_anthropic_response,
llm_general_chat_funcs_test,
messages,
prompt,
resp_cont_tmpl,
)

name = "claude-3-opus-20240229"
resp_cont = resp_cont_tmpl.format(name=name)


async def mock_anthropic_messages_create(
    self, messages: list[dict], model: str, stream: bool = True, max_tokens: int = None, system: str = None
) -> Completion:
    if stream:

        async def aresp_iterator():
            resps = get_anthropic_response(name, stream=True)
            for resp in resps:
                yield resp

        return aresp_iterator()
    else:
        return get_anthropic_response(name)


@pytest.mark.asyncio
async def test_anthropic_acompletion(mocker):
    mocker.patch("anthropic.resources.messages.AsyncMessages.create", mock_anthropic_messages_create)

    anthropic_llm = AnthropicLLM(mock_llm_config_anthropic)
    resp = await anthropic_llm.acompletion(messages)
    assert resp.content[0].text == resp_cont

    await llm_general_chat_funcs_test(anthropic_llm, prompt, messages, resp_cont)