Bug description
When using gpt-5, I get the following error:
Finished call to 'metagpt.actions.action_node.ActionNode._aask_v1' after 16.915(s), this was the 6th time calling it. exp: Error code: 400 - {'error': {'message': "Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead.", 'type': 'invalid_request_error', 'param': 'max_tokens', 'code': 'unsupported_parameter'}}
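For context, the error indicates that this model rejects the legacy 'max_tokens' request parameter and expects 'max_completion_tokens' instead. A minimal sketch of the remapping is below; this is not MetaGPT's internal code, and the helper name and model-prefix check are assumptions for illustration.

```python
# Hedged sketch (not MetaGPT internals): rename 'max_tokens' to
# 'max_completion_tokens' for models that reject the legacy parameter,
# as the 400 error above suggests.
from openai import AsyncOpenAI

# Assumption: which model prefixes require the newer parameter.
NEWER_MODEL_PREFIXES = ("gpt-5",)


def adapt_token_param(model: str, params: dict) -> dict:
    """Return params with 'max_tokens' renamed when the model requires it."""
    if model.startswith(NEWER_MODEL_PREFIXES) and "max_tokens" in params:
        params = dict(params)
        params["max_completion_tokens"] = params.pop("max_tokens")
    return params


async def ask(prompt: str, model: str = "gpt-5") -> str:
    client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment
    kwargs = adapt_token_param(model, {"max_tokens": 4096})
    resp = await client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        **kwargs,
    )
    return resp.choices[0].message.content
```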
Bug solved method
Used the latest Docker image.
Environment information
- LLM type and model name: GPT-5
- System version: Windows 11 (Docker image)
- Python version:
- MetaGPT version or branch: main
Screenshots or logs
Traceback (most recent call last):
File "/app/metagpt/metagpt/utils/common.py", line 650, in wrapper
result = await func(self, *args, **kwargs)
File "/app/metagpt/metagpt/team.py", line 134, in run
await self.env.run()
openai.BadRequestError: Error code: 400 - {'error': {'message': "Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead.", 'type': 'invalid_request_error', 'param': 'max_tokens', 'code': 'unsupported_parameter'}}