Simplified Chinese | English | Project Homepage | Documentation
- 💫 Complete end-to-end solution for creating digital avatars, including chat data export, preprocessing, model training, and deployment
- 💬 Fine-tune LLMs on your chat history, with support for image modal data, infusing them with that authentic "flavor"
- 🔗 Integrate with Telegram, WhatsApp (coming soon) to create your own digital avatar
- 🛡️ Privacy information filtering with localized fine-tuning and deployment for secure and controllable data
| Platform | Text | Images | Voice | Video | Animated Emojis/Stickers | Links (Sharing) | Quote | Forward | Location | Files |
|---|---|---|---|---|---|---|---|---|---|---|
| Telegram | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ | |
| WhatsApp | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 |
| Discord | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 |
| Slack | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 |
| Platform | Deployment Support |
|---|---|
| Telegram | ✅ |
| WhatsApp | 🚧 |
| Discord | ✅ |
| Slack | ✅ |
Important
- WeClone is still in a rapid iteration phase; its current performance does not represent the final results.
- The effectiveness of LLM fine-tuning depends largely on model size and on the quantity and quality of the chat data. In principle, larger models with more data yield better results.
- 7B models easily become "dumb"; 14B models can just barely hold a conversation; 32B+ models perform much better.
- The Windows environment has not been rigorously tested; you can use WSL as the runtime environment.
[25/07/10] Added Telegram as a data source
[25/06/05] Added support for fine-tuning on image modal data
The project uses the Qwen2.5-VL-7B-Instruct model by default and fine-tunes it with the LoRA method in the SFT stage. You can also use other models and methods supported by LLaMA Factory.
Estimated VRAM requirements:
| Method | Precision (bits) | 7B | 14B | 30B | 70B | x B |
|---|---|---|---|---|---|---|
| Full (bf16 or fp16) | 32 | 120GB | 240GB | 600GB | 1200GB | 18x GB |
| Full (pure_bf16) | 16 | 60GB | 120GB | 300GB | 600GB | 8x GB |
| Freeze/LoRA/GaLore/APOLLO/BAdam | 16 | 16GB | 32GB | 64GB | 160GB | 2x GB |
| QLoRA | 8 | 10GB | 20GB | 40GB | 80GB | x GB |
| QLoRA | 4 | 6GB | 12GB | 24GB | 48GB | x/2 GB |
| QLoRA | 2 | 4GB | 8GB | 16GB | 24GB | x/4 GB |
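If VRAM is tight, the table suggests QLoRA as the cheapest option. In LLaMA Factory, QLoRA is switched on through a quantization setting; below is a minimal sketch of what that could look like in `settings.jsonc`, assuming WeClone forwards `train_sft_args` to LLaMA Factory and exposes its `quantization_bit` key here (both are assumptions, not documented behavior):

```jsonc
{
  "train_sft_args": {
    // Assumption: LLaMA Factory's QLoRA switch, passed through by WeClone.
    // Per the table above, 4-bit QLoRA needs roughly x/2 GB for an x B model.
    "quantization_bit": 4
  }
}
```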
- CUDA installation (skip if already installed; requires version 12.6 or above)
- It is recommended to use uv to install dependencies; it is a very fast Python environment manager. After installing uv, use the following commands to create a new Python environment and install the dependencies:
```bash
git clone https://github.com/xming521/WeClone.git && cd WeClone
uv venv .venv --python=3.10
source .venv/bin/activate # Windows: .venv\Scripts\activate
uv pip install --group main -e .
```
- Copy the configuration file template and rename it to `settings.jsonc`; make all subsequent configuration changes in this file:

```bash
cp examples/tg.template.jsonc settings.jsonc
```
Note
Training and inference related configurations are unified in the file `settings.jsonc`.
- Use the following command to test whether the CUDA environment is correctly configured and can be recognized by PyTorch (not needed on macOS):

```bash
python -c "import torch; print('CUDA Available:', torch.cuda.is_available());"
```
- (Optional) Install FlashAttention to accelerate training and inference:

```bash
uv pip install flash-attn --no-build-isolation
```
It is recommended to use Hugging Face to download models, or use the following commands:

```bash
git lfs install
git clone https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct models/Qwen2.5-VL-7B-Instruct
```
Please use Telegram Desktop to export chat records: in the chat interface, click the top-right menu and choose "Export chat history". Select Photos for the message types and JSON for the format. You can export multiple contacts (group chat records are not recommended), then place the exported `ChatExport_*` folders in the `./dataset/telegram` directory; that is, put the chat record folders of different people together under `./dataset/telegram`.
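For reference, the resulting layout might look like the sketch below; the export folder names are whatever Telegram Desktop generates, and each export typically contains a `result.json` plus media subfolders:

```
dataset/telegram/
├── ChatExport_2025-07-01/
│   ├── result.json
│   └── photos/
└── ChatExport_2025-07-02/
    ├── result.json
    └── photos/
```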
- First, modify `language`, `platform`, and `include_type` in the configuration file according to your needs.
- If you use Telegram, you need to set `telegram_args.my_id` in the configuration file to your own Telegram user ID.
- By default, the project uses Microsoft Presidio to remove phone numbers, email addresses, credit card numbers, IP addresses, geographic location names, international bank account numbers, cryptocurrency wallet addresses, age information, and generic ID numbers from the data, but it cannot guarantee 100% identification.
- Therefore, a blocklist `blocked_words` is provided in `settings.jsonc`, allowing users to manually add words or phrases they want to filter (by default, the entire sentence containing a blocked word is removed). A configuration sketch follows this list.
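A minimal sketch of these fields, assuming they sit at the top level of `settings.jsonc` as shown (the exact nesting in the template may differ, and all values here are illustrative placeholders):

```jsonc
{
  // Language of the chat data and the platform it was exported from
  "language": "en",
  "platform": "telegram",
  // Which message types to keep during preprocessing (illustrative values)
  "include_type": ["text", "image"],
  "telegram_args": {
    // Your own Telegram user ID, so your messages become the training targets
    "my_id": "user1234567"
  },
  // Any sentence containing one of these words is removed entirely
  "blocked_words": ["my-home-address", "123456"]
}
```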
Important
🚨 Please be sure to protect personal privacy and do not leak personal information!
- Execute the following command to process the data. You can modify `make_dataset_args` in `settings.jsonc` according to your own chat style:

```bash
weclone-cli make-dataset
```
More Parameter Details: Data Preprocessing
- (Optional) Modify `model_name_or_path`, `template`, and `lora_target` in `settings.jsonc` to select another locally downloaded model.
- Modify `per_device_train_batch_size` and `gradient_accumulation_steps` to adjust VRAM usage.
- You can modify parameters such as `num_train_epochs`, `lora_rank`, and `lora_dropout` in `train_sft_args` based on your dataset's quantity and quality; see the sketch below.
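Putting those keys together, here is a sketch of what this part of `settings.jsonc` might look like; the key names come from the steps above, while the values and exact nesting are illustrative assumptions rather than recommended defaults:

```jsonc
{
  // Path of the locally downloaded model from the earlier step
  "model_name_or_path": "models/Qwen2.5-VL-7B-Instruct",
  "template": "qwen2_vl",         // assumption: pick the template matching your model
  "lora_target": "q_proj,v_proj", // illustrative LoRA target modules
  "train_sft_args": {
    "per_device_train_batch_size": 1, // lower this to reduce VRAM usage
    "gradient_accumulation_steps": 8, // raise this to keep the effective batch size
    "num_train_epochs": 2,
    "lora_rank": 8,
    "lora_dropout": 0.1
  }
}
```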
```bash
weclone-cli train-sft
```
Uncomment the `deepspeed` line in `settings.jsonc` and use the following commands for multi-GPU training:

```bash
uv pip install "deepspeed<=0.16.9"
deepspeed --num_gpus=number_of_gpus weclone/train/train_sft.py
```
Test suitable `temperature` and `top_p` values with the browser demo, then record them in `infer_args` in `settings.jsonc` for subsequent inference:

```bash
weclone-cli webchat-demo
```
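Once you have settled on values, the corresponding entry might look like the sketch below (the `infer_args`, `temperature`, and `top_p` names come from the step above; the numbers are placeholders, not recommendations):

```jsonc
{
  "infer_args": {
    // Sampling parameters that felt right in the web demo
    "temperature": 0.7,
    "top_p": 0.9
  }
}
```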
```bash
weclone-cli server
```
The test set does not include questions asking for personal information, only daily conversation. Test results are saved in test_result-my.txt.

```bash
weclone-cli server
weclone-cli test-model
```
Tip
We're looking for interesting examples of native English speakers chatting with WeClone! Feel free to share them with us on Twitter.
AstrBot is an easy-to-use multi-platform LLM chatbot and development framework ✨ Supports Discord, Telegram, Slack, Feishu and other platforms.
Usage steps:
- Deploy AstrBot
- Deploy messaging platforms such as Discord, Telegram, or Slack in AstrBot
- Execute `weclone-cli server` to start the API service
- Add a new service provider in AstrBot: select the OpenAI type, fill in the API Base URL according to AstrBot's deployment method (e.g., for a Docker deployment it might be http://172.17.0.1:8005/v1), set the model to gpt-3.5-turbo, and enter any API key
- Tool calling is not supported after fine-tuning, so first turn off the default tools by sending the command `/tool off_all` on the messaging platform; otherwise the fine-tuned effect won't be visible.
- Set the system prompt in AstrBot according to the `default_system` value used during fine-tuning.
Important
Check the api_service logs to ensure that the request parameters sent to the large-model service match those used during fine-tuning as closely as possible, and turn off all tool plugin capabilities.
LangBot is an easy-to-use open-source LLM chatbot platform suitable for various scenarios. It connects to various global instant messaging platforms. You can set up your IM bot in just 5 minutes.

- Deploy LangBot
- Add a bot (e.g., Discord, Telegram, Slack, or Lark) in LangBot
- Execute `weclone-cli server` to start the WeClone API service
- On the model page, add a new model: name it `gpt-3.5-turbo`, select OpenAI as the provider, fill in the request URL with WeClone's address (refer to the documentation for detailed connection methods), and enter any API key

- Select the model you just added in the pipeline configuration, or modify the prompt configuration

- Support more data sources
- Richer context: including contextual conversations, chat participant information, time, etc.
- Memory support
- Multimodal support: image support already implemented
- Data augmentation
- GUI support
- CoT (Chain of Thought) support
It is also recommended to use DeepWiki for problem solving.
Any Issues/Pull Requests are welcome!
You can contribute by checking Issues or helping review PRs (Pull Requests). For new feature additions, please discuss through Issues first.
Development environment:
```bash
uv pip install --group dev -e .
pre-commit install
```
The project uses `pytest` for testing, `pyright` for type checking, and `ruff` for code formatting. Before submitting your code, you should run `pytest tests` to ensure all tests pass.
Thanks to the following code contributors and other community members for their contributions.
This project also benefits from excellent open source projects such as PyWxDump, LLaMA-Factory, AstrBot, LangBot, and others.
Caution
This project is for learning, research, and experimental purposes only. There are significant risks in using it in production environments; please assess them carefully. Do not use it for illegal purposes; you bear the consequences yourself.
Important
WeClone is currently not partnered with any platform and has not issued any cryptocurrency. The only official website is: weclone.love. Beware of imitations.
Click to view disclaimer terms
- Users should fully understand and bear all related risks when using this project
- The project authors are not responsible for any direct or indirect losses arising from the use of this project
  - Including but not limited to: data loss, financial loss, legal disputes, personal reputation damage, social relationship impact, psychological trauma, career development obstacles, business reputation damage, etc.
- Use for commercial purposes or providing external services requires bearing all risks yourself
- All consequences that may result from production environment use (including but not limited to service interruption, data security issues, user complaints, legal liability, etc.) are entirely borne by the user
- It is recommended to conduct thorough testing, verification and risk assessment before using in production environments
- Fine-tuned models may produce inaccurate, harmful or misleading content
- Model outputs do not represent the views or intentions of real persons
- Users should conduct manual review and verification of model outputs
- Users should ensure that uploaded chat records and other data comply with relevant laws and regulations
- Users should obtain appropriate authorization from data-related persons
- This project is not responsible for data leakage or privacy infringement
- Users should ensure that using this project complies with local laws and regulations
- Involving artificial intelligence, data protection, intellectual property and other related laws
- Users bear the consequences of illegal use
- This project is provided "as is" without any express or implied warranties
- Authors do not promise to provide continuous technical support or maintenance
- No guarantee of project stability, reliability or applicability
When using digital avatars generated by this project, it is strongly recommended to:
- Clearly identify as "AI Bot" or "Digital Avatar" at the beginning of each conversation
- Prominently mark "AI-generated content" in the user interface
- Avoid letting users mistake it for real human conversation, which could cause risks
If you must use in production environments, it is recommended to:
- Conduct comprehensive security testing
- Establish complete content review mechanisms
- Develop emergency response plans
- Purchase appropriate insurance coverage
- Consult legal professionals for advice
This disclaimer may be revised with project updates, users should regularly check the latest version. Continuing to use this project indicates agreement with the latest disclaimer terms.
Once you download, clone, modify, distribute or use the code or models of this project in any way, it indicates that you have fully read, understood and agreed to unconditionally accept all terms of this disclaimer.
Please carefully read and understand all contents of this disclaimer, ensuring strict compliance with relevant regulations when using this project.
Tip
If this project is helpful to you, or if you are interested in its future development, please give the project a Star. Thank you!