
Releases: fluffypony/llm-chatifier

v0.2.3: Intelligent Retry & UX Improvements

08 Jun 11:24
2c1fece

🔧 Release 0.2.3 - Intelligent Retry & UX Improvements

This release improves the user experience by adding intelligent retry logic that handles temporary network failures gracefully, without prompting the user.

Download: Pre-built binaries available below or install via pip install llm-chatifier==0.2.3

🔧 Improvements

  • Silent First Retry - Network failures now retry automatically without showing errors or prompting the user
  • Smart Error Display - Only shows error messages after the second failure, filtering out temporary glitches
  • Text Restoration - Failed messages are automatically restored to the input prompt for easy retry
  • Seamless Recovery - Users can simply press Enter to retry the exact same message after failures

🐛 Bug Fixes

  • Eliminated Annoying Prompts - No more "Request failed. Retry? (attempt 2/2)" interruptions for temporary issues
  • Better Error Categorization - Auth errors are handled immediately while network errors retry silently (see the sketch below)
  • Improved User Flow - Failed requests now gracefully restore input instead of requiring manual retyping
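
A minimal sketch of the retry behaviour described above: network errors get one silent retry, only the second failure is shown, and auth errors are surfaced immediately. The exception classes and the send callback are hypothetical stand-ins, not llm-chatifier's actual internals.

import time

class AuthError(Exception):        # hypothetical: authentication rejected by the API
    pass

class NetworkError(Exception):     # hypothetical: timeout or connection reset
    pass

def send_with_retry(send, message, max_attempts=2):
    """Retry network failures silently; only report the final failure."""
    for attempt in range(1, max_attempts + 1):
        try:
            return send(message)
        except AuthError:
            raise                             # auth problems are reported immediately
        except NetworkError as exc:
            if attempt < max_attempts:
                time.sleep(1)                 # silent first retry, no error shown
                continue
            print(f"Request failed: {exc}")   # shown only after the second failure
            return None                       # caller restores `message` to the input prompt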

Installation

pip install llm-chatifier==0.2.3


v0.2.2

08 Jun 10:30
8a06443
v0.2.2 - Super Robust Error Handling: Never crash on API errors, add …

v0.2.1: Enhanced API Detection & Perplexity Support

08 Jun 10:01
d77e935

🔧 Release 0.2.1 - Enhanced API Detection & Perplexity Support

This release significantly improves API detection reliability and adds native support for Perplexity AI's API endpoints.

Download: Pre-built binaries available below or install via pip install llm-chatifier==0.2.1

✨ New Features

  • Perplexity AI Support - Native detection and configuration for api.perplexity.ai
  • Enhanced Endpoint Detection - Automatic fallback to /chat/completions for compatible APIs

🔧 Improvements

  • Robust Error Handling - API detection now gracefully handles 404 responses without failing
  • Dynamic Endpoint Configuration - Clients automatically use the correct endpoints based on detection results
  • Better Domain Mapping - Enhanced domain hints for popular API providers

🐛 Bug Fixes

  • Detection Reliability - Fixed detection failures when APIs return 404 for some endpoints (see the sketch below)
  • Endpoint Configuration - Fixed hardcoded endpoint usage that prevented non-standard APIs from working
  • Exception Handling - Added proper error recovery during endpoint testing
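
A rough illustration of the detection flow above, assuming detection probes a list of candidate paths and treats a 404 as a signal to keep looking rather than a hard failure. The candidate list and helper name are illustrative, not the actual implementation.

import requests

# Illustrative candidates only; the real probing order may differ.
CANDIDATE_ENDPOINTS = ["/v1/chat/completions", "/api/chat", "/chat/completions"]

def detect_endpoint(base_url, headers=None, timeout=5):
    """Return the first path that doesn't 404; None if nothing matches."""
    for path in CANDIDATE_ENDPOINTS:
        url = base_url.rstrip("/") + path
        try:
            resp = requests.post(url, json={}, headers=headers or {}, timeout=timeout)
        except requests.RequestException:
            continue                 # network error on this path: try the next candidate
        if resp.status_code == 404:
            continue                 # route doesn't exist here: keep looking
        return path                  # 200/400/401 etc. all mean the route exists
    return None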

📦 Installation

Via pip (recommended)

pip install llm-chatifier==0.2.1

Pre-built Binaries

Platform-specific zip files available below:

  • Linux (amd64) - llm-chatifier-linux-amd64.zip
  • Windows (amd64) - llm-chatifier-windows-amd64.exe.zip
  • macOS (amd64) - llm-chatifier-macos-amd64.zip


v0.2.0: Enhanced Detection & Performance

08 Jun 09:02
6cfc735

🚀 Release 0.2.0 - Enhanced Detection & Performance

Major improvements to API autodetection speed and debugging capabilities.

Download: Pre-built binaries available below or install via pip install llm-chatifier==0.2.0

✨ New Features

  • Smart TLS Failure Handling - Autodetection now skips HTTPS immediately after TLS handshake failures, dramatically improving detection speed
  • Enhanced Verbose Output - Verbose flag (-v) now shows all autodetection attempts including ports, protocols, API types, and endpoints being tested

🔧 Improvements

  • Optimized Test Prompt - API testing now uses a minimal prompt that requests a short response, making validation faster
  • Intelligent Protocol Switching - After detecting TLS failures, automatically switches to HTTP testing instead of retrying failed TLS connections (see the sketch below)
  • Better Debug Logging - Clear logging when TLS is skipped due to failures
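
A simplified sketch of the TLS-skip idea, assuming a requests-based probe; the real detection code may be structured differently.

import requests

def choose_scheme(host, port, timeout=5):
    """Try HTTPS once; on a TLS handshake failure, fall back to HTTP for this port."""
    try:
        requests.get(f"https://{host}:{port}/", timeout=timeout)
        return "https"
    except (requests.exceptions.SSLError, requests.exceptions.ConnectTimeout):
        pass                         # handshake failed or timed out: skip further HTTPS tests
    except requests.RequestException:
        return None                  # port unreachable or other hard failure: give up here
    try:
        requests.get(f"http://{host}:{port}/", timeout=timeout)
        return "http"
    except requests.RequestException:
        return None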

📦 Installation

Via pip (recommended)

pip install llm-chatifier==0.2.0

Pre-built Binaries

Platform-specific zip files available below:

  • Linux (amd64) - llm-chatifier-linux-amd64.zip
  • Windows (amd64) - llm-chatifier-windows-amd64.exe.zip
  • macOS (amd64) - llm-chatifier-macos-amd64.zip


v0.1.9: Smart TLS Failure Handling

08 Jun 08:27
39cf398

🔧 Release 0.1.9 - Smart TLS Failure Handling

Fixed autodetection to skip HTTPS immediately after TLS handshake failures, dramatically improving detection speed when servers don't support TLS.

Download: Pre-built binaries available below or install via pip install llm-chatifier==0.1.9

What's Fixed

  • Autodetection now detects TLS handshake failures (timeouts, SSL errors) on first HTTPS attempt
  • Immediately skips remaining HTTPS endpoints for that port and switches to HTTP testing
  • Eliminates wasted time repeatedly trying TLS on servers that don't support it
  • Improved verbose logging shows when TLS is skipped due to failures

Installation

pip install llm-chatifier==0.1.9

v0.1.8: Enhanced Verbose Output

08 Jun 08:20
6d6916b

🔧 Release 0.1.8 - Enhanced Verbose Output

Fixed the verbose flag to properly show the detailed autodetection process, making it much easier to debug API connection issues.

Download: Pre-built binaries available below or install via pip install llm-chatifier==0.1.8

What's Fixed

  • Verbose flag (-v) now uses the DEBUG logging level instead of INFO, showing all autodetection attempts (see the sketch below)
  • Users can now see exactly which ports, protocols, API types, and endpoints are being tested during autodetection
  • Improved debugging experience for API connection troubleshooting
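
The underlying pattern is the standard argparse-plus-logging setup shown below; this is a generic sketch, not llm-chatifier's actual CLI code.

import argparse
import logging

parser = argparse.ArgumentParser(prog="llm-chatifier")
parser.add_argument("-v", "--verbose", action="store_true")
args = parser.parse_args()

# With -v, DEBUG shows every autodetection attempt (port, protocol, API type, endpoint);
# without it, the quieter INFO level is used.
logging.basicConfig(level=logging.DEBUG if args.verbose else logging.INFO)
logging.debug("probing https://host:443/v1/chat/completions")  # visible only with -v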

Installation

pip install llm-chatifier==0.1.8

v0.1.7: Windows Build Fix

08 Jun 07:17
a5c13e4

🔧 Release 0.1.7 - Windows Build Fix

Fixed Windows binary packaging in GitHub Actions by adding an explicit shell: bash directive. All platforms now properly generate zip files for release artifacts.

Download: Pre-built binaries available below or install via pip install llm-chatifier==0.1.7

What's Fixed

  • Added explicit shell: bash to GitHub Actions workflow steps
  • Windows runners now correctly execute bash commands instead of trying to run them in PowerShell
  • All platforms (Windows, macOS, Linux) now properly create zip file artifacts for releases

Installation

pip install llm-chatifier==0.1.7


v0.1.6: Enhanced Release Infrastructure

08 Jun 07:01
53a2af6

🚀 Release 0.1.6 - Enhanced Release Infrastructure

🔧 GitHub Actions & Release Improvements

  • Fixed 403 release errors - added proper contents: write permissions for automated releases
  • Resolved build cancellations - disabled automatic job cancellation to prevent "Operation cancelled by user" errors
  • Updated deprecated actions - migrated from upload-artifact@v3 to v4 to eliminate deprecation warnings
  • Improved artifact packaging - binaries are now packaged as zip files for easier distribution

🏗️ Build System Enhancements

  • Better artifact management - excluded build directories from version control with updated .gitignore
  • Cross-platform binaries - reliable Windows, macOS, and Linux builds with proper packaging
  • Streamlined releases - automated zip file creation and publishing to GitHub releases
  • Enhanced concurrency controls - prevent workflow interference and race conditions

🐛 Infrastructure Fixes

  • Eliminated build failures from deprecated GitHub Actions components
  • Prevented race conditions in concurrent workflow runs
  • Fixed permission issues that were blocking automated release creation
  • Improved download experience with compressed binary distributions

📦 Installation

Via pip (recommended)

pip install llm-chatifier==0.1.6

Pre-built Binaries

Download platform-specific zip files from GitHub Releases (https://github.com/fluffypony/llm-chatifier/releases/tag/v0.1.6):

  • Linux (amd64) - llm-chatifier-linux-amd64.zip
  • Windows (amd64) - llm-chatifier-windows-amd64.exe.zip
  • macOS (amd64) - llm-chatifier-macos-amd64.zip

Extract and run the binary directly - no installation required!

🔧 Usage Examples

# Check version
llm-chatifier --version

# Connect to OpenAI API
llm-chatifier api.openai.com

# Use with OpenRouter and specific model
llm-chatifier openrouter.ai -m gpt-4

# Verbose mode to see connection details
llm-chatifier api.anthropic.com -v

🎯 What's Fixed

  • GitHub Actions workflows now complete successfully without errors
  • Binary releases are automatically published as user-friendly zip files
  • No more mysterious build cancellations or permission errors
  • Deprecated action warnings completely eliminated
  • Cleaner download experience for binary users


v0.1.5: Infrastructure Improvements

08 Jun 06:44
4a2483d

🚀 Release 0.1.5 - Infrastructure Improvements

🔧 GitHub Actions Fixes

  • Fixed 403 release errors - added proper contents: write permissions
  • Resolved build cancellations - disabled automatic job cancellation to prevent "Operation cancelled by user" errors
  • Updated deprecated actions - migrated from upload-artifact@v3 to v4
  • Improved build reliability - added concurrency controls to prevent interference between builds

🏗️ Build System Enhancements

  • Better artifact management - excluded build directories from version control
  • Cross-platform binaries - reliable Windows, macOS, and Linux builds
  • Streamlined releases - automated binary publishing to GitHub releases

🐛 Infrastructure Fixes

  • Eliminated build failures from deprecated GitHub Actions
  • Prevented race conditions in concurrent workflow runs
  • Fixed permission issues blocking release creation

📦 Installation

pip install llm-chatifier==0.1.5

💾 Binaries

Pre-built binaries are now available for:

  • Linux (amd64) - llm-chatifier-linux-amd64
  • Windows (amd64) - llm-chatifier-windows-amd64.exe
  • macOS (amd64) - llm-chatifier-macos-amd64

Download from GitHub Releases

🎯 What's Fixed

  • GitHub Actions workflows now complete successfully
  • Binary releases are automatically published
  • No more mysterious build cancellations
  • Deprecated action warnings eliminated

View on PyPI

v0.1.2: Smart API Key Handling & CLI Enhancements

08 Jun 05:50
ca7c8b3

🚀 Major Improvements

🔐 Smart API Key Handling

  • Multi-key authentication - automatically tries all relevant environment variables (see the sketch below)
  • Better error classification - distinguishes auth errors from model requirement errors
  • Intelligent fallback - stops trying additional keys when the error is a missing model, not authentication
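
A rough sketch of the multi-key fallback described above. The environment variable names and the try_key callback are assumptions for illustration, not the tool's real logic.

import os

# Illustrative variable names; the real candidate list may differ.
CANDIDATE_KEY_VARS = ["OPENAI_API_KEY", "OPENROUTER_API_KEY", "API_KEY"]

def first_working_key(try_key):
    """Try each configured key; stop early when the failure isn't about auth."""
    for var in CANDIDATE_KEY_VARS:
        key = os.environ.get(var)
        if not key:
            continue
        error = try_key(key)          # hypothetical: returns None, "auth", or "model_required"
        if error is None:
            return key                # this key works
        if error == "model_required":
            raise RuntimeError("A model must be specified (e.g. -m gpt-4)")
        # error == "auth": this key was rejected, try the next one
    return None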

🛠️ CLI Enhancements

  • --version flag for easy version checking
  • Single-source versioning from pyproject.toml, eliminating version duplication (see the sketch below)
  • Better error messages for OpenRouter and similar APIs
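
One common way to single-source a version from pyproject.toml is to read the installed package's metadata, as sketched below; whether llm-chatifier uses exactly this approach is an assumption.

from importlib.metadata import PackageNotFoundError, version

try:
    # The version number lives only in pyproject.toml; the installed metadata mirrors it.
    __version__ = version("llm-chatifier")
except PackageNotFoundError:
    __version__ = "0.0.0.dev0"   # e.g. running from an uninstalled source checkout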

🐛 Bug Fixes

  • Fixed empty response handling for APIs returning 204 No Content
  • Resolved authentication issues with OpenRouter, which requires a model to be specified
  • Improved API detection logic for edge cases

📦 Installation

pip install llm-chatifier==0.1.2

🔧 Usage Examples

# Check version
llm-chatifier --version

# OpenRouter with automatic key detection and model requirement handling
llm-chatifier openrouter.ai -m gpt-4

# Verbose mode to see key detection process
llm-chatifier api.openai.com -v

🎯 What's Fixed

  • OpenRouter authentication now works correctly
  • Multiple API keys are tried automatically
  • Better distinction between "needs auth" and "needs model" errors
  • More robust handling of empty API responses

View on PyPI