# Getting Started
## Prerequisites

- Python 3.10+
- Ollama installed and running locally
- GPU with sufficient VRAM for your chosen model
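As a quick sanity check before installing, the first two prerequisites can be verified with a short script. This is a minimal sketch, assuming Ollama is serving on its default local port (11434); the function names are illustrative, not part of polyglot-gpu:

```python
import sys
import urllib.request

def check_python(minimum=(3, 10)):
    """Return True if the running interpreter meets the minimum version."""
    return sys.version_info[:2] >= minimum

def check_ollama(url="http://localhost:11434"):
    """Return True if an Ollama server responds at the default local port."""
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

if __name__ == "__main__":
    print("Python OK:", check_python())
    print("Ollama OK:", check_ollama())
```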
## Install

```sh
pip install polyglot-gpu
```

## Choose a model

polyglot-gpu uses TranslateGemma via Ollama. Three sizes are available:
| Model | VRAM | Speed | Quality |
|---|---|---|---|
| `translategemma:4b` | 3.3 GB | Fast | Good |
| `translategemma:12b` | 8.1 GB | Balanced | Recommended |
| `translategemma:27b` | 17 GB | Slow | Best |
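A simple way to apply the table: pick the largest model that fits your GPU. The sketch below is illustrative (the VRAM figures come from the table above; `pick_model` is not a polyglot-gpu function):

```python
# VRAM requirements in GB per TranslateGemma size, from the table above,
# ordered from largest to smallest so the best-fitting model wins.
MODELS = [
    ("translategemma:27b", 17.0),  # Best quality
    ("translategemma:12b", 8.1),   # Recommended balance
    ("translategemma:4b", 3.3),    # Fastest
]

def pick_model(vram_gb):
    """Return the largest model that fits in the given VRAM, or None."""
    for name, required in MODELS:
        if vram_gb >= required:
            return name
    return None
```

For example, a 12 GB card gets `translategemma:12b`, while anything under 3.3 GB of free VRAM gets `None`, signaling that no size fits.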
The model is pulled automatically on first use. To pull it manually:
```sh
ollama pull translategemma:12b
```

## First translation
```python
import asyncio

from pypolyglot import translate

async def main():
    result = await translate("Hello world", "en", "ja")
    print(result.translation)

asyncio.run(main())
```

polyglot-gpu will auto-start Ollama if it’s not running and auto-pull the model if it’s not installed. On first run, expect a short delay while the model downloads.
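Because `translate` is a coroutine, several translations can run concurrently with `asyncio.gather`. A sketch of the pattern: the stub coroutine below stands in for pypolyglot's `translate` (which returns a result object with a `.translation` attribute) so the example runs without the library installed:

```python
import asyncio

async def translate(text, source, target):
    # Stand-in for pypolyglot's translate, so this sketch is self-contained.
    # The real coroutine returns a result object, not a plain string.
    await asyncio.sleep(0)
    return f"[{source}->{target}] {text}"

async def main():
    texts = ["Hello world", "Good morning"]
    # Launch all translations concurrently and wait for every result.
    results = await asyncio.gather(*(translate(t, "en", "ja") for t in texts))
    for r in results:
        print(r)

asyncio.run(main())
```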
## Verify setup

Use the `check_status` tool to confirm everything is working:

```python
from pypolyglot.server import check_status

# Returns Ollama status and installed models
status = await check_status()
```

## Next steps
Section titled “Next steps”- Library Usage — Translate text, markdown, and multiple languages
- MCP Server — Use with Claude Code
- Configuration — Tune model and concurrency