
Getting Started


Before installing, make sure you have:

  • Python 3.10+
  • Ollama installed and running locally
  • GPU with sufficient VRAM for your chosen model
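The first two prerequisites can be sanity-checked with a few lines of stdlib Python. This is just a convenience sketch, not part of polyglot-gpu:

```python
import shutil
import sys

# polyglot-gpu requires Python 3.10 or newer.
py_ok = sys.version_info >= (3, 10)
print(f"Python {sys.version.split()[0]}: {'OK' if py_ok else 'too old'}")

# A rough check that the Ollama CLI is installed and on PATH.
ollama_path = shutil.which("ollama")
print("Ollama CLI:", ollama_path or "not found")
```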
Install the package from PyPI:

```sh
pip install polyglot-gpu
```

polyglot-gpu uses TranslateGemma via Ollama. Three sizes are available:

| Model              | VRAM   | Speed    | Quality     |
| ------------------ | ------ | -------- | ----------- |
| translategemma:4b  | 3.3 GB | Fast     | Good        |
| translategemma:12b | 8.1 GB | Balanced | Recommended |
| translategemma:27b | 17 GB  | Slow     | Best        |

The model is pulled automatically on first use. To pull it manually:

```sh
ollama pull translategemma:12b
```
Basic usage:

```python
import asyncio

from pypolyglot import translate

async def main():
    # Translate from English ("en") to Japanese ("ja").
    result = await translate("Hello world", "en", "ja")
    print(result.translation)

asyncio.run(main())
```

polyglot-gpu will auto-start Ollama if it’s not running and auto-pull the model if it’s not installed. On first run, expect a short delay while the model downloads.
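If you'd rather pay the download cost up front (for example in CI or a container build), you can pre-pull the model yourself. This sketch shells out to the standard `ollama pull` command, which is idempotent; the helper name is hypothetical:

```python
import shutil
import subprocess

def ensure_model(model: str = "translategemma:12b") -> bool:
    """Pre-pull a model so the first translate() call doesn't block on a download."""
    if shutil.which("ollama") is None:
        return False  # Ollama CLI not installed.
    # `ollama pull` returns quickly when the model is already present.
    subprocess.run(["ollama", "pull", model], check=True)
    return True
```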

Use the check_status tool to confirm everything is working:

```python
import asyncio

from pypolyglot.server import check_status

# Returns Ollama status and installed models.
status = asyncio.run(check_status())
print(status)
```