| 10 Articles | 90s | 25s | 3.6x |

**Tip:** GPU acceleration is most beneficial when processing many articles in batch.

---

## Integration Complete

### What's Included

✅ Ollama service integrated into Docker Compose
✅ Automatic model download (phi3:latest, 2.2GB)
✅ GPU support with automatic detection
✅ CPU fallback when GPU unavailable
✅ Internal-only access (secure)
✅ Persistent model storage

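
The GPU detection, CPU fallback, internal-only access, and persistent storage items above can be spot-checked from the host. A minimal sketch, assuming the Compose service and volume names contain `ollama` and that `nvidia-smi` is available inside the container when a GPU is passed through:

```bash
# No ports should be published to the host (internal-only access)
docker ps --filter name=ollama --format '{{.Names}}: {{.Ports}}'

# The model volume should survive container restarts (persistent model storage)
docker volume ls | grep ollama

# Is a GPU visible inside the container? If not, Ollama falls back to CPU
docker-compose exec ollama nvidia-smi || echo "No GPU detected - running on CPU"
```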
### Quick Verification
```bash
# Check Ollama is running
docker ps | grep ollama

# Check model is downloaded
docker-compose exec ollama ollama list

# Test from inside network
docker-compose exec crawler python -c "
from ollama_client import OllamaClient
from config import Config
client = OllamaClient(Config.OLLAMA_BASE_URL, Config.OLLAMA_MODEL, Config.OLLAMA_ENABLED)
print(client.translate_title('Guten Morgen'))
"
```
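
You can also exercise the Ollama HTTP API directly instead of going through the Python client. A sketch assuming the Compose service is named `ollama`, it listens on the default port 11434, and `curl` is available in the crawler image:

```bash
# List the models Ollama has pulled (should include phi3:latest)
docker-compose exec crawler curl -s http://ollama:11434/api/tags

# One-off, non-streaming generation request against phi3
docker-compose exec crawler curl -s http://ollama:11434/api/generate \
  -d '{"model": "phi3:latest", "prompt": "Translate to English: Guten Morgen", "stream": false}'
```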
### Performance
**CPU Mode:**
- Translation: ~1.5s per title
- Summarization: ~8s per article
- Suitable for <20 articles/day

**GPU Mode:**
- Translation: ~0.3s per title (5x faster)
- Summarization: ~2s per article (4x faster)
- Suitable for high-volume processing

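
Actual timings depend on your hardware and model; a rough way to measure them on your own setup, reusing the same client as in the verification snippet above (the timing wrapper below is illustrative, not part of the project):

```bash
docker-compose exec crawler python -c "
import time
from ollama_client import OllamaClient
from config import Config

client = OllamaClient(Config.OLLAMA_BASE_URL, Config.OLLAMA_MODEL, Config.OLLAMA_ENABLED)

# Time a single title translation (the first call may be slower while the model loads)
start = time.time()
client.translate_title('Guten Morgen')
print(f'translate_title: {time.time() - start:.1f}s')
"
```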
See [GPU_SETUP.md](GPU_SETUP.md) for GPU acceleration setup.