If you encounter issues:

- Output of `nvidia-smi`
- Output of `docker info | grep -i runtime`
- Relevant logs
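
The items above can be gathered into a single file with a short shell snippet (a sketch; the `gpu-report.txt` filename is an arbitrary choice):

```shell
# Collect the requested diagnostics into one file to attach to a report.
{
  echo "== nvidia-smi =="
  nvidia-smi 2>&1 || echo "nvidia-smi not available"
  echo
  echo "== docker runtime =="
  docker info 2>/dev/null | grep -i runtime || echo "docker not available"
  echo
  echo "== recent ollama logs =="
  docker-compose logs --tail=100 ollama 2>&1 || true
} > gpu-report.txt
```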
---

## Quick Start Guide

### 30-Second Setup

```bash
# 1. Check GPU
./check-gpu.sh

# 2. Start services
./start-with-gpu.sh

# 3. Test
docker-compose exec crawler python crawler_service.py 2
```

### Command Reference

**Setup:**

```bash
./check-gpu.sh         # Check GPU availability
./configure-ollama.sh  # Configure Ollama
./start-with-gpu.sh    # Start with GPU auto-detection
```
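
Internally, the auto-detection can be as simple as the following sketch (an assumption about what `start-with-gpu.sh` does, not the shipped script): probe for a working `nvidia-smi`, then pick the Compose file list accordingly.

```shell
# Sketch of GPU auto-detection (assumed logic, not the actual script):
# use the GPU override only when nvidia-smi exists and runs successfully.
if command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi >/dev/null 2>&1; then
    COMPOSE_FILES="-f docker-compose.yml -f docker-compose.gpu.yml"
    echo "GPU detected - starting with GPU support"
else
    COMPOSE_FILES="-f docker-compose.yml"
    echo "No GPU detected - starting in CPU mode"
fi
# docker-compose $COMPOSE_FILES up -d   # then start the stack
```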

**With GPU (manual):**

```bash
docker-compose -f docker-compose.yml -f docker-compose.gpu.yml up -d
```
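
For reference, a minimal sketch of what the GPU override file contains, using the standard Compose device-reservation pattern for NVIDIA GPUs (the repo's actual `docker-compose.gpu.yml` may add more):

```yaml
# docker-compose.gpu.yml (sketch): grants the ollama service access to
# all NVIDIA GPUs via the Compose device-reservation syntax.
services:
  ollama:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```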

**Without GPU:**

```bash
docker-compose up -d
```

**Monitoring:**

```bash
docker exec munich-news-ollama nvidia-smi                # Check GPU
watch -n 1 'docker exec munich-news-ollama nvidia-smi'   # Monitor GPU
docker-compose logs -f ollama                            # Check logs
```

**Testing:**

```bash
docker-compose exec crawler python crawler_service.py 2  # Test crawl
docker-compose logs crawler | grep "Title translated"    # Check timing
```

### Performance Expectations

| Operation | CPU | GPU | Speedup |
|-----------|-----|-----|---------|
| Translation | 1.5s | 0.3s | 5x |
| Summary | 8s | 2s | 4x |
| 10 Articles | 115s | 31s | 3.7x |
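
The speedup column is simply CPU time divided by GPU time; for the 10-article crawl, 115s / 31s ≈ 3.7x. The figures can be recomputed from the table:

```shell
# Recompute the table's speedup column (CPU time / GPU time).
for pair in "Translation 1.5 0.3" "Summary 8 2" "10-Articles 115 31"; do
    set -- $pair
    awk -v name="$1" -v cpu="$2" -v gpu="$3" \
        'BEGIN { printf "%-12s %.1fx\n", name, cpu/gpu }'
done
```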

---

## Integration Summary

### What Was Implemented

1. **Ollama Service in Docker Compose**
   - Runs on the internal network (port 11434)
   - Automatic model download (phi3:latest)
   - Persistent storage in a Docker volume
   - GPU support with automatic detection

2. **GPU Acceleration**
   - NVIDIA GPU support via docker-compose.gpu.yml
   - Automatic GPU detection script
   - Roughly 4-5x performance improvement (see Performance Expectations above)
   - Graceful CPU fallback

3. **Helper Scripts**
   - `start-with-gpu.sh` - Auto-detect and start
   - `check-gpu.sh` - Diagnose GPU availability
   - `configure-ollama.sh` - Interactive configuration
   - `test-ollama-setup.sh` - Comprehensive tests

4. **Security**
   - Ollama is internal-only (not exposed to the host)
   - Only accessible via the Docker network
   - Prevents unauthorized access
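
In Compose terms, internal-only means the service defines no `ports:` mapping to the host; a sketch of the relevant fragment (the actual docker-compose.yml may differ):

```yaml
# Sketch: no "ports:" section, so nothing is published on the host.
# Other containers on the same Docker network reach the API at
# http://ollama:11434.
services:
  ollama:
    expose:
      - "11434"   # internal network only; not reachable from the host
```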

### Files Created

- `docker-compose.gpu.yml` - GPU configuration override
- `start-with-gpu.sh` - Auto-start script
- `check-gpu.sh` - GPU detection script
- `test-ollama-setup.sh` - Test suite
- `docs/GPU_SETUP.md` - This documentation
- `docs/OLLAMA_SETUP.md` - Ollama setup guide
- `docs/PERFORMANCE_COMPARISON.md` - Benchmarks

### Quick Commands

```bash
# Start with GPU
docker-compose -f docker-compose.yml -f docker-compose.gpu.yml up -d

# Or use the helper script
./start-with-gpu.sh

# Verify GPU usage
docker exec munich-news-ollama nvidia-smi
```