update

README.md

@@ -2,6 +2,8 @@
A fully automated news aggregation and newsletter system that crawls Munich news sources, generates AI summaries, and sends daily newsletters with engagement tracking.

**🚀 NEW:** GPU acceleration support for 5-10x faster AI processing! See [QUICK_START_GPU.md](QUICK_START_GPU.md)
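The crawl, summarize, and send flow described above can be sketched end to end. Every name below (`fetch_articles`, `summarize`, `build_newsletter`) is a hypothetical stand-in for illustration, not the project's actual API:

```python
# Minimal sketch of the daily pipeline described above.
# All function names are hypothetical stand-ins, not the project's API.

def fetch_articles(sources):
    """Crawl each source and return raw articles (stubbed here)."""
    return [{"source": s, "title": f"Headline from {s}", "text": "..."} for s in sources]

def summarize(article):
    """Stand-in for the AI summarization step (Ollama in the real system)."""
    return {**article, "summary": article["text"][:80]}

def build_newsletter(articles):
    """Assemble the summaries into one newsletter body."""
    return "\n".join(f"- {a['title']}: {a['summary']}" for a in articles)

def run_daily(sources):
    articles = [summarize(a) for a in fetch_articles(sources)]
    return build_newsletter(articles)

if __name__ == "__main__":
    print(run_daily(["sueddeutsche.de", "muenchen.de"]))
```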
## 🚀 Quick Start
```bash
@@ -47,6 +49,7 @@ That's it! The system will automatically:
### Components
- **Ollama**: AI service for summarization and translation (port 11434)
- **MongoDB**: Data storage (articles, subscribers, tracking)
- **Backend API**: Flask API for tracking and analytics (port 5001)
- **News Crawler**: Automated RSS feed crawler with AI summarization
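As one illustration of the Backend API's tracking role, an engagement endpoint of this kind is commonly implemented as a 1x1 tracking pixel. The route, in-memory storage, and pixel approach here are assumptions for the sketch, not the project's actual code:

```python
# Hypothetical sketch of an engagement-tracking endpoint such as the
# Backend API might expose; route name and storage are assumptions
# (the real system stores tracking data in MongoDB).
from flask import Flask, Response

app = Flask(__name__)
opens = {}  # newsletter_id -> open count

# 1x1 transparent GIF, the classic email tracking pixel
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00"
         b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01"
         b"\x00\x00\x02\x02D\x01\x00;")

@app.route("/track/open/<newsletter_id>")
def track_open(newsletter_id):
    """Count an email open when the embedded pixel is fetched."""
    opens[newsletter_id] = opens.get(newsletter_id, 0) + 1
    return Response(PIXEL, mimetype="image/gif")

# app.run(port=5001)  # serve on the Backend API's port
```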
@@ -57,9 +60,9 @@ That's it! The system will automatically:
- Python 3.11
- MongoDB 7.0
- Ollama (phi3:latest model for AI)
- Docker & Docker Compose
- Flask (API)
- Schedule (automation)
- Jinja2 (email templates)
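To show how the Jinja2 piece fits in, here is an illustrative render of newsletter HTML from article data. The template text and variable names are assumptions, not the project's real template:

```python
# Illustrative Jinja2 usage for the newsletter; template text and
# variable names are assumptions, not the project's real template.
from jinja2 import Template

template = Template("""\
<h1>Munich News, {{ date }}</h1>
<ul>
{% for a in articles %}  <li><b>{{ a.title }}</b>: {{ a.summary }}</li>
{% endfor %}</ul>
""")

html = template.render(
    date="2024-06-01",
    articles=[{"title": "New U-Bahn line", "summary": "Construction approved."}],
)
print(html)
```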
@@ -68,7 +71,8 @@ That's it! The system will automatically:
### Prerequisites
- Docker & Docker Compose
- (Optional) Ollama for AI summarization
- 4GB+ RAM (for Ollama AI models)
- (Optional) NVIDIA GPU for 5-10x faster AI processing
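The optional GPU can be probed by checking for a working `nvidia-smi`. This stdlib sketch only illustrates the idea; it is an assumption about the approach, not the contents of `start-with-gpu.sh`:

```python
# Hypothetical GPU probe, similar in spirit to what a script like
# start-with-gpu.sh might do: check whether `nvidia-smi` exists
# and reports at least one GPU.
import shutil
import subprocess

def has_nvidia_gpu() -> bool:
    """Return True if nvidia-smi is on PATH and lists a GPU."""
    if shutil.which("nvidia-smi") is None:
        return False
    try:
        result = subprocess.run(["nvidia-smi", "-L"],
                                capture_output=True, text=True, timeout=10)
    except (OSError, subprocess.TimeoutExpired):
        return False
    return result.returncode == 0 and "GPU" in result.stdout

if __name__ == "__main__":
    mode = "GPU" if has_nvidia_gpu() else "CPU-only"
    print(f"Starting in {mode} mode")
```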
### Setup
@@ -84,11 +88,31 @@ That's it! The system will automatically:
# Edit backend/.env with your settings
```
3. **Configure Ollama (AI features)**
```bash
# Option 1: Use integrated Docker Compose Ollama (recommended)
./configure-ollama.sh
# Select option 1

# Option 2: Use external Ollama server
# Install from https://ollama.ai/download
# Then run: ollama pull phi3:latest
```
4. **Start the system**
```bash
# Auto-detect GPU and start (recommended)
./start-with-gpu.sh

# Or start manually
docker-compose up -d

# First time: Wait for Ollama model download (2-5 minutes)
docker-compose logs -f ollama-setup
```
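Once the model is pulled, the services reach Ollama over its HTTP API on port 11434. The sketch below uses Ollama's documented `/api/generate` endpoint; the prompt wording and function names are assumptions, not the project's actual code:

```python
# Generic sketch of a summarization call against Ollama's HTTP API
# (POST /api/generate on port 11434); prompt text is an assumption.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(text: str, model: str = "phi3:latest") -> dict:
    """Build the JSON payload Ollama's /api/generate expects."""
    return {
        "model": model,
        "prompt": f"Summarize this news article in two sentences:\n\n{text}",
        "stream": False,  # one JSON object instead of a token stream
    }

def summarize(text: str) -> str:
    """POST to a running Ollama instance and return its 'response' field."""
    payload = json.dumps(build_request(text)).encode()
    req = urllib.request.Request(OLLAMA_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# summarize("The city of Munich announced a new tram line.")
# requires Ollama to be up (docker-compose or a local install)
```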
📖 **For detailed Ollama setup & GPU acceleration:** See [docs/OLLAMA_SETUP.md](docs/OLLAMA_SETUP.md)
## ⚙️ Configuration
Edit `backend/.env`: