2025-11-19 13:20:56 +01:00
parent 84fce9a82c
commit 5e9820136f
9 changed files with 0 additions and 1847 deletions

View File

@@ -1,487 +0,0 @@
# Design Document - AI Article Summarization
## Overview
This design integrates Ollama AI into the news crawler workflow to automatically generate concise summaries of articles. The system will extract full article content, send it to Ollama for summarization, and store both the original content and the AI-generated summary in MongoDB.
## Architecture
### High-Level Flow
```
RSS Feed → Extract Content → Summarize with Ollama → Store in MongoDB
                  ↓                    ↓                    ↓
         Full Article Text  AI Summary (≤150 words)    Both Stored
```
### Component Diagram
```
┌──────────────────────────────────────────────┐
│             News Crawler Service             │
│                                              │
│  ┌────────────┐     ┌───────────────────┐    │
│  │ RSS Parser │────→│ Content Extractor │    │
│  └────────────┘     └───────────────────┘    │
│                               │              │
│                               ↓              │
│                     ┌───────────────────┐    │
│                     │   Ollama Client   │    │
│                     │  (New Component)  │    │
│                     └───────────────────┘    │
│                               │              │
│                               ↓              │
│                     ┌───────────────────┐    │
│                     │  Database Writer  │    │
│                     └───────────────────┘    │
│                                              │
└──────────────────────────────────────────────┘

        ┌──────────────────┐
        │  Ollama Server   │
        │    (External)    │
        └──────────────────┘
        ┌──────────────────┐
        │     MongoDB      │
        └──────────────────┘
```
## Components and Interfaces
### 1. Ollama Client Module
**File:** `news_crawler/ollama_client.py`
**Purpose:** Handle communication with Ollama server for summarization
**Interface:**
```python
class OllamaClient:
    def __init__(self, base_url, model, api_key=None, enabled=True):
        """Initialize Ollama client with configuration"""

    def summarize_article(self, content: str, max_words: int = 150) -> dict:
        """
        Summarize article content using Ollama

        Args:
            content: Full article text
            max_words: Maximum words in summary (default 150)

        Returns:
            {
                'summary': str,         # AI-generated summary
                'word_count': int,      # Summary word count
                'success': bool,        # Whether summarization succeeded
                'error': str or None,   # Error message if failed
                'duration': float       # Time taken in seconds
            }
        """

    def is_available(self) -> bool:
        """Check if Ollama server is reachable"""

    def test_connection(self) -> dict:
        """Test connection and return server info"""
```
**Key Methods:**
1. **summarize_article()**
- Constructs prompt for Ollama
- Sends HTTP POST request
- Handles timeouts and errors
- Validates response
- Returns structured result
2. **is_available()**
- Quick health check
- Returns True/False
- Used before attempting summarization
3. **test_connection()**
- Detailed connection test
- Returns server info and model list
- Used for diagnostics
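The sketch below shows how `summarize_article()` could wrap Ollama's `/api/generate` endpoint. `self.base_url`, `self.model`, and `self.timeout` follow the constructor and configuration described here; the exact prompt wording is illustrative and error handling is abbreviated.
```python
import time
import requests

# Inside OllamaClient -- attributes are set from the configuration shown above.
def summarize_article(self, content, max_words=150):
    """Send content to Ollama's /api/generate endpoint and time the call."""
    prompt = (
        f"Summarize the following article in {max_words} words or less. "
        f"Focus on the key points and main message:\n\n{content}"
    )
    start = time.time()
    try:
        resp = requests.post(
            f"{self.base_url}/api/generate",
            json={'model': self.model, 'prompt': prompt, 'stream': False},
            timeout=self.timeout,
        )
        resp.raise_for_status()
        summary = resp.json().get('response', '').strip()
        return {
            'summary': summary or None,
            'word_count': len(summary.split()),
            'success': bool(summary),
            'error': None if summary else 'Empty summary returned',
            'duration': time.time() - start,
        }
    except requests.RequestException as exc:
        return {
            'summary': None,
            'word_count': 0,
            'success': False,
            'error': str(exc),
            'duration': time.time() - start,
        }
```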
### 2. Enhanced Crawler Service
**File:** `news_crawler/crawler_service.py`
**Changes:**
```python
# Add Ollama client initialization
from ollama_client import OllamaClient

# Initialize at module level
ollama_client = OllamaClient(
    base_url=os.getenv('OLLAMA_BASE_URL'),
    model=os.getenv('OLLAMA_MODEL'),
    api_key=os.getenv('OLLAMA_API_KEY'),
    enabled=os.getenv('OLLAMA_ENABLED', 'false').lower() == 'true'
)

# Modify crawl_rss_feed() to include summarization
def crawl_rss_feed(feed_url, feed_name, max_articles=10):
    # ... existing code ...

    # After extracting content
    article_data = extract_article_content(article_url)

    # NEW: Summarize with Ollama
    summary_result = None
    if ollama_client.enabled and article_data.get('content'):
        print(f"   🤖 Summarizing with AI...")
        summary_result = ollama_client.summarize_article(
            article_data['content'],
            max_words=150
        )
        if summary_result['success']:
            print(f"   ✓ Summary generated ({summary_result['word_count']} words)")
        else:
            print(f"   ⚠ Summarization failed: {summary_result['error']}")

    # Build article document with summary
    article_doc = {
        'title': article_data.get('title'),
        'author': article_data.get('author'),
        'link': article_url,
        'content': article_data.get('content'),
        'summary': summary_result['summary'] if summary_result and summary_result['success'] else None,
        'word_count': article_data.get('word_count'),
        'summary_word_count': summary_result['word_count'] if summary_result and summary_result['success'] else None,
        'source': feed_name,
        'published_at': extract_published_date(entry),
        'crawled_at': article_data.get('crawled_at'),
        'summarized_at': datetime.utcnow() if summary_result and summary_result['success'] else None,
        'created_at': datetime.utcnow()
    }
```
### 3. Configuration Module
**File:** `news_crawler/config.py` (new file)
**Purpose:** Centralize configuration management
```python
import os
from dotenv import load_dotenv

load_dotenv(dotenv_path='../.env')


class Config:
    # MongoDB
    MONGODB_URI = os.getenv('MONGODB_URI', 'mongodb://localhost:27017/')
    DB_NAME = 'munich_news'

    # Ollama
    OLLAMA_BASE_URL = os.getenv('OLLAMA_BASE_URL', 'http://localhost:11434')
    OLLAMA_MODEL = os.getenv('OLLAMA_MODEL', 'phi3:latest')
    OLLAMA_API_KEY = os.getenv('OLLAMA_API_KEY', '')
    OLLAMA_ENABLED = os.getenv('OLLAMA_ENABLED', 'false').lower() == 'true'
    OLLAMA_TIMEOUT = int(os.getenv('OLLAMA_TIMEOUT', '30'))

    # Crawler
    RATE_LIMIT_DELAY = 1        # seconds between requests
    MAX_CONTENT_LENGTH = 50000  # characters
```
## Data Models
### Updated Article Schema
```javascript
{
  _id: ObjectId,
  title: String,
  author: String,
  link: String,                 // Unique index
  content: String,              // Full article content
  summary: String,              // AI-generated summary (≤150 words)
  word_count: Number,           // Original content word count
  summary_word_count: Number,   // Summary word count
  source: String,
  published_at: String,
  crawled_at: DateTime,
  summarized_at: DateTime,      // When AI summary was generated
  created_at: DateTime
}
```
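A minimal sketch of how the unique `link` index could be created with PyMongo. The connection string and database name follow the Config defaults above; the `articles` collection name is an assumption based on the rest of the spec.
```python
from pymongo import ASCENDING, MongoClient

# Assumed connection and collection names; adjust to the deployment's Config values.
client = MongoClient('mongodb://localhost:27017/')
articles = client['munich_news']['articles']

# Enforce the unique index noted on the link field above.
articles.create_index([('link', ASCENDING)], unique=True)
```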
### Ollama Request Format
```json
{
  "model": "phi3:latest",
  "prompt": "Summarize the following article in 150 words or less. Focus on the key points and main message:\n\n[ARTICLE CONTENT]",
  "stream": false,
  "options": {
    "temperature": 0.7,
    "num_predict": 200
  }
}
```
### Ollama Response Format
```json
{
  "model": "phi3:latest",
  "created_at": "2024-11-10T16:30:00Z",
  "response": "The AI-generated summary text here...",
  "done": true,
  "total_duration": 5000000000
}
```
## Error Handling
### Error Scenarios and Responses
| Scenario | Handling | User Impact |
|----------|----------|-------------|
| Ollama server down | Log warning, store original content | Article saved without summary |
| Ollama timeout (>30s) | Cancel request, store original | Article saved without summary |
| Empty summary returned | Log error, store original | Article saved without summary |
| Invalid response format | Log error, store original | Article saved without summary |
| Network error | Retry once, then store original | Article saved without summary |
| Model not found | Log error, disable Ollama | All articles saved without summaries |
### Error Logging Format
```python
{
    'timestamp': datetime.utcnow(),
    'article_url': article_url,
    'error_type': 'timeout|connection|invalid_response|empty_summary',
    'error_message': str(error),
    'ollama_config': {
        'base_url': OLLAMA_BASE_URL,
        'model': OLLAMA_MODEL,
        'enabled': OLLAMA_ENABLED
    }
}
```
## Testing Strategy
### Unit Tests
1. **test_ollama_client.py**
- Test summarization with mock responses
- Test timeout handling
- Test error scenarios
- Test connection checking
2. **test_crawler_with_ollama.py**
- Test crawler with Ollama enabled
- Test crawler with Ollama disabled
- Test fallback when Ollama fails
- Test rate limiting
### Integration Tests
1. **test_end_to_end.py**
- Crawl real RSS feed
- Summarize with real Ollama
- Verify database storage
- Check all fields populated
### Manual Testing
1. Test with Ollama enabled and working
2. Test with Ollama disabled
3. Test with Ollama unreachable
4. Test with slow Ollama responses
5. Test with various article lengths
## Performance Considerations
### Timing Estimates
- Article extraction: 2-5 seconds
- Ollama summarization: 5-15 seconds (depends on article length and model)
- Database write: <1 second
- **Total per article: 8-21 seconds**
### Optimization Strategies
1. **Sequential Processing**
- Process one article at a time
- Prevents overwhelming Ollama
- Easier to debug
2. **Timeout Management**
- 30-second timeout per request
- Prevents hanging on slow responses
3. **Rate Limiting**
- 1-second delay between articles
- Respects server resources
4. **Future: Batch Processing**
- Queue articles for summarization
- Process in batches
- Use Celery for async processing
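A hedged sketch of the sequential loop that combines the per-article flow, timeout, and rate limiting described above. Helper names are those used earlier in this document; `feed_entries` is illustrative.
```python
import time

# feed_entries stands in for the parsed RSS entries.
for entry in feed_entries:
    article_data = extract_article_content(entry['link'])
    if ollama_client.enabled and article_data.get('content'):
        ollama_client.summarize_article(article_data['content'], max_words=150)
    time.sleep(Config.RATE_LIMIT_DELAY)  # 1-second pause between articles
```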
### Resource Usage
- **Memory**: ~100MB per crawler instance
- **Network**: ~1-5KB per article (to Ollama)
- **Storage**: +150 words per article (~1KB)
- **CPU**: Minimal (Ollama does the heavy lifting)
## Security Considerations
1. **API Key Storage**
- Store in environment variables
- Never commit to git
- Use secrets management in production
2. **Content Sanitization**
- Don't log full article content
- Sanitize URLs in logs
- Limit error message detail
3. **Network Security**
- Support HTTPS for Ollama
- Validate SSL certificates
- Use secure connections
4. **Rate Limiting**
- Prevent abuse of Ollama server
- Implement backoff on errors
- Monitor usage patterns
## Deployment Considerations
### Environment Variables
```bash
# Required
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=phi3:latest
OLLAMA_ENABLED=true
# Optional
OLLAMA_API_KEY=your-api-key
OLLAMA_TIMEOUT=30
```
### Docker Deployment
```yaml
# docker-compose.yml
services:
  crawler:
    build: ./news_crawler
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
      - OLLAMA_ENABLED=true
    depends_on:
      - ollama
      - mongodb

  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama
```
### Monitoring
1. **Metrics to Track**
- Summarization success rate
- Average summarization time
- Ollama server uptime
- Error frequency by type
2. **Logging**
- Log all summarization attempts
- Log errors with context
- Log performance metrics
3. **Alerts**
- Alert if Ollama is down >5 minutes
- Alert if success rate <80%
- Alert if average time >20 seconds
## Migration Plan
### Phase 1: Add Ollama Client (Week 1)
- Create ollama_client.py
- Add configuration
- Write unit tests
- Test with sample articles
### Phase 2: Integrate with Crawler (Week 1)
- Modify crawler_service.py
- Add summarization step
- Update database schema
- Test end-to-end
### Phase 3: Update Backend API (Week 2)
- Update news routes
- Add summary fields to responses
- Update frontend to display summaries
- Deploy to production
### Phase 4: Monitor and Optimize (Ongoing)
- Monitor performance
- Tune prompts for better summaries
- Optimize rate limiting
- Add batch processing if needed
## Rollback Plan
If issues arise:
1. **Immediate**: Set `OLLAMA_ENABLED=false`
2. **Short-term**: Revert crawler code changes
3. **Long-term**: Remove Ollama integration
The system will continue to work with original content if Ollama is disabled.
## Success Metrics
- ✅ 95%+ of articles successfully summarized
- ✅ Average summarization time <15 seconds
- ✅ Zero data loss (all articles stored even if summarization fails)
- ✅ Ollama uptime >99%
- ✅ Summary quality: readable and accurate (manual review)
## Future Enhancements
1. **Multi-language Support**
- Detect article language
- Use appropriate model
- Translate summaries
2. **Custom Summary Lengths**
- Allow configuration per feed
- Support different lengths for different use cases
3. **Sentiment Analysis**
- Add sentiment score
- Categorize as positive/negative/neutral
4. **Keyword Extraction**
- Extract key topics
- Enable better search
5. **Batch Processing**
- Queue articles
- Process in parallel
- Use Celery for async
6. **Caching**
- Cache summaries
- Avoid re-processing
- Use Redis for cache

View File

@@ -1,164 +0,0 @@
# Requirements Document
## Introduction
This feature integrates Ollama AI into the news crawler to automatically summarize articles before they are stored in the database. Alongside the full article content, the system generates a concise summary of at most 150 words using AI, making the content more digestible for newsletter readers.
## Glossary
- **Crawler Service**: The standalone microservice that fetches and processes article content from RSS feeds
- **Ollama Server**: The AI inference server that provides text summarization capabilities
- **Article Content**: The full text extracted from a news article webpage
- **Summary**: A concise AI-generated version of the article content (max 150 words)
- **MongoDB**: The database where articles and summaries are stored
## Requirements
### Requirement 1: Ollama Integration in Crawler
**User Story:** As a system administrator, I want the crawler to use Ollama for summarization, so that articles are automatically condensed before storage.
#### Acceptance Criteria
1. WHEN the crawler extracts article content, THE Crawler Service SHALL send the content to the Ollama Server for summarization
2. WHEN sending content to Ollama, THE Crawler Service SHALL include a prompt requesting a summary of 150 words or less
3. WHEN Ollama returns a summary, THE Crawler Service SHALL validate that the summary is not empty
4. IF the Ollama Server is unavailable, THEN THE Crawler Service SHALL store the original content without summarization and log a warning
5. WHEN summarization fails, THE Crawler Service SHALL continue processing other articles without stopping
### Requirement 2: Configuration Management
**User Story:** As a system administrator, I want to configure Ollama settings, so that I can control the summarization behavior.
#### Acceptance Criteria
1. THE Crawler Service SHALL read Ollama configuration from environment variables
2. THE Crawler Service SHALL support the following configuration options:
- OLLAMA_BASE_URL (server URL)
- OLLAMA_MODEL (model name)
- OLLAMA_ENABLED (enable/disable flag)
- OLLAMA_API_KEY (optional authentication)
3. WHERE OLLAMA_ENABLED is false, THE Crawler Service SHALL store original content without summarization
4. WHERE OLLAMA_ENABLED is true AND Ollama is unreachable, THE Crawler Service SHALL log an error and store original content
### Requirement 3: Summary Storage
**User Story:** As a developer, I want summaries stored in the database, so that the frontend can display concise article previews.
#### Acceptance Criteria
1. WHEN a summary is generated, THE Crawler Service SHALL store it in the `summary` field in MongoDB
2. WHEN storing an article, THE Crawler Service SHALL include both the original content and the AI summary
3. THE Crawler Service SHALL store the following fields:
- `content` (original full text)
- `summary` (AI-generated, max 150 words)
- `word_count` (original content word count)
- `summary_word_count` (summary word count)
- `summarized_at` (timestamp when summarized)
4. WHEN an article already has a summary, THE Crawler Service SHALL not re-summarize it
### Requirement 4: Error Handling and Resilience
**User Story:** As a system administrator, I want the crawler to handle AI failures gracefully, so that the system remains reliable.
#### Acceptance Criteria
1. IF Ollama returns an error, THEN THE Crawler Service SHALL log the error and store the original content
2. IF Ollama times out (>30 seconds), THEN THE Crawler Service SHALL cancel the request and store the original content
3. IF the summary is empty or invalid, THEN THE Crawler Service SHALL store the original content
4. WHEN an error occurs, THE Crawler Service SHALL include an error indicator in the database record
5. THE Crawler Service SHALL continue processing remaining articles after any summarization failure
### Requirement 5: Performance and Rate Limiting
**User Story:** As a system administrator, I want the crawler to respect rate limits, so that it doesn't overwhelm the Ollama server.
#### Acceptance Criteria
1. THE Crawler Service SHALL wait at least 1 second between Ollama API calls
2. THE Crawler Service SHALL set a timeout of 30 seconds for each Ollama request
3. WHEN processing multiple articles, THE Crawler Service SHALL process them sequentially to avoid overloading Ollama
4. THE Crawler Service SHALL log the time taken for each summarization
5. THE Crawler Service SHALL display progress indicators showing summarization status
### Requirement 6: Monitoring and Logging
**User Story:** As a system administrator, I want detailed logs of summarization activity, so that I can monitor and troubleshoot the system.
#### Acceptance Criteria
1. THE Crawler Service SHALL log when summarization starts for each article
2. THE Crawler Service SHALL log the original word count and summary word count
3. THE Crawler Service SHALL log any errors or warnings from Ollama
4. THE Crawler Service SHALL display a summary of total articles summarized at the end
5. THE Crawler Service SHALL include summarization statistics in the final report
### Requirement 7: API Endpoint Updates
**User Story:** As a frontend developer, I want API endpoints to return summaries, so that I can display them to users.
#### Acceptance Criteria
1. WHEN fetching articles via GET /api/news, THE Backend API SHALL include the `summary` field if available
2. WHEN fetching a single article via GET /api/news/<url>, THE Backend API SHALL include both `content` and `summary`
3. THE Backend API SHALL include a `has_summary` boolean field indicating if AI summarization was performed
4. THE Backend API SHALL include `summarized_at` timestamp if available
5. WHERE no summary exists, THE Backend API SHALL return a preview of the original content (first 200 chars)
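For illustration only, a minimal sketch of the response shaping these criteria imply (the helper name is hypothetical and not part of the requirements; field names come from the criteria above):
```python
def serialize_article(doc: dict) -> dict:
    summary = doc.get('summary')
    return {
        'title': doc.get('title'),
        'link': doc.get('link'),
        # Fall back to a 200-character preview of the content when no summary exists
        'summary': summary if summary else (doc.get('content') or '')[:200],
        'has_summary': bool(summary),
        'summarized_at': doc.get('summarized_at'),
    }
```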
### Requirement 8: Backward Compatibility
**User Story:** As a developer, I want the system to work with existing articles, so that no data migration is required.
#### Acceptance Criteria
1. THE Crawler Service SHALL work with articles that don't have summaries
2. THE Backend API SHALL handle articles with or without summaries gracefully
3. WHERE an article has no summary, THE Backend API SHALL generate a preview from the content field
4. THE Crawler Service SHALL not re-process articles that already have summaries
5. THE system SHALL continue to function if Ollama is disabled or unavailable
## Non-Functional Requirements
### Performance
- Summarization SHALL complete within 30 seconds per article
- The crawler SHALL process at least 10 articles per minute (including summarization)
- Database operations SHALL not be significantly slower with summary storage
### Reliability
- The system SHALL maintain 99% uptime even if Ollama is unavailable
- Failed summarizations SHALL not prevent article storage
- The crawler SHALL recover from Ollama errors without manual intervention
### Security
- Ollama API keys SHALL be stored in environment variables, not in code
- Article content SHALL not be logged to prevent sensitive data exposure
- API communication with Ollama SHALL support HTTPS
### Scalability
- The system SHALL support multiple Ollama servers for load balancing (future)
- The crawler SHALL handle articles of any length (up to 50,000 words)
- The database schema SHALL support future enhancements (tags, categories, etc.)
## Dependencies
- Ollama server must be running and accessible
- `requests` Python library for HTTP communication
- Environment variables properly configured
- MongoDB with sufficient storage for both content and summaries
## Assumptions
- Ollama server is already set up and configured
- The phi3:latest model (or configured model) supports summarization tasks
- Network connectivity between crawler and Ollama server is reliable
- Articles are in English or the configured Ollama model supports the article language
## Future Enhancements
- Support for multiple languages
- Customizable summary length
- Sentiment analysis integration
- Keyword extraction
- Category classification
- Batch summarization for improved performance
- Caching of summaries to avoid re-processing

View File

@@ -1,92 +0,0 @@
# Implementation Plan
- [x] 1. Create Ollama client module
- Create `news_crawler/ollama_client.py` with OllamaClient class
- Implement `summarize_article()` method with prompt construction and API call
- Implement `is_available()` method for health checks
- Implement `test_connection()` method for diagnostics
- Add timeout handling (30 seconds)
- Add error handling for connection, timeout, and invalid responses
- _Requirements: 1.1, 1.2, 1.3, 1.4, 1.5, 4.1, 4.2, 4.3, 5.2_
- [x] 2. Create configuration module for crawler
- Create `news_crawler/config.py` with Config class
- Load environment variables (OLLAMA_BASE_URL, OLLAMA_MODEL, OLLAMA_ENABLED, OLLAMA_API_KEY, OLLAMA_TIMEOUT)
- Add validation for required configuration
- Add default values for optional configuration
- _Requirements: 2.1, 2.2, 2.3, 2.4_
- [x] 3. Integrate Ollama client into crawler service
- Import OllamaClient in `news_crawler/crawler_service.py`
- Initialize Ollama client at module level using Config
- Modify `crawl_rss_feed()` to call summarization after content extraction
- Add conditional logic to skip summarization if OLLAMA_ENABLED is false
- Add error handling to continue processing if summarization fails
- Add logging for summarization start, success, and failure
- Add rate limiting delay after summarization
- _Requirements: 1.1, 1.2, 1.3, 1.4, 1.5, 2.3, 2.4, 4.1, 4.5, 5.1, 5.3, 6.1, 6.2, 6.3_
- [x] 4. Update database schema and storage
- Modify article document structure in `crawl_rss_feed()` to include:
- `summary` field (AI-generated summary)
- `summary_word_count` field
- `summarized_at` field (timestamp)
- Update MongoDB upsert logic to handle new fields
- Add check to skip re-summarization if article already has summary
- _Requirements: 3.1, 3.2, 3.3, 3.4, 8.4_
- [x] 5. Update backend API to return summaries
- Modify `backend/routes/news_routes.py` GET /api/news endpoint
- Add `summary`, `summary_word_count`, `summarized_at` fields to response
- Add `has_summary` boolean field to indicate if AI summarization was performed
- Modify GET /api/news/<url> endpoint to include summary fields
- Add fallback to content preview if no summary exists
- _Requirements: 7.1, 7.2, 7.3, 7.4, 7.5, 8.1, 8.2, 8.3_
- [x] 6. Update database schema documentation
- Update `backend/DATABASE_SCHEMA.md` with new summary fields
- Add example document showing summary fields
- Document the summarization workflow
- _Requirements: 3.1, 3.2, 3.3_
- [x] 7. Add environment variable configuration
- Update `backend/env.template` with Ollama configuration
- Add comments explaining each Ollama setting
- Document default values
- _Requirements: 2.1, 2.2_
- [x] 8. Create test script for Ollama integration
- Create `news_crawler/test_ollama.py` to test Ollama connection
- Test summarization with sample article
- Test error handling (timeout, connection failure)
- Display configuration and connection status
- _Requirements: 1.1, 1.2, 1.3, 1.4, 2.1, 2.2, 4.1, 4.2_
- [x] 9. Update crawler statistics and logging
- Add summarization statistics to final report in `crawl_all_feeds()`
- Track total articles summarized vs failed
- Log average summarization time
- Display progress indicators during summarization
- _Requirements: 5.4, 6.1, 6.2, 6.3, 6.4, 6.5_
- [x] 10. Create documentation for AI summarization
- Create `news_crawler/AI_SUMMARIZATION.md` explaining the feature
- Document configuration options
- Provide troubleshooting guide
- Add examples of usage
- _Requirements: 2.1, 2.2, 2.3, 2.4, 6.1, 6.2, 6.3_
- [x] 11. Update main README with AI summarization info
- Add section about AI summarization feature
- Document Ollama setup requirements
- Add configuration examples
- Update API endpoint documentation
- _Requirements: 2.1, 2.2, 7.1, 7.2_
- [x] 12. Test end-to-end workflow
- Run crawler with Ollama enabled
- Verify articles are summarized correctly
- Check database contains all expected fields
- Test API endpoints return summaries
- Verify error handling when Ollama is disabled/unavailable
- _Requirements: 1.1, 1.2, 1.3, 1.4, 1.5, 3.1, 3.2, 3.3, 3.4, 4.1, 4.2, 4.3, 4.4, 4.5, 7.1, 7.2, 7.3, 7.4, 7.5, 8.1, 8.2, 8.3, 8.4, 8.5_

View File

@@ -1,328 +0,0 @@
# Design Document: Article Title Translation
## Overview
This feature extends the existing Ollama AI integration to translate German article titles to English during the crawling process. The translation will be performed immediately after article content extraction and before AI summarization. Both the original German title and English translation will be stored in the MongoDB article document, and the newsletter template will be updated to display the English title prominently with the original as a subtitle.
The design leverages the existing Ollama infrastructure (same server, configuration, and error handling patterns) to minimize complexity and maintain consistency with the current summarization feature.
## Architecture
### Component Interaction Flow
```
RSS Feed Entry
        ↓
Crawler Service (extract_article_content)
        ↓
Article Data (with German title)
        ↓
Ollama Client (translate_title)   ← New Method
        ↓
Translation Result
        ↓
Crawler Service (prepare article_doc)
        ↓
MongoDB (articles collection with title + title_en)
        ↓
Newsletter Service (fetch articles)
        ↓
Newsletter Template (display English title + German subtitle)
        ↓
Email to Subscribers
```
### Integration Points
1. **Ollama Client** - Add new `translate_title()` method alongside existing `summarize_article()` method
2. **Crawler Service** - Call translation after content extraction, before summarization
3. **Article Document Schema** - Add `title_en` and `translated_at` fields
4. **Newsletter Template** - Update title display logic to show English/German titles
## Components and Interfaces
### 1. Ollama Client Extension
**New Method: `translate_title(title, target_language='English')`**
```python
def translate_title(self, title, target_language='English'):
    """
    Translate article title to target language

    Args:
        title (str): Original German title
        target_language (str): Target language (default: 'English')

    Returns:
        dict: {
            'success': bool,
            'translated_title': str or None,
            'error': str or None,
            'duration': float
        }
    """
```
**Implementation Details:**
- **Prompt Engineering**: Clear, concise prompt instructing the model to translate only the headline without explanations
- **Temperature**: 0.3 (lower than summarization's 0.7) for more consistent, deterministic translations
- **Token Limit**: 100 tokens (sufficient for title-length outputs)
- **Response Cleaning**:
- Remove surrounding quotes (single and double)
- Extract first line only (ignore any extra text)
- Trim whitespace
- **Error Handling**: Same pattern as `summarize_article()` - catch timeouts, connection errors, HTTP errors
- **Validation**: Check for empty title input before making API call
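A minimal sketch of `translate_title()` that follows the prompt, temperature, token-limit, and cleaning rules listed above, assuming the same `self.base_url`, `self.model`, and `self.timeout` attributes as the summarization client; the exact prompt wording is illustrative.
```python
import time
import requests

# Inside OllamaClient -- same attributes as the summarization client above.
def translate_title(self, title, target_language='English'):
    """Translate a German headline and apply the cleaning rules listed above."""
    if not title or not title.strip():
        return {'success': False, 'translated_title': None,
                'error': 'Empty title', 'duration': 0.0}
    prompt = (
        f"Translate this German news headline to {target_language}. "
        f"Reply with the translation only, no explanations:\n\n{title}"
    )
    start = time.time()
    try:
        resp = requests.post(
            f"{self.base_url}/api/generate",
            json={
                'model': self.model,
                'prompt': prompt,
                'stream': False,
                'options': {'temperature': 0.3, 'num_predict': 100},
            },
            timeout=self.timeout,
        )
        resp.raise_for_status()
        raw = resp.json().get('response', '').strip()
        # Cleaning: first line only, trim whitespace, drop surrounding quotes
        cleaned = raw.splitlines()[0].strip().strip('"\'') if raw else ''
        return {
            'success': bool(cleaned),
            'translated_title': cleaned or None,
            'error': None if cleaned else 'Empty translation returned',
            'duration': time.time() - start,
        }
    except requests.RequestException as exc:
        return {'success': False, 'translated_title': None,
                'error': str(exc), 'duration': time.time() - start}
```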
### 2. Crawler Service Integration
**Location**: In `crawl_rss_feed()` function, after content extraction
**Execution Order**:
1. Extract article content (existing)
2. **Translate title** (new)
3. Summarize article (existing)
4. Save to database (modified)
**Implementation Pattern**:
```python
# After article_data extraction
translation_result = None
original_title = article_data.get('title') or entry.get('title', '')

if Config.OLLAMA_ENABLED:
    # Translate title
    print(f"   🌐 Translating title...")
    translation_result = ollama_client.translate_title(original_title)

    if translation_result and translation_result['success']:
        print(f"   ✓ Title translated ({translation_result['duration']:.1f}s)")
    else:
        print(f"   ⚠ Translation failed: {translation_result['error']}")

# Then summarize (existing code)
...
```
**Console Output Format**:
- Success: `✓ Title translated (0.8s)`
- Failure: `⚠ Translation failed: Request timed out`
### 3. Data Models
**MongoDB Article Document Schema Extension**:
```javascript
{
  // Existing fields
  title: String,                // Original German title
  author: String,
  link: String,
  content: String,
  summary: String,
  word_count: Number,
  summary_word_count: Number,
  source: String,
  category: String,
  published_at: Date,
  crawled_at: Date,
  summarized_at: Date,
  created_at: Date,

  // New fields
  title_en: String,             // English translation of title (nullable)
  translated_at: Date           // Timestamp when translation completed (nullable)
}
```
**Field Behavior**:
- `title_en`: NULL if translation fails or Ollama is disabled
- `translated_at`: NULL if translation fails, set to `datetime.utcnow()` on success
### 4. Newsletter Template Updates
**Current Title Display**:
```html
<h2 style="...">
{{ article.title }}
</h2>
```
**New Title Display Logic**:
```html
<!-- Primary title: English if available, otherwise German -->
<h2 style="margin: 12px 0 8px 0; font-size: 19px; font-weight: 700; line-height: 1.3; color: #1a1a1a;">
{{ article.title_en if article.title_en else article.title }}
</h2>
<!-- Subtitle: Original German title (only if English translation exists and differs) -->
{% if article.title_en and article.title_en != article.title %}
<p style="margin: 0 0 12px 0; font-size: 13px; color: #999999; font-style: italic;">
Original: {{ article.title }}
</p>
{% endif %}
```
**Display Rules**:
1. If `title_en` exists and differs from `title`: Show English as primary, German as subtitle
2. If `title_en` is NULL or same as `title`: Show only the original title
3. Subtitle styling: Smaller font (13px), gray color (#999999), italic
## Error Handling
### Translation Failure Scenarios
| Scenario | Behavior | User Impact |
|----------|----------|-------------|
| Ollama server unavailable | Skip translation, continue with summarization | Newsletter shows German title only |
| Translation timeout | Log error, store NULL in title_en | Newsletter shows German title only |
| Empty title input | Return error immediately, skip API call | Newsletter shows German title only |
| Ollama disabled in config | Skip translation entirely | Newsletter shows German title only |
| Network error | Catch exception, log error, continue | Newsletter shows German title only |
### Error Handling Principles
1. **Non-blocking**: Translation failures never prevent article processing
2. **Graceful degradation**: Fall back to original German title
3. **Consistent logging**: All errors logged with descriptive messages
4. **No retry logic**: Single attempt per article (same as summarization)
5. **Silent failures**: Newsletter displays seamlessly regardless of translation status
### Console Output Examples
**Success Case**:
```
🔍 Crawling: Neuer U-Bahn-Ausbau in München geplant...
   🌐 Translating title...
   ✓ Title translated (0.8s)
   🤖 Summarizing with AI...
   ✓ Summary: 45 words (from 320 words, 2.3s)
   ✓ Saved (320 words)
```
**Translation Failure Case**:
```
🔍 Crawling: Neuer U-Bahn-Ausbau in München geplant...
   🌐 Translating title...
   ⚠ Translation failed: Request timed out after 30 seconds
   🤖 Summarizing with AI...
   ✓ Summary: 45 words (from 320 words, 2.3s)
   ✓ Saved (320 words)
```
## Testing Strategy
### Unit Testing
**Ollama Client Tests** (`test_ollama_client.py`):
1. Test successful translation with valid German title
2. Test empty title input handling
3. Test timeout handling
4. Test connection error handling
5. Test response cleaning (quotes, newlines, whitespace)
6. Test translation with special characters
7. Test translation with very long titles
**Test Data Examples**:
- Simple: "München plant neue U-Bahn-Linie"
- With quotes: "\"Historischer Tag\" für München"
- With special chars: "Oktoberfest 2024: 7,5 Millionen Besucher"
- Long: "Stadtrat beschließt umfassende Maßnahmen zur Verbesserung der Verkehrsinfrastruktur..."
### Integration Testing
**Crawler Service Tests**:
1. Test article processing with translation enabled
2. Test article processing with translation disabled
3. Test article processing when translation fails
4. Test database document structure includes new fields
5. Test console output formatting
### Manual Testing
**End-to-End Workflow**:
1. Enable Ollama in configuration
2. Trigger crawl with `max_articles=2`
3. Verify console shows translation status
4. Check MongoDB for `title_en` and `translated_at` fields
5. Send test newsletter
6. Verify email displays English title with German subtitle
**Test Scenarios**:
- Fresh crawl with Ollama enabled
- Re-crawl existing articles (should skip translation)
- Crawl with Ollama disabled
- Crawl with Ollama server stopped (simulate failure)
### Performance Testing
**Metrics to Monitor**:
- Translation duration per article (target: < 2 seconds)
- Impact on total crawl time (translation + summarization)
- Ollama server resource usage
**Expected Performance**:
- Translation: ~0.5-1.5 seconds per title
- Total per article: ~3-5 seconds (translation + summarization)
- Acceptable for batch processing during scheduled crawls
## Configuration
### No New Configuration Required
The translation feature uses existing Ollama configuration:
```python
# From config.py (existing)
OLLAMA_ENABLED = True/False
OLLAMA_BASE_URL = "http://ollama:11434"
OLLAMA_MODEL = "phi3:latest"
OLLAMA_TIMEOUT = 30
```
**Rationale**: Simplifies deployment and maintains consistency. Translation is automatically enabled/disabled with the existing `OLLAMA_ENABLED` flag.
## Deployment Considerations
### Docker Container Updates
**Affected Services**:
- `crawler` service: Needs rebuild to include new translation code
- `sender` service: Needs rebuild to include updated newsletter template
**Deployment Steps**:
1. Update code in `news_crawler/ollama_client.py`
2. Update code in `news_crawler/crawler_service.py`
3. Update template in `news_sender/newsletter_template.html`
4. Rebuild containers: `docker-compose up -d --build crawler sender`
5. No database migration needed (new fields are nullable)
### Backward Compatibility
**Existing Articles**: Articles without `title_en` will display German title only (graceful fallback)
**No Breaking Changes**: Newsletter template handles NULL `title_en` values
### Rollback Plan
If issues arise:
1. Revert code changes
2. Rebuild containers
3. Existing articles with `title_en` will continue to work
4. New articles will only have German titles
## Future Enhancements
### Potential Improvements (Out of Scope)
1. **Batch Translation**: Translate multiple titles in single API call for efficiency
2. **Translation Caching**: Cache common phrases/words to reduce API calls
3. **Multi-language Support**: Add configuration for target language selection
4. **Translation Quality Metrics**: Track and log translation quality scores
5. **Retry Logic**: Implement retry with exponential backoff for failed translations
6. **Admin API**: Add endpoint to re-translate existing articles
These enhancements are not included in the current implementation to maintain simplicity and focus on core functionality.

View File

@@ -1,75 +0,0 @@
# Requirements Document
## Introduction
This feature adds automatic translation of German article titles to English using the Ollama AI service. The translation will occur during the article crawling process and both the original German title and English translation will be stored in the database. The newsletter will display the English title prominently with the original German title as a subtitle when available.
## Glossary
- **Crawler Service**: The Python service that fetches articles from RSS feeds and processes them
- **Ollama Client**: The Python client that communicates with the Ollama AI server for text processing
- **Article Document**: The MongoDB document structure that stores article data
- **Newsletter Template**: The HTML template used to render the email newsletter sent to subscribers
- **Translation Result**: The response object returned by the Ollama translation function containing the translated title and metadata
## Requirements
### Requirement 1
**User Story:** As a newsletter subscriber, I want to see article titles in English, so that I can quickly understand the content without knowing German
#### Acceptance Criteria
1. WHEN the Crawler Service processes an article, THE Ollama Client SHALL translate the German title to English
2. THE Article Document SHALL store both the original German title and the English translation
3. THE Newsletter Template SHALL display the English title as the primary heading
4. WHERE an English translation exists, THE Newsletter Template SHALL display the original German title as a subtitle
5. IF the translation fails, THEN THE Newsletter Template SHALL display the original German title as the primary heading
### Requirement 2
**User Story:** As a system administrator, I want translation to be integrated with the existing Ollama service, so that I don't need to configure additional services
#### Acceptance Criteria
1. THE Ollama Client SHALL provide a translate_title method that accepts a German title and returns an English translation
2. THE translate_title method SHALL use the same Ollama server configuration as the existing summarization feature
3. THE translate_title method SHALL use a temperature setting of 0.3 for consistent translations
4. THE translate_title method SHALL limit the response to 100 tokens maximum for title-length outputs
5. THE translate_title method SHALL return a Translation Result containing success status, translated title, error message, and duration
### Requirement 3
**User Story:** As a developer, I want translation errors to be handled gracefully, so that article processing continues even when translation fails
#### Acceptance Criteria
1. IF the Ollama server is unavailable, THEN THE Crawler Service SHALL continue processing articles without translations
2. IF a translation request times out, THEN THE Crawler Service SHALL log the error and store the article with only the original title
3. THE Crawler Service SHALL display translation status in the console output during crawling
4. THE Article Document SHALL include a translated_at timestamp field when translation succeeds
5. THE Article Document SHALL store NULL in the title_en field when translation fails
### Requirement 4
**User Story:** As a newsletter subscriber, I want translations to be accurate and natural, so that the English titles read fluently
#### Acceptance Criteria
1. THE Ollama Client SHALL provide a clear prompt instructing the model to translate German news headlines to English
2. THE Ollama Client SHALL instruct the model to provide only the translation without explanations
3. THE Ollama Client SHALL clean the translation output by removing quotes and extra text
4. THE Ollama Client SHALL extract only the first line of the translation response
5. THE Ollama Client SHALL trim whitespace from the translated title
### Requirement 5
**User Story:** As a system operator, I want to see translation performance metrics, so that I can monitor the translation feature effectiveness
#### Acceptance Criteria
1. THE Crawler Service SHALL log the translation duration for each article
2. THE Crawler Service SHALL display a success indicator when translation completes
3. THE Crawler Service SHALL display an error message when translation fails
4. THE Translation Result SHALL include the duration in seconds
5. THE Article Document SHALL store the translated_at timestamp for successful translations

View File

@@ -1,47 +0,0 @@
# Implementation Plan
- [x] 1. Add translate_title method to Ollama client
- Create the `translate_title()` method in `news_crawler/ollama_client.py` that accepts a title string and target language parameter
- Implement the translation prompt that instructs the model to translate German headlines to English without explanations
- Configure Ollama API call with temperature=0.3 and num_predict=100 for consistent title-length translations
- Implement response cleaning logic to remove quotes, extract first line only, and trim whitespace
- Add error handling for timeout, connection errors, HTTP errors, and empty title input
- Return a dictionary with success status, translated_title, error message, and duration fields
- _Requirements: 2.1, 2.2, 2.3, 2.4, 2.5, 4.1, 4.2, 4.3, 4.4, 4.5_
- [x] 2. Integrate translation into crawler service
- [x] 2.1 Add translation call in crawl_rss_feed function
- Locate the article processing section in `news_crawler/crawler_service.py` after content extraction
- Store the original title from article_data or entry
- Add conditional check for Config.OLLAMA_ENABLED before calling translation
- Call `ollama_client.translate_title()` with the original title
- Store the translation_result for later use in article document
- _Requirements: 1.1, 2.1_
- [x] 2.2 Add console logging for translation status
- Add "🌐 Translating title..." message before translation call
- Add success message with duration: "✓ Title translated (X.Xs)"
- Add failure message with error: "⚠ Translation failed: {error}"
- _Requirements: 5.1, 5.2, 5.3_
- [x] 2.3 Update article document structure
- Modify the article_doc dictionary to include `title_en` field with translated title or None
- Add `translated_at` field set to datetime.utcnow() on success or None on failure
- Ensure the original `title` field still contains the German title
- _Requirements: 1.2, 3.5_
- [x] 3. Update newsletter template for bilingual title display
- Modify `news_sender/newsletter_template.html` to display English title as primary heading when available
- Add conditional logic to show original German title as subtitle only when English translation exists and differs
- Style the subtitle with smaller font (13px), gray color (#999999), and italic formatting
- Ensure fallback to German title when title_en is NULL or missing
- _Requirements: 1.3, 1.4, 1.5_
- [x] 4. Test the translation feature end-to-end
- Rebuild the crawler Docker container with the new translation code
- Clear existing articles from the database for clean testing
- Trigger a test crawl with max_articles=2 to process fresh articles
- Verify console output shows translation status messages
- Check MongoDB to confirm title_en and translated_at fields are populated
- Send a test newsletter email to verify English titles display correctly with German subtitles
- _Requirements: 1.1, 1.2, 1.3, 1.4, 5.1, 5.2, 5.4, 5.5_

View File

@@ -1,407 +0,0 @@
# Email Tracking System Design
## Overview
The email tracking system enables Munich News Daily to measure subscriber engagement through email opens and link clicks. The system uses industry-standard techniques (tracking pixels and redirect URLs) while maintaining privacy compliance and performance.
## Architecture
### High-Level Components
```
┌─────────────────────────────────────────────┐
│              Newsletter System              │
│                                             │
│   ┌──────────────┐      ┌──────────────┐    │
│   │    Sender    │─────▶│   Tracking   │    │
│   │   Service    │      │  Generator   │    │
│   └──────────────┘      └──────────────┘    │
│          │                     │            │
│          │                     ▼            │
│          │              ┌──────────────┐    │
│          │              │   MongoDB    │    │
│          │              │  (tracking)  │    │
│          │              └──────────────┘    │
│          ▼                                  │
│   ┌──────────────┐                          │
│   │    Email     │                          │
│   │    Client    │                          │
│   └──────────────┘                          │
└─────────────────────────────────────────────┘
           │                     ▲
           │                     │
           ▼                     │
┌─────────────────────────────────────────────┐
│             Backend API Server              │
│                                             │
│   ┌──────────────┐      ┌──────────────┐    │
│   │    Pixel     │      │     Link     │    │
│   │   Endpoint   │      │   Redirect   │    │
│   └──────────────┘      └──────────────┘    │
│          │                     │            │
│          └──────────┬──────────┘            │
│                     ▼                       │
│              ┌──────────────┐               │
│              │   MongoDB    │               │
│              │  (tracking)  │               │
│              └──────────────┘               │
└─────────────────────────────────────────────┘
```
### Technology Stack
- **Backend**: Flask (Python) - existing backend server
- **Database**: MongoDB - existing database with new collections
- **Email**: SMTP (existing sender service)
- **Tracking**: UUID-based unique identifiers
- **Image**: 1x1 transparent PNG (base64 encoded)
## Components and Interfaces
### 1. Tracking ID Generator
**Purpose**: Generate unique tracking identifiers for emails and links
**Module**: `backend/services/tracking_service.py`
**Functions**:
```python
import uuid

def generate_tracking_id() -> str:
    """Generate a unique tracking ID using UUID4"""
    return str(uuid.uuid4())

def create_newsletter_tracking(newsletter_id: str, subscriber_email: str) -> dict:
    """Create tracking record for a newsletter send"""
    # Returns tracking document with IDs for pixel and links
```
### 2. Tracking Pixel Endpoint
**Purpose**: Serve 1x1 transparent PNG and log email opens
**Endpoint**: `GET /api/track/pixel/<tracking_id>`
**Flow**:
1. Receive request with tracking_id
2. Look up tracking record in database
3. Log open event (email, timestamp, user-agent)
4. Return 1x1 transparent PNG image
5. Handle multiple opens (update last_opened_at)
**Response**:
- Status: 200 OK
- Content-Type: image/png
- Body: 1x1 transparent PNG (~67 bytes)
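A hedged Flask sketch of this endpoint. The connection string, database name, and pixel file path are assumptions; the open is logged on a best-effort basis so the image is always returned.
```python
from datetime import datetime
from flask import Flask, Response
from pymongo import MongoClient

app = Flask(__name__)
db = MongoClient('mongodb://localhost:27017/')['munich_news']  # assumed URI / DB name

# 1x1 transparent PNG served on every request (file path is illustrative).
with open('pixel.png', 'rb') as fh:
    PIXEL_PNG = fh.read()

@app.route('/api/track/pixel/<tracking_id>')
def track_pixel(tracking_id):
    """Log the open on a best-effort basis, then always return the pixel."""
    now = datetime.utcnow()
    try:
        db.newsletter_sends.update_one(
            {'tracking_id': tracking_id},
            {'$set': {'opened': True, 'last_opened_at': now},
             '$min': {'first_opened_at': now},   # keeps the earliest open timestamp
             '$inc': {'open_count': 1}},
        )
    except Exception:
        pass  # never break email rendering because of a tracking failure
    return Response(PIXEL_PNG, mimetype='image/png')
```
Returning the pixel even when the lookup fails keeps broken tracking from affecting how the email renders.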
### 3. Link Tracking Endpoint
**Purpose**: Track link clicks and redirect to original URL
**Endpoint**: `GET /api/track/click/<tracking_id>`
**Flow**:
1. Receive request with tracking_id
2. Look up tracking record and original URL
3. Log click event (email, article_url, timestamp, user-agent)
4. Redirect to original article URL (302 redirect)
5. Handle errors gracefully (redirect to homepage if invalid)
**Response**:
- Status: 302 Found
- Location: Original article URL
- Performance: < 200ms redirect time
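A matching sketch of the redirect endpoint, reusing the `app` and `db` objects from the pixel sketch above. `HOMEPAGE_URL` is an illustrative fallback, and the click record (with its original `article_url`) is assumed to have been created at send time.
```python
from datetime import datetime
from flask import redirect, request

HOMEPAGE_URL = 'https://example.com'  # illustrative fallback target

@app.route('/api/track/click/<tracking_id>')   # `app` and `db` as in the pixel sketch
def track_click(tracking_id):
    """Log the click and redirect to the original article URL (302)."""
    record = db.link_clicks.find_one({'tracking_id': tracking_id})
    if not record or not record.get('article_url'):
        return redirect(HOMEPAGE_URL, code=302)
    db.link_clicks.update_one(
        {'tracking_id': tracking_id},
        {'$set': {'clicked_at': datetime.utcnow(),
                  'user_agent': request.headers.get('User-Agent', '')}},
    )
    return redirect(record['article_url'], code=302)
```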
### 4. Newsletter Template Modifier
**Purpose**: Inject tracking pixel and replace article links
**Module**: `news_sender/tracking_integration.py`
**Functions**:
```python
def inject_tracking_pixel(html: str, tracking_id: str, api_url: str) -> str:
    """Inject tracking pixel before closing </body> tag"""
    pixel_url = f"{api_url}/api/track/pixel/{tracking_id}"
    pixel_html = f'<img src="{pixel_url}" width="1" height="1" alt="" />'
    return html.replace('</body>', f'{pixel_html}</body>')

def replace_article_links(html: str, articles: list, tracking_map: dict, api_url: str) -> str:
    """Replace article links with tracking URLs"""
    # For each article link, replace with tracking URL
```
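A possible realization of `replace_article_links()`, assuming `tracking_map` maps each original article URL to the tracking ID created for that link and subscriber:
```python
def replace_article_links(html: str, articles: list, tracking_map: dict, api_url: str) -> str:
    """Swap each article link for its tracking redirect URL."""
    for article in articles:
        original_url = article.get('link')
        tracking_id = tracking_map.get(original_url)
        if original_url and tracking_id:
            tracking_url = f"{api_url}/api/track/click/{tracking_id}"
            html = html.replace(f'href="{original_url}"', f'href="{tracking_url}"')
    return html
```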
### 5. Analytics Service
**Purpose**: Calculate engagement metrics and identify active users
**Module**: `backend/services/analytics_service.py`
**Functions**:
```python
def get_open_rate(newsletter_id: str) -> float:
    """Calculate percentage of subscribers who opened"""

def get_click_rate(article_url: str) -> float:
    """Calculate percentage of subscribers who clicked"""

def get_subscriber_activity_status(email: str) -> str:
    """Return 'active', 'inactive', or 'dormant'"""

def update_subscriber_activity_statuses():
    """Batch update all subscriber activity statuses"""
```
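Hedged sketches of two of these calculations against the collections defined in the Data Models section below (connection details are assumptions):
```python
from datetime import datetime, timedelta
from pymongo import MongoClient

db = MongoClient('mongodb://localhost:27017/')['munich_news']  # assumed URI / DB name

def get_open_rate(newsletter_id: str) -> float:
    """Percentage of sends for this newsletter that were opened."""
    sent = db.newsletter_sends.count_documents({'newsletter_id': newsletter_id})
    if sent == 0:
        return 0.0
    opened = db.newsletter_sends.count_documents(
        {'newsletter_id': newsletter_id, 'opened': True})
    return round(100.0 * opened / sent, 2)

def get_subscriber_activity_status(email: str) -> str:
    """Classify by most recent open: active (<30d), inactive (30-60d), dormant (>60d)."""
    record = db.subscriber_activity.find_one({'email': email}) or {}
    last_open = record.get('last_opened_at')
    if last_open and last_open >= datetime.utcnow() - timedelta(days=30):
        return 'active'
    if last_open and last_open >= datetime.utcnow() - timedelta(days=60):
        return 'inactive'
    return 'dormant'
```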
## Data Models
### Newsletter Sends Collection (`newsletter_sends`)
Tracks each newsletter sent to each subscriber.
```javascript
{
  _id: ObjectId,
  newsletter_id: String,        // Unique ID for this newsletter batch (date-based)
  subscriber_email: String,     // Recipient email
  tracking_id: String,          // Unique tracking ID for this send (UUID)
  sent_at: DateTime,            // When email was sent
  opened: Boolean,              // Whether email was opened
  first_opened_at: DateTime,    // First open timestamp (null if not opened)
  last_opened_at: DateTime,     // Most recent open timestamp
  open_count: Number,           // Number of times opened
  created_at: DateTime          // Record creation time
}
```
**Indexes**:
- `tracking_id` (unique) - Fast lookup for pixel requests
- `newsletter_id` - Analytics queries
- `subscriber_email` - User activity queries
- `sent_at` - Time-based queries
### Link Clicks Collection (`link_clicks`)
Tracks individual link clicks.
```javascript
{
  _id: ObjectId,
  tracking_id: String,          // Unique tracking ID for this link (UUID)
  newsletter_id: String,        // Which newsletter this link was in
  subscriber_email: String,     // Who clicked
  article_url: String,          // Original article URL
  article_title: String,        // Article title for reporting
  clicked_at: DateTime,         // When link was clicked
  user_agent: String,           // Browser/client info
  created_at: DateTime          // Record creation time
}
```
**Indexes**:
- `tracking_id` (unique) - Fast lookup for redirect requests
- `newsletter_id` - Analytics queries
- `article_url` - Article performance queries
- `subscriber_email` - User activity queries
### Subscriber Activity Collection (`subscriber_activity`)
Aggregated activity status for each subscriber.
```javascript
{
  _id: ObjectId,
  email: String,                // Subscriber email (unique)
  status: String,               // 'active', 'inactive', or 'dormant'
  last_opened_at: DateTime,     // Most recent email open
  last_clicked_at: DateTime,    // Most recent link click
  total_opens: Number,          // Lifetime open count
  total_clicks: Number,         // Lifetime click count
  newsletters_received: Number, // Total newsletters sent
  newsletters_opened: Number,   // Total newsletters opened
  updated_at: DateTime          // Last status update
}
```
**Indexes**:
- `email` (unique) - Fast lookup
- `status` - Filter by activity level
- `last_opened_at` - Time-based queries
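The index setup could be applied once at startup; a sketch using PyMongo (connection string and database name are assumptions, index names follow the lists above):
```python
from pymongo import MongoClient

def ensure_tracking_indexes(db):
    """Create the indexes listed above (idempotent, safe to run at startup)."""
    db.newsletter_sends.create_index('tracking_id', unique=True)
    db.newsletter_sends.create_index('newsletter_id')
    db.newsletter_sends.create_index('subscriber_email')
    db.newsletter_sends.create_index('sent_at')
    db.link_clicks.create_index('tracking_id', unique=True)
    db.link_clicks.create_index('newsletter_id')
    db.link_clicks.create_index('article_url')
    db.link_clicks.create_index('subscriber_email')
    db.subscriber_activity.create_index('email', unique=True)
    db.subscriber_activity.create_index('status')
    db.subscriber_activity.create_index('last_opened_at')

ensure_tracking_indexes(MongoClient('mongodb://localhost:27017/')['munich_news'])
```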
## Error Handling
### Tracking Pixel Failures
- **Invalid tracking_id**: Return 1x1 transparent PNG anyway (don't break email rendering)
- **Database error**: Log error, return pixel (fail silently)
- **Multiple opens**: Update existing record, don't create duplicate
### Link Redirect Failures
- **Invalid tracking_id**: Redirect to website homepage
- **Database error**: Log error, redirect to homepage
- **Missing original URL**: Redirect to homepage
### Privacy Compliance
- **Data retention**: Anonymize tracking data after 90 days
- Remove email addresses
- Keep aggregated metrics
- **Opt-out**: Check subscriber preferences before tracking
- **GDPR deletion**: Provide endpoint to delete all tracking data for a user
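A sketch of the 90-day anonymization pass: records (and therefore aggregated counts) are kept, only email addresses are removed. `db` is a PyMongo database handle and field names follow the collections defined above.
```python
from datetime import datetime, timedelta

def anonymize_old_tracking_data(db, retention_days=90):
    """Blank out email addresses on tracking records older than the retention window."""
    cutoff = datetime.utcnow() - timedelta(days=retention_days)
    sends = db.newsletter_sends.update_many(
        {'sent_at': {'$lt': cutoff}, 'subscriber_email': {'$ne': None}},
        {'$set': {'subscriber_email': None}},
    )
    clicks = db.link_clicks.update_many(
        {'clicked_at': {'$lt': cutoff}, 'subscriber_email': {'$ne': None}},
        {'$set': {'subscriber_email': None}},
    )
    return sends.modified_count + clicks.modified_count
```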
## Testing Strategy
### Unit Tests
1. **Tracking ID Generation**
- Test UUID format
- Test uniqueness
2. **Pixel Endpoint**
- Test valid tracking_id returns PNG
- Test invalid tracking_id returns PNG
- Test database logging
3. **Link Redirect**
- Test valid tracking_id redirects correctly
- Test invalid tracking_id redirects to homepage
- Test click logging
4. **Analytics Calculations**
- Test open rate calculation
- Test click rate calculation
- Test activity status classification
### Integration Tests
1. **End-to-End Newsletter Flow**
- Send newsletter with tracking
- Simulate email open (pixel request)
- Simulate link click
- Verify database records
2. **Privacy Compliance**
- Test data anonymization
- Test user data deletion
- Test opt-out handling
### Performance Tests
1. **Redirect Speed**
- Measure redirect time (target: < 200ms)
- Test under load (100 concurrent requests)
2. **Pixel Serving**
- Test pixel response time
- Test caching headers
## API Endpoints
### Tracking Endpoints
```
GET /api/track/pixel/<tracking_id>
  - Returns: 1x1 transparent PNG
  - Logs: Email open event

GET /api/track/click/<tracking_id>
  - Returns: 302 redirect to article URL
  - Logs: Link click event
```
### Analytics Endpoints
```
GET /api/analytics/newsletter/<newsletter_id>
  - Returns: Open rate, click rate, engagement metrics

GET /api/analytics/article/<article_id>
  - Returns: Click count, click rate for specific article

GET /api/analytics/subscriber/<email>
  - Returns: Activity status, engagement history

POST /api/analytics/update-activity
  - Triggers: Batch update of subscriber activity statuses
  - Returns: Update count
```
### Privacy Endpoints
```
DELETE /api/tracking/subscriber/<email>
  - Deletes: All tracking data for subscriber
  - Returns: Deletion confirmation

POST /api/tracking/anonymize
  - Triggers: Anonymize tracking data older than 90 days
  - Returns: Anonymization count
```
## Implementation Phases
### Phase 1: Core Tracking (MVP)
- Tracking ID generation
- Pixel endpoint
- Link redirect endpoint
- Database collections
- Newsletter template integration
### Phase 2: Analytics
- Open rate calculation
- Click rate calculation
- Activity status classification
- Analytics API endpoints
### Phase 3: Privacy & Compliance
- Data anonymization
- User data deletion
- Opt-out handling
- Privacy notices
### Phase 4: Optimization
- Caching for pixel endpoint
- Performance monitoring
- Batch processing for activity updates
## Security Considerations
1. **Rate Limiting**: Prevent abuse of tracking endpoints
2. **Input Validation**: Validate all tracking_ids (UUID format)
3. **SQL Injection**: Use parameterized queries (MongoDB safe by default)
4. **Privacy**: Don't expose subscriber emails in URLs
5. **HTTPS**: Ensure all tracking URLs use HTTPS in production
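A small sketch of the UUID-format check mentioned in point 2:
```python
import uuid

def is_valid_tracking_id(value) -> bool:
    """Accept only canonical UUID strings for tracking lookups."""
    try:
        return str(uuid.UUID(str(value))) == str(value).lower()
    except ValueError:
        return False
```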
## Configuration
Add to `backend/.env`:
```env
# Tracking Configuration
TRACKING_ENABLED=true
TRACKING_API_URL=http://localhost:5000
TRACKING_DATA_RETENTION_DAYS=90
```
## Monitoring and Metrics
### Key Metrics to Track
1. **Email Opens**
- Overall open rate
- Open rate by newsletter
- Time to first open
2. **Link Clicks**
- Overall click rate
- Click rate by article
- Click-through rate (CTR)
3. **Subscriber Engagement**
- Active subscriber count
- Inactive subscriber count
- Dormant subscriber count
4. **System Performance**
- Pixel response time
- Redirect response time
- Database query performance

View File

@@ -1,77 +0,0 @@
# Requirements Document
## Introduction
This document outlines the requirements for implementing email tracking functionality in the Munich News Daily newsletter system. The system will track email opens and link clicks to measure subscriber engagement and identify active users.
## Glossary
- **Newsletter System**: The Munich News Daily email sending service
- **Tracking Pixel**: A 1x1 transparent image embedded in emails to detect opens
- **Tracking Link**: A redirecting URL that logs clicks before sending users to the actual destination
- **Subscriber**: A user who receives the newsletter
- **Email Open**: When a subscriber's email client loads the tracking pixel
- **Link Click**: When a subscriber clicks a tracked link in the newsletter
- **Engagement Metrics**: Data about subscriber interactions with newsletters
## Requirements
### Requirement 1: Track Email Opens
**User Story:** As a newsletter administrator, I want to track when subscribers open emails, so that I can measure engagement and identify active users.
#### Acceptance Criteria
1. WHEN a newsletter is sent, THE Newsletter System SHALL embed a unique tracking pixel in each email
2. WHEN a subscriber opens the email, THE Newsletter System SHALL record the open event with timestamp
3. THE Newsletter System SHALL store the subscriber email, newsletter ID, and open timestamp in the database
4. THE Newsletter System SHALL serve the tracking pixel as a 1x1 transparent PNG image
5. THE Newsletter System SHALL handle multiple opens from the same subscriber for the same newsletter
### Requirement 2: Track Link Clicks
**User Story:** As a newsletter administrator, I want to track when subscribers click on article links, so that I can understand which content is most engaging.
#### Acceptance Criteria
1. WHEN a newsletter is generated, THE Newsletter System SHALL replace all article links with unique tracking URLs
2. WHEN a subscriber clicks a tracking URL, THE Newsletter System SHALL record the click event with timestamp
3. WHEN a click is recorded, THE Newsletter System SHALL redirect the subscriber to the original article URL
4. THE Newsletter System SHALL store the subscriber email, article link, and click timestamp in the database
5. THE Newsletter System SHALL complete the redirect within 200 milliseconds
### Requirement 3: Generate Engagement Reports
**User Story:** As a newsletter administrator, I want to view engagement metrics, so that I can understand subscriber behavior and content performance.
#### Acceptance Criteria
1. THE Newsletter System SHALL provide an API endpoint to retrieve open rates by newsletter
2. THE Newsletter System SHALL provide an API endpoint to retrieve click rates by article
3. THE Newsletter System SHALL calculate the percentage of subscribers who opened each newsletter
4. THE Newsletter System SHALL calculate the percentage of subscribers who clicked each article link
5. THE Newsletter System SHALL identify subscribers who have not opened emails in the last 30 days
### Requirement 4: Privacy and Compliance
**User Story:** As a newsletter administrator, I want to respect subscriber privacy, so that the system complies with privacy regulations.
#### Acceptance Criteria
1. THE Newsletter System SHALL include a privacy notice in the newsletter footer explaining tracking
2. THE Newsletter System SHALL anonymize tracking data after 90 days by removing email addresses
3. THE Newsletter System SHALL provide an API endpoint to delete all tracking data for a specific subscriber
4. THE Newsletter System SHALL not track subscribers who have opted out of tracking
5. WHERE a subscriber unsubscribes, THE Newsletter System SHALL delete all their tracking data
### Requirement 5: Identify Active Users
**User Story:** As a newsletter administrator, I want to identify active subscribers, so that I can segment my audience and improve targeting.
#### Acceptance Criteria
1. THE Newsletter System SHALL classify a subscriber as "active" if they opened an email in the last 30 days
2. THE Newsletter System SHALL classify a subscriber as "inactive" if they have not opened an email in 30-60 days
3. THE Newsletter System SHALL classify a subscriber as "dormant" if they have not opened an email in over 60 days
4. THE Newsletter System SHALL provide an API endpoint to retrieve subscriber activity status
5. THE Newsletter System SHALL update subscriber activity status daily

View File

@@ -1,170 +0,0 @@
# Implementation Plan
## Phase 1: Core Tracking Infrastructure
- [x] 1. Set up database collections and indexes
- Create MongoDB collections: `newsletter_sends`, `link_clicks`, `subscriber_activity`
- Add indexes for performance: `tracking_id` (unique), `newsletter_id`, `subscriber_email`, `sent_at`
- Write database initialization script
- _Requirements: 1.3, 2.4_
- [x] 2. Implement tracking service
- [x] 2.1 Create tracking ID generator
- Write `generate_tracking_id()` function using UUID4
- Write `create_newsletter_tracking()` function to create tracking records
- Add configuration for tracking enable/disable
- _Requirements: 1.1, 2.1_
- [x] 2.2 Implement tracking pixel endpoint
- Create Flask route `GET /api/track/pixel/<tracking_id>`
- Generate 1x1 transparent PNG (base64 encoded)
- Log email open event to `newsletter_sends` collection
- Handle multiple opens (update `last_opened_at` and `open_count`)
- Return PNG with proper headers (Content-Type: image/png)
- _Requirements: 1.2, 1.3, 1.4, 1.5_
- [x] 2.3 Implement link redirect endpoint
- Create Flask route `GET /api/track/click/<tracking_id>`
- Look up original article URL from tracking record
- Log click event to `link_clicks` collection
- Redirect to original URL with 302 status
- Handle invalid tracking_id (redirect to homepage)
- Ensure redirect completes within 200ms
- _Requirements: 2.2, 2.3, 2.4, 2.5_
- [x] 2.4 Write unit tests for tracking endpoints
- Test pixel endpoint returns PNG for valid tracking_id
- Test pixel endpoint returns PNG for invalid tracking_id (fail silently)
- Test link redirect works correctly
- Test link redirect handles invalid tracking_id
- Test database logging for opens and clicks
- _Requirements: 1.2, 1.4, 2.2, 2.3_
## Phase 2: Newsletter Integration
- [x] 3. Integrate tracking into sender service
- [x] 3.1 Create tracking integration module
- Write `inject_tracking_pixel()` function to add pixel to HTML
- Write `replace_article_links()` function to replace links with tracking URLs
- Write `generate_tracking_urls()` function to create tracking records for all links
- Add tracking configuration to sender service
- _Requirements: 1.1, 2.1_
- [x] 3.2 Modify newsletter sending flow
- Update `send_newsletter()` to generate tracking IDs for each subscriber
- Create tracking records in database before sending
- Inject tracking pixel into newsletter HTML
- Replace article links with tracking URLs
- Store newsletter_id and tracking metadata
- _Requirements: 1.1, 1.3, 2.1, 2.4_
- [x] 3.3 Update newsletter template
- Ensure template supports dynamic tracking pixel injection
- Ensure article links are properly structured for replacement
- Add privacy notice to footer about tracking
- _Requirements: 4.1_
- [x] 3.4 Test newsletter with tracking
- Send test newsletter with tracking enabled
- Verify tracking pixel is embedded correctly
- Verify article links are replaced with tracking URLs
- Test email open tracking works
- Test link click tracking works
- _Requirements: 1.1, 1.2, 2.1, 2.2_
## Phase 3: Analytics and Reporting
- [x] 4. Implement analytics service
- [x] 4.1 Create analytics calculation functions
- Write `get_open_rate(newsletter_id)` function
- Write `get_click_rate(article_url)` function
- Write `get_newsletter_metrics(newsletter_id)` function for overall stats
- Write `get_article_performance(article_url)` function
- _Requirements: 3.1, 3.2, 3.3, 3.4_
- [x] 4.2 Implement subscriber activity classification
- Write `get_subscriber_activity_status(email)` function
- Classify as 'active' (opened in last 30 days)
- Classify as 'inactive' (no opens in 30-60 days)
- Classify as 'dormant' (no opens in 60+ days)
- Write `update_subscriber_activity_statuses()` batch function
- _Requirements: 5.1, 5.2, 5.3, 5.4, 5.5_
- [x] 4.3 Create analytics API endpoints
- Create `GET /api/analytics/newsletter/<newsletter_id>` endpoint
- Create `GET /api/analytics/article/<article_id>` endpoint
- Create `GET /api/analytics/subscriber/<email>` endpoint
- Create `POST /api/analytics/update-activity` endpoint
- Return JSON with engagement metrics
- _Requirements: 3.1, 3.2, 3.4, 5.4_
- [x] 4.4 Write unit tests for analytics
- Test open rate calculation
- Test click rate calculation
- Test activity status classification
- Test edge cases (no opens, no clicks)
- _Requirements: 3.3, 3.4, 5.1, 5.2, 5.3_
## Phase 4: Privacy and Compliance
- [x] 5. Implement privacy features
- [x] 5.1 Create data anonymization function
- Write function to anonymize tracking data older than 90 days
- Remove email addresses from old records
- Keep aggregated metrics
- Create scheduled task to run daily
- _Requirements: 4.2_
- [x] 5.2 Implement user data deletion
- Create `DELETE /api/tracking/subscriber/<email>` endpoint
- Delete all tracking records for subscriber
- Delete from `newsletter_sends`, `link_clicks`, `subscriber_activity`
- Return confirmation response
- _Requirements: 4.3, 4.5_
- [x] 5.3 Add tracking opt-out support
- Add `tracking_enabled` field to subscribers collection
- Check opt-out status before creating tracking records
- Skip tracking for opted-out subscribers
- Update newsletter sending to respect opt-out
- _Requirements: 4.4_
- [x] 5.4 Create anonymization endpoint
- Create `POST /api/tracking/anonymize` endpoint
- Trigger anonymization of old data
- Return count of anonymized records
- Add authentication/authorization
- _Requirements: 4.2_
- [x] 5.5 Write privacy compliance tests
- Test data anonymization works correctly
- Test user data deletion removes all records
- Test opt-out prevents tracking
- Test anonymization preserves aggregated metrics
- _Requirements: 4.2, 4.3, 4.4, 4.5_
## Phase 5: Configuration and Documentation
- [x] 6. Add configuration and environment setup
- Add `TRACKING_ENABLED` to environment variables
- Add `TRACKING_API_URL` configuration
- Add `TRACKING_DATA_RETENTION_DAYS` configuration
- Update `.env.template` with tracking variables
- Update configuration documentation
- _Requirements: All_
- [x] 7. Update database schema documentation
- Document `newsletter_sends` collection schema
- Document `link_clicks` collection schema
- Document `subscriber_activity` collection schema
- Add indexes documentation
- Update DATABASE_SCHEMA.md
- _Requirements: 1.3, 2.4, 5.4_
- [x] 8. Create tracking usage documentation
- Document how to enable/disable tracking
- Document analytics API endpoints
- Document privacy features
- Add examples of querying tracking data
- Create README for tracking system
- _Requirements: All_