update
**scanner/README.md** (new file, 466 lines)
# Navidrome Scanner: Technical Overview

This document provides a comprehensive technical explanation of Navidrome's music library scanner system.

## Architecture Overview

The Navidrome scanner is built on a multi-phase pipeline architecture designed for efficient processing of music files. It systematically traverses file system directories, processes metadata, and maintains a database representation of the music library. A key performance feature is that some phases run sequentially while others execute in parallel.
```mermaid
flowchart TD
    subgraph "Scanner Execution Flow"
        Controller[Scanner Controller] --> Scanner[Scanner Implementation]

        Scanner --> Phase1[Phase 1: Folders Scan]
        Phase1 --> Phase2[Phase 2: Missing Tracks]

        Phase2 --> ParallelPhases

        subgraph ParallelPhases["Parallel Execution"]
            Phase3[Phase 3: Refresh Albums]
            Phase4[Phase 4: Playlist Import]
        end

        ParallelPhases --> FinalSteps[Final Steps: GC + Stats]
    end

    %% Triggers that can initiate a scan
    FileChanges[File System Changes] -->|Detected by| Watcher[Filesystem Watcher]
    Watcher -->|Triggers| Controller

    ScheduledJob[Scheduled Job] -->|Based on Scanner.Schedule| Controller
    ServerStartup[Server Startup] -->|If Scanner.ScanOnStartup=true| Controller
    ManualTrigger[Manual Scan via UI/API] -->|Admin user action| Controller
    CLICommand[Command Line: navidrome scan] -->|Direct invocation| Controller
    PIDChange[PID Configuration Change] -->|Forces full scan| Controller
    DBMigration[Database Migration] -->|May require full scan| Controller

    Scanner -.->|Alternative| External[External Scanner Process]
```
The execution flow shows that Phases 1 and 2 run sequentially, while Phases 3 and 4 execute in parallel to maximize performance before the final processing steps.

## Core Components

### Scanner Controller (`controller.go`)

This is the entry point for all scanning operations. It provides:

- Public API for initiating scans and checking scan status
- Event broadcasting to notify clients about scan progress
- Serialization of scan operations (prevents concurrent scans)
- Progress tracking and monitoring
- Error collection and reporting
```go
type Scanner interface {
	// ScanAll starts a full scan of the music library. This is a blocking operation.
	ScanAll(ctx context.Context, fullScan bool) (warnings []string, err error)
	Status(context.Context) (*StatusInfo, error)
}
```
### Scanner Implementation (`scanner.go`)

The primary implementation orchestrates the four-phase scanning pipeline. Each phase follows the `phase` interface pattern:

```go
type phase[T any] interface {
	producer() ppl.Producer[T]
	stages() []ppl.Stage[T]
	finalize(error) error
	description() string
}
```

This design enables:

- Type-safe pipeline construction with generics
- Modular phase implementation
- Separation of concerns
- Easy measurement of performance
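To make the interface concrete, here is a toy phase driven by a minimal runner. This is an illustrative sketch only: the real pipeline uses the `ppl` package's `Producer` and `Stage` types, while this version reduces them to plain slices and functions, and the names `folderPhase` and `runPhase` are invented for the example.

```go
package main

import "fmt"

// phase is a simplified stand-in for the scanner's phase interface.
// The real one returns ppl.Producer[T] and []ppl.Stage[T].
type phase[T any] interface {
	produce() []T
	stages() []func(T) T
	finalize(error) error
	description() string
}

// folderPhase is a hypothetical phase that counts the folders it processes.
type folderPhase struct {
	folders   []string
	processed int
}

func (p *folderPhase) produce() []string { return p.folders }

func (p *folderPhase) stages() []func(string) string {
	return []func(string) string{
		func(f string) string { p.processed++; return f }, // "process folder" stage
	}
}

func (p *folderPhase) finalize(err error) error { return err }
func (p *folderPhase) description() string      { return "Scan folders" }

// runPhase drives any phase[T]: feed every produced item through every
// stage in order, then finalize — the same shape the scanner's pipeline
// runner has, minus channels and concurrency.
func runPhase[T any](p phase[T]) error {
	for _, item := range p.produce() {
		for _, stage := range p.stages() {
			item = stage(item)
		}
	}
	return p.finalize(nil)
}

func main() {
	p := &folderPhase{folders: []string{"Albums", "Singles", "Compilations"}}
	if err := runPhase[string](p); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %d folders processed\n", p.description(), p.processed)
}
```

In the real scanner, each stage also declares its own concurrency level, so items flow through the stages on multiple goroutines rather than in a simple loop.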
### External Scanner (`external.go`)

The External Scanner is a specialized implementation that offloads the scanning process to a separate subprocess. This is specifically designed to address memory management challenges in long-running Navidrome instances.

```go
// scannerExternal is a scanner that runs an external process to do the scanning. It is used to avoid
// memory leaks or retention in the main process, as the scanner can consume a lot of memory. The
// external process will be spawned with the same executable as the current process, and will run
// the "scan" command with the "--subprocess" flag.
//
// The external process will send progress updates to the main process through its STDOUT, and the main
// process will forward them to the caller.
```
```mermaid
sequenceDiagram
    participant MP as Main Process
    participant ES as External Scanner
    participant SP as Subprocess (navidrome scan --subprocess)
    participant FS as File System
    participant DB as Database

    Note over MP: DevExternalScanner=true
    MP->>ES: ScanAll(ctx, fullScan)
    activate ES

    ES->>ES: Locate executable path
    ES->>SP: Start subprocess with args:<br>scan --subprocess --configfile ... etc.
    activate SP

    Note over ES,SP: Create pipe for communication

    par Subprocess executes scan
        SP->>FS: Read files & metadata
        SP->>DB: Update database
    and Main process monitors progress
        loop For each progress update
            SP->>ES: Send encoded progress info via stdout pipe
            ES->>MP: Forward progress info
        end
    end

    SP-->>ES: Subprocess completes (success/error)
    deactivate SP
    ES-->>MP: Return aggregated warnings/errors
    deactivate ES
```
Technical details:

1. **Process Isolation**
   - Spawns a separate process using the same executable
   - Uses the `--subprocess` flag to indicate it's running as a child process
   - Preserves configuration by passing required flags (`--configfile`, `--datafolder`, etc.)

2. **Inter-Process Communication**
   - Uses a pipe for bidirectional communication
   - Encodes/decodes progress updates using Go's `gob` encoding for efficient binary transfer
   - Properly handles process termination and error propagation

3. **Memory Management Benefits**
   - Scanning operations can be memory-intensive, especially with large music libraries
   - Memory leaks or excessive allocations are automatically cleaned up when the process terminates
   - The main Navidrome process remains stable even if the scanner encounters memory-related issues

4. **Error Handling**
   - Detects non-zero exit codes from the subprocess
   - Propagates error messages back to the main process
   - Ensures resources are properly cleaned up, even in error conditions
## Scanning Process Flow

### Phase 1: Folder Scan (`phase_1_folders.go`)

This phase handles the initial traversal and media file processing.

```mermaid
flowchart TD
    A[Start Phase 1] --> B{Full Scan?}
    B -- Yes --> C[Scan All Folders]
    B -- No --> D[Scan Modified Folders]
    C --> E[Read File Metadata]
    D --> E
    E --> F[Create Artists]
    E --> G[Create Albums]
    F --> H[Save to Database]
    G --> H
    H --> I[Mark Missing Folders]
    I --> J[End Phase 1]
```
**Technical implementation details:**

1. **Folder Traversal**
   - Uses `walkDirTree` to traverse the directory structure
   - Handles symbolic links and hidden files
   - Processes `.ndignore` files for exclusions
   - Maps files to appropriate types (audio, image, playlist)

2. **Metadata Extraction**
   - Processes files in batches (defined by `filesBatchSize = 200`)
   - Extracts metadata using the configured storage backend
   - Converts raw metadata to `MediaFile` objects
   - Collects and normalizes tag information

3. **Album and Artist Creation**
   - Groups tracks by album ID
   - Creates album records from track metadata
   - Handles album ID changes by tracking previous IDs
   - Creates artist records from track participants

4. **Database Persistence**
   - Uses transactions for atomic updates
   - Preserves album annotations across ID changes
   - Updates library-artist mappings
   - Marks missing tracks for later processing
   - Pre-caches artwork for performance
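The batching step from point 2 can be sketched as a plain slicing helper. This is illustrative: `batch` is an invented name, and only the constant's value (`200`) comes from the source.

```go
package main

import "fmt"

// filesBatchSize mirrors the constant used in phase_1_folders.go.
const filesBatchSize = 200

// batch splits a folder's file list into groups of at most batchSize,
// the way Phase 1 feeds files to metadata extraction in chunks instead
// of holding every file in memory at once.
func batch(files []string, batchSize int) [][]string {
	var out [][]string
	for len(files) > 0 {
		n := batchSize
		if len(files) < n {
			n = len(files)
		}
		out = append(out, files[:n])
		files = files[n:]
	}
	return out
}

func main() {
	files := make([]string, 450)
	for i := range files {
		files[i] = fmt.Sprintf("track%03d.flac", i)
	}
	batches := batch(files, filesBatchSize)
	fmt.Println(len(batches), len(batches[0]), len(batches[2])) // 3 200 50
}
```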
### Phase 2: Missing Tracks Processing (`phase_2_missing_tracks.go`)

This phase identifies tracks that have moved or been deleted.

```mermaid
flowchart TD
    A[Start Phase 2] --> B[Load Libraries]
    B --> C[Get Missing and Matching Tracks]
    C --> D[Group by PID]
    D --> E{Match Type?}
    E -- Exact --> F[Update Path]
    E -- Same PID --> G[Update If Only One]
    E -- Equivalent --> H[Update If No Better Match]
    F --> I[End Phase 2]
    G --> I
    H --> I
```
**Technical implementation details:**

1. **Track Identification Strategy**
   - Uses persistent identifiers (PIDs) to follow tracks across scans
   - Loads missing tracks and potential matches from the database
   - Groups tracks by PID to limit comparison scope

2. **Match Analysis**
   - Applies three levels of matching criteria:
     - Exact match (full metadata equivalence)
     - Single match for a PID
     - Equivalent match (same base path or similar metadata)
   - Prioritizes matches in order of confidence

3. **Database Update Strategy**
   - Preserves the original track ID
   - Updates the path to the new location
   - Deletes the duplicate entry
   - Uses transactions to ensure atomicity
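The tiered matching can be sketched as follows. This is a simplification under stated assumptions: the real criteria live in `phase_2_missing_tracks.go`, while the specific fields compared here (title, size, base filename) and the names `track` and `bestMatch` are invented for illustration.

```go
package main

import (
	"fmt"
	"path/filepath"
)

// track is a pared-down stand-in for a media file record.
type track struct {
	PID, Path, Title string
	Size             int64
}

// bestMatch applies the three matching tiers in order of confidence:
// exact metadata match, a single candidate sharing the PID, then an
// "equivalent" match (same base filename). Returns nil when nothing fits.
func bestMatch(missing track, candidates []track) *track {
	// 1. Exact match: full metadata equivalence (reduced to title+size here).
	for i, c := range candidates {
		if c.Title == missing.Title && c.Size == missing.Size {
			return &candidates[i]
		}
	}
	// 2. Only one new file shares the PID: assume the track moved there.
	if len(candidates) == 1 {
		return &candidates[0]
	}
	// 3. Equivalent: same base filename in a new directory.
	for i, c := range candidates {
		if filepath.Base(c.Path) == filepath.Base(missing.Path) {
			return &candidates[i]
		}
	}
	return nil
}

func main() {
	missing := track{PID: "p1", Path: "old/a.flac", Title: "A", Size: 10}
	cands := []track{
		{PID: "p1", Path: "new/b.flac", Title: "B", Size: 99},
		{PID: "p1", Path: "new/a.flac", Title: "A2", Size: 11},
	}
	m := bestMatch(missing, cands)
	fmt.Println(m.Path) // new/a.flac
}
```

Because a match updates the existing row's path instead of inserting a new row, play counts, ratings, and playlist membership survive the move.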
### Phase 3: Album Refresh (`phase_3_refresh_albums.go`)

This phase updates album information based on the latest track metadata.

```mermaid
flowchart TD
    A[Start Phase 3] --> B[Load Touched Albums]
    B --> C[Filter Unmodified]
    C --> D{Changes Detected?}
    D -- Yes --> E[Refresh Album Data]
    D -- No --> F[Skip]
    E --> G[Update Database]
    F --> H[End Phase 3]
    G --> H
    H --> I[Refresh Statistics]
```
**Technical implementation details:**

1. **Album Selection Logic**
   - Loads albums that have been "touched" in previous phases
   - Uses a producer-consumer pattern for efficient processing
   - Retrieves all media files for each album for completeness

2. **Change Detection**
   - Rebuilds album metadata from associated tracks
   - Compares album attributes for changes
   - Skips albums with no media files
   - Avoids unnecessary database updates

3. **Statistics Refreshing**
   - Updates album play counts
   - Updates artist play counts
   - Maintains consistency between related entities
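The rebuild-then-compare idea behind change detection can be sketched like this. It is a minimal illustration: `album`, `rebuildAlbum`, and `needsRefresh` are invented names, and the real comparison covers far more attributes than song count and duration.

```go
package main

import "fmt"

// album is a reduced stand-in for an album record.
type album struct {
	Name      string
	SongCount int
	Duration  float32
}

// rebuildAlbum re-derives album attributes from its tracks' durations,
// as Phase 3 does before deciding whether a database update is needed.
func rebuildAlbum(name string, durations []float32) album {
	a := album{Name: name, SongCount: len(durations)}
	for _, d := range durations {
		a.Duration += d
	}
	return a
}

// needsRefresh reports whether the rebuilt album differs from the stored
// one — unchanged albums are skipped to avoid needless writes.
func needsRefresh(stored, rebuilt album) bool {
	return stored != rebuilt
}

func main() {
	stored := album{Name: "Kind of Blue", SongCount: 5, Duration: 2800}
	rebuilt := rebuildAlbum("Kind of Blue", []float32{500, 600, 700, 500, 500})
	fmt.Println(needsRefresh(stored, rebuilt)) // false — nothing to write
}
```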
### Phase 4: Playlist Import (`phase_4_playlists.go`)

This phase imports and updates playlists from the file system.

```mermaid
flowchart TD
    A[Start Phase 4] --> B{AutoImportPlaylists?}
    B -- No --> C[Skip]
    B -- Yes --> D{Admin User Exists?}
    D -- No --> E[Log Warning & Skip]
    D -- Yes --> F[Load Folders with Playlists]
    F --> G{For Each Folder}
    G --> H[Read Directory]
    H --> I{For Each Playlist}
    I --> J[Import Playlist]
    J --> K[Pre-cache Artwork]
    K --> L[End Phase 4]
    C --> L
    E --> L
```
**Technical implementation details:**

1. **Playlist Discovery**
   - Loads folders known to contain playlists
   - Focuses on folders that have been touched in previous phases
   - Handles both playlist formats (M3U, NSP)

2. **Import Process**
   - Uses the `core.Playlists` service for import
   - Handles both regular and smart playlists
   - Updates existing playlists when changed
   - Pre-caches playlist cover art

3. **Configuration Awareness**
   - Respects the `AutoImportPlaylists` setting
   - Requires an admin user for playlist import
   - Logs appropriate messages for configuration issues
## Final Processing Steps

After the four main phases, several finalization steps occur:

1. **Garbage Collection**
   - Removes dangling tracks with no files
   - Cleans up empty albums
   - Removes orphaned artists
   - Deletes orphaned annotations

2. **Statistics Refresh**
   - Updates artist song and album counts
   - Refreshes tag usage statistics
   - Updates aggregate metrics

3. **Library Status Update**
   - Marks scan as completed
   - Updates last scan timestamp
   - Stores persistent ID configuration

4. **Database Optimization**
   - Performs database maintenance
   - Optimizes tables and indexes
   - Reclaims space from deleted records
## File System Watching

The watcher system (`watcher.go`) provides real-time monitoring of file system changes:

```mermaid
flowchart TD
    A[Start Watcher] --> B[For Each Library]
    B --> C[Start Library Watcher]
    C --> D[Monitor File Events]
    D --> E{Change Detected?}
    E -- Yes --> F[Wait for More Changes]
    F --> G{Time Elapsed?}
    G -- Yes --> H[Trigger Scan]
    G -- No --> F
    H --> I[Wait for Scan Completion]
    I --> D
```
**Technical implementation details:**

1. **Event Throttling**
   - Uses a timer to batch changes
   - Prevents excessive rescanning
   - Configurable wait period

2. **Library-specific Watching**
   - Each library has its own watcher goroutine
   - Translates paths to library-relative paths
   - Filters irrelevant changes

3. **Platform Adaptability**
   - Uses storage-provided watcher implementation
   - Supports different notification mechanisms per platform
   - Graceful fallback when watching is not supported
## Edge Cases and Optimizations

### Handling Album ID Changes

The scanner carefully manages album identity across scans:

- Tracks previous album IDs to handle ID generation changes
- Preserves annotations when IDs change
- Maintains creation timestamps for consistent sorting

### Detecting Moved Files

A dedicated algorithm identifies moved files:

1. Groups missing and new files by their Persistent ID
2. Applies multiple matching strategies in priority order
3. Updates paths rather than creating duplicate entries

### Resuming Interrupted Scans

If a scan is interrupted:

- The next scan detects this condition
- Forces a full scan if the previous one was a full scan
- Continues from where it left off for incremental scans

### Memory Efficiency

Several strategies minimize memory usage:

- Batched file processing (200 files at a time)
- External scanner process option
- Database-side filtering where possible
- Stream processing with pipelines
### Concurrency Control

The scanner implements a sophisticated concurrency model to optimize performance:

1. **Phase-Level Parallelism**:
   - Phases 1 and 2 run sequentially due to their dependencies
   - Phases 3 and 4 run in parallel using the `chain.RunParallel()` function
   - Final steps run sequentially to ensure data consistency

2. **Within-Phase Concurrency**:
   - Each phase has configurable concurrency for its stages
   - For example, `phase_1_folders.go` processes folders concurrently: `ppl.NewStage(p.processFolder, ppl.Name("process folder"), ppl.Concurrency(conf.Server.DevScannerThreads))`
   - Multiple stages can exist within a phase, each with its own concurrency level

3. **Pipeline Architecture Benefits**:
   - Producer-consumer pattern minimizes memory usage
   - Work is streamed through stages rather than accumulated
   - Back-pressure is automatically managed

4. **Thread Safety Mechanisms**:
   - Atomic counters for statistics gathering
   - Mutex protection for shared resources
   - Transactional database operations
## Configuration Options

The scanner's behavior can be customized through several configuration settings that directly affect its operation:

### Core Scanner Options

| Setting                 | Description                                                      | Default        |
|-------------------------|------------------------------------------------------------------|----------------|
| `Scanner.Enabled`       | Whether the automatic scanner is enabled                         | true           |
| `Scanner.Schedule`      | Cron expression or duration for scheduled scans (e.g., "@daily") | "0" (disabled) |
| `Scanner.ScanOnStartup` | Whether to scan when the server starts                           | true           |
| `Scanner.WatcherWait`   | Delay before triggering scan after file changes detected         | 5s             |
| `Scanner.ArtistJoiner`  | String used to join multiple artists in track metadata           | " • "          |

### Playlist Processing

| Setting               | Description                                              | Default |
|-----------------------|----------------------------------------------------------|---------|
| `PlaylistsPath`       | Path(s) to search for playlists (supports glob patterns) | ""      |
| `AutoImportPlaylists` | Whether to import playlists during scanning              | true    |

### Performance Options

| Setting              | Description                                               | Default |
|----------------------|-----------------------------------------------------------|---------|
| `DevExternalScanner` | Use external process for scanning (reduces memory issues) | true    |
| `DevScannerThreads`  | Number of concurrent processing threads during scanning   | 5       |

### Persistent ID Options

| Setting     | Description                                                         | Default                                                             |
|-------------|---------------------------------------------------------------------|---------------------------------------------------------------------|
| `PID.Track` | Format for track persistent IDs (critical for tracking moved files) | "musicbrainz_trackid\|albumid,discnumber,tracknumber,title"         |
| `PID.Album` | Format for album persistent IDs (affects album grouping)            | "musicbrainz_albumid\|albumartistid,album,albumversion,releasedate" |
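The PID format strings read naturally as "first alternative whose tags exist wins": `|` separates fallback alternatives, `,` separates the fields combined within one alternative. The sketch below illustrates that reading only — `pid` is an invented function, and the actual resolution (including how field values are combined and hashed) lives in Navidrome's code, not here.

```go
package main

import (
	"crypto/md5"
	"fmt"
	"strings"
)

// pid resolves a spec like "musicbrainz_trackid|albumid,discnumber,tracknumber,title":
// take the first "|"-separated alternative whose comma-separated tags are
// all present, join their values, and hash the result into a stable ID.
func pid(tags map[string]string, spec string) string {
	for _, alt := range strings.Split(spec, "|") {
		fields := strings.Split(alt, ",")
		vals := make([]string, 0, len(fields))
		ok := true
		for _, f := range fields {
			v, present := tags[f]
			if !present || v == "" {
				ok = false
				break
			}
			vals = append(vals, v)
		}
		if ok {
			sum := md5.Sum([]byte(strings.Join(vals, "\x00")))
			return fmt.Sprintf("%x", sum)
		}
	}
	return ""
}

func main() {
	spec := "musicbrainz_trackid|albumid,discnumber,tracknumber,title"
	withMBID := map[string]string{"musicbrainz_trackid": "a1b2"}
	without := map[string]string{"albumid": "al1", "discnumber": "1", "tracknumber": "7", "title": "Blue"}
	// Files with a MusicBrainz ID and files without one still both get a stable PID.
	fmt.Println(pid(withMBID, spec) != "", pid(without, spec) != "")
}
```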
These options can be set in the Navidrome configuration file (e.g., `navidrome.toml`) or via environment variables with the `ND_` prefix (e.g., `ND_SCANNER_ENABLED=false`). For environment variables, dots in option names are replaced with underscores.
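That naming rule — `ND_` prefix, dots to underscores, upper-cased — is mechanical enough to express in a couple of lines (the helper name `envVarName` is invented for this sketch):

```go
package main

import (
	"fmt"
	"strings"
)

// envVarName maps a configuration key to its environment-variable form:
// ND_ prefix, dots replaced with underscores, upper-cased.
func envVarName(option string) string {
	return "ND_" + strings.ToUpper(strings.ReplaceAll(option, ".", "_"))
}

func main() {
	fmt.Println(envVarName("Scanner.Enabled"))     // ND_SCANNER_ENABLED
	fmt.Println(envVarName("Scanner.WatcherWait")) // ND_SCANNER_WATCHERWAIT
}
```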
## Conclusion

The Navidrome scanner represents a sophisticated system for efficiently managing music libraries. Its phase-based pipeline architecture, careful handling of edge cases, and performance optimizations allow it to handle libraries of significant size while maintaining data integrity and providing a responsive user experience.
**scanner/controller.go** (new file, 310 lines)
```go
package scanner

import (
	"context"
	"errors"
	"fmt"
	"sync/atomic"
	"time"

	"github.com/navidrome/navidrome/conf"
	"github.com/navidrome/navidrome/consts"
	"github.com/navidrome/navidrome/core"
	"github.com/navidrome/navidrome/core/artwork"
	"github.com/navidrome/navidrome/core/auth"
	"github.com/navidrome/navidrome/core/metrics"
	"github.com/navidrome/navidrome/log"
	"github.com/navidrome/navidrome/model"
	"github.com/navidrome/navidrome/model/request"
	"github.com/navidrome/navidrome/server/events"
	. "github.com/navidrome/navidrome/utils/gg"
	"github.com/navidrome/navidrome/utils/pl"
	"golang.org/x/time/rate"
)

var (
	ErrAlreadyScanning = errors.New("already scanning")
)

func New(rootCtx context.Context, ds model.DataStore, cw artwork.CacheWarmer, broker events.Broker,
	pls core.Playlists, m metrics.Metrics) model.Scanner {
	c := &controller{
		rootCtx: rootCtx,
		ds:      ds,
		cw:      cw,
		broker:  broker,
		pls:     pls,
		metrics: m,
	}
	if !conf.Server.DevExternalScanner {
		c.limiter = P(rate.Sometimes{Interval: conf.Server.DevActivityPanelUpdateRate})
	}
	return c
}

func (s *controller) getScanner() scanner {
	if conf.Server.DevExternalScanner {
		return &scannerExternal{}
	}
	return &scannerImpl{ds: s.ds, cw: s.cw, pls: s.pls}
}

// CallScan starts an in-process scan of specific library/folder pairs.
// If targets is empty, it scans all libraries.
// This is meant to be called from the command line (see cmd/scan.go).
func CallScan(ctx context.Context, ds model.DataStore, pls core.Playlists, fullScan bool, targets []model.ScanTarget) (<-chan *ProgressInfo, error) {
	release, err := lockScan(ctx)
	if err != nil {
		return nil, err
	}
	defer release()

	ctx = auth.WithAdminUser(ctx, ds)
	progress := make(chan *ProgressInfo, 100)
	go func() {
		defer close(progress)
		scanner := &scannerImpl{ds: ds, cw: artwork.NoopCacheWarmer(), pls: pls}
		scanner.scanFolders(ctx, fullScan, targets, progress)
	}()
	return progress, nil
}

func IsScanning() bool {
	return running.Load()
}

type ProgressInfo struct {
	LibID           int
	FileCount       uint32
	Path            string
	Phase           string
	ChangesDetected bool
	Warning         string
	Error           string
	ForceUpdate     bool
}

// scanner defines the interface for different scanner implementations.
// This allows for swapping between in-process and external scanners.
type scanner interface {
	// scanFolders performs the actual scanning of folders. If targets is nil, it scans all libraries.
	scanFolders(ctx context.Context, fullScan bool, targets []model.ScanTarget, progress chan<- *ProgressInfo)
}

type controller struct {
	rootCtx         context.Context
	ds              model.DataStore
	cw              artwork.CacheWarmer
	broker          events.Broker
	metrics         metrics.Metrics
	pls             core.Playlists
	limiter         *rate.Sometimes
	count           atomic.Uint32
	folderCount     atomic.Uint32
	changesDetected bool
}

// getLastScanTime returns the most recent scan time across all libraries
func (s *controller) getLastScanTime(ctx context.Context) (time.Time, error) {
	libs, err := s.ds.Library(ctx).GetAll(model.QueryOptions{
		Sort:  "last_scan_at",
		Order: "desc",
		Max:   1,
	})
	if err != nil {
		return time.Time{}, fmt.Errorf("getting libraries: %w", err)
	}

	if len(libs) == 0 {
		return time.Time{}, nil
	}

	return libs[0].LastScanAt, nil
}

// getScanInfo retrieves scan status from the database
func (s *controller) getScanInfo(ctx context.Context) (scanType string, elapsed time.Duration, lastErr string) {
	lastErr, _ = s.ds.Property(ctx).DefaultGet(consts.LastScanErrorKey, "")
	scanType, _ = s.ds.Property(ctx).DefaultGet(consts.LastScanTypeKey, "")
	startTimeStr, _ := s.ds.Property(ctx).DefaultGet(consts.LastScanStartTimeKey, "")

	if startTimeStr != "" {
		startTime, err := time.Parse(time.RFC3339, startTimeStr)
		if err == nil {
			if running.Load() {
				elapsed = time.Since(startTime)
			} else {
				// If scan is not running, calculate elapsed time using the most recent scan time
				lastScanTime, err := s.getLastScanTime(ctx)
				if err == nil && !lastScanTime.IsZero() {
					elapsed = lastScanTime.Sub(startTime)
				}
			}
		}
	}

	return scanType, elapsed, lastErr
}

func (s *controller) Status(ctx context.Context) (*model.ScannerStatus, error) {
	lastScanTime, err := s.getLastScanTime(ctx)
	if err != nil {
		return nil, fmt.Errorf("getting last scan time: %w", err)
	}

	scanType, elapsed, lastErr := s.getScanInfo(ctx)

	if running.Load() {
		status := &model.ScannerStatus{
			Scanning:    true,
			LastScan:    lastScanTime,
			Count:       s.count.Load(),
			FolderCount: s.folderCount.Load(),
			LastError:   lastErr,
			ScanType:    scanType,
			ElapsedTime: elapsed,
		}
		return status, nil
	}

	count, folderCount, err := s.getCounters(ctx)
	if err != nil {
		return nil, fmt.Errorf("getting library stats: %w", err)
	}
	return &model.ScannerStatus{
		Scanning:    false,
		LastScan:    lastScanTime,
		Count:       uint32(count),
		FolderCount: uint32(folderCount),
		LastError:   lastErr,
		ScanType:    scanType,
		ElapsedTime: elapsed,
	}, nil
}

func (s *controller) getCounters(ctx context.Context) (int64, int64, error) {
	libs, err := s.ds.Library(ctx).GetAll()
	if err != nil {
		return 0, 0, fmt.Errorf("library count: %w", err)
	}
	var count, folderCount int64
	for _, l := range libs {
		count += int64(l.TotalSongs)
		folderCount += int64(l.TotalFolders)
	}
	return count, folderCount, nil
}

func (s *controller) ScanAll(requestCtx context.Context, fullScan bool) ([]string, error) {
	return s.ScanFolders(requestCtx, fullScan, nil)
}

func (s *controller) ScanFolders(requestCtx context.Context, fullScan bool, targets []model.ScanTarget) ([]string, error) {
	release, err := lockScan(requestCtx)
	if err != nil {
		return nil, err
	}
	defer release()

	// Prepare the context for the scan
	ctx := request.AddValues(s.rootCtx, requestCtx)
	ctx = auth.WithAdminUser(ctx, s.ds)

	// Send the initial scan status event
	s.sendMessage(ctx, &events.ScanStatus{Scanning: true, Count: 0, FolderCount: 0})
	progress := make(chan *ProgressInfo, 100)
	go func() {
		defer close(progress)
		scanner := s.getScanner()
		scanner.scanFolders(ctx, fullScan, targets, progress)
	}()

	// Wait for the scan to finish, sending progress events to all connected clients
	scanWarnings, scanError := s.trackProgress(ctx, progress)
	for _, w := range scanWarnings {
		log.Warn(ctx, fmt.Sprintf("Scan warning: %s", w))
	}
	// If changes were detected, send a refresh event to all clients
	if s.changesDetected {
		log.Debug(ctx, "Library changes imported. Sending refresh event")
		s.broker.SendBroadcastMessage(ctx, &events.RefreshResource{})
	}
	// Send the final scan status event, with totals
	if count, folderCount, err := s.getCounters(ctx); err != nil {
		s.metrics.WriteAfterScanMetrics(ctx, false)
		return scanWarnings, err
	} else {
		scanType, elapsed, lastErr := s.getScanInfo(ctx)
		s.metrics.WriteAfterScanMetrics(ctx, true)
		s.sendMessage(ctx, &events.ScanStatus{
			Scanning:    false,
			Count:       count,
			FolderCount: folderCount,
			Error:       lastErr,
			ScanType:    scanType,
			ElapsedTime: elapsed,
		})
	}
	return scanWarnings, scanError
}

// This is a global variable that is used to prevent multiple scans from running at the same time.
// "There can be only one" - https://youtu.be/sqcLjcSloXs?si=VlsjEOjTJZ68zIyg
var running atomic.Bool

func lockScan(ctx context.Context) (func(), error) {
	if !running.CompareAndSwap(false, true) {
		log.Debug(ctx, "Scanner already running, ignoring request")
		return func() {}, ErrAlreadyScanning
	}
	return func() {
		running.Store(false)
	}, nil
}

func (s *controller) trackProgress(ctx context.Context, progress <-chan *ProgressInfo) ([]string, error) {
	s.count.Store(0)
	s.folderCount.Store(0)
	s.changesDetected = false

	var warnings []string
	var errs []error
	for p := range pl.ReadOrDone(ctx, progress) {
		if p.Error != "" {
			errs = append(errs, errors.New(p.Error))
			continue
		}
		if p.Warning != "" {
			warnings = append(warnings, p.Warning)
			continue
		}
		if p.ChangesDetected {
			s.changesDetected = true
			continue
		}
		s.count.Add(p.FileCount)
		if p.FileCount > 0 {
			s.folderCount.Add(1)
		}

		scanType, elapsed, lastErr := s.getScanInfo(ctx)
		status := &events.ScanStatus{
			Scanning:    true,
			Count:       int64(s.count.Load()),
			FolderCount: int64(s.folderCount.Load()),
			Error:       lastErr,
			ScanType:    scanType,
			ElapsedTime: elapsed,
		}
		if s.limiter != nil && !p.ForceUpdate {
			s.limiter.Do(func() { s.sendMessage(ctx, status) })
		} else {
			s.sendMessage(ctx, status)
		}
	}
	return warnings, errors.Join(errs...)
}

func (s *controller) sendMessage(ctx context.Context, status *events.ScanStatus) {
	s.broker.SendBroadcastMessage(ctx, status)
}
```
scanner/controller_test.go (new file, 56 lines)
@@ -0,0 +1,56 @@
package scanner_test

import (
	"context"

	"github.com/navidrome/navidrome/conf/configtest"
	"github.com/navidrome/navidrome/consts"
	"github.com/navidrome/navidrome/core"
	"github.com/navidrome/navidrome/core/artwork"
	"github.com/navidrome/navidrome/core/metrics"
	"github.com/navidrome/navidrome/db"
	"github.com/navidrome/navidrome/model"
	"github.com/navidrome/navidrome/persistence"
	"github.com/navidrome/navidrome/scanner"
	"github.com/navidrome/navidrome/server/events"
	"github.com/navidrome/navidrome/tests"
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

var _ = Describe("Controller", func() {
	var ctx context.Context
	var ds *tests.MockDataStore
	var ctrl model.Scanner

	Describe("Status", func() {
		BeforeEach(func() {
			ctx = context.Background()
			db.Init(ctx)
			DeferCleanup(func() { Expect(tests.ClearDB()).To(Succeed()) })
			DeferCleanup(configtest.SetupConfig())
			ds = &tests.MockDataStore{RealDS: persistence.New(db.Db())}
			ds.MockedProperty = &tests.MockedPropertyRepo{}
			ctrl = scanner.New(ctx, ds, artwork.NoopCacheWarmer(), events.NoopBroker(), core.NewPlaylists(ds), metrics.NewNoopInstance())
		})

		It("includes last scan error", func() {
			Expect(ds.Property(ctx).Put(consts.LastScanErrorKey, "boom")).To(Succeed())
			status, err := ctrl.Status(ctx)
			Expect(err).ToNot(HaveOccurred())
			Expect(status.LastError).To(Equal("boom"))
		})

		It("includes scan type and error in status", func() {
			// Set up test data in property repo
			Expect(ds.Property(ctx).Put(consts.LastScanErrorKey, "test error")).To(Succeed())
			Expect(ds.Property(ctx).Put(consts.LastScanTypeKey, "full")).To(Succeed())

			// Get status and verify basic info
			status, err := ctrl.Status(ctx)
			Expect(err).ToNot(HaveOccurred())
			Expect(status.LastError).To(Equal("test error"))
			Expect(status.ScanType).To(Equal("full"))
		})
	})
})
scanner/external.go (new file, 101 lines)
@@ -0,0 +1,101 @@
package scanner

import (
	"context"
	"encoding/gob"
	"errors"
	"fmt"
	"io"
	"os"
	"os/exec"

	"github.com/navidrome/navidrome/conf"
	"github.com/navidrome/navidrome/log"
	"github.com/navidrome/navidrome/model"
)

// scannerExternal is a scanner that runs an external process to do the scanning. It is used to avoid
// memory leaks or retention in the main process, as the scanner can consume a lot of memory. The
// external process will be spawned with the same executable as the current process, and will run
// the "scan" command with the "--subprocess" flag.
//
// The external process will send progress updates to the main process through its STDOUT, and the main
// process will forward them to the caller.
type scannerExternal struct{}

func (s *scannerExternal) scanFolders(ctx context.Context, fullScan bool, targets []model.ScanTarget, progress chan<- *ProgressInfo) {
	s.scan(ctx, fullScan, targets, progress)
}

func (s *scannerExternal) scan(ctx context.Context, fullScan bool, targets []model.ScanTarget, progress chan<- *ProgressInfo) {
	exe, err := os.Executable()
	if err != nil {
		progress <- &ProgressInfo{Error: fmt.Sprintf("failed to get executable path: %s", err)}
		return
	}

	// Build command arguments
	args := []string{
		"scan",
		"--nobanner", "--subprocess",
		"--configfile", conf.Server.ConfigFile,
		"--datafolder", conf.Server.DataFolder,
		"--cachefolder", conf.Server.CacheFolder,
	}

	// Add targets if provided
	if len(targets) > 0 {
		for _, target := range targets {
			args = append(args, "-t", target.String())
		}
		log.Debug(ctx, "Spawning external scanner process with targets", "fullScan", fullScan, "path", exe, "targets", targets)
	} else {
		log.Debug(ctx, "Spawning external scanner process", "fullScan", fullScan, "path", exe)
	}

	// Add full scan flag if needed
	if fullScan {
		args = append(args, "--full")
	}

	cmd := exec.CommandContext(ctx, exe, args...)

	in, out := io.Pipe()
	defer in.Close()
	defer out.Close()
	cmd.Stdout = out
	cmd.Stderr = os.Stderr

	if err := cmd.Start(); err != nil {
		progress <- &ProgressInfo{Error: fmt.Sprintf("failed to start scanner process: %s", err)}
		return
	}
	go s.wait(cmd, out)

	decoder := gob.NewDecoder(in)
	for {
		var p ProgressInfo
		if err := decoder.Decode(&p); err != nil {
			if !errors.Is(err, io.EOF) {
				progress <- &ProgressInfo{Error: fmt.Sprintf("failed to read status from scanner: %s", err)}
			}
			break
		}
		progress <- &p
	}
}

func (s *scannerExternal) wait(cmd *exec.Cmd, out *io.PipeWriter) {
	if err := cmd.Wait(); err != nil {
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			_ = out.CloseWithError(fmt.Errorf("%s exited with non-zero status code: %w", cmd, exitErr))
		} else {
			_ = out.CloseWithError(fmt.Errorf("waiting %s cmd: %w", cmd, err))
		}
		return
	}
	_ = out.Close()
}

var _ scanner = (*scannerExternal)(nil)
scanner/folder_entry.go (new file, 118 lines)
@@ -0,0 +1,118 @@
package scanner

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"io/fs"
	"maps"
	"slices"
	"time"

	"github.com/navidrome/navidrome/core"
	"github.com/navidrome/navidrome/model"
	"github.com/navidrome/navidrome/utils/chrono"
)

func newFolderEntry(job *scanJob, id, path string, updTime time.Time, hash string) *folderEntry {
	f := &folderEntry{
		id:         id,
		job:        job,
		path:       path,
		audioFiles: make(map[string]fs.DirEntry),
		imageFiles: make(map[string]fs.DirEntry),
		albumIDMap: make(map[string]string),
		updTime:    updTime,
		prevHash:   hash,
	}
	return f
}

type folderEntry struct {
	job             *scanJob
	elapsed         chrono.Meter
	path            string    // Full path
	id              string    // DB ID
	modTime         time.Time // From FS
	updTime         time.Time // From DB
	audioFiles      map[string]fs.DirEntry
	imageFiles      map[string]fs.DirEntry
	numPlaylists    int
	numSubFolders   int
	imagesUpdatedAt time.Time
	prevHash        string // Previous hash from DB
	tracks          model.MediaFiles
	albums          model.Albums
	albumIDMap      map[string]string
	artists         model.Artists
	tags            model.TagList
	missingTracks   []*model.MediaFile
}

func (f *folderEntry) hasNoFiles() bool {
	return len(f.audioFiles) == 0 && len(f.imageFiles) == 0 && f.numPlaylists == 0
}

func (f *folderEntry) isEmpty() bool {
	return f.hasNoFiles() && f.numSubFolders == 0
}

func (f *folderEntry) isNew() bool {
	return f.updTime.IsZero()
}

func (f *folderEntry) isOutdated() bool {
	if f.job.lib.FullScanInProgress && f.updTime.Before(f.job.lib.LastScanStartedAt) {
		return true
	}
	return f.prevHash != f.hash()
}

func (f *folderEntry) toFolder() *model.Folder {
	folder := model.NewFolder(f.job.lib, f.path)
	folder.NumAudioFiles = len(f.audioFiles)
	if core.InPlaylistsPath(*folder) {
		folder.NumPlaylists = f.numPlaylists
	}
	folder.ImageFiles = slices.Collect(maps.Keys(f.imageFiles))
	folder.ImagesUpdatedAt = f.imagesUpdatedAt
	folder.Hash = f.hash()
	return folder
}

func (f *folderEntry) hash() string {
	h := md5.New()
	_, _ = fmt.Fprintf(
		h,
		"%s:%d:%d:%s",
		f.modTime.UTC(),
		f.numPlaylists,
		f.numSubFolders,
		f.imagesUpdatedAt.UTC(),
	)

	// Sort the keys of audio and image files to ensure consistent hashing
	audioKeys := slices.Collect(maps.Keys(f.audioFiles))
	slices.Sort(audioKeys)
	imageKeys := slices.Collect(maps.Keys(f.imageFiles))
	slices.Sort(imageKeys)

	// Include audio files with their size and modtime
	for _, key := range audioKeys {
		_, _ = io.WriteString(h, key)
		if info, err := f.audioFiles[key].Info(); err == nil {
			_, _ = fmt.Fprintf(h, ":%d:%s", info.Size(), info.ModTime().UTC().String())
		}
	}

	// Include image files with their size and modtime
	for _, key := range imageKeys {
		_, _ = io.WriteString(h, key)
		if info, err := f.imageFiles[key].Info(); err == nil {
			_, _ = fmt.Fprintf(h, ":%d:%s", info.Size(), info.ModTime().UTC().String())
		}
	}

	return hex.EncodeToString(h.Sum(nil))
}
scanner/folder_entry_test.go (new file, 543 lines)
@@ -0,0 +1,543 @@
package scanner

import (
	"io/fs"
	"time"

	"github.com/navidrome/navidrome/conf"
	"github.com/navidrome/navidrome/conf/configtest"
	"github.com/navidrome/navidrome/model"
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

var _ = Describe("folder_entry", func() {
	var (
		lib  model.Library
		job  *scanJob
		path string
	)

	BeforeEach(func() {
		DeferCleanup(configtest.SetupConfig())
		lib = model.Library{
			ID:                 500,
			Path:               "/music",
			LastScanStartedAt:  time.Now().Add(-1 * time.Hour),
			FullScanInProgress: false,
		}
		job = &scanJob{
			lib:         lib,
			lastUpdates: make(map[string]model.FolderUpdateInfo),
		}
		path = "test/folder"
	})

	Describe("newFolderEntry", func() {
		It("creates a new folder entry with correct initialization", func() {
			folderID := model.FolderID(lib, path)
			updateInfo := model.FolderUpdateInfo{
				UpdatedAt: time.Now().Add(-30 * time.Minute),
				Hash:      "previous-hash",
			}

			entry := newFolderEntry(job, folderID, path, updateInfo.UpdatedAt, updateInfo.Hash)

			Expect(entry.id).To(Equal(folderID))
			Expect(entry.job).To(Equal(job))
			Expect(entry.path).To(Equal(path))
			Expect(entry.audioFiles).To(BeEmpty())
			Expect(entry.imageFiles).To(BeEmpty())
			Expect(entry.albumIDMap).To(BeEmpty())
			Expect(entry.updTime).To(Equal(updateInfo.UpdatedAt))
			Expect(entry.prevHash).To(Equal(updateInfo.Hash))
		})
	})

	Describe("createFolderEntry", func() {
		It("removes the lastUpdate from the job after creation", func() {
			folderID := model.FolderID(lib, path)
			updateInfo := model.FolderUpdateInfo{
				UpdatedAt: time.Now().Add(-30 * time.Minute),
				Hash:      "previous-hash",
			}
			job.lastUpdates[folderID] = updateInfo

			entry := job.createFolderEntry(path)

			Expect(entry.updTime).To(Equal(updateInfo.UpdatedAt))
			Expect(entry.prevHash).To(Equal(updateInfo.Hash))
			Expect(job.lastUpdates).ToNot(HaveKey(folderID))
		})
	})

	Describe("folderEntry", func() {
		var entry *folderEntry

		BeforeEach(func() {
			folderID := model.FolderID(lib, path)
			entry = newFolderEntry(job, folderID, path, time.Time{}, "")
		})

		Describe("hasNoFiles", func() {
			It("returns true when folder has no files or subfolders", func() {
				Expect(entry.hasNoFiles()).To(BeTrue())
			})

			It("returns false when folder has audio files", func() {
				entry.audioFiles["test.mp3"] = &fakeDirEntry{name: "test.mp3"}
				Expect(entry.hasNoFiles()).To(BeFalse())
			})

			It("returns false when folder has image files", func() {
				entry.imageFiles["cover.jpg"] = &fakeDirEntry{name: "cover.jpg"}
				Expect(entry.hasNoFiles()).To(BeFalse())
			})

			It("returns false when folder has playlists", func() {
				entry.numPlaylists = 1
				Expect(entry.hasNoFiles()).To(BeFalse())
			})

			It("ignores subfolders when checking for no files", func() {
				entry.numSubFolders = 1
				Expect(entry.hasNoFiles()).To(BeTrue())
			})

			It("returns false when folder has multiple types of content", func() {
				entry.audioFiles["test.mp3"] = &fakeDirEntry{name: "test.mp3"}
				entry.imageFiles["cover.jpg"] = &fakeDirEntry{name: "cover.jpg"}
				entry.numPlaylists = 2
				entry.numSubFolders = 3
				Expect(entry.hasNoFiles()).To(BeFalse())
			})
		})

		Describe("isEmpty", func() {
			It("returns true when folder has no files or subfolders", func() {
				Expect(entry.isEmpty()).To(BeTrue())
			})
			It("returns false when folder has audio files", func() {
				entry.audioFiles["test.mp3"] = &fakeDirEntry{name: "test.mp3"}
				Expect(entry.isEmpty()).To(BeFalse())
			})
			It("returns false when folder has subfolders", func() {
				entry.numSubFolders = 1
				Expect(entry.isEmpty()).To(BeFalse())
			})
		})

		Describe("isNew", func() {
			It("returns true when updTime is zero", func() {
				entry.updTime = time.Time{}
				Expect(entry.isNew()).To(BeTrue())
			})

			It("returns false when updTime is not zero", func() {
				entry.updTime = time.Now()
				Expect(entry.isNew()).To(BeFalse())
			})
		})

		Describe("toFolder", func() {
			BeforeEach(func() {
				entry.audioFiles = map[string]fs.DirEntry{
					"song1.mp3": &fakeDirEntry{name: "song1.mp3"},
					"song2.mp3": &fakeDirEntry{name: "song2.mp3"},
				}
				entry.imageFiles = map[string]fs.DirEntry{
					"cover.jpg":  &fakeDirEntry{name: "cover.jpg"},
					"folder.png": &fakeDirEntry{name: "folder.png"},
				}
				entry.numPlaylists = 3
				entry.imagesUpdatedAt = time.Now()
			})

			It("converts folder entry to model.Folder correctly", func() {
				folder := entry.toFolder()

				Expect(folder.LibraryID).To(Equal(lib.ID))
				Expect(folder.ID).To(Equal(entry.id))
				Expect(folder.NumAudioFiles).To(Equal(2))
				Expect(folder.ImageFiles).To(ConsistOf("cover.jpg", "folder.png"))
				Expect(folder.ImagesUpdatedAt).To(Equal(entry.imagesUpdatedAt))
				Expect(folder.Hash).To(Equal(entry.hash()))
			})

			It("sets NumPlaylists when folder is in playlists path", func() {
				// Mock InPlaylistsPath to return true by setting empty PlaylistsPath
				originalPath := conf.Server.PlaylistsPath
				conf.Server.PlaylistsPath = ""
				DeferCleanup(func() { conf.Server.PlaylistsPath = originalPath })

				folder := entry.toFolder()
				Expect(folder.NumPlaylists).To(Equal(3))
			})

			It("does not set NumPlaylists when folder is not in playlists path", func() {
				// Mock InPlaylistsPath to return false by setting a different path
				originalPath := conf.Server.PlaylistsPath
				conf.Server.PlaylistsPath = "different/path"
				DeferCleanup(func() { conf.Server.PlaylistsPath = originalPath })

				folder := entry.toFolder()
				Expect(folder.NumPlaylists).To(BeZero())
			})
		})

		Describe("hash", func() {
			BeforeEach(func() {
				entry.modTime = time.Date(2023, 1, 15, 12, 0, 0, 0, time.UTC)
				entry.imagesUpdatedAt = time.Date(2023, 1, 16, 14, 30, 0, 0, time.UTC)
			})

			It("produces deterministic hash for same content", func() {
				entry.audioFiles = map[string]fs.DirEntry{
					"b.mp3": &fakeDirEntry{name: "b.mp3"},
					"a.mp3": &fakeDirEntry{name: "a.mp3"},
				}
				entry.imageFiles = map[string]fs.DirEntry{
					"z.jpg": &fakeDirEntry{name: "z.jpg"},
					"x.png": &fakeDirEntry{name: "x.png"},
				}
				entry.numPlaylists = 2
				entry.numSubFolders = 3

				hash1 := entry.hash()

				// Reverse order of maps
				entry.audioFiles = map[string]fs.DirEntry{
					"a.mp3": &fakeDirEntry{name: "a.mp3"},
					"b.mp3": &fakeDirEntry{name: "b.mp3"},
				}
				entry.imageFiles = map[string]fs.DirEntry{
					"x.png": &fakeDirEntry{name: "x.png"},
					"z.jpg": &fakeDirEntry{name: "z.jpg"},
				}

				hash2 := entry.hash()
				Expect(hash1).To(Equal(hash2))
			})

			It("produces different hash when audio files change", func() {
				entry.audioFiles = map[string]fs.DirEntry{
					"song1.mp3": &fakeDirEntry{name: "song1.mp3"},
				}
				hash1 := entry.hash()

				entry.audioFiles["song2.mp3"] = &fakeDirEntry{name: "song2.mp3"}
				hash2 := entry.hash()

				Expect(hash1).ToNot(Equal(hash2))
			})

			It("produces different hash when image files change", func() {
				entry.imageFiles = map[string]fs.DirEntry{
					"cover.jpg": &fakeDirEntry{name: "cover.jpg"},
				}
				hash1 := entry.hash()

				entry.imageFiles["folder.png"] = &fakeDirEntry{name: "folder.png"}
				hash2 := entry.hash()

				Expect(hash1).ToNot(Equal(hash2))
			})

			It("produces different hash when modification time changes", func() {
				hash1 := entry.hash()

				entry.modTime = entry.modTime.Add(1 * time.Hour)
				hash2 := entry.hash()

				Expect(hash1).ToNot(Equal(hash2))
			})

			It("produces different hash when playlist count changes", func() {
				hash1 := entry.hash()

				entry.numPlaylists = 5
				hash2 := entry.hash()

				Expect(hash1).ToNot(Equal(hash2))
			})

			It("produces different hash when subfolder count changes", func() {
				hash1 := entry.hash()

				entry.numSubFolders = 3
				hash2 := entry.hash()

				Expect(hash1).ToNot(Equal(hash2))
			})

			It("produces different hash when images updated time changes", func() {
				hash1 := entry.hash()

				entry.imagesUpdatedAt = entry.imagesUpdatedAt.Add(2 * time.Hour)
				hash2 := entry.hash()

				Expect(hash1).ToNot(Equal(hash2))
			})

			It("produces different hash when audio file size changes", func() {
				entry.audioFiles["test.mp3"] = &fakeDirEntry{
					name: "test.mp3",
					fileInfo: &fakeFileInfo{
						name:    "test.mp3",
						size:    1000,
						modTime: time.Now(),
					},
				}
				hash1 := entry.hash()

				entry.audioFiles["test.mp3"] = &fakeDirEntry{
					name: "test.mp3",
					fileInfo: &fakeFileInfo{
						name:    "test.mp3",
						size:    2000, // Different size
						modTime: time.Now(),
					},
				}
				hash2 := entry.hash()

				Expect(hash1).ToNot(Equal(hash2))
			})

			It("produces different hash when audio file modification time changes", func() {
				baseTime := time.Now()
				entry.audioFiles["test.mp3"] = &fakeDirEntry{
					name: "test.mp3",
					fileInfo: &fakeFileInfo{
						name:    "test.mp3",
						size:    1000,
						modTime: baseTime,
					},
				}
				hash1 := entry.hash()

				entry.audioFiles["test.mp3"] = &fakeDirEntry{
					name: "test.mp3",
					fileInfo: &fakeFileInfo{
						name:    "test.mp3",
						size:    1000,
						modTime: baseTime.Add(1 * time.Hour), // Different modtime
					},
				}
				hash2 := entry.hash()

				Expect(hash1).ToNot(Equal(hash2))
			})

			It("produces different hash when image file size changes", func() {
				entry.imageFiles["cover.jpg"] = &fakeDirEntry{
					name: "cover.jpg",
					fileInfo: &fakeFileInfo{
						name:    "cover.jpg",
						size:    5000,
						modTime: time.Now(),
					},
				}
				hash1 := entry.hash()

				entry.imageFiles["cover.jpg"] = &fakeDirEntry{
					name: "cover.jpg",
					fileInfo: &fakeFileInfo{
						name:    "cover.jpg",
						size:    6000, // Different size
						modTime: time.Now(),
					},
				}
				hash2 := entry.hash()

				Expect(hash1).ToNot(Equal(hash2))
			})

			It("produces different hash when image file modification time changes", func() {
				baseTime := time.Now()
				entry.imageFiles["cover.jpg"] = &fakeDirEntry{
					name: "cover.jpg",
					fileInfo: &fakeFileInfo{
						name:    "cover.jpg",
						size:    5000,
						modTime: baseTime,
					},
				}
				hash1 := entry.hash()

				entry.imageFiles["cover.jpg"] = &fakeDirEntry{
					name: "cover.jpg",
					fileInfo: &fakeFileInfo{
						name:    "cover.jpg",
						size:    5000,
						modTime: baseTime.Add(1 * time.Hour), // Different modtime
					},
				}
				hash2 := entry.hash()

				Expect(hash1).ToNot(Equal(hash2))
			})

			It("produces valid hex-encoded hash", func() {
				hash := entry.hash()
				Expect(hash).To(HaveLen(32)) // MD5 hash should be 32 hex characters
				Expect(hash).To(MatchRegexp("^[a-f0-9]{32}$"))
			})
		})

		Describe("isOutdated", func() {
			BeforeEach(func() {
				entry.prevHash = entry.hash()
			})

			Context("when full scan is in progress", func() {
				BeforeEach(func() {
					entry.job.lib.FullScanInProgress = true
					entry.job.lib.LastScanStartedAt = time.Now()
				})

				It("returns true when updTime is before LastScanStartedAt", func() {
					entry.updTime = entry.job.lib.LastScanStartedAt.Add(-1 * time.Hour)
					Expect(entry.isOutdated()).To(BeTrue())
				})

				It("returns false when updTime is after LastScanStartedAt", func() {
					entry.updTime = entry.job.lib.LastScanStartedAt.Add(1 * time.Hour)
					Expect(entry.isOutdated()).To(BeFalse())
				})

				It("returns false when updTime equals LastScanStartedAt", func() {
					entry.updTime = entry.job.lib.LastScanStartedAt
					Expect(entry.isOutdated()).To(BeFalse())
				})
			})

			Context("when full scan is not in progress", func() {
				BeforeEach(func() {
					entry.job.lib.FullScanInProgress = false
				})

				It("returns false when hash hasn't changed", func() {
					Expect(entry.isOutdated()).To(BeFalse())
				})

				It("returns true when hash has changed", func() {
					entry.numPlaylists = 10 // Change something to change the hash
					Expect(entry.isOutdated()).To(BeTrue())
				})

				It("returns true when prevHash is empty", func() {
					entry.prevHash = ""
					Expect(entry.isOutdated()).To(BeTrue())
				})
			})

			Context("priority between conditions", func() {
				BeforeEach(func() {
					entry.job.lib.FullScanInProgress = true
					entry.job.lib.LastScanStartedAt = time.Now()
					entry.updTime = entry.job.lib.LastScanStartedAt.Add(-1 * time.Hour)
				})

				It("returns true for full scan condition even when hash hasn't changed", func() {
					// Hash is the same but full scan condition should take priority
					Expect(entry.isOutdated()).To(BeTrue())
				})

				It("returns true when full scan condition is not met but hash changed", func() {
					entry.updTime = entry.job.lib.LastScanStartedAt.Add(1 * time.Hour)
					entry.numPlaylists = 10 // Change hash
					Expect(entry.isOutdated()).To(BeTrue())
				})
			})
		})
	})

	Describe("integration scenarios", func() {
		It("handles complete folder lifecycle", func() {
			// Create new folder entry
			folderPath := "music/rock/album"
			folderID := model.FolderID(lib, folderPath)
			entry := newFolderEntry(job, folderID, folderPath, time.Time{}, "")

			// Initially new and has no files
			Expect(entry.isNew()).To(BeTrue())
			Expect(entry.hasNoFiles()).To(BeTrue())

			// Add some files
			entry.audioFiles["track1.mp3"] = &fakeDirEntry{name: "track1.mp3"}
			entry.audioFiles["track2.mp3"] = &fakeDirEntry{name: "track2.mp3"}
			entry.imageFiles["cover.jpg"] = &fakeDirEntry{name: "cover.jpg"}
			entry.numSubFolders = 1
			entry.modTime = time.Now()
			entry.imagesUpdatedAt = time.Now()

			// No longer empty
			Expect(entry.hasNoFiles()).To(BeFalse())

			// Set previous hash to current hash (simulating it's been saved)
			entry.prevHash = entry.hash()
			entry.updTime = time.Now()

			// Should not be new or outdated
			Expect(entry.isNew()).To(BeFalse())
			Expect(entry.isOutdated()).To(BeFalse())

			// Convert to model folder
			folder := entry.toFolder()
			Expect(folder.NumAudioFiles).To(Equal(2))
			Expect(folder.ImageFiles).To(HaveLen(1))
			Expect(folder.Hash).To(Equal(entry.hash()))

			// Modify folder and verify it becomes outdated
			entry.audioFiles["track3.mp3"] = &fakeDirEntry{name: "track3.mp3"}
			Expect(entry.isOutdated()).To(BeTrue())
		})
	})
})

// fakeDirEntry implements fs.DirEntry for testing
type fakeDirEntry struct {
	name     string
	isDir    bool
	typ      fs.FileMode
	fileInfo fs.FileInfo
}

func (f *fakeDirEntry) Name() string {
	return f.name
}

func (f *fakeDirEntry) IsDir() bool {
	return f.isDir
}

func (f *fakeDirEntry) Type() fs.FileMode {
	return f.typ
}

func (f *fakeDirEntry) Info() (fs.FileInfo, error) {
	if f.fileInfo != nil {
		return f.fileInfo, nil
	}
	return &fakeFileInfo{
		name:  f.name,
		isDir: f.isDir,
		mode:  f.typ,
	}, nil
}

// fakeFileInfo implements fs.FileInfo for testing
type fakeFileInfo struct {
	name    string
	size    int64
	mode    fs.FileMode
	modTime time.Time
	isDir   bool
}

func (f *fakeFileInfo) Name() string       { return f.name }
func (f *fakeFileInfo) Size() int64        { return f.size }
func (f *fakeFileInfo) Mode() fs.FileMode  { return f.mode }
func (f *fakeFileInfo) ModTime() time.Time { return f.modTime }
func (f *fakeFileInfo) IsDir() bool        { return f.isDir }
func (f *fakeFileInfo) Sys() any           { return nil }
scanner/ignore_checker.go (new file, 163 lines)
@@ -0,0 +1,163 @@
package scanner

import (
	"bufio"
	"context"
	"io/fs"
	"path"
	"strings"

	"github.com/navidrome/navidrome/consts"
	"github.com/navidrome/navidrome/log"
	ignore "github.com/sabhiram/go-gitignore"
)

// IgnoreChecker manages .ndignore patterns using a stack-based approach.
// Use Push() to add patterns when entering a folder, Pop() when leaving,
// and ShouldIgnore() to check if a path should be ignored.
type IgnoreChecker struct {
	fsys            fs.FS
	patternStack    [][]string        // Stack of patterns for each folder level
	currentPatterns []string          // Flattened current patterns
	matcher         *ignore.GitIgnore // Compiled matcher for current patterns
}

// newIgnoreChecker creates a new IgnoreChecker for the given filesystem.
func newIgnoreChecker(fsys fs.FS) *IgnoreChecker {
	return &IgnoreChecker{
		fsys:         fsys,
		patternStack: make([][]string, 0),
	}
}

// Push loads .ndignore patterns from the specified folder and adds them to the pattern stack.
// Use this when entering a folder during directory tree traversal.
func (ic *IgnoreChecker) Push(ctx context.Context, folder string) error {
	patterns := ic.loadPatternsFromFolder(ctx, folder)
	ic.patternStack = append(ic.patternStack, patterns)
	ic.rebuildCurrentPatterns()
	return nil
}

// Pop removes the most recent patterns from the stack.
// Use this when leaving a folder during directory tree traversal.
func (ic *IgnoreChecker) Pop() {
	if len(ic.patternStack) > 0 {
		ic.patternStack = ic.patternStack[:len(ic.patternStack)-1]
		ic.rebuildCurrentPatterns()
	}
}

// PushAllParents pushes patterns from root down to the target path.
// This is a convenience method for when you need to check a specific path
// without recursively walking the tree. It handles the common pattern of
// pushing all parent directories from root to the target.
// This method is optimized to compile patterns only once at the end.
func (ic *IgnoreChecker) PushAllParents(ctx context.Context, targetPath string) error {
	if targetPath == "." || targetPath == "" {
		// Simple case: just push root
		return ic.Push(ctx, ".")
	}

	// Load patterns for root
	patterns := ic.loadPatternsFromFolder(ctx, ".")
	ic.patternStack = append(ic.patternStack, patterns)

	// Load patterns for each parent directory
	currentPath := "."
	parts := strings.Split(path.Clean(targetPath), "/")
	for _, part := range parts {
		if part == "." || part == "" {
			continue
		}
		currentPath = path.Join(currentPath, part)
		patterns = ic.loadPatternsFromFolder(ctx, currentPath)
		ic.patternStack = append(ic.patternStack, patterns)
	}

	// Rebuild and compile patterns only once at the end
	ic.rebuildCurrentPatterns()
	return nil
}

// ShouldIgnore checks if the given path should be ignored based on the current patterns.
// Returns true if the path matches any ignore pattern, false otherwise.
func (ic *IgnoreChecker) ShouldIgnore(ctx context.Context, relPath string) bool {
	// Handle root/empty path - never ignore
	if relPath == "" || relPath == "." {
		return false
	}

	// If no patterns loaded, nothing to ignore
	if ic.matcher == nil {
		return false
	}

	matches := ic.matcher.MatchesPath(relPath)
	if matches {
		log.Trace(ctx, "Scanner: Ignoring entry matching .ndignore", "path", relPath)
	}
	return matches
}

// loadPatternsFromFolder reads the .ndignore file in the specified folder and returns the patterns.
// If the file doesn't exist, returns an empty slice.
// If the file exists but is empty, returns a pattern to ignore everything ("**/*").
func (ic *IgnoreChecker) loadPatternsFromFolder(ctx context.Context, folder string) []string {
	ignoreFilePath := path.Join(folder, consts.ScanIgnoreFile)
	var patterns []string

	// Check if .ndignore file exists
	if _, err := fs.Stat(ic.fsys, ignoreFilePath); err != nil {
		// No .ndignore file in this folder
		return patterns
	}

	// Read and parse the .ndignore file
	ignoreFile, err := ic.fsys.Open(ignoreFilePath)
	if err != nil {
		log.Warn(ctx, "Scanner: Error opening .ndignore file", "path", ignoreFilePath, err)
		return patterns
	}
	defer ignoreFile.Close()

	lineScanner := bufio.NewScanner(ignoreFile)
	for lineScanner.Scan() {
		line := strings.TrimSpace(lineScanner.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue // Skip empty lines, whitespace-only lines, and comments
		}
		patterns = append(patterns, line)
	}

	if err := lineScanner.Err(); err != nil {
		log.Warn(ctx, "Scanner: Error reading .ndignore file", "path", ignoreFilePath, err)
		return patterns
	}

	// If the .ndignore file is empty, ignore everything
	if len(patterns) == 0 {
		log.Trace(ctx, "Scanner: .ndignore file is empty, ignoring everything", "path", folder)
|
||||
patterns = []string{"**/*"}
|
||||
}
|
||||
|
||||
return patterns
|
||||
}
|
||||
|
||||
// rebuildCurrentPatterns flattens the pattern stack into currentPatterns and recompiles the matcher.
|
||||
func (ic *IgnoreChecker) rebuildCurrentPatterns() {
|
||||
ic.currentPatterns = make([]string, 0)
|
||||
for _, patterns := range ic.patternStack {
|
||||
ic.currentPatterns = append(ic.currentPatterns, patterns...)
|
||||
}
|
||||
ic.compilePatterns()
|
||||
}
|
||||
|
||||
// compilePatterns compiles the current patterns into a GitIgnore matcher.
|
||||
func (ic *IgnoreChecker) compilePatterns() {
|
||||
if len(ic.currentPatterns) == 0 {
|
||||
ic.matcher = nil
|
||||
return
|
||||
}
|
||||
ic.matcher = ignore.CompileIgnoreLines(ic.currentPatterns...)
|
||||
}
|
||||
313	scanner/ignore_checker_test.go	Normal file
@@ -0,0 +1,313 @@
package scanner

import (
	"context"
	"testing/fstest"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

var _ = Describe("IgnoreChecker", func() {
	Describe("loadPatternsFromFolder", func() {
		var ic *IgnoreChecker
		var ctx context.Context

		BeforeEach(func() {
			ctx = context.Background()
		})

		Context("when .ndignore file does not exist", func() {
			It("should return empty patterns", func() {
				fsys := fstest.MapFS{}
				ic = newIgnoreChecker(fsys)
				patterns := ic.loadPatternsFromFolder(ctx, ".")
				Expect(patterns).To(BeEmpty())
			})
		})

		Context("when .ndignore file is empty", func() {
			It("should return wildcard to ignore everything", func() {
				fsys := fstest.MapFS{
					".ndignore": &fstest.MapFile{Data: []byte("")},
				}
				ic = newIgnoreChecker(fsys)
				patterns := ic.loadPatternsFromFolder(ctx, ".")
				Expect(patterns).To(Equal([]string{"**/*"}))
			})
		})

		DescribeTable("parsing .ndignore content",
			func(content string, expectedPatterns []string) {
				fsys := fstest.MapFS{
					".ndignore": &fstest.MapFile{Data: []byte(content)},
				}
				ic = newIgnoreChecker(fsys)
				patterns := ic.loadPatternsFromFolder(ctx, ".")
				Expect(patterns).To(Equal(expectedPatterns))
			},
			Entry("single pattern", "*.txt", []string{"*.txt"}),
			Entry("multiple patterns", "*.txt\n*.log", []string{"*.txt", "*.log"}),
			Entry("with comments", "# comment\n*.txt\n# another\n*.log", []string{"*.txt", "*.log"}),
			Entry("with empty lines", "*.txt\n\n*.log\n\n", []string{"*.txt", "*.log"}),
			Entry("mixed content", "# header\n\n*.txt\n# middle\n*.log\n\n", []string{"*.txt", "*.log"}),
			Entry("only comments and empty lines", "# comment\n\n# another\n", []string{"**/*"}),
			Entry("trailing newline", "*.txt\n*.log\n", []string{"*.txt", "*.log"}),
			Entry("directory pattern", "temp/", []string{"temp/"}),
			Entry("wildcard pattern", "**/*.mp3", []string{"**/*.mp3"}),
			Entry("multiple wildcards", "**/*.mp3\n**/*.flac\n*.log", []string{"**/*.mp3", "**/*.flac", "*.log"}),
			Entry("negation pattern", "!important.txt", []string{"!important.txt"}),
			Entry("comment with hash not at start is pattern", "not#comment", []string{"not#comment"}),
			Entry("whitespace-only lines skipped", "*.txt\n \n*.log\n\t\n", []string{"*.txt", "*.log"}),
			Entry("patterns with whitespace trimmed", " *.txt \n\t*.log\t", []string{"*.txt", "*.log"}),
		)
	})

	Describe("Push and Pop", func() {
		var ic *IgnoreChecker
		var fsys fstest.MapFS
		var ctx context.Context

		BeforeEach(func() {
			ctx = context.Background()
			fsys = fstest.MapFS{
				".ndignore":         &fstest.MapFile{Data: []byte("*.txt")},
				"folder1/.ndignore": &fstest.MapFile{Data: []byte("*.mp3")},
				"folder2/.ndignore": &fstest.MapFile{Data: []byte("*.flac")},
			}
			ic = newIgnoreChecker(fsys)
		})

		Context("Push", func() {
			It("should add patterns to stack", func() {
				err := ic.Push(ctx, ".")
				Expect(err).ToNot(HaveOccurred())
				Expect(len(ic.patternStack)).To(Equal(1))
				Expect(ic.currentPatterns).To(ContainElement("*.txt"))
			})

			It("should compile matcher after push", func() {
				err := ic.Push(ctx, ".")
				Expect(err).ToNot(HaveOccurred())
				Expect(ic.matcher).ToNot(BeNil())
			})

			It("should accumulate patterns from multiple levels", func() {
				err := ic.Push(ctx, ".")
				Expect(err).ToNot(HaveOccurred())
				err = ic.Push(ctx, "folder1")
				Expect(err).ToNot(HaveOccurred())
				Expect(len(ic.patternStack)).To(Equal(2))
				Expect(ic.currentPatterns).To(ConsistOf("*.txt", "*.mp3"))
			})

			It("should handle push when no .ndignore exists", func() {
				err := ic.Push(ctx, "nonexistent")
				Expect(err).ToNot(HaveOccurred())
				Expect(len(ic.patternStack)).To(Equal(1))
				Expect(ic.currentPatterns).To(BeEmpty())
			})
		})

		Context("Pop", func() {
			It("should remove most recent patterns", func() {
				err := ic.Push(ctx, ".")
				Expect(err).ToNot(HaveOccurred())
				err = ic.Push(ctx, "folder1")
				Expect(err).ToNot(HaveOccurred())
				ic.Pop()
				Expect(len(ic.patternStack)).To(Equal(1))
				Expect(ic.currentPatterns).To(Equal([]string{"*.txt"}))
			})

			It("should handle Pop on empty stack gracefully", func() {
				Expect(func() { ic.Pop() }).ToNot(Panic())
				Expect(ic.patternStack).To(BeEmpty())
			})

			It("should set matcher to nil when all patterns popped", func() {
				err := ic.Push(ctx, ".")
				Expect(err).ToNot(HaveOccurred())
				Expect(ic.matcher).ToNot(BeNil())
				ic.Pop()
				Expect(ic.matcher).To(BeNil())
			})

			It("should update matcher after pop", func() {
				err := ic.Push(ctx, ".")
				Expect(err).ToNot(HaveOccurred())
				err = ic.Push(ctx, "folder1")
				Expect(err).ToNot(HaveOccurred())
				matcher1 := ic.matcher
				ic.Pop()
				matcher2 := ic.matcher
				Expect(matcher1).ToNot(Equal(matcher2))
			})
		})

		Context("multiple Push/Pop cycles", func() {
			It("should maintain correct state through cycles", func() {
				err := ic.Push(ctx, ".")
				Expect(err).ToNot(HaveOccurred())
				Expect(ic.currentPatterns).To(Equal([]string{"*.txt"}))

				err = ic.Push(ctx, "folder1")
				Expect(err).ToNot(HaveOccurred())
				Expect(ic.currentPatterns).To(ConsistOf("*.txt", "*.mp3"))

				ic.Pop()
				Expect(ic.currentPatterns).To(Equal([]string{"*.txt"}))

				err = ic.Push(ctx, "folder2")
				Expect(err).ToNot(HaveOccurred())
				Expect(ic.currentPatterns).To(ConsistOf("*.txt", "*.flac"))

				ic.Pop()
				Expect(ic.currentPatterns).To(Equal([]string{"*.txt"}))

				ic.Pop()
				Expect(ic.currentPatterns).To(BeEmpty())
			})
		})
	})

	Describe("PushAllParents", func() {
		var ic *IgnoreChecker
		var ctx context.Context

		BeforeEach(func() {
			ctx = context.Background()
			fsys := fstest.MapFS{
				".ndignore":                         &fstest.MapFile{Data: []byte("root.txt")},
				"folder1/.ndignore":                 &fstest.MapFile{Data: []byte("level1.txt")},
				"folder1/folder2/.ndignore":         &fstest.MapFile{Data: []byte("level2.txt")},
				"folder1/folder2/folder3/.ndignore": &fstest.MapFile{Data: []byte("level3.txt")},
			}
			ic = newIgnoreChecker(fsys)
		})

		DescribeTable("loading parent patterns",
			func(targetPath string, expectedStackDepth int, expectedPatterns []string) {
				err := ic.PushAllParents(ctx, targetPath)
				Expect(err).ToNot(HaveOccurred())
				Expect(len(ic.patternStack)).To(Equal(expectedStackDepth))
				Expect(ic.currentPatterns).To(ConsistOf(expectedPatterns))
			},
			Entry("root path", ".", 1, []string{"root.txt"}),
			Entry("empty path", "", 1, []string{"root.txt"}),
			Entry("single level", "folder1", 2, []string{"root.txt", "level1.txt"}),
			Entry("two levels", "folder1/folder2", 3, []string{"root.txt", "level1.txt", "level2.txt"}),
			Entry("three levels", "folder1/folder2/folder3", 4, []string{"root.txt", "level1.txt", "level2.txt", "level3.txt"}),
		)

		It("should only compile patterns once at the end", func() {
			// This is more of a behavioral test - we verify the matcher is not nil after PushAllParents
			err := ic.PushAllParents(ctx, "folder1/folder2")
			Expect(err).ToNot(HaveOccurred())
			Expect(ic.matcher).ToNot(BeNil())
		})

		It("should handle paths with dot", func() {
			err := ic.PushAllParents(ctx, "./folder1")
			Expect(err).ToNot(HaveOccurred())
			Expect(len(ic.patternStack)).To(Equal(2))
		})

		Context("when some parent folders have no .ndignore", func() {
			BeforeEach(func() {
				fsys := fstest.MapFS{
					".ndignore":                 &fstest.MapFile{Data: []byte("root.txt")},
					"folder1/folder2/.ndignore": &fstest.MapFile{Data: []byte("level2.txt")},
				}
				ic = newIgnoreChecker(fsys)
			})

			It("should still push all parent levels", func() {
				err := ic.PushAllParents(ctx, "folder1/folder2")
				Expect(err).ToNot(HaveOccurred())
				Expect(len(ic.patternStack)).To(Equal(3)) // root, folder1 (empty), folder2
				Expect(ic.currentPatterns).To(ConsistOf("root.txt", "level2.txt"))
			})
		})
	})

	Describe("ShouldIgnore", func() {
		var ic *IgnoreChecker
		var ctx context.Context

		BeforeEach(func() {
			ctx = context.Background()
		})

		Context("with no patterns loaded", func() {
			It("should not ignore any path", func() {
				fsys := fstest.MapFS{}
				ic = newIgnoreChecker(fsys)
				Expect(ic.ShouldIgnore(ctx, "anything.txt")).To(BeFalse())
				Expect(ic.ShouldIgnore(ctx, "folder/file.mp3")).To(BeFalse())
			})
		})

		Context("special paths", func() {
			BeforeEach(func() {
				fsys := fstest.MapFS{
					".ndignore": &fstest.MapFile{Data: []byte("**/*")},
				}
				ic = newIgnoreChecker(fsys)
				err := ic.Push(ctx, ".")
				Expect(err).ToNot(HaveOccurred())
			})

			It("should never ignore root or empty paths", func() {
				Expect(ic.ShouldIgnore(ctx, "")).To(BeFalse())
				Expect(ic.ShouldIgnore(ctx, ".")).To(BeFalse())
			})

			It("should ignore all other paths with wildcard", func() {
				Expect(ic.ShouldIgnore(ctx, "file.txt")).To(BeTrue())
				Expect(ic.ShouldIgnore(ctx, "folder/file.mp3")).To(BeTrue())
			})
		})

		DescribeTable("pattern matching",
			func(pattern string, path string, shouldMatch bool) {
				fsys := fstest.MapFS{
					".ndignore": &fstest.MapFile{Data: []byte(pattern)},
				}
				ic = newIgnoreChecker(fsys)
				err := ic.Push(ctx, ".")
				Expect(err).ToNot(HaveOccurred())
				Expect(ic.ShouldIgnore(ctx, path)).To(Equal(shouldMatch))
			},
			Entry("glob match", "*.txt", "file.txt", true),
			Entry("glob no match", "*.txt", "file.mp3", false),
			Entry("directory pattern match", "tmp/", "tmp/file.txt", true),
			Entry("directory pattern no match", "tmp/", "temporary/file.txt", false),
			Entry("nested glob match", "**/*.log", "deep/nested/file.log", true),
			Entry("nested glob no match", "**/*.log", "deep/nested/file.txt", false),
			Entry("specific file match", "ignore.me", "ignore.me", true),
			Entry("specific file no match", "ignore.me", "keep.me", false),
			Entry("wildcard all", "**/*", "any/path/file.txt", true),
			Entry("nested specific match", "temp/*", "temp/cache.db", true),
			Entry("nested specific no match", "temp/*", "temporary/cache.db", false),
		)

		Context("with multiple patterns", func() {
			BeforeEach(func() {
				fsys := fstest.MapFS{
					".ndignore": &fstest.MapFile{Data: []byte("*.txt\n*.log\ntemp/")},
				}
				ic = newIgnoreChecker(fsys)
				err := ic.Push(ctx, ".")
				Expect(err).ToNot(HaveOccurred())
			})

			It("should match any of the patterns", func() {
				Expect(ic.ShouldIgnore(ctx, "file.txt")).To(BeTrue())
				Expect(ic.ShouldIgnore(ctx, "debug.log")).To(BeTrue())
				Expect(ic.ShouldIgnore(ctx, "temp/cache")).To(BeTrue())
				Expect(ic.ShouldIgnore(ctx, "music.mp3")).To(BeFalse())
			})
		})
	})
})
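The parent-walk that `PushAllParents` performs, and that the "loading parent patterns" table above exercises, reduces to producing the chain of directories from the root down to the target. A minimal stdlib-only sketch (`parentChain` is a hypothetical standalone helper, not part of the package above):

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// parentChain lists every directory from the root (".") down to target, in
// the order PushAllParents visits them: clean the path, split on "/", and
// join each component onto the running path.
func parentChain(target string) []string {
	chain := []string{"."}
	if target == "" || target == "." {
		return chain
	}
	current := "."
	for _, part := range strings.Split(path.Clean(target), "/") {
		if part == "." || part == "" {
			continue
		}
		current = path.Join(current, part)
		chain = append(chain, current)
	}
	return chain
}

func main() {
	// path.Clean normalizes the leading "./", matching the
	// "should handle paths with dot" test above.
	fmt.Println(parentChain("./folder1/folder2")) // prints [. folder1 folder1/folder2]
}
```

The chain length equals the stack depth the table expects: root plus one entry per path component.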
211	scanner/metadata_old/ffmpeg/ffmpeg.go	Normal file
@@ -0,0 +1,211 @@
package ffmpeg

import (
	"bufio"
	"context"
	"errors"
	"regexp"
	"strconv"
	"strings"
	"time"

	"github.com/navidrome/navidrome/core/ffmpeg"
	"github.com/navidrome/navidrome/log"
	"github.com/navidrome/navidrome/scanner/metadata_old"
)

const ExtractorID = "ffmpeg"

type Extractor struct {
	ffmpeg ffmpeg.FFmpeg
}

func (e *Extractor) Parse(files ...string) (map[string]metadata_old.ParsedTags, error) {
	output, err := e.ffmpeg.Probe(context.TODO(), files)
	if err != nil {
		log.Error("Cannot use ffmpeg to extract tags. Aborting", err)
		return nil, err
	}
	fileTags := map[string]metadata_old.ParsedTags{}
	if len(output) == 0 {
		return fileTags, errors.New("error extracting metadata files")
	}
	infos := e.parseOutput(output)
	for file, info := range infos {
		tags, err := e.extractMetadata(file, info)
		// Skip files with errors
		if err == nil {
			fileTags[file] = tags
		}
	}
	return fileTags, nil
}

func (e *Extractor) CustomMappings() metadata_old.ParsedTags {
	return metadata_old.ParsedTags{
		"disc":         {"tpa"},
		"has_picture":  {"metadata_block_picture"},
		"originaldate": {"tdor"},
	}
}

func (e *Extractor) Version() string {
	return e.ffmpeg.Version()
}

func (e *Extractor) extractMetadata(filePath, info string) (metadata_old.ParsedTags, error) {
	tags := e.parseInfo(info)
	if len(tags) == 0 {
		log.Trace("Not a media file. Skipping", "filePath", filePath)
		return nil, errors.New("not a media file")
	}

	return tags, nil
}

var (
	// Input #0, mp3, from 'groovin.mp3':
	inputRegex = regexp.MustCompile(`(?m)^Input #\d+,.*,\sfrom\s'(.*)'`)

	// TITLE : Back In Black
	tagsRx = regexp.MustCompile(`(?i)^\s{4,6}([\w\s-]+)\s*:(.*)`)

	// : Second comment line
	continuationRx = regexp.MustCompile(`(?i)^\s+:(.*)`)

	// Duration: 00:04:16.00, start: 0.000000, bitrate: 995 kb/s
	durationRx = regexp.MustCompile(`^\s\sDuration: ([\d.:]+).*bitrate: (\d+)`)

	// Stream #0:0: Audio: mp3, 44100 Hz, stereo, fltp, 192 kb/s
	bitRateRx = regexp.MustCompile(`^\s{2,4}Stream #\d+:\d+: Audio:.*, (\d+) kb/s`)

	// Stream #0:0: Audio: mp3, 44100 Hz, stereo, fltp, 192 kb/s
	// Stream #0:0: Audio: flac, 44100 Hz, stereo, s16
	// Stream #0:0: Audio: dsd_lsbf_planar, 352800 Hz, stereo, fltp, 5644 kb/s
	audioStreamRx = regexp.MustCompile(`^\s{2,4}Stream #\d+:\d+.*: Audio: (.*), (.*) Hz, ([\w.]+),*(.*.,)*`)

	// Stream #0:1: Video: mjpeg, yuvj444p(pc, bt470bg/unknown/unknown), 600x600 [SAR 1:1 DAR 1:1], 90k tbr, 90k tbn, 90k tbc
	coverRx = regexp.MustCompile(`^\s{2,4}Stream #\d+:.+: (Video):.*`)
)

func (e *Extractor) parseOutput(output string) map[string]string {
	outputs := map[string]string{}
	all := inputRegex.FindAllStringSubmatchIndex(output, -1)
	for i, loc := range all {
		// Filename is the first captured group
		file := output[loc[2]:loc[3]]

		// File info is everything from the match, up until the beginning of the next match
		info := ""
		initial := loc[1]
		if i < len(all)-1 {
			end := all[i+1][0] - 1
			info = output[initial:end]
		} else {
			// if this is the last match
			info = output[initial:]
		}
		outputs[file] = info
	}
	return outputs
}

func (e *Extractor) parseInfo(info string) map[string][]string {
	tags := map[string][]string{}

	reader := strings.NewReader(info)
	scanner := bufio.NewScanner(reader)
	lastTag := ""
	for scanner.Scan() {
		line := scanner.Text()
		if len(line) == 0 {
			continue
		}
		match := tagsRx.FindStringSubmatch(line)
		if len(match) > 0 {
			tagName := strings.TrimSpace(strings.ToLower(match[1]))
			if tagName != "" {
				tagValue := strings.TrimSpace(match[2])
				tags[tagName] = append(tags[tagName], tagValue)
				lastTag = tagName
				continue
			}
		}

		if lastTag != "" {
			match = continuationRx.FindStringSubmatch(line)
			if len(match) > 0 {
				if tags[lastTag] == nil {
					tags[lastTag] = []string{""}
				}
				tagValue := tags[lastTag][0]
				tags[lastTag][0] = tagValue + "\n" + strings.TrimSpace(match[1])
				continue
			}
		}

		lastTag = ""
		match = coverRx.FindStringSubmatch(line)
		if len(match) > 0 {
			tags["has_picture"] = []string{"true"}
			continue
		}

		match = durationRx.FindStringSubmatch(line)
		if len(match) > 0 {
			tags["duration"] = []string{e.parseDuration(match[1])}
			if len(match) > 1 {
				tags["bitrate"] = []string{match[2]}
			}
			continue
		}

		match = bitRateRx.FindStringSubmatch(line)
		if len(match) > 0 {
			tags["bitrate"] = []string{match[1]}
		}

		match = audioStreamRx.FindStringSubmatch(line)
		if len(match) > 0 {
			tags["samplerate"] = []string{match[2]}
			tags["channels"] = []string{e.parseChannels(match[3])}
		}
	}

	comment := tags["comment"]
	if len(comment) > 0 && comment[0] == "Cover (front)" {
		delete(tags, "comment")
	}

	return tags
}

var zeroTime = time.Date(0000, time.January, 1, 0, 0, 0, 0, time.UTC)

func (e *Extractor) parseDuration(tag string) string {
	d, err := time.Parse("15:04:05", tag)
	if err != nil {
		return "0"
	}
	return strconv.FormatFloat(d.Sub(zeroTime).Seconds(), 'f', 2, 32)
}

func (e *Extractor) parseChannels(tag string) string {
	switch tag {
	case "mono":
		return "1"
	case "stereo":
		return "2"
	case "5.1":
		return "6"
	case "7.1":
		return "8"
	default:
		return "0"
	}
}

// Inputs will always be absolute paths
func init() {
	metadata_old.RegisterExtractor(ExtractorID, &Extractor{ffmpeg: ffmpeg.New()})
}
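The `parseDuration` trick above deserves a note: `time.Parse` accepts a fractional second immediately after the seconds field even when the layout omits it, so parsing `"00:05:02.63"` against `"15:04:05"` works, and subtracting the zero time converts the clock value to elapsed seconds. A self-contained sketch of the same approach (`secondsFromClock` is a hypothetical standalone name):

```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

// secondsFromClock converts an ffmpeg-style "HH:MM:SS.ss" duration into a
// seconds string, mirroring parseDuration above: parse the value as a clock
// time, then subtract the zero time to get a Duration.
func secondsFromClock(tag string) string {
	zero := time.Date(0, time.January, 1, 0, 0, 0, 0, time.UTC)
	d, err := time.Parse("15:04:05", tag) // fractional seconds accepted implicitly
	if err != nil {
		return "0"
	}
	return strconv.FormatFloat(d.Sub(zero).Seconds(), 'f', 2, 32)
}

func main() {
	fmt.Println(secondsFromClock("00:05:02.63")) // prints 302.63
}
```

This matches the "parses duration with milliseconds" expectation in the test file below (`00:05:02.63` becomes `302.63`).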
17	scanner/metadata_old/ffmpeg/ffmpeg_suite_test.go	Normal file
@@ -0,0 +1,17 @@
package ffmpeg

import (
	"testing"

	"github.com/navidrome/navidrome/log"
	"github.com/navidrome/navidrome/tests"
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

func TestFFMpeg(t *testing.T) {
	tests.Init(t, true)
	log.SetLevel(log.LevelFatal)
	RegisterFailHandler(Fail)
	RunSpecs(t, "FFMpeg Suite")
}
375	scanner/metadata_old/ffmpeg/ffmpeg_test.go	Normal file
@@ -0,0 +1,375 @@
package ffmpeg

import (
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

var _ = Describe("Extractor", func() {
	var e *Extractor
	BeforeEach(func() {
		e = &Extractor{}
	})

	Context("extractMetadata", func() {
		It("extracts MusicBrainz custom tags", func() {
			const output = `
Input #0, ape, from './Capture/02 01 - Symphony No. 5 in C minor, Op. 67 I. Allegro con brio - Ludwig van Beethoven.ape':
  Metadata:
    ALBUM           : Forever Classics
    ARTIST          : Ludwig van Beethoven
    TITLE           : Symphony No. 5 in C minor, Op. 67: I. Allegro con brio
    MUSICBRAINZ_ALBUMSTATUS: official
    MUSICBRAINZ_ALBUMTYPE: album
    MusicBrainz_AlbumComment: MP3
    Musicbrainz_Albumid: 71eb5e4a-90e2-4a31-a2d1-a96485fcb667
    musicbrainz_trackid: ffe06940-727a-415a-b608-b7e45737f9d8
    Musicbrainz_Artistid: 1f9df192-a621-4f54-8850-2c5373b7eac9
    Musicbrainz_Albumartistid: 89ad4ac3-39f7-470e-963a-56509c546377
    Musicbrainz_Releasegroupid: 708b1ae1-2d3d-34c7-b764-2732b154f5b6
    musicbrainz_releasetrackid: 6fee2e35-3049-358f-83be-43b36141028b
    CatalogNumber   : PLD 1201
`
			md, _ := e.extractMetadata("tests/fixtures/test.mp3", output)
			Expect(md).To(SatisfyAll(
				HaveKeyWithValue("catalognumber", []string{"PLD 1201"}),
				HaveKeyWithValue("musicbrainz_trackid", []string{"ffe06940-727a-415a-b608-b7e45737f9d8"}),
				HaveKeyWithValue("musicbrainz_albumid", []string{"71eb5e4a-90e2-4a31-a2d1-a96485fcb667"}),
				HaveKeyWithValue("musicbrainz_artistid", []string{"1f9df192-a621-4f54-8850-2c5373b7eac9"}),
				HaveKeyWithValue("musicbrainz_albumartistid", []string{"89ad4ac3-39f7-470e-963a-56509c546377"}),
				HaveKeyWithValue("musicbrainz_albumtype", []string{"album"}),
				HaveKeyWithValue("musicbrainz_albumcomment", []string{"MP3"}),
			))
		})

		It("detects embedded cover art correctly", func() {
			const output = `
Input #0, mp3, from '/Users/deluan/Music/iTunes/iTunes Media/Music/Compilations/Putumayo Presents Blues Lounge/09 Pablo's Blues.mp3':
  Metadata:
    compilation     : 1
  Duration: 00:00:01.02, start: 0.000000, bitrate: 477 kb/s
  Stream #0:0: Audio: mp3, 44100 Hz, stereo, fltp, 192 kb/s
  Stream #0:1: Video: mjpeg, yuvj444p(pc, bt470bg/unknown/unknown), 600x600 [SAR 1:1 DAR 1:1], 90k tbr, 90k tbn, 90k tbc`
			md, _ := e.extractMetadata("tests/fixtures/test.mp3", output)
			Expect(md).To(HaveKeyWithValue("has_picture", []string{"true"}))
		})

		It("detects embedded cover art in ffmpeg 4.4 output", func() {
			const output = `
Input #0, flac, from '/run/media/naomi/Archivio/Musica/Katy Perry/Chained to the Rhythm/01 Katy Perry featuring Skip Marley - Chained to the Rhythm.flac':
  Metadata:
    ARTIST          : Katy Perry featuring Skip Marley
  Duration: 00:03:57.91, start: 0.000000, bitrate: 983 kb/s
  Stream #0:0: Audio: flac, 44100 Hz, stereo, s16
  Stream #0:1: Video: mjpeg (Baseline), yuvj444p(pc, bt470bg/unknown/unknown), 599x518, 90k tbr, 90k tbn, 90k tbc (attached pic)
    Metadata:
      comment         : Cover (front)`
			md, _ := e.extractMetadata("tests/fixtures/test.mp3", output)
			Expect(md).To(HaveKeyWithValue("has_picture", []string{"true"}))
		})

		It("detects embedded cover art in ogg containers", func() {
			const output = `
Input #0, ogg, from '/Users/deluan/Music/iTunes/iTunes Media/Music/_Testes/Jamaican In New York/01-02 Jamaican In New York (Album Version).opus':
  Duration: 00:04:28.69, start: 0.007500, bitrate: 139 kb/s
  Stream #0:0(eng): Audio: opus, 48000 Hz, stereo, fltp
    Metadata:
      ALBUM           : Jamaican In New York
      metadata_block_picture: AAAAAwAAAAppbWFnZS9qcGVnAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4Id/9j/4AAQSkZJRgABAQEAYABgAAD/2wBDAAMCAgMCAgMDAwMEAwMEBQgFBQQEBQoHBwYIDAoMDAsKCwsNDhIQDQ4RDgsLEBYQERMUFRUVDA8XGBYUGBIUFRT/2wBDAQMEBAUEBQkFBQkUDQsNFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQUFBQ
      TITLE           : Jamaican In New York (Album Version)`
			md, _ := e.extractMetadata("tests/fixtures/test.mp3", output)
			Expect(md).To(HaveKey("metadata_block_picture"))
			md = md.Map(e.CustomMappings())
			Expect(md).To(HaveKey("has_picture"))
		})

		It("detects embedded cover art in m4a containers", func() {
			const output = `
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'Putumayo Presents_ Euro Groove/01 Destins et Désirs.m4a':
  Metadata:
    album           : Putumayo Presents: Euro Groove
  Duration: 00:05:15.81, start: 0.047889, bitrate: 133 kb/s
  Stream #0:0[0x1](und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 125 kb/s (default)
    Metadata:
      creation_time   : 2008-03-11T21:03:23.000000Z
      vendor_id       : [0][0][0][0]
  Stream #0:1[0x0]: Video: png, rgb24(pc), 350x350, 90k tbr, 90k tbn (attached pic)
`
			md, _ := e.extractMetadata("tests/fixtures/test.mp3", output)
			Expect(md).To(HaveKeyWithValue("has_picture", []string{"true"}))
		})

		It("gets bitrate from the stream, if available", func() {
			const output = `
Input #0, mp3, from '/Users/deluan/Music/iTunes/iTunes Media/Music/Compilations/Putumayo Presents Blues Lounge/09 Pablo's Blues.mp3':
  Duration: 00:00:01.02, start: 0.000000, bitrate: 477 kb/s
  Stream #0:0: Audio: mp3, 44100 Hz, stereo, fltp, 192 kb/s`
			md, _ := e.extractMetadata("tests/fixtures/test.mp3", output)
			Expect(md).To(HaveKeyWithValue("bitrate", []string{"192"}))
		})

		It("parses duration with milliseconds", func() {
			const output = `
Input #0, mp3, from '/Users/deluan/Music/iTunes/iTunes Media/Music/Compilations/Putumayo Presents Blues Lounge/09 Pablo's Blues.mp3':
  Duration: 00:05:02.63, start: 0.000000, bitrate: 140 kb/s`
			md, _ := e.extractMetadata("tests/fixtures/test.mp3", output)
			Expect(md).To(HaveKeyWithValue("duration", []string{"302.63"}))
		})

		It("parse flac bitrates", func() {
			const output = `
Input #0, mp3, from '/Users/deluan/Music/iTunes/iTunes Media/Music/Compilations/Putumayo Presents Blues Lounge/09 Pablo's Blues.mp3':
  Duration: 00:00:01.02, start: 0.000000, bitrate: 477 kb/s
  Stream #0:0: Audio: mp3, 44100 Hz, stereo, fltp, 192 kb/s`
			md, _ := e.extractMetadata("tests/fixtures/test.mp3", output)
			Expect(md).To(HaveKeyWithValue("channels", []string{"2"}))
		})

		It("parse channels from the stream with bitrate", func() {
			const output = `
Input #0, flac, from '/Users/deluan/Music/Music/Media/__/Crazy For You/01-01 Crazy For You.flac':
  Metadata:
    TITLE           : Crazy For You
  Duration: 00:04:13.00, start: 0.000000, bitrate: 852 kb/s
  Stream #0:0: Audio: flac, 44100 Hz, stereo, s16
  Stream #0:1: Video: mjpeg (Progressive), yuvj444p(pc, bt470bg/unknown/unknown), 600x600, 90k tbr, 90k tbn, 90k tbc (attached pic)
`
			md, _ := e.extractMetadata("tests/fixtures/test.mp3", output)
			Expect(md).To(HaveKeyWithValue("bitrate", []string{"852"}))
		})

		It("parse 7.1 channels from the stream", func() {
			const output = `
Input #0, wav, from '/Users/deluan/Music/Music/Media/_/multichannel/Nums_7dot1_24_48000.wav':
  Duration: 00:00:09.05, bitrate: 9216 kb/s
  Stream #0:0: Audio: pcm_s24le ([1][0][0][0] / 0x0001), 48000 Hz, 7.1, s32 (24 bit), 9216 kb/s`
			md, _ := e.extractMetadata("tests/fixtures/test.mp3", output)
			Expect(md).To(HaveKeyWithValue("channels", []string{"8"}))
		})

		It("parse channels from the stream without bitrate", func() {
			const output = `
Input #0, flac, from '/Users/deluan/Music/iTunes/iTunes Media/Music/Compilations/Putumayo Presents Blues Lounge/09 Pablo's Blues.flac':
  Duration: 00:00:01.02, start: 0.000000, bitrate: 1371 kb/s
  Stream #0:0: Audio: flac, 44100 Hz, stereo, fltp, s32 (24 bit)`
			md, _ := e.extractMetadata("tests/fixtures/test.mp3", output)
			Expect(md).To(HaveKeyWithValue("channels", []string{"2"}))
		})

		It("parse channels from the stream with lang", func() {
			const output = `
Input #0, flac, from '/Users/deluan/Music/iTunes/iTunes Media/Music/Compilations/Putumayo Presents Blues Lounge/09 Pablo's Blues.m4a':
  Duration: 00:00:01.02, start: 0.000000, bitrate: 1371 kb/s
  Stream #0:0(eng): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 262 kb/s (default)`
			md, _ := e.extractMetadata("tests/fixtures/test.mp3", output)
			Expect(md).To(HaveKeyWithValue("channels", []string{"2"}))
		})

		It("parse channels from the stream with lang 2", func() {
			const output = `
Input #0, flac, from '/Users/deluan/Music/iTunes/iTunes Media/Music/Compilations/Putumayo Presents Blues Lounge/09 Pablo's Blues.m4a':
  Duration: 00:00:01.02, start: 0.000000, bitrate: 1371 kb/s
  Stream #0:0(eng): Audio: vorbis, 44100 Hz, stereo, fltp, 192 kb/s`
			md, _ := e.extractMetadata("tests/fixtures/test.mp3", output)
			Expect(md).To(HaveKeyWithValue("channels", []string{"2"}))
		})

		It("parse sampleRate from the stream", func() {
			const output = `
Input #0, dsf, from '/Users/deluan/Downloads/06-04 Perpetual Change.dsf':
  Duration: 00:14:19.46, start: 0.000000, bitrate: 5644 kb/s
  Stream #0:0: Audio: dsd_lsbf_planar, 352800 Hz, stereo, fltp, 5644 kb/s`
			md, _ := e.extractMetadata("tests/fixtures/test.mp3", output)
			Expect(md).To(HaveKeyWithValue("samplerate", []string{"352800"}))
		})

		It("parse sampleRate from the stream", func() {
			const output = `
Input #0, wav, from '/Users/deluan/Music/Music/Media/_/multichannel/Nums_7dot1_24_48000.wav':
  Duration: 00:00:09.05, bitrate: 9216 kb/s
  Stream #0:0: Audio: pcm_s24le ([1][0][0][0] / 0x0001), 48000 Hz, 7.1, s32 (24 bit), 9216 kb/s`
			md, _ := e.extractMetadata("tests/fixtures/test.mp3", output)
			Expect(md).To(HaveKeyWithValue("samplerate", []string{"48000"}))
		})

		It("parses stream level tags", func() {
			const output = `
Input #0, ogg, from './01-02 Drive (Teku).opus':
  Metadata:
    ALBUM           : Hot Wheels Acceleracers Soundtrack
  Duration: 00:03:37.37, start: 0.007500, bitrate: 135 kb/s
  Stream #0:0(eng): Audio: opus, 48000 Hz, stereo, fltp
    Metadata:
      TITLE           : Drive (Teku)`
			md, _ := e.extractMetadata("tests/fixtures/test.mp3", output)
			Expect(md).To(HaveKeyWithValue("title", []string{"Drive (Teku)"}))
		})

		It("does not overlap top level tags with the stream level tags", func() {
			const output = `
Input #0, mp3, from 'groovin.mp3':
  Metadata:
    title           : Groovin' (feat. Daniel Sneijers, Susanne Alt)
  Duration: 00:03:34.28, start: 0.025056, bitrate: 323 kb/s
    Metadata:
      title           : garbage`
			md, _ := e.extractMetadata("tests/fixtures/test.mp3", output)
			Expect(md).To(HaveKeyWithValue("title", []string{"Groovin' (feat. Daniel Sneijers, Susanne Alt)", "garbage"}))
		})

		It("parses multiline tags", func() {
			const outputWithMultilineComment = `
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'modulo.m4a':
  Metadata:
    comment         : https://www.mixcloud.com/codigorock/30-minutos-com-saara-saara/
                    :
                    : Tracklist:
                    :
                    : 01. Saara Saara
                    : 02. Carta Corrente
                    : 03. X
: 04. Eclipse Lunar
|
||||
: 05. Vírus de Sírius
|
||||
: 06. Doktor Fritz
|
||||
: 07. Wunderbar
|
||||
: 08. Quarta Dimensão
|
||||
Duration: 00:26:46.96, start: 0.052971, bitrate: 69 kb/s`
|
||||
const expectedComment = `https://www.mixcloud.com/codigorock/30-minutos-com-saara-saara/
|
||||
|
||||
Tracklist:
|
||||
|
||||
01. Saara Saara
|
||||
02. Carta Corrente
|
||||
03. X
|
||||
04. Eclipse Lunar
|
||||
05. Vírus de Sírius
|
||||
06. Doktor Fritz
|
||||
07. Wunderbar
|
||||
08. Quarta Dimensão`
|
||||
md, _ := e.extractMetadata("tests/fixtures/test.mp3", outputWithMultilineComment)
|
||||
Expect(md).To(HaveKeyWithValue("comment", []string{expectedComment}))
|
||||
})
|
||||
|
||||
It("parses sort tags correctly", func() {
|
||||
const output = `
|
||||
Input #0, mp3, from '/Users/deluan/Downloads/椎名林檎 - 加爾基 精液 栗ノ花 - 2003/02 - ドツペルゲンガー.mp3':
|
||||
Metadata:
|
||||
title-sort : Dopperugengā
|
||||
album : 加爾基 精液 栗ノ花
|
||||
artist : 椎名林檎
|
||||
album_artist : 椎名林檎
|
||||
title : ドツペルゲンガー
|
||||
albumsort : Kalk Samen Kuri No Hana
|
||||
artist_sort : Shiina, Ringo
|
||||
ALBUMARTISTSORT : Shiina, Ringo
|
||||
`
|
||||
md, _ := e.extractMetadata("tests/fixtures/test.mp3", output)
|
||||
Expect(md).To(SatisfyAll(
|
||||
HaveKeyWithValue("title", []string{"ドツペルゲンガー"}),
|
||||
HaveKeyWithValue("album", []string{"加爾基 精液 栗ノ花"}),
|
||||
HaveKeyWithValue("artist", []string{"椎名林檎"}),
|
||||
HaveKeyWithValue("album_artist", []string{"椎名林檎"}),
|
||||
HaveKeyWithValue("title-sort", []string{"Dopperugengā"}),
|
||||
HaveKeyWithValue("albumsort", []string{"Kalk Samen Kuri No Hana"}),
|
||||
HaveKeyWithValue("artist_sort", []string{"Shiina, Ringo"}),
|
||||
HaveKeyWithValue("albumartistsort", []string{"Shiina, Ringo"}),
|
||||
))
|
||||
})
|
||||
|
||||
It("ignores cover comment", func() {
|
||||
const output = `
|
||||
Input #0, mp3, from './Edie Brickell/Picture Perfect Morning/01-01 Tomorrow Comes.mp3':
|
||||
Metadata:
|
||||
title : Tomorrow Comes
|
||||
artist : Edie Brickell
|
||||
Duration: 00:03:56.12, start: 0.000000, bitrate: 332 kb/s
|
||||
Stream #0:0: Audio: mp3, 44100 Hz, stereo, s16p, 320 kb/s
|
||||
Stream #0:1: Video: mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 1200x1200 [SAR 72:72 DAR 1:1], 90k tbr, 90k tbn, 90k tbc
|
||||
Metadata:
|
||||
comment : Cover (front)`
|
||||
md, _ := e.extractMetadata("tests/fixtures/test.mp3", output)
|
||||
Expect(md).ToNot(HaveKey("comment"))
|
||||
})
|
||||
|
||||
It("parses tags with spaces in the name", func() {
|
||||
const output = `
|
||||
Input #0, mp3, from '/Users/deluan/Music/Music/Media/_/Wyclef Jean - From the Hut, to the Projects, to the Mansion/10 - The Struggle (interlude).mp3':
|
||||
Metadata:
|
||||
ALBUM ARTIST : Wyclef Jean
|
||||
`
|
||||
md, _ := e.extractMetadata("tests/fixtures/test.mp3", output)
|
||||
Expect(md).To(HaveKeyWithValue("album artist", []string{"Wyclef Jean"}))
|
||||
})
|
||||
})
|
||||
|
||||
It("parses an integer TBPM tag", func() {
|
||||
const output = `
|
||||
Input #0, mp3, from 'tests/fixtures/test.mp3':
|
||||
Metadata:
|
||||
TBPM : 123`
|
||||
md, _ := e.extractMetadata("tests/fixtures/test.mp3", output)
|
||||
Expect(md).To(HaveKeyWithValue("tbpm", []string{"123"}))
|
||||
})
|
||||
|
||||
It("parses and rounds a floating point fBPM tag", func() {
|
||||
const output = `
|
||||
Input #0, ogg, from 'tests/fixtures/test.ogg':
|
||||
Metadata:
|
||||
FBPM : 141.7`
|
||||
md, _ := e.extractMetadata("tests/fixtures/test.ogg", output)
|
||||
Expect(md).To(HaveKeyWithValue("fbpm", []string{"141.7"}))
|
||||
})
|
||||
|
||||
It("parses replaygain data correctly", func() {
|
||||
const output = `
|
||||
Input #0, mp3, from 'test.mp3':
|
||||
Metadata:
|
||||
REPLAYGAIN_ALBUM_PEAK: 0.9125
|
||||
REPLAYGAIN_TRACK_PEAK: 0.4512
|
||||
REPLAYGAIN_TRACK_GAIN: -1.48 dB
|
||||
REPLAYGAIN_ALBUM_GAIN: +3.21518 dB
|
||||
Side data:
|
||||
replaygain: track gain - -1.480000, track peak - 0.000011, album gain - 3.215180, album peak - 0.000021,
|
||||
`
|
||||
md, _ := e.extractMetadata("tests/fixtures/test.mp3", output)
|
||||
Expect(md).To(SatisfyAll(
|
||||
HaveKeyWithValue("replaygain_track_gain", []string{"-1.48 dB"}),
|
||||
HaveKeyWithValue("replaygain_track_peak", []string{"0.4512"}),
|
||||
HaveKeyWithValue("replaygain_album_gain", []string{"+3.21518 dB"}),
|
||||
HaveKeyWithValue("replaygain_album_peak", []string{"0.9125"}),
|
||||
))
|
||||
})
|
||||
|
||||
It("parses lyrics with language code", func() {
|
||||
const output = `
|
||||
Input #0, mp3, from 'test.mp3':
|
||||
Metadata:
|
||||
lyrics-eng : [00:00.00]This is
|
||||
: [00:02.50]English
|
||||
lyrics-xxx : [00:00.00]This is
|
||||
: [00:02.50]unspecified
|
||||
`
|
||||
md, _ := e.extractMetadata("tests/fixtures/test.mp3", output)
|
||||
Expect(md).To(SatisfyAll(
|
||||
HaveKeyWithValue("lyrics-eng", []string{
|
||||
"[00:00.00]This is\n[00:02.50]English",
|
||||
}),
|
||||
HaveKeyWithValue("lyrics-xxx", []string{
|
||||
"[00:00.00]This is\n[00:02.50]unspecified",
|
||||
}),
|
||||
))
|
||||
})
|
||||
|
||||
It("parses normal LYRICS tag", func() {
|
||||
const output = `
|
||||
Input #0, mp3, from 'test.mp3':
|
||||
Metadata:
|
||||
LYRICS : [00:00.00]This is
|
||||
: [00:02.50]English
|
||||
`
|
||||
md, _ := e.extractMetadata("tests/fixtures/test.mp3", output)
|
||||
Expect(md).To(HaveKeyWithValue("lyrics", []string{
|
||||
"[00:00.00]This is\n[00:02.50]English",
|
||||
}))
|
||||
})
|
||||
})
|
||||
408 scanner/metadata_old/metadata.go (Normal file)
@@ -0,0 +1,408 @@
package metadata_old

import (
	"encoding/json"
	"fmt"
	"math"
	"os"
	"path"
	"regexp"
	"strconv"
	"strings"
	"time"

	"github.com/djherbis/times"
	"github.com/google/uuid"
	"github.com/navidrome/navidrome/conf"
	"github.com/navidrome/navidrome/consts"
	"github.com/navidrome/navidrome/log"
	"github.com/navidrome/navidrome/model"
)

type Extractor interface {
	Parse(files ...string) (map[string]ParsedTags, error)
	CustomMappings() ParsedTags
	Version() string
}

var extractors = map[string]Extractor{}

func RegisterExtractor(id string, parser Extractor) {
	extractors[id] = parser
}

func LogExtractors() {
	for id, p := range extractors {
		log.Debug("Registered metadata extractor", "id", id, "version", p.Version())
	}
}

func Extract(files ...string) (map[string]Tags, error) {
	p, ok := extractors[conf.Server.Scanner.Extractor]
	if !ok {
		log.Warn("Invalid 'Scanner.Extractor' option. Using default", "requested", conf.Server.Scanner.Extractor,
			"validOptions", "ffmpeg,taglib", "default", consts.DefaultScannerExtractor)
		p = extractors[consts.DefaultScannerExtractor]
	}

	extractedTags, err := p.Parse(files...)
	if err != nil {
		return nil, err
	}

	result := map[string]Tags{}
	for filePath, tags := range extractedTags {
		fileInfo, err := os.Stat(filePath)
		if err != nil {
			log.Warn("Error stating file. Skipping", "filePath", filePath, err)
			continue
		}

		tags = tags.Map(p.CustomMappings())
		result[filePath] = NewTag(filePath, fileInfo, tags)
	}

	return result, nil
}

func NewTag(filePath string, fileInfo os.FileInfo, tags ParsedTags) Tags {
	for t, values := range tags {
		values = removeDuplicatesAndEmpty(values)
		if len(values) == 0 {
			delete(tags, t)
			continue
		}
		tags[t] = values
	}
	return Tags{
		filePath: filePath,
		fileInfo: fileInfo,
		Tags:     tags,
	}
}

func removeDuplicatesAndEmpty(values []string) []string {
	encountered := map[string]struct{}{}
	empty := true
	result := make([]string, 0, len(values))
	for _, v := range values {
		if _, ok := encountered[v]; ok {
			continue
		}
		encountered[v] = struct{}{}
		empty = empty && v == ""
		result = append(result, v)
	}
	if empty {
		return nil
	}
	return result
}

type ParsedTags map[string][]string

func (p ParsedTags) Map(customMappings ParsedTags) ParsedTags {
	if customMappings == nil {
		return p
	}
	for tagName, alternatives := range customMappings {
		for _, altName := range alternatives {
			if altValue, ok := p[altName]; ok {
				p[tagName] = append(p[tagName], altValue...)
				delete(p, altName)
			}
		}
	}
	return p
}

type Tags struct {
	filePath string
	fileInfo os.FileInfo
	Tags     ParsedTags
}

// Common tags

func (t Tags) Title() string  { return t.getFirstTagValue("title", "sort_name", "titlesort") }
func (t Tags) Album() string  { return t.getFirstTagValue("album", "sort_album", "albumsort") }
func (t Tags) Artist() string { return t.getFirstTagValue("artist", "sort_artist", "artistsort") }
func (t Tags) AlbumArtist() string {
	return t.getFirstTagValue("album_artist", "album artist", "albumartist")
}
func (t Tags) SortTitle() string           { return t.getSortTag("tsot", "title", "name") }
func (t Tags) SortAlbum() string           { return t.getSortTag("tsoa", "album") }
func (t Tags) SortArtist() string          { return t.getSortTag("tsop", "artist") }
func (t Tags) SortAlbumArtist() string     { return t.getSortTag("tso2", "albumartist", "album_artist") }
func (t Tags) Genres() []string            { return t.getAllTagValues("genre") }
func (t Tags) Date() (int, string)         { return t.getDate("date") }
func (t Tags) OriginalDate() (int, string) { return t.getDate("originaldate") }
func (t Tags) ReleaseDate() (int, string)  { return t.getDate("releasedate") }
func (t Tags) Comment() string             { return t.getFirstTagValue("comment") }
func (t Tags) Compilation() bool           { return t.getBool("tcmp", "compilation", "wm/iscompilation") }
func (t Tags) TrackNumber() (int, int)     { return t.getTuple("track", "tracknumber") }
func (t Tags) DiscNumber() (int, int)      { return t.getTuple("disc", "discnumber") }
func (t Tags) DiscSubtitle() string {
	return t.getFirstTagValue("tsst", "discsubtitle", "setsubtitle")
}
func (t Tags) CatalogNum() string { return t.getFirstTagValue("catalognumber") }
func (t Tags) Bpm() int           { return (int)(math.Round(t.getFloat("tbpm", "bpm", "fbpm"))) }
func (t Tags) HasPicture() bool   { return t.getFirstTagValue("has_picture") != "" }

// MusicBrainz Identifiers

func (t Tags) MbzReleaseTrackID() string {
	return t.getMbzID("musicbrainz_releasetrackid", "musicbrainz release track id")
}

func (t Tags) MbzRecordingID() string {
	return t.getMbzID("musicbrainz_trackid", "musicbrainz track id")
}
func (t Tags) MbzAlbumID() string { return t.getMbzID("musicbrainz_albumid", "musicbrainz album id") }
func (t Tags) MbzArtistID() string {
	return t.getMbzID("musicbrainz_artistid", "musicbrainz artist id")
}
func (t Tags) MbzAlbumArtistID() string {
	return t.getMbzID("musicbrainz_albumartistid", "musicbrainz album artist id")
}
func (t Tags) MbzAlbumType() string {
	return t.getFirstTagValue("musicbrainz_albumtype", "musicbrainz album type")
}
func (t Tags) MbzAlbumComment() string {
	return t.getFirstTagValue("musicbrainz_albumcomment", "musicbrainz album comment")
}

// Gain Properties

func (t Tags) RGAlbumGain() float64 {
	return t.getGainValue("replaygain_album_gain", "r128_album_gain")
}
func (t Tags) RGAlbumPeak() float64 { return t.getPeakValue("replaygain_album_peak") }
func (t Tags) RGTrackGain() float64 {
	return t.getGainValue("replaygain_track_gain", "r128_track_gain")
}
func (t Tags) RGTrackPeak() float64 { return t.getPeakValue("replaygain_track_peak") }

// File properties

func (t Tags) Duration() float32           { return float32(t.getFloat("duration")) }
func (t Tags) SampleRate() int             { return t.getInt("samplerate") }
func (t Tags) BitRate() int                { return t.getInt("bitrate") }
func (t Tags) Channels() int               { return t.getInt("channels") }
func (t Tags) ModificationTime() time.Time { return t.fileInfo.ModTime() }
func (t Tags) Size() int64                 { return t.fileInfo.Size() }
func (t Tags) FilePath() string            { return t.filePath }
func (t Tags) Suffix() string {
	return strings.ToLower(strings.TrimPrefix(path.Ext(t.filePath), "."))
}
func (t Tags) BirthTime() time.Time {
	if ts := times.Get(t.fileInfo); ts.HasBirthTime() {
		return ts.BirthTime()
	}
	return time.Now()
}

func (t Tags) Lyrics() string {
	lyricList := model.LyricList{}
	basicLyrics := t.getAllTagValues("lyrics", "unsynced_lyrics", "unsynced lyrics", "unsyncedlyrics")

	for _, value := range basicLyrics {
		lyrics, err := model.ToLyrics("xxx", value)
		if err != nil {
			log.Warn("Unexpected failure occurred when parsing lyrics", "file", t.filePath, "error", err)
			continue
		}

		lyricList = append(lyricList, *lyrics)
	}

	for tag, value := range t.Tags {
		if strings.HasPrefix(tag, "lyrics-") {
			language := strings.TrimSpace(strings.TrimPrefix(tag, "lyrics-"))

			if language == "" {
				language = "xxx"
			}

			for _, text := range value {
				lyrics, err := model.ToLyrics(language, text)
				if err != nil {
					log.Warn("Unexpected failure occurred when parsing lyrics", "file", t.filePath, "error", err)
					continue
				}

				lyricList = append(lyricList, *lyrics)
			}
		}
	}

	res, err := json.Marshal(lyricList)
	if err != nil {
		log.Warn("Unexpected error occurred when serializing lyrics", "file", t.filePath, "error", err)
		return ""
	}
	return string(res)
}

func (t Tags) getGainValue(rgTagName, r128TagName string) float64 {
	// Check for ReplayGain first
	// ReplayGain is in the form [-]a.bb dB and normalized to -18dB
	var tag = t.getFirstTagValue(rgTagName)
	if tag != "" {
		tag = strings.TrimSpace(strings.Replace(tag, "dB", "", 1))
		var value, err = strconv.ParseFloat(tag, 64)
		if err != nil || value == math.Inf(-1) || value == math.Inf(1) {
			return 0
		}
		return value
	}

	// If ReplayGain is not found, check for R128 gain
	// R128 gain is a Q7.8 fixed point number normalized to -23dB
	tag = t.getFirstTagValue(r128TagName)
	if tag != "" {
		var iValue, err = strconv.Atoi(tag)
		if err != nil {
			return 0
		}
		// Convert Q7.8 to float
		var value = float64(iValue) / 256.0
		// Adding 5 dB to normalize with ReplayGain level
		return value + 5
	}

	return 0
}

func (t Tags) getPeakValue(tagName string) float64 {
	var tag = t.getFirstTagValue(tagName)
	var value, err = strconv.ParseFloat(tag, 64)
	if err != nil || value == math.Inf(-1) || value == math.Inf(1) {
		// A default of 1 for peak value results in no changes
		return 1
	}
	return value
}

func (t Tags) getTags(tagNames ...string) []string {
	for _, tag := range tagNames {
		if v, ok := t.Tags[tag]; ok {
			return v
		}
	}
	return nil
}

func (t Tags) getFirstTagValue(tagNames ...string) string {
	ts := t.getTags(tagNames...)
	if len(ts) > 0 {
		return ts[0]
	}
	return ""
}

func (t Tags) getAllTagValues(tagNames ...string) []string {
	values := make([]string, 0, len(tagNames)*2)
	for _, tag := range tagNames {
		if v, ok := t.Tags[tag]; ok {
			values = append(values, v...)
		}
	}
	return values
}

func (t Tags) getSortTag(originalTag string, tagNames ...string) string {
	formats := []string{"sort%s", "sort_%s", "sort-%s", "%ssort", "%s_sort", "%s-sort"}
	all := make([]string, 1, len(tagNames)*len(formats)+1)
	all[0] = originalTag
	for _, tag := range tagNames {
		for _, format := range formats {
			name := fmt.Sprintf(format, tag)
			all = append(all, name)
		}
	}
	return t.getFirstTagValue(all...)
}

var dateRegex = regexp.MustCompile(`([12]\d\d\d)`)

func (t Tags) getDate(tagNames ...string) (int, string) {
	tag := t.getFirstTagValue(tagNames...)
	if len(tag) < 4 {
		return 0, ""
	}
	// first get just the year
	match := dateRegex.FindStringSubmatch(tag)
	if len(match) == 0 {
		log.Warn("Error parsing "+tagNames[0]+" field for year", "file", t.filePath, "date", tag)
		return 0, ""
	}
	year, _ := strconv.Atoi(match[1])

	if len(tag) < 5 {
		return year, match[1]
	}

	// then try YYYY-MM-DD
	if len(tag) > 10 {
		tag = tag[:10]
	}
	layout := "2006-01-02"
	_, err := time.Parse(layout, tag)
	if err != nil {
		layout = "2006-01"
		_, err = time.Parse(layout, tag)
		if err != nil {
			log.Warn("Error parsing "+tagNames[0]+" field for month + day", "file", t.filePath, "date", tag)
			return year, match[1]
		}
	}
	return year, tag
}

func (t Tags) getBool(tagNames ...string) bool {
	tag := t.getFirstTagValue(tagNames...)
	if tag == "" {
		return false
	}
	i, _ := strconv.Atoi(strings.TrimSpace(tag))
	return i == 1
}

func (t Tags) getTuple(tagNames ...string) (int, int) {
	tag := t.getFirstTagValue(tagNames...)
	if tag == "" {
		return 0, 0
	}
	tuple := strings.Split(tag, "/")
	t1, t2 := 0, 0
	t1, _ = strconv.Atoi(tuple[0])
	if len(tuple) > 1 {
		t2, _ = strconv.Atoi(tuple[1])
	} else {
		t2tag := t.getFirstTagValue(tagNames[0] + "total")
		t2, _ = strconv.Atoi(t2tag)
	}
	return t1, t2
}

func (t Tags) getMbzID(tagNames ...string) string {
	tag := t.getFirstTagValue(tagNames...)
	if _, err := uuid.Parse(tag); err != nil {
		return ""
	}
	return tag
}

func (t Tags) getInt(tagNames ...string) int {
	tag := t.getFirstTagValue(tagNames...)
	i, _ := strconv.Atoi(tag)
	return i
}

func (t Tags) getFloat(tagNames ...string) float64 {
	var tag = t.getFirstTagValue(tagNames...)
	var value, err = strconv.ParseFloat(tag, 64)
	if err != nil {
		return 0
	}
	return value
}
144 scanner/metadata_old/metadata_internal_test.go (Normal file)
@@ -0,0 +1,144 @@
package metadata_old

import (
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

var _ = Describe("Tags", func() {
	DescribeTable("getDate",
		func(tag string, expectedYear int, expectedDate string) {
			md := &Tags{}
			md.Tags = map[string][]string{"date": {tag}}
			testYear, testDate := md.Date()
			Expect(testYear).To(Equal(expectedYear))
			Expect(testDate).To(Equal(expectedDate))
		},
		Entry(nil, "1985", 1985, "1985"),
		Entry(nil, "2002-01", 2002, "2002-01"),
		Entry(nil, "1969.06", 1969, "1969"),
		Entry(nil, "1980.07.25", 1980, "1980"),
		Entry(nil, "2004-00-00", 2004, "2004"),
		Entry(nil, "2016-12-31", 2016, "2016-12-31"),
		Entry(nil, "2013-May-12", 2013, "2013"),
		Entry(nil, "May 12, 2016", 2016, "2016"),
		Entry(nil, "01/10/1990", 1990, "1990"),
		Entry(nil, "invalid", 0, ""),
	)

	Describe("getMbzID", func() {
		It("returns a valid MBID", func() {
			md := &Tags{}
			md.Tags = map[string][]string{
				"musicbrainz_trackid":        {"8f84da07-09a0-477b-b216-cc982dabcde1"},
				"musicbrainz_releasetrackid": {"6caf16d3-0b20-3fe6-8020-52e31831bc11"},
				"musicbrainz_albumid":        {"f68c985d-f18b-4f4a-b7f0-87837cf3fbf9"},
				"musicbrainz_artistid":       {"89ad4ac3-39f7-470e-963a-56509c546377"},
				"musicbrainz_albumartistid":  {"ada7a83c-e3e1-40f1-93f9-3e73dbc9298a"},
			}
			Expect(md.MbzRecordingID()).To(Equal("8f84da07-09a0-477b-b216-cc982dabcde1"))
			Expect(md.MbzReleaseTrackID()).To(Equal("6caf16d3-0b20-3fe6-8020-52e31831bc11"))
			Expect(md.MbzAlbumID()).To(Equal("f68c985d-f18b-4f4a-b7f0-87837cf3fbf9"))
			Expect(md.MbzArtistID()).To(Equal("89ad4ac3-39f7-470e-963a-56509c546377"))
			Expect(md.MbzAlbumArtistID()).To(Equal("ada7a83c-e3e1-40f1-93f9-3e73dbc9298a"))
		})
		It("returns an empty string for an invalid MBID", func() {
			md := &Tags{}
			md.Tags = map[string][]string{
				"musicbrainz_trackid":       {"11406732-6"},
				"musicbrainz_albumid":       {"11406732"},
				"musicbrainz_artistid":      {"200455"},
				"musicbrainz_albumartistid": {"194"},
			}
			Expect(md.MbzRecordingID()).To(Equal(""))
			Expect(md.MbzAlbumID()).To(Equal(""))
			Expect(md.MbzArtistID()).To(Equal(""))
			Expect(md.MbzAlbumArtistID()).To(Equal(""))
		})
	})

	Describe("getAllTagValues", func() {
		It("returns values from all tag names", func() {
			md := &Tags{}
			md.Tags = map[string][]string{
				"genre": {"Rock", "Pop", "New Wave"},
			}

			Expect(md.Genres()).To(ConsistOf("Rock", "Pop", "New Wave"))
		})
	})

	Describe("removeDuplicatesAndEmpty", func() {
		It("removes duplicates", func() {
			md := NewTag("/music/artist/album01/Song.mp3", nil, ParsedTags{
				"genre": []string{"pop", "rock", "pop"},
				"date":  []string{"2023-03-01", "2023-03-01"},
				"mood":  []string{"happy", "sad"},
			})
			Expect(md.Tags).To(HaveKeyWithValue("genre", []string{"pop", "rock"}))
			Expect(md.Tags).To(HaveKeyWithValue("date", []string{"2023-03-01"}))
			Expect(md.Tags).To(HaveKeyWithValue("mood", []string{"happy", "sad"}))
		})
		It("removes empty tags", func() {
			md := NewTag("/music/artist/album01/Song.mp3", nil, ParsedTags{
				"genre": []string{"pop", "rock", "pop"},
				"mood":  []string{"", ""},
			})
			Expect(md.Tags).To(HaveKeyWithValue("genre", []string{"pop", "rock"}))
			Expect(md.Tags).ToNot(HaveKey("mood"))
		})
	})

	Describe("BPM", func() {
		var t *Tags
		BeforeEach(func() {
			t = &Tags{Tags: map[string][]string{
				"fbpm": []string{"141.7"},
			}}
		})

		It("rounds a floating point fBPM tag", func() {
			Expect(t.Bpm()).To(Equal(142))
		})
	})

	Describe("ReplayGain", func() {
		DescribeTable("getGainValue",
			func(tag string, expected float64) {
				md := &Tags{}
				md.Tags = map[string][]string{"replaygain_track_gain": {tag}}
				Expect(md.RGTrackGain()).To(Equal(expected))
			},
			Entry("0", "0", 0.0),
			Entry("1.2dB", "1.2dB", 1.2),
			Entry("Infinity", "Infinity", 0.0),
			Entry("Invalid value", "INVALID VALUE", 0.0),
		)
		DescribeTable("getPeakValue",
			func(tag string, expected float64) {
				md := &Tags{}
				md.Tags = map[string][]string{"replaygain_track_peak": {tag}}
				Expect(md.RGTrackPeak()).To(Equal(expected))
			},
			Entry("0", "0", 0.0),
			Entry("0.5", "0.5", 0.5),
			Entry("Invalid dB suffix", "0.7dB", 1.0),
			Entry("Infinity", "Infinity", 1.0),
			Entry("Invalid value", "INVALID VALUE", 1.0),
		)
		DescribeTable("getR128GainValue",
			func(tag string, expected float64) {
				md := &Tags{}
				md.Tags = map[string][]string{"r128_track_gain": {tag}}
				Expect(md.RGTrackGain()).To(Equal(expected))
			},
			Entry("0", "0", 5.0),
			Entry("-3776", "-3776", -9.75),
			Entry("Infinity", "Infinity", 0.0),
			Entry("Invalid value", "INVALID VALUE", 0.0),
		)
	})
})
17 scanner/metadata_old/metadata_suite_test.go (Normal file)
@@ -0,0 +1,17 @@
package metadata_old

import (
	"testing"

	"github.com/navidrome/navidrome/log"
	"github.com/navidrome/navidrome/tests"
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

func TestMetadata(t *testing.T) {
	tests.Init(t, true)
	log.SetLevel(log.LevelFatal)
	RegisterFailHandler(Fail)
	RunSpecs(t, "Metadata Suite")
}
95 scanner/metadata_old/metadata_test.go (Normal file)
@@ -0,0 +1,95 @@
package metadata_old_test

import (
	"cmp"
	"encoding/json"
	"slices"

	"github.com/navidrome/navidrome/conf"
	"github.com/navidrome/navidrome/conf/configtest"
	"github.com/navidrome/navidrome/core/ffmpeg"
	"github.com/navidrome/navidrome/model"
	"github.com/navidrome/navidrome/scanner/metadata_old"
	_ "github.com/navidrome/navidrome/scanner/metadata_old/ffmpeg"
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

var _ = Describe("Tags", func() {
	var zero int64 = 0
	var secondTs int64 = 2500

	makeLyrics := func(synced bool, lang, secondLine string) model.Lyrics {
		lines := []model.Line{
			{Value: "This is"},
			{Value: secondLine},
		}

		if synced {
			lines[0].Start = &zero
			lines[1].Start = &secondTs
		}

		lyrics := model.Lyrics{
			Lang:   lang,
			Line:   lines,
			Synced: synced,
		}

		return lyrics
	}

	sortLyrics := func(lines model.LyricList) model.LyricList {
		slices.SortFunc(lines, func(a, b model.Lyrics) int {
			langDiff := cmp.Compare(a.Lang, b.Lang)
			if langDiff != 0 {
				return langDiff
			}
			return cmp.Compare(a.Line[1].Value, b.Line[1].Value)
		})

		return lines
	}

	compareLyrics := func(m metadata_old.Tags, expected model.LyricList) {
		lyrics := model.LyricList{}
		Expect(json.Unmarshal([]byte(m.Lyrics()), &lyrics)).To(BeNil())
		Expect(sortLyrics(lyrics)).To(Equal(sortLyrics(expected)))
	}

	// Only run these tests if FFmpeg is available
	FFmpegContext := XContext
	if ffmpeg.New().IsAvailable() {
		FFmpegContext = Context
	}
	FFmpegContext("Extract with FFmpeg", func() {
		BeforeEach(func() {
			DeferCleanup(configtest.SetupConfig())
			conf.Server.Scanner.Extractor = "ffmpeg"
		})

		DescribeTable("Lyrics test",
			func(file string) {
				path := "tests/fixtures/" + file
				mds, err := metadata_old.Extract(path)
				Expect(err).ToNot(HaveOccurred())
				Expect(mds).To(HaveLen(1))

				m := mds[path]
				compareLyrics(m, model.LyricList{
					makeLyrics(true, "eng", "English"),
					makeLyrics(true, "xxx", "unspecified"),
				})
			},

			Entry("Parses AIFF file", "test.aiff"),
			Entry("Parses MP3 files", "test.mp3"),
			// Disabled, because it fails in pipeline
			// Entry("Parses WAV files", "test.wav"),

			// FFmpeg behaves very weirdly for multivalued tags in non-ID3
			// formats. Specifically, they are separated by ";", which is
			// indistinguishable from other fields.
		)
	})
})
501 scanner/phase_1_folders.go (Normal file)
@@ -0,0 +1,501 @@
package scanner

import (
	"cmp"
	"context"
	"errors"
	"fmt"
	"maps"
	"path"
	"slices"
	"sync"
	"sync/atomic"
	"time"

	"github.com/Masterminds/squirrel"
	ppl "github.com/google/go-pipeline/pkg/pipeline"
	"github.com/navidrome/navidrome/conf"
	"github.com/navidrome/navidrome/consts"
	"github.com/navidrome/navidrome/core/artwork"
	"github.com/navidrome/navidrome/core/storage"
	"github.com/navidrome/navidrome/log"
	"github.com/navidrome/navidrome/model"
	"github.com/navidrome/navidrome/model/metadata"
	"github.com/navidrome/navidrome/utils"
	"github.com/navidrome/navidrome/utils/pl"
	"github.com/navidrome/navidrome/utils/slice"
)

func createPhaseFolders(ctx context.Context, state *scanState, ds model.DataStore, cw artwork.CacheWarmer) *phaseFolders {
	var jobs []*scanJob

	// Create scan jobs for all libraries
	for _, lib := range state.libraries {
		// Get target folders for this library if selective scan
		var targetFolders []string
		if state.isSelectiveScan() {
			targetFolders = state.targets[lib.ID]
		}

		job, err := newScanJob(ctx, ds, cw, lib, state.fullScan, targetFolders)
		if err != nil {
			log.Error(ctx, "Scanner: Error creating scan context", "lib", lib.Name, err)
			state.sendWarning(err.Error())
			continue
		}
		jobs = append(jobs, job)
	}

	return &phaseFolders{jobs: jobs, ctx: ctx, ds: ds, state: state}
}

type scanJob struct {
	lib           model.Library
	fs            storage.MusicFS
	cw            artwork.CacheWarmer
	lastUpdates   map[string]model.FolderUpdateInfo // Holds last update info for all (DB) folders in this library
	targetFolders []string                          // Specific folders to scan (including all descendants)
	lock          sync.Mutex
	numFolders    atomic.Int64
}

func newScanJob(ctx context.Context, ds model.DataStore, cw artwork.CacheWarmer, lib model.Library, fullScan bool, targetFolders []string) (*scanJob, error) {
	// Get folder updates, optionally filtered to specific target folders
	lastUpdates, err := ds.Folder(ctx).GetFolderUpdateInfo(lib, targetFolders...)
	if err != nil {
		return nil, fmt.Errorf("getting last updates: %w", err)
	}

	fileStore, err := storage.For(lib.Path)
	if err != nil {
		log.Error(ctx, "Error getting storage for library", "library", lib.Name, "path", lib.Path, err)
		return nil, fmt.Errorf("getting storage for library: %w", err)
	}
	fsys, err := fileStore.FS()
	if err != nil {
		log.Error(ctx, "Error getting fs for library", "library", lib.Name, "path", lib.Path, err)
		return nil, fmt.Errorf("getting fs for library: %w", err)
	}
	return &scanJob{
		lib:           lib,
		fs:            fsys,
		cw:            cw,
		lastUpdates:   lastUpdates,
		targetFolders: targetFolders,
	}, nil
}

// popLastUpdate retrieves and removes the last update info for the given folder ID.
// This is used to track which folders have been found during the walk_dir_tree.
func (j *scanJob) popLastUpdate(folderID string) model.FolderUpdateInfo {
	j.lock.Lock()
	defer j.lock.Unlock()

	lastUpdate := j.lastUpdates[folderID]
	delete(j.lastUpdates, folderID)
	return lastUpdate
}

// createFolderEntry creates a new folderEntry for the given path, using the last update info from the job
// to populate the previous update time and hash. It also removes the folder from the job's lastUpdates map.
// This is used to track which folders have been found during the walk_dir_tree.
func (j *scanJob) createFolderEntry(path string) *folderEntry {
	id := model.FolderID(j.lib, path)
	info := j.popLastUpdate(id)
	return newFolderEntry(j, id, path, info.UpdatedAt, info.Hash)
}

// phaseFolders represents the first phase of the scanning process, which is responsible
// for scanning all libraries and importing new or updated files. This phase involves
// traversing the directory tree of each library, identifying new or modified media files,
// and updating the database with the relevant information.
//
// The phaseFolders struct holds the context, data store, and jobs required for the scanning
// process. Each job represents a library being scanned, and contains information about the
// library, file system, and the last updates of the folders.
//
// The phaseFolders struct implements the phase interface, providing methods to produce
// folder entries, process folders, persist changes to the database, and log the results.
type phaseFolders struct {
	jobs             []*scanJob
	ds               model.DataStore
	ctx              context.Context
	state            *scanState
	prevAlbumPIDConf string
}

func (p *phaseFolders) description() string {
	return "Scan all libraries and import new/updated files"
}

func (p *phaseFolders) producer() ppl.Producer[*folderEntry] {
	return ppl.NewProducer(func(put func(entry *folderEntry)) error {
		var err error
		p.prevAlbumPIDConf, err = p.ds.Property(p.ctx).DefaultGet(consts.PIDAlbumKey, "")
		if err != nil {
			return fmt.Errorf("getting album PID conf: %w", err)
		}

		// TODO Parallelize multiple jobs when we have multiple libraries
		var total int64
		var totalChanged int64
		for _, job := range p.jobs {
			if utils.IsCtxDone(p.ctx) {
				break
			}

			outputChan, err := walkDirTree(p.ctx, job, job.targetFolders...)
			if err != nil {
				log.Warn(p.ctx, "Scanner: Error scanning library", "lib", job.lib.Name, err)
			}
			for folder := range pl.ReadOrDone(p.ctx, outputChan) {
				job.numFolders.Add(1)
				p.state.sendProgress(&ProgressInfo{
					LibID:     job.lib.ID,
					FileCount: uint32(len(folder.audioFiles)),
					Path:      folder.path,
					Phase:     "1",
				})

				// Log folder info
				log.Trace(p.ctx, "Scanner: Checking folder state", " folder", folder.path, "_updTime", folder.updTime,
					"_modTime", folder.modTime, "_lastScanStartedAt", folder.job.lib.LastScanStartedAt,
					"numAudioFiles", len(folder.audioFiles), "numImageFiles", len(folder.imageFiles),
					"numPlaylists", folder.numPlaylists, "numSubfolders", folder.numSubFolders)

				// Check if folder is outdated
				if folder.isOutdated() {
					if !p.state.fullScan {
						if folder.hasNoFiles() && folder.isNew() {
							log.Trace(p.ctx, "Scanner: Skipping new folder with no files", "folder", folder.path, "lib", job.lib.Name)
							continue
						}
						log.Debug(p.ctx, "Scanner: Detected changes in folder", "folder", folder.path, "lastUpdate", folder.modTime, "lib", job.lib.Name)
					}
					totalChanged++
					folder.elapsed.Stop()
					put(folder)
				} else {
					log.Trace(p.ctx, "Scanner: Skipping up-to-date folder", "folder", folder.path, "lastUpdate", folder.modTime, "lib", job.lib.Name)
				}
			}
			total += job.numFolders.Load()
		}
		log.Debug(p.ctx, "Scanner: Finished loading all folders", "numFolders", total, "numChanged", totalChanged)
		return nil
	}, ppl.Name("traverse filesystem"))
}

func (p *phaseFolders) measure(entry *folderEntry) func() time.Duration {
	entry.elapsed.Start()
	return func() time.Duration { return entry.elapsed.Stop() }
}

func (p *phaseFolders) stages() []ppl.Stage[*folderEntry] {
	return []ppl.Stage[*folderEntry]{
		ppl.NewStage(p.processFolder, ppl.Name("process folder"), ppl.Concurrency(conf.Server.DevScannerThreads)),
		ppl.NewStage(p.persistChanges, ppl.Name("persist changes")),
		ppl.NewStage(p.logFolder, ppl.Name("log results")),
	}
}

func (p *phaseFolders) processFolder(entry *folderEntry) (*folderEntry, error) {
	defer p.measure(entry)()

	// Load children mediafiles from DB
	cursor, err := p.ds.MediaFile(p.ctx).GetCursor(model.QueryOptions{
		Filters: squirrel.And{squirrel.Eq{"folder_id": entry.id}},
	})
	if err != nil {
		log.Error(p.ctx, "Scanner: Error loading mediafiles from DB", "folder", entry.path, err)
		return entry, err
	}
	dbTracks := make(map[string]*model.MediaFile)
	for mf, err := range cursor {
		if err != nil {
			log.Error(p.ctx, "Scanner: Error loading mediafiles from DB", "folder", entry.path, err)
			return entry, err
		}
		dbTracks[mf.Path] = &mf
	}

	// Get list of files to import, based on modtime (or all if fullScan),
	// leave in dbTracks only tracks that are missing (not found in the FS)
	filesToImport := make(map[string]*model.MediaFile, len(entry.audioFiles))
	for afPath, af := range entry.audioFiles {
		fullPath := path.Join(entry.path, afPath)
		dbTrack, foundInDB := dbTracks[fullPath]
		if !foundInDB || p.state.fullScan {
			filesToImport[fullPath] = dbTrack
		} else {
			info, err := af.Info()
			if err != nil {
				log.Warn(p.ctx, "Scanner: Error getting file info", "folder", entry.path, "file", af.Name(), err)
				p.state.sendWarning(fmt.Sprintf("Error getting file info for %s/%s: %v", entry.path, af.Name(), err))
				return entry, nil
			}
			if info.ModTime().After(dbTrack.UpdatedAt) || dbTrack.Missing {
				filesToImport[fullPath] = dbTrack
			}
		}
		delete(dbTracks, fullPath)
	}

	// Remaining dbTracks are tracks that were not found in the FS, so they should be marked as missing
	entry.missingTracks = slices.Collect(maps.Values(dbTracks))

	// Load metadata from files that need to be imported
	if len(filesToImport) > 0 {
		err = p.loadTagsFromFiles(entry, filesToImport)
		if err != nil {
			log.Warn(p.ctx, "Scanner: Error loading tags from files. Skipping", "folder", entry.path, err)
			p.state.sendWarning(fmt.Sprintf("Error loading tags from files in %s: %v", entry.path, err))
			return entry, nil
		}

		p.createAlbumsFromMediaFiles(entry)
		p.createArtistsFromMediaFiles(entry)
	}

	return entry, nil
}

const filesBatchSize = 200

// loadTagsFromFiles reads metadata from the files in the given list and populates
// the entry's tracks and tags with the results.
func (p *phaseFolders) loadTagsFromFiles(entry *folderEntry, toImport map[string]*model.MediaFile) error {
	tracks := make([]model.MediaFile, 0, len(toImport))
	uniqueTags := make(map[string]model.Tag, len(toImport))
	for chunk := range slice.CollectChunks(maps.Keys(toImport), filesBatchSize) {
		allInfo, err := entry.job.fs.ReadTags(chunk...)
		if err != nil {
			log.Warn(p.ctx, "Scanner: Error extracting metadata from files. Skipping", "folder", entry.path, err)
			return err
		}
		for filePath, info := range allInfo {
			md := metadata.New(filePath, info)
			track := md.ToMediaFile(entry.job.lib.ID, entry.id)
			tracks = append(tracks, track)
			for _, t := range track.Tags.FlattenAll() {
				uniqueTags[t.ID] = t
			}

			// Keep track of any album ID changes, to reassign annotations later
			prevAlbumID := ""
			if prev := toImport[filePath]; prev != nil {
				prevAlbumID = prev.AlbumID
			} else {
				prevAlbumID = md.AlbumID(track, p.prevAlbumPIDConf)
			}
			_, ok := entry.albumIDMap[track.AlbumID]
			if prevAlbumID != track.AlbumID && !ok {
				entry.albumIDMap[track.AlbumID] = prevAlbumID
			}
		}
	}
	entry.tracks = tracks
	entry.tags = slices.Collect(maps.Values(uniqueTags))
	return nil
}

// createAlbumsFromMediaFiles groups the entry's tracks by album ID and creates albums
func (p *phaseFolders) createAlbumsFromMediaFiles(entry *folderEntry) {
	grouped := slice.Group(entry.tracks, func(mf model.MediaFile) string { return mf.AlbumID })
	albums := make(model.Albums, 0, len(grouped))
	for _, group := range grouped {
		songs := model.MediaFiles(group)
		album := songs.ToAlbum()
		albums = append(albums, album)
	}
	entry.albums = albums
}

// createArtistsFromMediaFiles creates artists from the entry's tracks
func (p *phaseFolders) createArtistsFromMediaFiles(entry *folderEntry) {
	participants := make(model.Participants, len(entry.tracks)*3) // preallocate ~3 artists per track
	for _, track := range entry.tracks {
		participants.Merge(track.Participants)
	}
	entry.artists = participants.AllArtists()
}
func (p *phaseFolders) persistChanges(entry *folderEntry) (*folderEntry, error) {
	defer p.measure(entry)()
	p.state.changesDetected.Store(true)

	// Collect artwork IDs to pre-cache after the transaction commits
	var artworkIDs []model.ArtworkID

	err := p.ds.WithTx(func(tx model.DataStore) error {
		// Instantiate all repositories just once per folder
		folderRepo := tx.Folder(p.ctx)
		tagRepo := tx.Tag(p.ctx)
		artistRepo := tx.Artist(p.ctx)
		libraryRepo := tx.Library(p.ctx)
		albumRepo := tx.Album(p.ctx)
		mfRepo := tx.MediaFile(p.ctx)

		// Save folder to DB
		folder := entry.toFolder()
		err := folderRepo.Put(folder)
		if err != nil {
			log.Error(p.ctx, "Scanner: Error persisting folder to DB", "folder", entry.path, err)
			return err
		}

		// Save all tags to DB
		err = tagRepo.Add(entry.job.lib.ID, entry.tags...)
		if err != nil {
			log.Error(p.ctx, "Scanner: Error persisting tags to DB", "folder", entry.path, err)
			return err
		}

		// Save all new/modified artists to DB. Their information will be incomplete, but they will be refreshed later
		for i := range entry.artists {
			err = artistRepo.Put(&entry.artists[i], "name",
				"mbz_artist_id", "sort_artist_name", "order_artist_name", "full_text", "updated_at")
			if err != nil {
				log.Error(p.ctx, "Scanner: Error persisting artist to DB", "folder", entry.path, "artist", entry.artists[i].Name, err)
				return err
			}
			err = libraryRepo.AddArtist(entry.job.lib.ID, entry.artists[i].ID)
			if err != nil {
				log.Error(p.ctx, "Scanner: Error adding artist to library", "lib", entry.job.lib.ID, "artist", entry.artists[i].Name, err)
				return err
			}
			if entry.artists[i].Name != consts.UnknownArtist && entry.artists[i].Name != consts.VariousArtists {
				artworkIDs = append(artworkIDs, entry.artists[i].CoverArtID())
			}
		}

		// Save all new/modified albums to DB. Their information will be incomplete, but they will be refreshed later
		for i := range entry.albums {
			err = p.persistAlbum(albumRepo, &entry.albums[i], entry.albumIDMap)
			if err != nil {
				log.Error(p.ctx, "Scanner: Error persisting album to DB", "folder", entry.path, "album", entry.albums[i], err)
				return err
			}
			if entry.albums[i].Name != consts.UnknownAlbum {
				artworkIDs = append(artworkIDs, entry.albums[i].CoverArtID())
			}
		}

		// Save all tracks to DB
		for i := range entry.tracks {
			err = mfRepo.Put(&entry.tracks[i])
			if err != nil {
				log.Error(p.ctx, "Scanner: Error persisting mediafile to DB", "folder", entry.path, "track", entry.tracks[i], err)
				return err
			}
		}

		// Mark all missing tracks as not available
		if len(entry.missingTracks) > 0 {
			err = mfRepo.MarkMissing(true, entry.missingTracks...)
			if err != nil {
				log.Error(p.ctx, "Scanner: Error marking missing tracks", "folder", entry.path, err)
				return err
			}

			// Touch all albums that have missing tracks, so they get refreshed in later phases
			groupedMissingTracks := slice.ToMap(entry.missingTracks, func(mf *model.MediaFile) (string, struct{}) {
				return mf.AlbumID, struct{}{}
			})
			albumsToUpdate := slices.Collect(maps.Keys(groupedMissingTracks))
			err = albumRepo.Touch(albumsToUpdate...)
			if err != nil {
				log.Error(p.ctx, "Scanner: Error touching album", "folder", entry.path, "albums", albumsToUpdate, err)
				return err
			}
		}
		return nil
	}, "scanner: persist changes")
	if err != nil {
		log.Error(p.ctx, "Scanner: Error persisting changes to DB", "folder", entry.path, err)
	}

	// Pre-cache artwork after the transaction commits successfully
	if err == nil {
		for _, artID := range artworkIDs {
			entry.job.cw.PreCache(artID)
		}
	}

	return entry, err
}

// persistAlbum persists the given album to the database, and reassigns annotations from the previous album ID
func (p *phaseFolders) persistAlbum(repo model.AlbumRepository, a *model.Album, idMap map[string]string) error {
	prevID := idMap[a.ID]
	log.Trace(p.ctx, "Persisting album", "album", a.Name, "albumArtist", a.AlbumArtist, "id", a.ID, "prevID", cmp.Or(prevID, "nil"))
	if err := repo.Put(a); err != nil {
		return fmt.Errorf("persisting album %s: %w", a.ID, err)
	}
	if prevID == "" {
		return nil
	}

	// Reassign annotation from previous album to new album
	log.Trace(p.ctx, "Reassigning album annotations", "from", prevID, "to", a.ID, "album", a.Name)
	if err := repo.ReassignAnnotation(prevID, a.ID); err != nil {
		log.Warn(p.ctx, "Scanner: Could not reassign annotations", "from", prevID, "to", a.ID, "album", a.Name, err)
		p.state.sendWarning(fmt.Sprintf("Could not reassign annotations from %s to %s ('%s'): %v", prevID, a.ID, a.Name, err))
	}

	// Keep created_at field from previous instance of the album
	if err := repo.CopyAttributes(prevID, a.ID, "created_at"); err != nil {
		// Silently ignore when the previous album is not found
		if !errors.Is(err, model.ErrNotFound) {
			log.Warn(p.ctx, "Scanner: Could not copy fields", "from", prevID, "to", a.ID, "album", a.Name, err)
			p.state.sendWarning(fmt.Sprintf("Could not copy fields from %s to %s ('%s'): %v", prevID, a.ID, a.Name, err))
		}
	}
	// Don't keep track of this mapping anymore
	delete(idMap, a.ID)
	return nil
}

func (p *phaseFolders) logFolder(entry *folderEntry) (*folderEntry, error) {
	logCall := log.Info
	if entry.isEmpty() {
		logCall = log.Trace
	}
	logCall(p.ctx, "Scanner: Completed processing folder",
		"audioCount", len(entry.audioFiles), "imageCount", len(entry.imageFiles), "plsCount", entry.numPlaylists,
		"elapsed", entry.elapsed.Elapsed(), "tracksMissing", len(entry.missingTracks),
		"tracksImported", len(entry.tracks), "library", entry.job.lib.Name, consts.Zwsp+"folder", entry.path)
	return entry, nil
}

func (p *phaseFolders) finalize(err error) error {
	errF := p.ds.WithTx(func(tx model.DataStore) error {
		for _, job := range p.jobs {
			// Mark all folders that were not updated as missing
			if len(job.lastUpdates) == 0 {
				continue
			}
			folderIDs := slices.Collect(maps.Keys(job.lastUpdates))
			err := tx.Folder(p.ctx).MarkMissing(true, folderIDs...)
			if err != nil {
				log.Error(p.ctx, "Scanner: Error marking missing folders", "lib", job.lib.Name, err)
				return err
			}
			err = tx.MediaFile(p.ctx).MarkMissingByFolder(true, folderIDs...)
			if err != nil {
				log.Error(p.ctx, "Scanner: Error marking tracks in missing folders", "lib", job.lib.Name, err)
				return err
			}
			// Touch all albums that have missing folders, so they get refreshed in later phases
			_, err = tx.Album(p.ctx).TouchByMissingFolder()
			if err != nil {
				log.Error(p.ctx, "Scanner: Error touching albums with missing folders", "lib", job.lib.Name, err)
				return err
			}
		}
		return nil
	}, "scanner: finalize phaseFolders")
	return errors.Join(err, errF)
}

var _ phase[*folderEntry] = (*phaseFolders)(nil)
344	scanner/phase_2_missing_tracks.go	Normal file
@@ -0,0 +1,344 @@
package scanner

import (
	"context"
	"fmt"
	"sync"
	"sync/atomic"

	ppl "github.com/google/go-pipeline/pkg/pipeline"
	"github.com/navidrome/navidrome/conf"
	"github.com/navidrome/navidrome/consts"
	"github.com/navidrome/navidrome/log"
	"github.com/navidrome/navidrome/model"
)

type missingTracks struct {
	lib     model.Library
	pid     string
	missing model.MediaFiles
	matched model.MediaFiles
}

// phaseMissingTracks is responsible for processing missing media files during the scan process.
// It identifies media files that are marked as missing and attempts to find matching files that
// may have been moved or renamed. This phase helps in maintaining the integrity of the media
// library by ensuring that moved or renamed files are correctly updated in the database.
//
// The phaseMissingTracks phase performs the following steps:
// 1. Loads all libraries and their missing media files from the database.
// 2. For each library, it sorts the missing files by their PID (persistent identifier).
// 3. Groups missing and matched files by their PID and processes them to find exact or equivalent matches.
// 4. Updates the database with the new locations of the matched files and removes the old entries.
// 5. Logs the results and finalizes the phase by reporting the total number of matched files.
type phaseMissingTracks struct {
	ctx                       context.Context
	ds                        model.DataStore
	totalMatched              atomic.Uint32
	state                     *scanState
	processedAlbumAnnotations map[string]bool // Track processed album annotation reassignments
	annotationMutex           sync.RWMutex    // Protects processedAlbumAnnotations
}

func createPhaseMissingTracks(ctx context.Context, state *scanState, ds model.DataStore) *phaseMissingTracks {
	return &phaseMissingTracks{
		ctx:                       ctx,
		ds:                        ds,
		state:                     state,
		processedAlbumAnnotations: make(map[string]bool),
	}
}

func (p *phaseMissingTracks) description() string {
	return "Process missing files, checking for moves"
}

func (p *phaseMissingTracks) producer() ppl.Producer[*missingTracks] {
	return ppl.NewProducer(p.produce, ppl.Name("load missing tracks from db"))
}

func (p *phaseMissingTracks) produce(put func(tracks *missingTracks)) error {
	count := 0
	var putIfMatched = func(mt missingTracks) {
		if mt.pid != "" && len(mt.missing) > 0 {
			log.Trace(p.ctx, "Scanner: Found missing tracks", "pid", mt.pid, "title", mt.missing[0].Title,
				"missing", len(mt.missing), "matched", len(mt.matched), "lib", mt.lib.Name)
			count++
			put(&mt)
		}
	}
	for _, lib := range p.state.libraries {
		log.Debug(p.ctx, "Scanner: Checking missing tracks", "libraryId", lib.ID, "libraryName", lib.Name)
		cursor, err := p.ds.MediaFile(p.ctx).GetMissingAndMatching(lib.ID)
		if err != nil {
			return fmt.Errorf("loading missing tracks for library %s: %w", lib.Name, err)
		}

		// Group missing and matched tracks by PID
		mt := missingTracks{lib: lib}
		for mf, err := range cursor {
			if err != nil {
				return fmt.Errorf("loading missing tracks for library %s: %w", lib.Name, err)
			}
			if mt.pid != mf.PID {
				putIfMatched(mt)
				mt.pid = mf.PID
				mt.missing = nil
				mt.matched = nil
			}
			if mf.Missing {
				mt.missing = append(mt.missing, mf)
			} else {
				mt.matched = append(mt.matched, mf)
			}
		}
		putIfMatched(mt)
		if count == 0 {
			log.Debug(p.ctx, "Scanner: No potential moves found", "libraryId", lib.ID, "libraryName", lib.Name)
		} else {
			log.Debug(p.ctx, "Scanner: Found potential moves", "libraryId", lib.ID, "count", count)
		}
	}

	return nil
}

func (p *phaseMissingTracks) stages() []ppl.Stage[*missingTracks] {
	return []ppl.Stage[*missingTracks]{
		ppl.NewStage(p.processMissingTracks, ppl.Name("process missing tracks")),
		ppl.NewStage(p.processCrossLibraryMoves, ppl.Name("process cross-library moves")),
	}
}

func (p *phaseMissingTracks) processMissingTracks(in *missingTracks) (*missingTracks, error) {
	hasMatches := false

	for _, ms := range in.missing {
		var exactMatch model.MediaFile
		var equivalentMatch model.MediaFile

		// Identify exact and equivalent matches
		for _, mt := range in.matched {
			if ms.Equals(mt) {
				exactMatch = mt
				break // Prioritize exact match
			}
			if ms.IsEquivalent(mt) {
				equivalentMatch = mt
			}
		}

		// Use the exact match if found
		if exactMatch.ID != "" {
			log.Debug(p.ctx, "Scanner: Found missing track in a new place", "missing", ms.Path, "movedTo", exactMatch.Path, "lib", in.lib.Name)
			err := p.moveMatched(exactMatch, ms)
			if err != nil {
				log.Error(p.ctx, "Scanner: Error moving matched track", "missing", ms.Path, "movedTo", exactMatch.Path, "lib", in.lib.Name, err)
				return nil, err
			}
			p.totalMatched.Add(1)
			hasMatches = true
			continue
		}

		// If there is only one missing and one matched track, consider them equivalent (same PID)
		if len(in.missing) == 1 && len(in.matched) == 1 {
			singleMatch := in.matched[0]
			log.Debug(p.ctx, "Scanner: Found track with same persistent ID in a new place", "missing", ms.Path, "movedTo", singleMatch.Path, "lib", in.lib.Name)
			err := p.moveMatched(singleMatch, ms)
			if err != nil {
				log.Error(p.ctx, "Scanner: Error updating matched track", "missing", ms.Path, "movedTo", singleMatch.Path, "lib", in.lib.Name, err)
				return nil, err
			}
			p.totalMatched.Add(1)
			hasMatches = true
			continue
		}

		// Use the equivalent match if no other better match was found
		if equivalentMatch.ID != "" {
			log.Debug(p.ctx, "Scanner: Found missing track with same base path", "missing", ms.Path, "movedTo", equivalentMatch.Path, "lib", in.lib.Name)
			err := p.moveMatched(equivalentMatch, ms)
			if err != nil {
				log.Error(p.ctx, "Scanner: Error updating matched track", "missing", ms.Path, "movedTo", equivalentMatch.Path, "lib", in.lib.Name, err)
				return nil, err
			}
			p.totalMatched.Add(1)
			hasMatches = true
		}
	}

	// If any matches were found in this missingTracks group, return nil
	// This signals the next stage to skip processing this group
	if hasMatches {
		return nil, nil
	}

	// If no matches found, pass through to next stage
	return in, nil
}

// processCrossLibraryMoves processes files that weren't matched within their library
// and attempts to find matches in other libraries
func (p *phaseMissingTracks) processCrossLibraryMoves(in *missingTracks) (*missingTracks, error) {
	// Skip if input is nil (meaning previous stage found matches)
	if in == nil {
		return nil, nil
	}

	log.Debug(p.ctx, "Scanner: Processing cross-library moves", "pid", in.pid, "missing", len(in.missing), "lib", in.lib.Name)

	for _, missing := range in.missing {
		found, err := p.findCrossLibraryMatch(missing)
		if err != nil {
			log.Error(p.ctx, "Scanner: Error searching for cross-library matches", "missing", missing.Path, "lib", in.lib.Name, err)
			continue
		}

		if found.ID != "" {
			log.Debug(p.ctx, "Scanner: Found cross-library moved track", "missing", missing.Path, "movedTo", found.Path, "fromLib", in.lib.Name, "toLib", found.LibraryName)
			err := p.moveMatched(found, missing)
			if err != nil {
				log.Error(p.ctx, "Scanner: Error moving cross-library track", "missing", missing.Path, "movedTo", found.Path, err)
				continue
			}
			p.totalMatched.Add(1)
		}
	}

	return in, nil
}

// findCrossLibraryMatch searches for a missing file in other libraries using two-tier matching
func (p *phaseMissingTracks) findCrossLibraryMatch(missing model.MediaFile) (model.MediaFile, error) {
	// First tier: Search by MusicBrainz Track ID if available
	if missing.MbzReleaseTrackID != "" {
		matches, err := p.ds.MediaFile(p.ctx).FindRecentFilesByMBZTrackID(missing, missing.CreatedAt)
		if err != nil {
			log.Error(p.ctx, "Scanner: Error searching for recent files by MBZ Track ID", "mbzTrackID", missing.MbzReleaseTrackID, err)
		} else {
			// Apply the same matching logic as within-library matching
			for _, match := range matches {
				if missing.Equals(match) {
					return match, nil // Exact match found
				}
			}

			// If only one match and it's equivalent, use it
			if len(matches) == 1 && missing.IsEquivalent(matches[0]) {
				return matches[0], nil
			}
		}
	}

	// Second tier: Search by intrinsic properties (title, size, suffix, etc.)
	matches, err := p.ds.MediaFile(p.ctx).FindRecentFilesByProperties(missing, missing.CreatedAt)
	if err != nil {
		log.Error(p.ctx, "Scanner: Error searching for recent files by properties", "missing", missing.Path, err)
		return model.MediaFile{}, err
	}

	// Apply the same matching logic as within-library matching
	for _, match := range matches {
		if missing.Equals(match) {
			return match, nil // Exact match found
		}
	}

	// If only one match and it's equivalent, use it
	if len(matches) == 1 && missing.IsEquivalent(matches[0]) {
		return matches[0], nil
	}

	return model.MediaFile{}, nil
}

func (p *phaseMissingTracks) moveMatched(target, missing model.MediaFile) error {
	return p.ds.WithTx(func(tx model.DataStore) error {
		discardedID := target.ID
		oldAlbumID := missing.AlbumID
		newAlbumID := target.AlbumID

		// Update the target media file with the missing file's ID. This effectively "moves" the track
		// to the new location while keeping its annotations and references intact.
		target.ID = missing.ID
		err := tx.MediaFile(p.ctx).Put(&target)
		if err != nil {
			return fmt.Errorf("update matched track: %w", err)
		}

		// Discard the new mediafile row (the one that was moved to)
		err = tx.MediaFile(p.ctx).Delete(discardedID)
		if err != nil {
			return fmt.Errorf("delete discarded track: %w", err)
		}

		// Handle album annotation reassignment if AlbumID changed
		if oldAlbumID != newAlbumID {
			// Use newAlbumID as key since we only care about avoiding duplicate reassignments to the same target
			p.annotationMutex.RLock()
			alreadyProcessed := p.processedAlbumAnnotations[newAlbumID]
			p.annotationMutex.RUnlock()

			if !alreadyProcessed {
				p.annotationMutex.Lock()
				// Double-check pattern to avoid race conditions
				if !p.processedAlbumAnnotations[newAlbumID] {
					// Reassign direct album annotations (starred, rating)
					log.Debug(p.ctx, "Scanner: Reassigning album annotations", "from", oldAlbumID, "to", newAlbumID)
					if err := tx.Album(p.ctx).ReassignAnnotation(oldAlbumID, newAlbumID); err != nil {
						log.Warn(p.ctx, "Scanner: Could not reassign album annotations", "from", oldAlbumID, "to", newAlbumID, err)
					}

					// Note: RefreshPlayCounts will be called in later phases, so we don't need to call it here
					p.processedAlbumAnnotations[newAlbumID] = true
				}
				p.annotationMutex.Unlock()
			} else {
				log.Trace(p.ctx, "Scanner: Skipping album annotation reassignment", "from", oldAlbumID, "to", newAlbumID)
			}
		}

		p.state.changesDetected.Store(true)
		return nil
	})
}

func (p *phaseMissingTracks) finalize(err error) error {
	matched := p.totalMatched.Load()
	if matched > 0 {
		log.Info(p.ctx, "Scanner: Found moved files", "total", matched, err)
	}
	if err != nil {
		return err
	}

	// Check if we should purge missing items
	if conf.Server.Scanner.PurgeMissing == consts.PurgeMissingAlways || (conf.Server.Scanner.PurgeMissing == consts.PurgeMissingFull && p.state.fullScan) {
		if err = p.purgeMissing(); err != nil {
			log.Error(p.ctx, "Scanner: Error purging missing items", err)
		}
	}

	return err
}

func (p *phaseMissingTracks) purgeMissing() error {
	deletedCount, err := p.ds.MediaFile(p.ctx).DeleteAllMissing()
	if err != nil {
		return fmt.Errorf("error deleting missing files: %w", err)
	}

	if deletedCount > 0 {
		log.Info(p.ctx, "Scanner: Purged missing items from the database", "mediaFiles", deletedCount)
		// Set changesDetected to true so that garbage collection will run at the end of the scan process
		p.state.changesDetected.Store(true)
	} else {
		log.Debug(p.ctx, "Scanner: No missing items to purge")
	}

	return nil
}

var _ phase[*missingTracks] = (*phaseMissingTracks)(nil)
scanner/phase_2_missing_tracks_test.go (new file, +769 lines)
@@ -0,0 +1,769 @@
package scanner

import (
	"context"
	"time"

	"github.com/navidrome/navidrome/conf"
	"github.com/navidrome/navidrome/consts"
	"github.com/navidrome/navidrome/model"
	"github.com/navidrome/navidrome/tests"
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

var _ = Describe("phaseMissingTracks", func() {
	var (
		phase *phaseMissingTracks
		ctx   context.Context
		ds    model.DataStore
		mr    *tests.MockMediaFileRepo
		lr    *tests.MockLibraryRepo
		state *scanState
	)

	BeforeEach(func() {
		ctx = context.Background()
		mr = tests.CreateMockMediaFileRepo()
		lr = &tests.MockLibraryRepo{}
		lr.SetData(model.Libraries{{ID: 1, LastScanStartedAt: time.Date(2021, 1, 1, 0, 0, 0, 0, time.UTC)}})
		ds = &tests.MockDataStore{MockedMediaFile: mr, MockedLibrary: lr}
		state = &scanState{
			libraries: model.Libraries{{ID: 1, LastScanStartedAt: time.Date(2021, 1, 1, 0, 0, 0, 0, time.UTC)}},
		}
		phase = createPhaseMissingTracks(ctx, state, ds)
	})
	Describe("produceMissingTracks", func() {
		var (
			put      func(tracks *missingTracks)
			produced []*missingTracks
		)

		BeforeEach(func() {
			produced = nil
			put = func(tracks *missingTracks) {
				produced = append(produced, tracks)
			}
		})

		When("there are no missing tracks", func() {
			It("should not call put", func() {
				mr.SetData(model.MediaFiles{
					{ID: "1", PID: "A", Missing: false},
					{ID: "2", PID: "A", Missing: false},
				})

				err := phase.produce(put)
				Expect(err).ToNot(HaveOccurred())
				Expect(produced).To(BeEmpty())
			})
		})

		When("there are missing tracks", func() {
			It("should call put for any missing tracks with corresponding matches", func() {
				mr.SetData(model.MediaFiles{
					{ID: "1", PID: "A", Missing: true, LibraryID: 1},
					{ID: "2", PID: "B", Missing: true, LibraryID: 1},
					{ID: "3", PID: "A", Missing: false, LibraryID: 1},
				})

				err := phase.produce(put)
				Expect(err).ToNot(HaveOccurred())
				Expect(produced).To(HaveLen(2))
				// PID A should have both missing and matched tracks
				var pidA *missingTracks
				for _, p := range produced {
					if p.pid == "A" {
						pidA = p
						break
					}
				}
				Expect(pidA).ToNot(BeNil())
				Expect(pidA.missing).To(HaveLen(1))
				Expect(pidA.matched).To(HaveLen(1))
				// PID B should have only missing tracks
				var pidB *missingTracks
				for _, p := range produced {
					if p.pid == "B" {
						pidB = p
						break
					}
				}
				Expect(pidB).ToNot(BeNil())
				Expect(pidB.missing).To(HaveLen(1))
				Expect(pidB.matched).To(HaveLen(0))
			})
			It("should call put for any missing tracks even without matches", func() {
				mr.SetData(model.MediaFiles{
					{ID: "1", PID: "A", Missing: true, LibraryID: 1},
					{ID: "2", PID: "B", Missing: true, LibraryID: 1},
					{ID: "3", PID: "C", Missing: false, LibraryID: 1},
				})

				err := phase.produce(put)
				Expect(err).ToNot(HaveOccurred())
				Expect(produced).To(HaveLen(2))
				// Both PID A and PID B should be produced even without matches
				var pidA, pidB *missingTracks
				for _, p := range produced {
					if p.pid == "A" {
						pidA = p
					} else if p.pid == "B" {
						pidB = p
					}
				}
				Expect(pidA).ToNot(BeNil())
				Expect(pidA.missing).To(HaveLen(1))
				Expect(pidA.matched).To(HaveLen(0))
				Expect(pidB).ToNot(BeNil())
				Expect(pidB.missing).To(HaveLen(1))
				Expect(pidB.matched).To(HaveLen(0))
			})
		})
	})
	Describe("processMissingTracks", func() {
		It("should move the matched track when the missing track is the exact same", func() {
			missingTrack := model.MediaFile{ID: "1", PID: "A", Path: "dir1/path1.mp3", Tags: model.Tags{"title": []string{"title1"}}, Size: 100}
			matchedTrack := model.MediaFile{ID: "2", PID: "A", Path: "dir2/path2.mp3", Tags: model.Tags{"title": []string{"title1"}}, Size: 100}

			_ = ds.MediaFile(ctx).Put(&missingTrack)
			_ = ds.MediaFile(ctx).Put(&matchedTrack)

			in := &missingTracks{
				missing: []model.MediaFile{missingTrack},
				matched: []model.MediaFile{matchedTrack},
			}

			_, err := phase.processMissingTracks(in)
			Expect(err).ToNot(HaveOccurred())
			Expect(phase.totalMatched.Load()).To(Equal(uint32(1)))
			Expect(state.changesDetected.Load()).To(BeTrue())

			movedTrack, _ := ds.MediaFile(ctx).Get("1")
			Expect(movedTrack.Path).To(Equal(matchedTrack.Path))
		})

		It("should move the matched track when the missing track has the same tags and filename", func() {
			missingTrack := model.MediaFile{ID: "1", PID: "A", Path: "path1.mp3", Tags: model.Tags{"title": []string{"title1"}}, Size: 100}
			matchedTrack := model.MediaFile{ID: "2", PID: "A", Path: "path1.flac", Tags: model.Tags{"title": []string{"title1"}}, Size: 200}

			_ = ds.MediaFile(ctx).Put(&missingTrack)
			_ = ds.MediaFile(ctx).Put(&matchedTrack)

			in := &missingTracks{
				missing: []model.MediaFile{missingTrack},
				matched: []model.MediaFile{matchedTrack},
			}

			_, err := phase.processMissingTracks(in)
			Expect(err).ToNot(HaveOccurred())
			Expect(phase.totalMatched.Load()).To(Equal(uint32(1)))
			Expect(state.changesDetected.Load()).To(BeTrue())

			movedTrack, _ := ds.MediaFile(ctx).Get("1")
			Expect(movedTrack.Path).To(Equal(matchedTrack.Path))
			Expect(movedTrack.Size).To(Equal(matchedTrack.Size))
		})

		It("should move the matched track when there's only one missing track and one matched track (same PID)", func() {
			missingTrack := model.MediaFile{ID: "1", PID: "A", Path: "dir1/path1.mp3", Tags: model.Tags{"title": []string{"title1"}}, Size: 100}
			matchedTrack := model.MediaFile{ID: "2", PID: "A", Path: "dir2/path2.flac", Tags: model.Tags{"title": []string{"different title"}}, Size: 200}

			_ = ds.MediaFile(ctx).Put(&missingTrack)
			_ = ds.MediaFile(ctx).Put(&matchedTrack)

			in := &missingTracks{
				missing: []model.MediaFile{missingTrack},
				matched: []model.MediaFile{matchedTrack},
			}

			_, err := phase.processMissingTracks(in)
			Expect(err).ToNot(HaveOccurred())
			Expect(phase.totalMatched.Load()).To(Equal(uint32(1)))
			Expect(state.changesDetected.Load()).To(BeTrue())

			movedTrack, _ := ds.MediaFile(ctx).Get("1")
			Expect(movedTrack.Path).To(Equal(matchedTrack.Path))
			Expect(movedTrack.Size).To(Equal(matchedTrack.Size))
		})

		It("should prioritize exact matches", func() {
			missingTrack := model.MediaFile{ID: "1", PID: "A", Path: "dir1/file1.mp3", Tags: model.Tags{"title": []string{"title1"}}, Size: 100}
			matchedEquivalent := model.MediaFile{ID: "2", PID: "A", Path: "dir1/file1.flac", Tags: model.Tags{"title": []string{"title1"}}, Size: 200}
			matchedExact := model.MediaFile{ID: "3", PID: "A", Path: "dir2/file2.mp3", Tags: model.Tags{"title": []string{"title1"}}, Size: 100}

			_ = ds.MediaFile(ctx).Put(&missingTrack)
			_ = ds.MediaFile(ctx).Put(&matchedEquivalent)
			_ = ds.MediaFile(ctx).Put(&matchedExact)

			in := &missingTracks{
				missing: []model.MediaFile{missingTrack},
				// Note that the equivalent match comes before the exact match
				matched: []model.MediaFile{matchedEquivalent, matchedExact},
			}

			_, err := phase.processMissingTracks(in)
			Expect(err).ToNot(HaveOccurred())
			Expect(phase.totalMatched.Load()).To(Equal(uint32(1)))
			Expect(state.changesDetected.Load()).To(BeTrue())

			movedTrack, _ := ds.MediaFile(ctx).Get("1")
			Expect(movedTrack.Path).To(Equal(matchedExact.Path))
			Expect(movedTrack.Size).To(Equal(matchedExact.Size))
		})

		It("should not move anything if there's more than one match and they are neither exact nor equivalent", func() {
			missingTrack := model.MediaFile{ID: "1", PID: "A", Path: "dir1/file1.mp3", Title: "title1", Size: 100}
			matched1 := model.MediaFile{ID: "2", PID: "A", Path: "dir1/file2.flac", Title: "another title", Size: 200}
			matched2 := model.MediaFile{ID: "3", PID: "A", Path: "dir2/file3.mp3", Title: "different title", Size: 100}

			_ = ds.MediaFile(ctx).Put(&missingTrack)
			_ = ds.MediaFile(ctx).Put(&matched1)
			_ = ds.MediaFile(ctx).Put(&matched2)

			in := &missingTracks{
				missing: []model.MediaFile{missingTrack},
				matched: []model.MediaFile{matched1, matched2},
			}

			_, err := phase.processMissingTracks(in)
			Expect(err).ToNot(HaveOccurred())
			Expect(phase.totalMatched.Load()).To(Equal(uint32(0)))
			Expect(state.changesDetected.Load()).To(BeFalse())

			// The missing track should still be the same
			movedTrack, _ := ds.MediaFile(ctx).Get("1")
			Expect(movedTrack.Path).To(Equal(missingTrack.Path))
			Expect(movedTrack.Title).To(Equal(missingTrack.Title))
			Expect(movedTrack.Size).To(Equal(missingTrack.Size))
		})

		It("should return an error when there's an error moving the matched track", func() {
			missingTrack := model.MediaFile{ID: "1", PID: "A", Path: "path1.mp3", Tags: model.Tags{"title": []string{"title1"}}}
			matchedTrack := model.MediaFile{ID: "2", PID: "A", Path: "path1.mp3", Tags: model.Tags{"title": []string{"title1"}}}

			_ = ds.MediaFile(ctx).Put(&missingTrack)
			_ = ds.MediaFile(ctx).Put(&matchedTrack)

			in := &missingTracks{
				missing: []model.MediaFile{missingTrack},
				matched: []model.MediaFile{matchedTrack},
			}

			// Simulate an error when moving the matched track by deleting the track from the DB
			_ = ds.MediaFile(ctx).Delete("2")

			_, err := phase.processMissingTracks(in)
			Expect(err).To(HaveOccurred())
			Expect(state.changesDetected.Load()).To(BeFalse())
		})
	})
	Describe("finalize", func() {
		It("should return nil if no error", func() {
			err := phase.finalize(nil)
			Expect(err).To(BeNil())
			Expect(state.changesDetected.Load()).To(BeFalse())
		})

		It("should return the error if provided", func() {
			err := phase.finalize(context.DeadlineExceeded)
			Expect(err).To(Equal(context.DeadlineExceeded))
			Expect(state.changesDetected.Load()).To(BeFalse())
		})

		When("PurgeMissing is 'always'", func() {
			BeforeEach(func() {
				conf.Server.Scanner.PurgeMissing = consts.PurgeMissingAlways
				mr.CountAllValue = 3
				mr.DeleteAllMissingValue = 3
			})
			It("should purge missing files", func() {
				Expect(state.changesDetected.Load()).To(BeFalse())
				err := phase.finalize(nil)
				Expect(err).To(BeNil())
				Expect(state.changesDetected.Load()).To(BeTrue())
			})
		})

		When("PurgeMissing is 'full'", func() {
			BeforeEach(func() {
				conf.Server.Scanner.PurgeMissing = consts.PurgeMissingFull
				mr.CountAllValue = 2
				mr.DeleteAllMissingValue = 2
			})
			It("should not purge missing files if not a full scan", func() {
				state.fullScan = false
				err := phase.finalize(nil)
				Expect(err).To(BeNil())
				Expect(state.changesDetected.Load()).To(BeFalse())
			})
			It("should purge missing files if full scan", func() {
				Expect(state.changesDetected.Load()).To(BeFalse())
				state.fullScan = true
				err := phase.finalize(nil)
				Expect(err).To(BeNil())
				Expect(state.changesDetected.Load()).To(BeTrue())
			})
		})

		When("PurgeMissing is 'never'", func() {
			BeforeEach(func() {
				conf.Server.Scanner.PurgeMissing = consts.PurgeMissingNever
				mr.CountAllValue = 1
				mr.DeleteAllMissingValue = 1
			})
			It("should not purge missing files", func() {
				err := phase.finalize(nil)
				Expect(err).To(BeNil())
				Expect(state.changesDetected.Load()).To(BeFalse())
			})
		})
	})
	Describe("processCrossLibraryMoves", func() {
		It("should skip processing if input is nil", func() {
			result, err := phase.processCrossLibraryMoves(nil)
			Expect(err).ToNot(HaveOccurred())
			Expect(result).To(BeNil())
		})

		It("should process cross-library moves using MusicBrainz Track ID", func() {
			scanStartTime := time.Now().Add(-1 * time.Hour)
			missingTrack := model.MediaFile{
				ID:                "missing1",
				LibraryID:         1,
				MbzReleaseTrackID: "mbz-track-123",
				Title:             "Test Track",
				Size:              1000,
				Suffix:            "mp3",
				Path:              "/lib1/track.mp3",
				Missing:           true,
				CreatedAt:         scanStartTime.Add(-30 * time.Minute),
			}

			movedTrack := model.MediaFile{
				ID:                "moved1",
				LibraryID:         2,
				MbzReleaseTrackID: "mbz-track-123",
				Title:             "Test Track",
				Size:              1000,
				Suffix:            "mp3",
				Path:              "/lib2/track.mp3",
				Missing:           false,
				CreatedAt:         scanStartTime.Add(-10 * time.Minute),
			}

			_ = ds.MediaFile(ctx).Put(&missingTrack)
			_ = ds.MediaFile(ctx).Put(&movedTrack)

			in := &missingTracks{
				lib:     model.Library{ID: 1, Name: "Library 1"},
				missing: []model.MediaFile{missingTrack},
			}

			result, err := phase.processCrossLibraryMoves(in)
			Expect(err).ToNot(HaveOccurred())
			Expect(result).To(Equal(in))
			Expect(phase.totalMatched.Load()).To(Equal(uint32(1)))
			Expect(state.changesDetected.Load()).To(BeTrue())

			// Verify the move was performed
			updatedTrack, _ := ds.MediaFile(ctx).Get("missing1")
			Expect(updatedTrack.Path).To(Equal("/lib2/track.mp3"))
			Expect(updatedTrack.LibraryID).To(Equal(2))
		})
		It("should fall back to intrinsic properties when MBZ Track ID is empty", func() {
			scanStartTime := time.Now().Add(-1 * time.Hour)
			missingTrack := model.MediaFile{
				ID:                "missing2",
				LibraryID:         1,
				MbzReleaseTrackID: "",
				Title:             "Test Track 2",
				Size:              2000,
				Suffix:            "flac",
				DiscNumber:        1,
				TrackNumber:       1,
				Album:             "Test Album",
				Path:              "/lib1/track2.flac",
				Missing:           true,
				CreatedAt:         scanStartTime.Add(-30 * time.Minute),
			}

			movedTrack := model.MediaFile{
				ID:                "moved2",
				LibraryID:         2,
				MbzReleaseTrackID: "",
				Title:             "Test Track 2",
				Size:              2000,
				Suffix:            "flac",
				DiscNumber:        1,
				TrackNumber:       1,
				Album:             "Test Album",
				Path:              "/lib2/track2.flac",
				Missing:           false,
				CreatedAt:         scanStartTime.Add(-10 * time.Minute),
			}

			_ = ds.MediaFile(ctx).Put(&missingTrack)
			_ = ds.MediaFile(ctx).Put(&movedTrack)

			in := &missingTracks{
				lib:     model.Library{ID: 1, Name: "Library 1"},
				missing: []model.MediaFile{missingTrack},
			}

			result, err := phase.processCrossLibraryMoves(in)
			Expect(err).ToNot(HaveOccurred())
			Expect(result).To(Equal(in))
			Expect(phase.totalMatched.Load()).To(Equal(uint32(1)))
			Expect(state.changesDetected.Load()).To(BeTrue())

			// Verify the move was performed
			updatedTrack, _ := ds.MediaFile(ctx).Get("missing2")
			Expect(updatedTrack.Path).To(Equal("/lib2/track2.flac"))
			Expect(updatedTrack.LibraryID).To(Equal(2))
		})
		It("should not match files in the same library", func() {
			scanStartTime := time.Now().Add(-1 * time.Hour)
			missingTrack := model.MediaFile{
				ID:                "missing3",
				LibraryID:         1,
				MbzReleaseTrackID: "mbz-track-456",
				Title:             "Test Track 3",
				Size:              3000,
				Suffix:            "mp3",
				Path:              "/lib1/track3.mp3",
				Missing:           true,
				CreatedAt:         scanStartTime.Add(-30 * time.Minute),
			}

			sameLibTrack := model.MediaFile{
				ID:                "same1",
				LibraryID:         1, // Same library
				MbzReleaseTrackID: "mbz-track-456",
				Title:             "Test Track 3",
				Size:              3000,
				Suffix:            "mp3",
				Path:              "/lib1/other/track3.mp3",
				Missing:           false,
				CreatedAt:         scanStartTime.Add(-10 * time.Minute),
			}

			_ = ds.MediaFile(ctx).Put(&missingTrack)
			_ = ds.MediaFile(ctx).Put(&sameLibTrack)

			in := &missingTracks{
				lib:     model.Library{ID: 1, Name: "Library 1"},
				missing: []model.MediaFile{missingTrack},
			}

			result, err := phase.processCrossLibraryMoves(in)
			Expect(err).ToNot(HaveOccurred())
			Expect(result).To(Equal(in))
			Expect(phase.totalMatched.Load()).To(Equal(uint32(0)))
			Expect(state.changesDetected.Load()).To(BeFalse())
		})
		It("should prioritize MBZ Track ID over intrinsic properties", func() {
			scanStartTime := time.Now().Add(-1 * time.Hour)
			missingTrack := model.MediaFile{
				ID:                "missing4",
				LibraryID:         1,
				MbzReleaseTrackID: "mbz-track-789",
				Title:             "Test Track 4",
				Size:              4000,
				Suffix:            "mp3",
				Path:              "/lib1/track4.mp3",
				Missing:           true,
				CreatedAt:         scanStartTime.Add(-30 * time.Minute),
			}

			// Track with same MBZ ID
			mbzTrack := model.MediaFile{
				ID:                "mbz1",
				LibraryID:         2,
				MbzReleaseTrackID: "mbz-track-789",
				Title:             "Test Track 4",
				Size:              4000,
				Suffix:            "mp3",
				Path:              "/lib2/track4.mp3",
				Missing:           false,
				CreatedAt:         scanStartTime.Add(-10 * time.Minute),
			}

			// Track with same intrinsic properties but no MBZ ID
			intrinsicTrack := model.MediaFile{
				ID:                "intrinsic1",
				LibraryID:         3,
				MbzReleaseTrackID: "",
				Title:             "Test Track 4",
				Size:              4000,
				Suffix:            "mp3",
				DiscNumber:        1,
				TrackNumber:       1,
				Album:             "Test Album",
				Path:              "/lib3/track4.mp3",
				Missing:           false,
				CreatedAt:         scanStartTime.Add(-5 * time.Minute),
			}

			_ = ds.MediaFile(ctx).Put(&missingTrack)
			_ = ds.MediaFile(ctx).Put(&mbzTrack)
			_ = ds.MediaFile(ctx).Put(&intrinsicTrack)

			in := &missingTracks{
				lib:     model.Library{ID: 1, Name: "Library 1"},
				missing: []model.MediaFile{missingTrack},
			}

			result, err := phase.processCrossLibraryMoves(in)
			Expect(err).ToNot(HaveOccurred())
			Expect(result).To(Equal(in))
			Expect(phase.totalMatched.Load()).To(Equal(uint32(1)))
			Expect(state.changesDetected.Load()).To(BeTrue())

			// Verify the MBZ track was chosen (not the intrinsic one)
			updatedTrack, _ := ds.MediaFile(ctx).Get("missing4")
			Expect(updatedTrack.Path).To(Equal("/lib2/track4.mp3"))
			Expect(updatedTrack.LibraryID).To(Equal(2))
		})
		It("should handle equivalent matches correctly", func() {
			scanStartTime := time.Now().Add(-1 * time.Hour)
			missingTrack := model.MediaFile{
				ID:                "missing5",
				LibraryID:         1,
				MbzReleaseTrackID: "",
				Title:             "Test Track 5",
				Size:              5000,
				Suffix:            "mp3",
				Path:              "/lib1/path/track5.mp3",
				Missing:           true,
				CreatedAt:         scanStartTime.Add(-30 * time.Minute),
			}

			// Equivalent match (same filename, different directory)
			equivalentTrack := model.MediaFile{
				ID:                "equiv1",
				LibraryID:         2,
				MbzReleaseTrackID: "",
				Title:             "Test Track 5",
				Size:              5000,
				Suffix:            "mp3",
				Path:              "/lib2/different/track5.mp3",
				Missing:           false,
				CreatedAt:         scanStartTime.Add(-10 * time.Minute),
			}

			_ = ds.MediaFile(ctx).Put(&missingTrack)
			_ = ds.MediaFile(ctx).Put(&equivalentTrack)

			in := &missingTracks{
				lib:     model.Library{ID: 1, Name: "Library 1"},
				missing: []model.MediaFile{missingTrack},
			}

			result, err := phase.processCrossLibraryMoves(in)
			Expect(err).ToNot(HaveOccurred())
			Expect(result).To(Equal(in))
			Expect(phase.totalMatched.Load()).To(Equal(uint32(1)))
			Expect(state.changesDetected.Load()).To(BeTrue())

			// Verify the equivalent match was accepted
			updatedTrack, _ := ds.MediaFile(ctx).Get("missing5")
			Expect(updatedTrack.Path).To(Equal("/lib2/different/track5.mp3"))
			Expect(updatedTrack.LibraryID).To(Equal(2))
		})
		It("should skip matching when multiple matches are found but none are exact", func() {
			scanStartTime := time.Now().Add(-1 * time.Hour)
			missingTrack := model.MediaFile{
				ID:                "missing6",
				LibraryID:         1,
				MbzReleaseTrackID: "",
				Title:             "Test Track 6",
				Size:              6000,
				Suffix:            "mp3",
				DiscNumber:        1,
				TrackNumber:       1,
				Album:             "Test Album",
				Path:              "/lib1/track6.mp3",
				Missing:           true,
				CreatedAt:         scanStartTime.Add(-30 * time.Minute),
			}

			// Multiple matches with different metadata (not exact matches)
			match1 := model.MediaFile{
				ID:                "match1",
				LibraryID:         2,
				MbzReleaseTrackID: "",
				Title:             "Test Track 6",
				Size:              6000,
				Suffix:            "mp3",
				DiscNumber:        1,
				TrackNumber:       1,
				Album:             "Test Album",
				Path:              "/lib2/different_track.mp3",
				Artist:            "Different Artist", // This makes it non-exact
				Missing:           false,
				CreatedAt:         scanStartTime.Add(-10 * time.Minute),
			}

			match2 := model.MediaFile{
				ID:                "match2",
				LibraryID:         3,
				MbzReleaseTrackID: "",
				Title:             "Test Track 6",
				Size:              6000,
				Suffix:            "mp3",
				DiscNumber:        1,
				TrackNumber:       1,
				Album:             "Test Album",
				Path:              "/lib3/another_track.mp3",
				Artist:            "Another Artist", // This makes it non-exact
				Missing:           false,
				CreatedAt:         scanStartTime.Add(-5 * time.Minute),
			}

			_ = ds.MediaFile(ctx).Put(&missingTrack)
			_ = ds.MediaFile(ctx).Put(&match1)
			_ = ds.MediaFile(ctx).Put(&match2)

			in := &missingTracks{
				lib:     model.Library{ID: 1, Name: "Library 1"},
				missing: []model.MediaFile{missingTrack},
			}

			result, err := phase.processCrossLibraryMoves(in)
			Expect(err).ToNot(HaveOccurred())
			Expect(result).To(Equal(in))
			Expect(phase.totalMatched.Load()).To(Equal(uint32(0)))
			Expect(state.changesDetected.Load()).To(BeFalse())

			// Verify no move was performed
			unchangedTrack, _ := ds.MediaFile(ctx).Get("missing6")
			Expect(unchangedTrack.Path).To(Equal("/lib1/track6.mp3"))
			Expect(unchangedTrack.LibraryID).To(Equal(1))
		})
		It("should handle errors gracefully", func() {
			// Set up mock to return error
			mr.Err = true

			missingTrack := model.MediaFile{
				ID:                "missing7",
				LibraryID:         1,
				MbzReleaseTrackID: "mbz-track-error",
				Title:             "Test Track 7",
				Size:              7000,
				Suffix:            "mp3",
				Path:              "/lib1/track7.mp3",
				Missing:           true,
				CreatedAt:         time.Now().Add(-30 * time.Minute),
			}

			in := &missingTracks{
				lib:     model.Library{ID: 1, Name: "Library 1"},
				missing: []model.MediaFile{missingTrack},
			}

			// Should not fail completely, just skip the problematic file
			result, err := phase.processCrossLibraryMoves(in)
			Expect(err).ToNot(HaveOccurred())
			Expect(result).To(Equal(in))
			Expect(phase.totalMatched.Load()).To(Equal(uint32(0)))
			Expect(state.changesDetected.Load()).To(BeFalse())
		})
	})
	Describe("Album Annotation Reassignment", func() {
		var (
			albumRepo    *tests.MockAlbumRepo
			missingTrack model.MediaFile
			matchedTrack model.MediaFile
			oldAlbumID   string
			newAlbumID   string
		)

		BeforeEach(func() {
			albumRepo = ds.Album(ctx).(*tests.MockAlbumRepo)
			albumRepo.ReassignAnnotationCalls = make(map[string]string)

			oldAlbumID = "old-album-id"
			newAlbumID = "new-album-id"

			missingTrack = model.MediaFile{
				ID:        "missing-track-id",
				PID:       "same-pid",
				Path:      "old/path.mp3",
				AlbumID:   oldAlbumID,
				LibraryID: 1,
				Missing:   true,
				Annotations: model.Annotations{
					PlayCount: 5,
					Rating:    4,
					Starred:   true,
				},
			}

			matchedTrack = model.MediaFile{
				ID:        "matched-track-id",
				PID:       "same-pid",
				Path:      "new/path.mp3",
				AlbumID:   newAlbumID,
				LibraryID: 2, // Different library
				Missing:   false,
				Annotations: model.Annotations{
					PlayCount: 2,
					Rating:    3,
					Starred:   false,
				},
			}

			// Store both tracks in the database
			_ = ds.MediaFile(ctx).Put(&missingTrack)
			_ = ds.MediaFile(ctx).Put(&matchedTrack)
		})

		When("album ID changes during cross-library move", func() {
			It("should reassign album annotations when AlbumID changes", func() {
				err := phase.moveMatched(matchedTrack, missingTrack)
				Expect(err).ToNot(HaveOccurred())

				// Verify that ReassignAnnotation was called
				Expect(albumRepo.ReassignAnnotationCalls).To(HaveKeyWithValue(oldAlbumID, newAlbumID))
			})

			It("should not reassign annotations when AlbumID is the same", func() {
				missingTrack.AlbumID = newAlbumID // Same album

				err := phase.moveMatched(matchedTrack, missingTrack)
				Expect(err).ToNot(HaveOccurred())

				// Verify that ReassignAnnotation was NOT called
				Expect(albumRepo.ReassignAnnotationCalls).To(BeEmpty())
			})
		})

		When("error handling", func() {
			It("should handle ReassignAnnotation errors gracefully", func() {
				// Make the album repo return an error
				albumRepo.SetError(true)

				// The move should still succeed even if annotation reassignment fails
				err := phase.moveMatched(matchedTrack, missingTrack)
				Expect(err).ToNot(HaveOccurred())

				// Verify that the track was still moved (ID should be updated)
				movedTrack, err := ds.MediaFile(ctx).Get(missingTrack.ID)
				Expect(err).ToNot(HaveOccurred())
				Expect(movedTrack.Path).To(Equal(matchedTrack.Path))
			})
		})
	})
})
scanner/phase_3_refresh_albums.go (new file, +148 lines)
@@ -0,0 +1,148 @@
// nolint:unused
package scanner

import (
	"context"
	"fmt"
	"sync/atomic"
	"time"

	"github.com/Masterminds/squirrel"
	ppl "github.com/google/go-pipeline/pkg/pipeline"
	"github.com/navidrome/navidrome/log"
	"github.com/navidrome/navidrome/model"
)

// phaseRefreshAlbums is responsible for refreshing albums that have been
// newly added or changed during the scan process. This phase ensures that
// the album information in the database is up-to-date by performing the
// following steps:
// 1. Loads all libraries and their albums that have been touched (new or changed).
// 2. For each album, it filters out unmodified albums by comparing the current
//    state with the state in the database.
// 3. Refreshes the album information in the database if any changes are detected.
// 4. Logs the results and finalizes the phase by reporting the total number of
//    refreshed and skipped albums.
// 5. As a last step, it refreshes the artist statistics to reflect the changes.
type phaseRefreshAlbums struct {
	ds        model.DataStore
	ctx       context.Context
	refreshed atomic.Uint32
	skipped   atomic.Uint32
	state     *scanState
}

func createPhaseRefreshAlbums(ctx context.Context, state *scanState, ds model.DataStore) *phaseRefreshAlbums {
	return &phaseRefreshAlbums{ctx: ctx, ds: ds, state: state}
}
func (p *phaseRefreshAlbums) description() string {
	return "Refresh all new/changed albums"
}

func (p *phaseRefreshAlbums) producer() ppl.Producer[*model.Album] {
	return ppl.NewProducer(p.produce, ppl.Name("load albums from db"))
}

func (p *phaseRefreshAlbums) produce(put func(album *model.Album)) error {
	count := 0
	for _, lib := range p.state.libraries {
		cursor, err := p.ds.Album(p.ctx).GetTouchedAlbums(lib.ID)
		if err != nil {
			return fmt.Errorf("loading touched albums: %w", err)
		}
		log.Debug(p.ctx, "Scanner: Checking albums that may need refresh", "libraryId", lib.ID, "libraryName", lib.Name)
		for album, err := range cursor {
			if err != nil {
				return fmt.Errorf("loading touched albums: %w", err)
			}
			count++
			put(&album)
		}
	}
	if count == 0 {
		log.Debug(p.ctx, "Scanner: No albums needing refresh")
	} else {
		log.Debug(p.ctx, "Scanner: Found albums that may need refreshing", "count", count)
	}
	return nil
}
func (p *phaseRefreshAlbums) stages() []ppl.Stage[*model.Album] {
	return []ppl.Stage[*model.Album]{
		ppl.NewStage(p.filterUnmodified, ppl.Name("filter unmodified"), ppl.Concurrency(5)),
		ppl.NewStage(p.refreshAlbum, ppl.Name("refresh albums")),
	}
}

func (p *phaseRefreshAlbums) filterUnmodified(album *model.Album) (*model.Album, error) {
	mfs, err := p.ds.MediaFile(p.ctx).GetAll(model.QueryOptions{Filters: squirrel.Eq{"album_id": album.ID}})
	if err != nil {
		log.Error(p.ctx, "Error loading media files for album", "album_id", album.ID, err)
		return nil, err
	}
	if len(mfs) == 0 {
		log.Debug(p.ctx, "Scanner: album has no media files. Skipping", "album_id", album.ID,
			"name", album.Name, "songCount", album.SongCount, "updatedAt", album.UpdatedAt)
		p.skipped.Add(1)
		return nil, nil
	}

	newAlbum := mfs.ToAlbum()
	if album.Equals(newAlbum) {
		log.Trace("Scanner: album is up to date. Skipping", "album_id", album.ID,
			"name", album.Name, "songCount", album.SongCount, "updatedAt", album.UpdatedAt)
		p.skipped.Add(1)
		return nil, nil
	}
	return &newAlbum, nil
}
func (p *phaseRefreshAlbums) refreshAlbum(album *model.Album) (*model.Album, error) {
|
||||
if album == nil {
|
||||
return nil, nil
|
||||
}
|
||||
start := time.Now()
|
||||
err := p.ds.Album(p.ctx).Put(album)
|
||||
log.Debug(p.ctx, "Scanner: refreshing album", "album_id", album.ID, "name", album.Name, "songCount", album.SongCount, "elapsed", time.Since(start), err)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("refreshing album %s: %w", album.ID, err)
|
||||
}
|
||||
p.refreshed.Add(1)
|
||||
p.state.changesDetected.Store(true)
|
||||
return album, nil
|
||||
}
|
||||
|
||||
func (p *phaseRefreshAlbums) finalize(err error) error {
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
logF := log.Info
|
||||
refreshed := p.refreshed.Load()
|
||||
skipped := p.skipped.Load()
|
||||
if refreshed == 0 {
|
||||
logF = log.Debug
|
||||
}
|
||||
logF(p.ctx, "Scanner: Finished refreshing albums", "refreshed", refreshed, "skipped", skipped, err)
|
||||
if !p.state.changesDetected.Load() {
|
||||
log.Debug(p.ctx, "Scanner: No changes detected, skipping refreshing annotations")
|
||||
return nil
|
||||
}
|
||||
// Refresh album annotations
|
||||
start := time.Now()
|
||||
cnt, err := p.ds.Album(p.ctx).RefreshPlayCounts()
|
||||
if err != nil {
|
||||
return fmt.Errorf("refreshing album annotations: %w", err)
|
||||
}
|
||||
log.Debug(p.ctx, "Scanner: Refreshed album annotations", "albums", cnt, "elapsed", time.Since(start))
|
||||
|
||||
// Refresh artist annotations
|
||||
start = time.Now()
|
||||
cnt, err = p.ds.Artist(p.ctx).RefreshPlayCounts()
|
||||
if err != nil {
|
||||
return fmt.Errorf("refreshing artist annotations: %w", err)
|
||||
}
|
||||
log.Debug(p.ctx, "Scanner: Refreshed artist annotations", "artists", cnt, "elapsed", time.Since(start))
|
||||
p.state.changesDetected.Store(true)
|
||||
return nil
|
||||
}
scanner/phase_3_refresh_albums_test.go (new file, 135 lines)
@@ -0,0 +1,135 @@
package scanner

import (
	"context"

	"github.com/navidrome/navidrome/model"
	"github.com/navidrome/navidrome/tests"
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

var _ = Describe("phaseRefreshAlbums", func() {
	var (
		phase     *phaseRefreshAlbums
		ctx       context.Context
		albumRepo *tests.MockAlbumRepo
		mfRepo    *tests.MockMediaFileRepo
		ds        *tests.MockDataStore
		libs      model.Libraries
		state     *scanState
	)

	BeforeEach(func() {
		ctx = context.Background()
		albumRepo = tests.CreateMockAlbumRepo()
		mfRepo = tests.CreateMockMediaFileRepo()
		ds = &tests.MockDataStore{
			MockedAlbum:     albumRepo,
			MockedMediaFile: mfRepo,
		}
		libs = model.Libraries{
			{ID: 1, Name: "Library 1"},
			{ID: 2, Name: "Library 2"},
		}
		state = &scanState{libraries: libs}
		phase = createPhaseRefreshAlbums(ctx, state, ds)
	})

	Describe("description", func() {
		It("returns the correct description", func() {
			Expect(phase.description()).To(Equal("Refresh all new/changed albums"))
		})
	})

	Describe("producer", func() {
		It("produces albums that need refreshing", func() {
			albumRepo.SetData(model.Albums{
				{LibraryID: 1, ID: "album1", Name: "Album 1"},
			})

			var produced []*model.Album
			err := phase.produce(func(album *model.Album) {
				produced = append(produced, album)
			})

			Expect(err).ToNot(HaveOccurred())
			Expect(produced).To(HaveLen(1))
			Expect(produced[0].ID).To(Equal("album1"))
		})

		It("returns an error if there is an error loading albums", func() {
			albumRepo.SetData(model.Albums{
				{ID: "error"},
			})

			err := phase.produce(func(album *model.Album) {})

			Expect(err).To(MatchError(ContainSubstring("loading touched albums")))
		})
	})

	Describe("filterUnmodified", func() {
		It("filters out unmodified albums", func() {
			album := &model.Album{ID: "album1", Name: "Album 1", SongCount: 1,
				FolderIDs: []string{"folder1"}, Discs: model.Discs{1: ""}}
			mfRepo.SetData(model.MediaFiles{
				{AlbumID: "album1", Title: "Song 1", Album: "Album 1", FolderID: "folder1"},
			})

			result, err := phase.filterUnmodified(album)
			Expect(err).ToNot(HaveOccurred())
			Expect(result).To(BeNil())
		})
		It("keeps modified albums", func() {
			album := &model.Album{ID: "album1", Name: "Album 1"}
			mfRepo.SetData(model.MediaFiles{
				{AlbumID: "album1", Title: "Song 1", Album: "Album 2"},
			})

			result, err := phase.filterUnmodified(album)
			Expect(err).ToNot(HaveOccurred())
			Expect(result).ToNot(BeNil())
			Expect(result.ID).To(Equal("album1"))
		})
		It("skips albums with no media files", func() {
			album := &model.Album{ID: "album1", Name: "Album 1"}
			mfRepo.SetData(model.MediaFiles{})

			result, err := phase.filterUnmodified(album)
			Expect(err).ToNot(HaveOccurred())
			Expect(result).To(BeNil())
		})
	})

	Describe("refreshAlbum", func() {
		It("refreshes the album in the database", func() {
			Expect(albumRepo.CountAll()).To(Equal(int64(0)))

			album := &model.Album{ID: "album1", Name: "Album 1"}
			result, err := phase.refreshAlbum(album)
			Expect(err).ToNot(HaveOccurred())
			Expect(result).ToNot(BeNil())
			Expect(result.ID).To(Equal("album1"))

			savedAlbum, err := albumRepo.Get("album1")
			Expect(err).ToNot(HaveOccurred())

			Expect(savedAlbum).ToNot(BeNil())
			Expect(savedAlbum.ID).To(Equal("album1"))
			Expect(phase.refreshed.Load()).To(Equal(uint32(1)))
			Expect(state.changesDetected.Load()).To(BeTrue())
		})

		It("returns an error if there is an error refreshing the album", func() {
			album := &model.Album{ID: "album1", Name: "Album 1"}
			albumRepo.SetError(true)

			result, err := phase.refreshAlbum(album)
			Expect(result).To(BeNil())
			Expect(err).To(MatchError(ContainSubstring("refreshing album")))
			Expect(phase.refreshed.Load()).To(Equal(uint32(0)))
			Expect(state.changesDetected.Load()).To(BeFalse())
		})
	})
})
scanner/phase_4_playlists.go (new file, 130 lines)
@@ -0,0 +1,130 @@
package scanner

import (
	"context"
	"fmt"
	"os"
	"strings"
	"sync/atomic"
	"time"

	ppl "github.com/google/go-pipeline/pkg/pipeline"
	"github.com/navidrome/navidrome/conf"
	"github.com/navidrome/navidrome/core"
	"github.com/navidrome/navidrome/core/artwork"
	"github.com/navidrome/navidrome/log"
	"github.com/navidrome/navidrome/model"
	"github.com/navidrome/navidrome/model/request"
)

type phasePlaylists struct {
	ctx       context.Context
	scanState *scanState
	ds        model.DataStore
	pls       core.Playlists
	cw        artwork.CacheWarmer
	refreshed atomic.Uint32
}

func createPhasePlaylists(ctx context.Context, scanState *scanState, ds model.DataStore, pls core.Playlists, cw artwork.CacheWarmer) *phasePlaylists {
	return &phasePlaylists{
		ctx:       ctx,
		scanState: scanState,
		ds:        ds,
		pls:       pls,
		cw:        cw,
	}
}

func (p *phasePlaylists) description() string {
	return "Import/update playlists"
}

func (p *phasePlaylists) producer() ppl.Producer[*model.Folder] {
	return ppl.NewProducer(p.produce, ppl.Name("load folders with playlists from db"))
}

func (p *phasePlaylists) produce(put func(entry *model.Folder)) error {
	if !conf.Server.AutoImportPlaylists {
		log.Info(p.ctx, "Playlists will not be imported, AutoImportPlaylists is set to false")
		return nil
	}
	u, _ := request.UserFrom(p.ctx)
	if !u.IsAdmin {
		log.Warn(p.ctx, "Playlists will not be imported, as there are no admin users yet. "+
			"Please create an admin user first, and then update the playlists for them to be imported")
		return nil
	}

	count := 0
	cursor, err := p.ds.Folder(p.ctx).GetTouchedWithPlaylists()
	if err != nil {
		return fmt.Errorf("loading touched folders: %w", err)
	}
	log.Debug(p.ctx, "Scanner: Checking playlists that may need refresh")
	for folder, err := range cursor {
		if err != nil {
			return fmt.Errorf("loading touched folder: %w", err)
		}
		count++
		put(&folder)
	}
	if count == 0 {
		log.Debug(p.ctx, "Scanner: No playlists need refreshing")
	} else {
		log.Debug(p.ctx, "Scanner: Found folders with playlists that may need refreshing", "count", count)
	}

	return nil
}

func (p *phasePlaylists) stages() []ppl.Stage[*model.Folder] {
	return []ppl.Stage[*model.Folder]{
		ppl.NewStage(p.processPlaylistsInFolder, ppl.Name("process playlists in folder"), ppl.Concurrency(3)),
	}
}

func (p *phasePlaylists) processPlaylistsInFolder(folder *model.Folder) (*model.Folder, error) {
	files, err := os.ReadDir(folder.AbsolutePath())
	if err != nil {
		log.Error(p.ctx, "Scanner: Error reading files", "folder", folder, err)
		p.scanState.sendWarning(err.Error())
		return folder, nil
	}
	for _, f := range files {
		started := time.Now()
		if strings.HasPrefix(f.Name(), ".") {
			continue
		}
		if !model.IsValidPlaylist(f.Name()) {
			continue
		}
		// BFR: Check if playlist needs to be refreshed (timestamp, sync flag, etc)
		pls, err := p.pls.ImportFile(p.ctx, folder, f.Name())
		if err != nil {
			continue
		}
		if pls.IsSmartPlaylist() {
			log.Debug("Scanner: Imported smart playlist", "name", pls.Name, "lastUpdated", pls.UpdatedAt, "path", pls.Path, "elapsed", time.Since(started))
		} else {
			log.Debug("Scanner: Imported playlist", "name", pls.Name, "lastUpdated", pls.UpdatedAt, "path", pls.Path, "numTracks", len(pls.Tracks), "elapsed", time.Since(started))
		}
		p.cw.PreCache(pls.CoverArtID())
		p.refreshed.Add(1)
	}
	return folder, nil
}

func (p *phasePlaylists) finalize(err error) error {
	refreshed := p.refreshed.Load()
	logF := log.Info
	if refreshed == 0 {
		logF = log.Debug
	} else {
		p.scanState.changesDetected.Store(true)
	}
	logF(p.ctx, "Scanner: Finished refreshing playlists", "refreshed", refreshed, err)
	return err
}

var _ phase[*model.Folder] = (*phasePlaylists)(nil)
scanner/phase_4_playlists_test.go (new file, 164 lines)
@@ -0,0 +1,164 @@
package scanner

import (
	"context"
	"errors"
	"os"
	"path/filepath"
	"sort"

	"github.com/navidrome/navidrome/conf"
	"github.com/navidrome/navidrome/conf/configtest"
	"github.com/navidrome/navidrome/core"
	"github.com/navidrome/navidrome/core/artwork"
	"github.com/navidrome/navidrome/model"
	"github.com/navidrome/navidrome/model/request"
	"github.com/navidrome/navidrome/tests"
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
	"github.com/stretchr/testify/mock"
)

var _ = Describe("phasePlaylists", func() {
	var (
		phase      *phasePlaylists
		ctx        context.Context
		state      *scanState
		folderRepo *mockFolderRepository
		ds         *tests.MockDataStore
		pls        *mockPlaylists
		cw         artwork.CacheWarmer
	)

	BeforeEach(func() {
		DeferCleanup(configtest.SetupConfig())
		conf.Server.AutoImportPlaylists = true
		ctx = context.Background()
		ctx = request.WithUser(ctx, model.User{ID: "123", IsAdmin: true})
		folderRepo = &mockFolderRepository{}
		ds = &tests.MockDataStore{
			MockedFolder: folderRepo,
		}
		pls = &mockPlaylists{}
		cw = artwork.NoopCacheWarmer()
		state = &scanState{}
		phase = createPhasePlaylists(ctx, state, ds, pls, cw)
	})

	Describe("description", func() {
		It("returns the correct description", func() {
			Expect(phase.description()).To(Equal("Import/update playlists"))
		})
	})

	Describe("producer", func() {
		It("produces folders with playlists", func() {
			folderRepo.SetData(map[*model.Folder]error{
				{Path: "/path/to/folder1"}: nil,
				{Path: "/path/to/folder2"}: nil,
			})

			var produced []*model.Folder
			err := phase.produce(func(folder *model.Folder) {
				produced = append(produced, folder)
			})

			sort.Slice(produced, func(i, j int) bool {
				return produced[i].Path < produced[j].Path
			})
			Expect(err).ToNot(HaveOccurred())
			Expect(produced).To(HaveLen(2))
			Expect(produced[0].Path).To(Equal("/path/to/folder1"))
			Expect(produced[1].Path).To(Equal("/path/to/folder2"))
		})

		It("returns an error if there is an error loading folders", func() {
			folderRepo.SetData(map[*model.Folder]error{
				nil: errors.New("error loading folders"),
			})

			called := false
			err := phase.produce(func(folder *model.Folder) { called = true })

			Expect(err).To(HaveOccurred())
			Expect(called).To(BeFalse())
			Expect(err).To(MatchError(ContainSubstring("error loading folders")))
		})
	})

	Describe("processPlaylistsInFolder", func() {
		It("processes playlists in a folder", func() {
			libPath := GinkgoT().TempDir()
			folder := &model.Folder{LibraryPath: libPath, Path: "path/to", Name: "folder"}
			_ = os.MkdirAll(folder.AbsolutePath(), 0755)

			file1 := filepath.Join(folder.AbsolutePath(), "playlist1.m3u")
			file2 := filepath.Join(folder.AbsolutePath(), "playlist2.m3u")
			_ = os.WriteFile(file1, []byte{}, 0600)
			_ = os.WriteFile(file2, []byte{}, 0600)

			pls.On("ImportFile", mock.Anything, folder, "playlist1.m3u").
				Return(&model.Playlist{}, nil)
			pls.On("ImportFile", mock.Anything, folder, "playlist2.m3u").
				Return(&model.Playlist{}, nil)

			_, err := phase.processPlaylistsInFolder(folder)
			Expect(err).ToNot(HaveOccurred())
			Expect(pls.Calls).To(HaveLen(2))
			Expect(pls.Calls[0].Arguments[2]).To(Equal("playlist1.m3u"))
			Expect(pls.Calls[1].Arguments[2]).To(Equal("playlist2.m3u"))
			Expect(phase.refreshed.Load()).To(Equal(uint32(2)))
		})

		It("reports an error if there is an error reading files", func() {
			progress := make(chan *ProgressInfo)
			state.progress = progress
			folder := &model.Folder{Path: "/invalid/path"}
			go func() {
				_, err := phase.processPlaylistsInFolder(folder)
				// I/O errors are ignored
				Expect(err).ToNot(HaveOccurred())
			}()

			// But are reported
			info := &ProgressInfo{}
			Eventually(progress).Should(Receive(&info))
			Expect(info.Warning).To(ContainSubstring("no such file or directory"))
		})
	})
})

type mockPlaylists struct {
	mock.Mock
	core.Playlists
}

func (p *mockPlaylists) ImportFile(ctx context.Context, folder *model.Folder, filename string) (*model.Playlist, error) {
	args := p.Called(ctx, folder, filename)
	return args.Get(0).(*model.Playlist), args.Error(1)
}

type mockFolderRepository struct {
	model.FolderRepository
	data map[*model.Folder]error
}

func (f *mockFolderRepository) GetTouchedWithPlaylists() (model.FolderCursor, error) {
	return func(yield func(model.Folder, error) bool) {
		for folder, err := range f.data {
			if err != nil {
				if !yield(model.Folder{}, err) {
					return
				}
				continue
			}
			if !yield(*folder, err) {
				return
			}
		}
	}, nil
}

func (f *mockFolderRepository) SetData(m map[*model.Folder]error) {
	f.data = m
}
scanner/scanner.go (new file, 374 lines)
@@ -0,0 +1,374 @@
package scanner

import (
	"context"
	"fmt"
	"maps"
	"slices"
	"sync/atomic"
	"time"

	ppl "github.com/google/go-pipeline/pkg/pipeline"
	"github.com/navidrome/navidrome/conf"
	"github.com/navidrome/navidrome/consts"
	"github.com/navidrome/navidrome/core"
	"github.com/navidrome/navidrome/core/artwork"
	"github.com/navidrome/navidrome/db"
	"github.com/navidrome/navidrome/log"
	"github.com/navidrome/navidrome/model"
	"github.com/navidrome/navidrome/utils/run"
	"github.com/navidrome/navidrome/utils/slice"
)

type scannerImpl struct {
	ds  model.DataStore
	cw  artwork.CacheWarmer
	pls core.Playlists
}

// scanState holds the state of an in-progress scan, to be passed to the various phases
type scanState struct {
	progress        chan<- *ProgressInfo
	fullScan        bool
	changesDetected atomic.Bool
	libraries       model.Libraries  // Store libraries list for consistency across phases
	targets         map[int][]string // Optional: map[libraryID][]folderPaths for selective scans
}

func (s *scanState) sendProgress(info *ProgressInfo) {
	if s.progress != nil {
		s.progress <- info
	}
}

func (s *scanState) isSelectiveScan() bool {
	return len(s.targets) > 0
}

func (s *scanState) sendWarning(msg string) {
	s.sendProgress(&ProgressInfo{Warning: msg})
}

func (s *scanState) sendError(err error) {
	s.sendProgress(&ProgressInfo{Error: err.Error()})
}

func (s *scannerImpl) scanFolders(ctx context.Context, fullScan bool, targets []model.ScanTarget, progress chan<- *ProgressInfo) {
	startTime := time.Now()

	state := scanState{
		progress:        progress,
		fullScan:        fullScan,
		changesDetected: atomic.Bool{},
	}

	// Set changesDetected to true for full scans to ensure all maintenance operations run
	if fullScan {
		state.changesDetected.Store(true)
	}

	// Get libraries and optionally filter by targets
	allLibs, err := s.ds.Library(ctx).GetAll()
	if err != nil {
		state.sendWarning(fmt.Sprintf("getting libraries: %s", err))
		return
	}

	if len(targets) > 0 {
		// Selective scan: filter libraries and build targets map
		state.targets = make(map[int][]string)

		for _, target := range targets {
			folderPath := target.FolderPath
			if folderPath == "" {
				folderPath = "."
			}
			state.targets[target.LibraryID] = append(state.targets[target.LibraryID], folderPath)
		}

		// Filter libraries to only those in targets
		state.libraries = slice.Filter(allLibs, func(lib model.Library) bool {
			return len(state.targets[lib.ID]) > 0
		})

		log.Info(ctx, "Scanner: Starting selective scan", "fullScan", state.fullScan, "numLibraries", len(state.libraries), "numTargets", len(targets))
	} else {
		// Full library scan
		state.libraries = allLibs
		log.Info(ctx, "Scanner: Starting scan", "fullScan", state.fullScan, "numLibraries", len(state.libraries))
	}

	// Store scan type and start time
	scanType := "quick"
	if state.fullScan {
		scanType = "full"
	}
	if state.isSelectiveScan() {
		scanType += "-selective"
	}
	_ = s.ds.Property(ctx).Put(consts.LastScanTypeKey, scanType)
	_ = s.ds.Property(ctx).Put(consts.LastScanStartTimeKey, startTime.Format(time.RFC3339))

	// if there was a full scan in progress, force a full scan
	if !state.fullScan {
		for _, lib := range state.libraries {
			if lib.FullScanInProgress {
				log.Info(ctx, "Scanner: Interrupted full scan detected", "lib", lib.Name)
				state.fullScan = true
				if state.isSelectiveScan() {
					_ = s.ds.Property(ctx).Put(consts.LastScanTypeKey, "full-selective")
				} else {
					_ = s.ds.Property(ctx).Put(consts.LastScanTypeKey, "full")
				}
				break
			}
		}
	}

	// Prepare libraries for scanning (initialize LastScanStartedAt if needed)
	err = s.prepareLibrariesForScan(ctx, &state)
	if err != nil {
		log.Error(ctx, "Scanner: Error preparing libraries for scan", err)
		state.sendError(err)
		return
	}

	err = run.Sequentially(
		// Phase 1: Scan all libraries and import new/updated files
		runPhase[*folderEntry](ctx, 1, createPhaseFolders(ctx, &state, s.ds, s.cw)),

		// Phase 2: Process missing files, checking for moves
		runPhase[*missingTracks](ctx, 2, createPhaseMissingTracks(ctx, &state, s.ds)),

		// Phases 3 and 4 can be run in parallel
		run.Parallel(
			// Phase 3: Refresh all new/changed albums and update artists
			runPhase[*model.Album](ctx, 3, createPhaseRefreshAlbums(ctx, &state, s.ds)),

			// Phase 4: Import/update playlists
			runPhase[*model.Folder](ctx, 4, createPhasePlaylists(ctx, &state, s.ds, s.pls, s.cw)),
		),

		// Final Steps (cannot be parallelized):

		// Run GC if there were any changes (Remove dangling tracks, empty albums and artists, and orphan annotations)
		s.runGC(ctx, &state),

		// Refresh artist and tags stats
		s.runRefreshStats(ctx, &state),

		// Update last_scan_completed_at for all libraries
		s.runUpdateLibraries(ctx, &state),

		// Optimize DB
		s.runOptimize(ctx),
	)
	if err != nil {
		log.Error(ctx, "Scanner: Finished with error", "duration", time.Since(startTime), err)
		_ = s.ds.Property(ctx).Put(consts.LastScanErrorKey, err.Error())
		state.sendError(err)
		return
	}

	_ = s.ds.Property(ctx).Put(consts.LastScanErrorKey, "")

	if state.changesDetected.Load() {
		state.sendProgress(&ProgressInfo{ChangesDetected: true})
	}

	if state.isSelectiveScan() {
		log.Info(ctx, "Scanner: Finished scanning selected folders", "duration", time.Since(startTime), "numTargets", len(targets))
	} else {
		log.Info(ctx, "Scanner: Finished scanning all libraries", "duration", time.Since(startTime))
	}
}

// prepareLibrariesForScan initializes the scan for all libraries in the state.
// It calls ScanBegin for libraries that haven't started scanning yet (LastScanStartedAt is zero),
// reloads them to get the updated state, and filters out any libraries that fail to initialize.
func (s *scannerImpl) prepareLibrariesForScan(ctx context.Context, state *scanState) error {
	var successfulLibs []model.Library

	for _, lib := range state.libraries {
		if lib.LastScanStartedAt.IsZero() {
			// This is a new scan - mark it as started
			err := s.ds.Library(ctx).ScanBegin(lib.ID, state.fullScan)
			if err != nil {
				log.Error(ctx, "Scanner: Error marking scan start", "lib", lib.Name, err)
				state.sendWarning(err.Error())
				continue
			}

			// Reload library to get updated state (timestamps, etc.)
			reloadedLib, err := s.ds.Library(ctx).Get(lib.ID)
			if err != nil {
				log.Error(ctx, "Scanner: Error reloading library", "lib", lib.Name, err)
				state.sendWarning(err.Error())
				continue
			}
			lib = *reloadedLib
		} else {
			// This is a resumed scan
			log.Debug(ctx, "Scanner: Resuming previous scan", "lib", lib.Name,
				"lastScanStartedAt", lib.LastScanStartedAt, "fullScan", lib.FullScanInProgress)
		}

		successfulLibs = append(successfulLibs, lib)
	}

	if len(successfulLibs) == 0 {
		return fmt.Errorf("no libraries available for scanning")
	}

	// Update state with only successfully initialized libraries
	state.libraries = successfulLibs
	return nil
}

func (s *scannerImpl) runGC(ctx context.Context, state *scanState) func() error {
	return func() error {
		state.sendProgress(&ProgressInfo{ForceUpdate: true})
		return s.ds.WithTx(func(tx model.DataStore) error {
			if state.changesDetected.Load() {
				start := time.Now()

				// For selective scans, extract library IDs to scope GC operations
				var libraryIDs []int
				if state.isSelectiveScan() {
					libraryIDs = slices.Collect(maps.Keys(state.targets))
					log.Debug(ctx, "Scanner: Running selective GC", "libraryIDs", libraryIDs)
				}

				err := tx.GC(ctx, libraryIDs...)
				if err != nil {
					log.Error(ctx, "Scanner: Error running GC", err)
					return fmt.Errorf("running GC: %w", err)
				}
				log.Debug(ctx, "Scanner: GC completed", "elapsed", time.Since(start))
			} else {
				log.Debug(ctx, "Scanner: No changes detected, skipping GC")
			}
			return nil
		}, "scanner: GC")
	}
}

func (s *scannerImpl) runRefreshStats(ctx context.Context, state *scanState) func() error {
	return func() error {
		if !state.changesDetected.Load() {
			log.Debug(ctx, "Scanner: No changes detected, skipping refreshing stats")
			return nil
		}
		start := time.Now()
		stats, err := s.ds.Artist(ctx).RefreshStats(state.fullScan)
		if err != nil {
			log.Error(ctx, "Scanner: Error refreshing artists stats", err)
			return fmt.Errorf("refreshing artists stats: %w", err)
		}
		log.Debug(ctx, "Scanner: Refreshed artist stats", "stats", stats, "elapsed", time.Since(start))

		start = time.Now()
		err = s.ds.Tag(ctx).UpdateCounts()
		if err != nil {
			log.Error(ctx, "Scanner: Error updating tag counts", err)
			return fmt.Errorf("updating tag counts: %w", err)
		}
		log.Debug(ctx, "Scanner: Updated tag counts", "elapsed", time.Since(start))
		return nil
	}
}

func (s *scannerImpl) runOptimize(ctx context.Context) func() error {
	return func() error {
		start := time.Now()
		db.Optimize(ctx)
		log.Debug(ctx, "Scanner: Optimized DB", "elapsed", time.Since(start))
		return nil
	}
}

func (s *scannerImpl) runUpdateLibraries(ctx context.Context, state *scanState) func() error {
	return func() error {
		start := time.Now()
		return s.ds.WithTx(func(tx model.DataStore) error {
			for _, lib := range state.libraries {
				err := tx.Library(ctx).ScanEnd(lib.ID)
				if err != nil {
					log.Error(ctx, "Scanner: Error updating last scan completed", "lib", lib.Name, err)
					return fmt.Errorf("updating last scan completed: %w", err)
				}
				err = tx.Property(ctx).Put(consts.PIDTrackKey, conf.Server.PID.Track)
				if err != nil {
					log.Error(ctx, "Scanner: Error updating track PID conf", err)
					return fmt.Errorf("updating track PID conf: %w", err)
				}
				err = tx.Property(ctx).Put(consts.PIDAlbumKey, conf.Server.PID.Album)
				if err != nil {
					log.Error(ctx, "Scanner: Error updating album PID conf", err)
					return fmt.Errorf("updating album PID conf: %w", err)
				}
				if state.changesDetected.Load() {
					log.Debug(ctx, "Scanner: Refreshing library stats", "lib", lib.Name)
					if err := tx.Library(ctx).RefreshStats(lib.ID); err != nil {
						log.Error(ctx, "Scanner: Error refreshing library stats", "lib", lib.Name, err)
						return fmt.Errorf("refreshing library stats: %w", err)
					}
				} else {
					log.Debug(ctx, "Scanner: No changes detected, skipping library stats refresh", "lib", lib.Name)
				}
			}
			log.Debug(ctx, "Scanner: Updated libraries after scan", "elapsed", time.Since(start), "numLibraries", len(state.libraries))
			return nil
		}, "scanner: update libraries")
	}
}

type phase[T any] interface {
	producer() ppl.Producer[T]
	stages() []ppl.Stage[T]
	finalize(error) error
	description() string
}

func runPhase[T any](ctx context.Context, phaseNum int, phase phase[T]) func() error {
	return func() error {
		log.Debug(ctx, fmt.Sprintf("Scanner: Starting phase %d: %s", phaseNum, phase.description()))
		start := time.Now()

		producer := phase.producer()
		stages := phase.stages()

		// Prepend a counter stage to the phase's pipeline
		counter, countStageFn := countTasks[T]()
		stages = append([]ppl.Stage[T]{ppl.NewStage(countStageFn, ppl.Name("count tasks"))}, stages...)

		var err error
		if log.IsGreaterOrEqualTo(log.LevelDebug) {
			var m *ppl.Metrics
			m, err = ppl.Measure(producer, stages...)
			log.Info(ctx, "Scanner: "+m.String(), err)
		} else {
			err = ppl.Do(producer, stages...)
		}

		err = phase.finalize(err)

		if err != nil {
			log.Error(ctx, fmt.Sprintf("Scanner: Error processing libraries in phase %d", phaseNum), "elapsed", time.Since(start), err)
		} else {
			log.Debug(ctx, fmt.Sprintf("Scanner: Finished phase %d", phaseNum), "elapsed", time.Since(start), "totalTasks", counter.Load())
		}

		return err
	}
}

func countTasks[T any]() (*atomic.Int64, func(T) (T, error)) {
	counter := atomic.Int64{}
	return &counter, func(in T) (T, error) {
		counter.Add(1)
		return in, nil
	}
}

var _ scanner = (*scannerImpl)(nil)
scanner/scanner_benchmark_test.go (new file, 89 lines)
@@ -0,0 +1,89 @@
package scanner_test

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"runtime"
	"testing"
	"testing/fstest"

	"github.com/dustin/go-humanize"
	"github.com/google/uuid"
	"github.com/navidrome/navidrome/conf"
	"github.com/navidrome/navidrome/core"
	"github.com/navidrome/navidrome/core/artwork"
	"github.com/navidrome/navidrome/core/metrics"
	"github.com/navidrome/navidrome/core/storage/storagetest"
	"github.com/navidrome/navidrome/db"
	"github.com/navidrome/navidrome/model"
	"github.com/navidrome/navidrome/persistence"
	"github.com/navidrome/navidrome/scanner"
	"github.com/navidrome/navidrome/server/events"
	"go.uber.org/goleak"
)

func BenchmarkScan(b *testing.B) {
	// Detect any goroutine leaks in the scanner code under test
	defer goleak.VerifyNone(b,
		goleak.IgnoreTopFunction("testing.(*B).run1"),
		goleak.IgnoreAnyFunction("testing.(*B).doBench"),
		// Ignore database/sql.(*DB).connectionOpener, as we are not closing the database connection
		goleak.IgnoreAnyFunction("database/sql.(*DB).connectionOpener"),
	)

	tmpDir := os.TempDir()
	conf.Server.DbPath = filepath.Join(tmpDir, "test-scanner.db?_journal_mode=WAL")
	db.Init(context.Background())

	ds := persistence.New(db.Db())
	conf.Server.DevExternalScanner = false
	s := scanner.New(context.Background(), ds, artwork.NoopCacheWarmer(), events.NoopBroker(),
		core.NewPlaylists(ds), metrics.NewNoopInstance())

	fs := storagetest.FakeFS{}
	storagetest.Register("fake", &fs)
	beatlesMBID := uuid.NewString()
	beatles := _t{
		"artist":                    "The Beatles",
		"artistsort":                "Beatles, The",
		"musicbrainz_artistid":      beatlesMBID,
		"albumartist":               "The Beatles",
		"albumartistsort":           "Beatles The",
		"musicbrainz_albumartistid": beatlesMBID,
	}
	revolver := template(beatles, _t{"album": "Revolver", "year": 1966, "composer": "Lennon/McCartney"})
	help := template(beatles, _t{"album": "Help!", "year": 1965, "composer": "Lennon/McCartney"})
	fs.SetFiles(fstest.MapFS{
		"The Beatles/Revolver/01 - Taxman.mp3":                         revolver(track(1, "Taxman")),
		"The Beatles/Revolver/02 - Eleanor Rigby.mp3":                  revolver(track(2, "Eleanor Rigby")),
		"The Beatles/Revolver/03 - I'm Only Sleeping.mp3":              revolver(track(3, "I'm Only Sleeping")),
		"The Beatles/Revolver/04 - Love You To.mp3":                    revolver(track(4, "Love You To")),
		"The Beatles/Help!/01 - Help!.mp3":                             help(track(1, "Help!")),
		"The Beatles/Help!/02 - The Night Before.mp3":                  help(track(2, "The Night Before")),
		"The Beatles/Help!/03 - You've Got to Hide Your Love Away.mp3": help(track(3, "You've Got to Hide Your Love Away")),
	})

	lib := model.Library{ID: 1, Name: "Fake Library", Path: "fake:///music"}
	err := ds.Library(context.Background()).Put(&lib)
	if err != nil {
		b.Fatal(err)
	}

	var m1, m2 runtime.MemStats
	runtime.GC()
	runtime.ReadMemStats(&m1)

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		_, err := s.ScanAll(context.Background(), true)
		if err != nil {
			b.Fatal(err)
		}
	}

	runtime.ReadMemStats(&m2)
	fmt.Println("total:", humanize.Bytes(m2.TotalAlloc-m1.TotalAlloc))
	fmt.Println("mallocs:", humanize.Comma(int64(m2.Mallocs-m1.Mallocs)))
}
scanner/scanner_internal_test.go (new file, 98 lines)
@@ -0,0 +1,98 @@
//nolint:unused
package scanner

import (
	"context"
	"errors"
	"sync/atomic"

	ppl "github.com/google/go-pipeline/pkg/pipeline"
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

type mockPhase struct {
	num           int
	produceFunc   func() ppl.Producer[int]
	stagesFunc    func() []ppl.Stage[int]
	finalizeFunc  func(error) error
	descriptionFn func() string
}

func (m *mockPhase) producer() ppl.Producer[int] {
	return m.produceFunc()
}

func (m *mockPhase) stages() []ppl.Stage[int] {
	return m.stagesFunc()
}

func (m *mockPhase) finalize(err error) error {
	return m.finalizeFunc(err)
}

func (m *mockPhase) description() string {
	return m.descriptionFn()
}

var _ = Describe("runPhase", func() {
	var (
		ctx      context.Context
		phaseNum int
		phase    *mockPhase
		sum      atomic.Int32
	)

	BeforeEach(func() {
		ctx = context.Background()
		phaseNum = 1
		phase = &mockPhase{
			num: 3,
			produceFunc: func() ppl.Producer[int] {
				return ppl.NewProducer(func(put func(int)) error {
					for i := 1; i <= phase.num; i++ {
						put(i)
					}
					return nil
				})
			},
			stagesFunc: func() []ppl.Stage[int] {
				return []ppl.Stage[int]{ppl.NewStage(func(i int) (int, error) {
					sum.Add(int32(i))
					return i, nil
				})}
			},
			finalizeFunc: func(err error) error {
				return err
			},
			descriptionFn: func() string {
				return "Mock Phase"
			},
		}
	})

	It("should run the phase successfully", func() {
		err := runPhase(ctx, phaseNum, phase)()
		Expect(err).ToNot(HaveOccurred())
		Expect(sum.Load()).To(Equal(int32(1 + 2 + 3)))
	})

	It("should log an error if the phase fails", func() {
		phase.finalizeFunc = func(err error) error {
			return errors.New("finalize error")
		}
		err := runPhase(ctx, phaseNum, phase)()
		Expect(err).To(HaveOccurred())
		Expect(err.Error()).To(ContainSubstring("finalize error"))
	})

	It("should count the tasks", func() {
		counter, countStageFn := countTasks[int]()
		phase.stagesFunc = func() []ppl.Stage[int] {
			return []ppl.Stage[int]{ppl.NewStage(countStageFn, ppl.Name("count tasks"))}
		}
		err := runPhase(ctx, phaseNum, phase)()
		Expect(err).ToNot(HaveOccurred())
		Expect(counter.Load()).To(Equal(int64(3)))
	})
})
scanner/scanner_multilibrary_test.go (new file, 831 lines)
@@ -0,0 +1,831 @@
package scanner_test

import (
	"context"
	"errors"
	"path/filepath"
	"testing/fstest"
	"time"

	"github.com/Masterminds/squirrel"
	"github.com/navidrome/navidrome/conf"
	"github.com/navidrome/navidrome/conf/configtest"
	"github.com/navidrome/navidrome/consts"
	"github.com/navidrome/navidrome/core"
	"github.com/navidrome/navidrome/core/artwork"
	"github.com/navidrome/navidrome/core/metrics"
	"github.com/navidrome/navidrome/core/storage/storagetest"
	"github.com/navidrome/navidrome/db"
	"github.com/navidrome/navidrome/log"
	"github.com/navidrome/navidrome/model"
	"github.com/navidrome/navidrome/model/request"
	"github.com/navidrome/navidrome/persistence"
	"github.com/navidrome/navidrome/scanner"
	"github.com/navidrome/navidrome/server/events"
	"github.com/navidrome/navidrome/tests"
	"github.com/navidrome/navidrome/utils/slice"
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

var _ = Describe("Scanner - Multi-Library", Ordered, func() {
	var ctx context.Context
	var lib1, lib2 model.Library
	var ds *tests.MockDataStore
	var s model.Scanner

	createFS := func(path string, files fstest.MapFS) storagetest.FakeFS {
		fs := storagetest.FakeFS{}
		fs.SetFiles(files)
		storagetest.Register(path, &fs)
		return fs
	}

	BeforeAll(func() {
		ctx = request.WithUser(GinkgoT().Context(), model.User{ID: "123", IsAdmin: true})
		tmpDir := GinkgoT().TempDir()
		conf.Server.DbPath = filepath.Join(tmpDir, "test-scanner-multilibrary.db?_journal_mode=WAL")
		log.Warn("Using DB at " + conf.Server.DbPath)
		db.Db().SetMaxOpenConns(1)
	})

	BeforeEach(func() {
		DeferCleanup(configtest.SetupConfig())
		conf.Server.DevExternalScanner = false

		db.Init(ctx)
		DeferCleanup(func() {
			Expect(tests.ClearDB()).To(Succeed())
		})

		ds = &tests.MockDataStore{RealDS: persistence.New(db.Db())}

		// Create the admin user in the database to match the context
		adminUser := model.User{
			ID:          "123",
			UserName:    "admin",
			Name:        "Admin User",
			IsAdmin:     true,
			NewPassword: "password",
		}
		Expect(ds.User(ctx).Put(&adminUser)).To(Succeed())

		s = scanner.New(ctx, ds, artwork.NoopCacheWarmer(), events.NoopBroker(),
			core.NewPlaylists(ds), metrics.NewNoopInstance())

		// Create two test libraries (let DB auto-assign IDs)
		lib1 = model.Library{Name: "Rock Collection", Path: "rock:///music"}
		lib2 = model.Library{Name: "Jazz Collection", Path: "jazz:///music"}
		Expect(ds.Library(ctx).Put(&lib1)).To(Succeed())
		Expect(ds.Library(ctx).Put(&lib2)).To(Succeed())
	})

	runScanner := func(ctx context.Context, fullScan bool) error {
		_, err := s.ScanAll(ctx, fullScan)
		return err
	}

	Context("Two Libraries with Different Content", func() {
		BeforeEach(func() {
			// Rock library content
			beatles := template(_t{"albumartist": "The Beatles", "album": "Abbey Road", "year": 1969, "genre": "Rock"})
			zeppelin := template(_t{"albumartist": "Led Zeppelin", "album": "IV", "year": 1971, "genre": "Rock"})

			_ = createFS("rock", fstest.MapFS{
				"The Beatles/Abbey Road/01 - Come Together.mp3": beatles(track(1, "Come Together")),
				"The Beatles/Abbey Road/02 - Something.mp3":     beatles(track(2, "Something")),
				"Led Zeppelin/IV/01 - Black Dog.mp3":            zeppelin(track(1, "Black Dog")),
				"Led Zeppelin/IV/02 - Rock and Roll.mp3":        zeppelin(track(2, "Rock and Roll")),
			})

			// Jazz library content
			miles := template(_t{"albumartist": "Miles Davis", "album": "Kind of Blue", "year": 1959, "genre": "Jazz"})
			coltrane := template(_t{"albumartist": "John Coltrane", "album": "Giant Steps", "year": 1960, "genre": "Jazz"})

			_ = createFS("jazz", fstest.MapFS{
				"Miles Davis/Kind of Blue/01 - So What.mp3":            miles(track(1, "So What")),
				"Miles Davis/Kind of Blue/02 - Freddie Freeloader.mp3": miles(track(2, "Freddie Freeloader")),
				"John Coltrane/Giant Steps/01 - Giant Steps.mp3":       coltrane(track(1, "Giant Steps")),
				"John Coltrane/Giant Steps/02 - Cousin Mary.mp3":       coltrane(track(2, "Cousin Mary")),
			})
		})

		When("scanning both libraries", func() {
			It("should import files with correct library_id", func() {
				Expect(runScanner(ctx, true)).To(Succeed())

				// Check Rock library media files
				rockFiles, err := ds.MediaFile(ctx).GetAll(model.QueryOptions{
					Filters: squirrel.Eq{"library_id": lib1.ID},
					Sort:    "title",
				})
				Expect(err).ToNot(HaveOccurred())
				Expect(rockFiles).To(HaveLen(4))

				rockTitles := slice.Map(rockFiles, func(f model.MediaFile) string { return f.Title })
				Expect(rockTitles).To(ContainElements("Come Together", "Something", "Black Dog", "Rock and Roll"))

				// Verify all rock files have correct library_id
				for _, mf := range rockFiles {
					Expect(mf.LibraryID).To(Equal(lib1.ID), "Rock file %s should have library_id %d", mf.Title, lib1.ID)
				}

				// Check Jazz library media files
				jazzFiles, err := ds.MediaFile(ctx).GetAll(model.QueryOptions{
					Filters: squirrel.Eq{"library_id": lib2.ID},
					Sort:    "title",
				})
				Expect(err).ToNot(HaveOccurred())
				Expect(jazzFiles).To(HaveLen(4))

				jazzTitles := slice.Map(jazzFiles, func(f model.MediaFile) string { return f.Title })
				Expect(jazzTitles).To(ContainElements("So What", "Freddie Freeloader", "Giant Steps", "Cousin Mary"))

				// Verify all jazz files have correct library_id
				for _, mf := range jazzFiles {
					Expect(mf.LibraryID).To(Equal(lib2.ID), "Jazz file %s should have library_id %d", mf.Title, lib2.ID)
				}
			})

			It("should create albums with correct library_id", func() {
				Expect(runScanner(ctx, true)).To(Succeed())

				// Check Rock library albums
				rockAlbums, err := ds.Album(ctx).GetAll(model.QueryOptions{
					Filters: squirrel.Eq{"library_id": lib1.ID},
					Sort:    "name",
				})
				Expect(err).ToNot(HaveOccurred())
				Expect(rockAlbums).To(HaveLen(2))
				Expect(rockAlbums[0].Name).To(Equal("Abbey Road"))
				Expect(rockAlbums[0].LibraryID).To(Equal(lib1.ID))
				Expect(rockAlbums[0].SongCount).To(Equal(2))
				Expect(rockAlbums[1].Name).To(Equal("IV"))
				Expect(rockAlbums[1].LibraryID).To(Equal(lib1.ID))
				Expect(rockAlbums[1].SongCount).To(Equal(2))

				// Check Jazz library albums
				jazzAlbums, err := ds.Album(ctx).GetAll(model.QueryOptions{
					Filters: squirrel.Eq{"library_id": lib2.ID},
					Sort:    "name",
				})
				Expect(err).ToNot(HaveOccurred())
				Expect(jazzAlbums).To(HaveLen(2))
				Expect(jazzAlbums[0].Name).To(Equal("Giant Steps"))
				Expect(jazzAlbums[0].LibraryID).To(Equal(lib2.ID))
				Expect(jazzAlbums[0].SongCount).To(Equal(2))
				Expect(jazzAlbums[1].Name).To(Equal("Kind of Blue"))
				Expect(jazzAlbums[1].LibraryID).To(Equal(lib2.ID))
				Expect(jazzAlbums[1].SongCount).To(Equal(2))
			})

			It("should create folders with correct library_id", func() {
				Expect(runScanner(ctx, true)).To(Succeed())

				// Check Rock library folders
				rockFolders, err := ds.Folder(ctx).GetAll(model.QueryOptions{
					Filters: squirrel.Eq{"library_id": lib1.ID},
				})
				Expect(err).ToNot(HaveOccurred())
				Expect(rockFolders).To(HaveLen(5)) // ., The Beatles, Led Zeppelin, Abbey Road, IV

				for _, folder := range rockFolders {
					Expect(folder.LibraryID).To(Equal(lib1.ID), "Rock folder %s should have library_id %d", folder.Name, lib1.ID)
				}

				// Check Jazz library folders
				jazzFolders, err := ds.Folder(ctx).GetAll(model.QueryOptions{
					Filters: squirrel.Eq{"library_id": lib2.ID},
				})
				Expect(err).ToNot(HaveOccurred())
				Expect(jazzFolders).To(HaveLen(5)) // ., Miles Davis, John Coltrane, Kind of Blue, Giant Steps

				for _, folder := range jazzFolders {
					Expect(folder.LibraryID).To(Equal(lib2.ID), "Jazz folder %s should have library_id %d", folder.Name, lib2.ID)
				}
			})

			It("should create library-artist associations correctly", func() {
				Expect(runScanner(ctx, true)).To(Succeed())

				// Get all artists and check library associations
				allArtists, err := ds.Artist(ctx).GetAll()
				Expect(err).ToNot(HaveOccurred())

				rockArtistNames := []string{}
				jazzArtistNames := []string{}

				for _, artist := range allArtists {
					// Check if artist is associated with rock library
					var count int64
					err := db.Db().QueryRow(
						"SELECT COUNT(*) FROM library_artist WHERE library_id = ? AND artist_id = ?",
						lib1.ID, artist.ID,
					).Scan(&count)
					Expect(err).ToNot(HaveOccurred())
					if count > 0 {
						rockArtistNames = append(rockArtistNames, artist.Name)
					}

					// Check if artist is associated with jazz library
					err = db.Db().QueryRow(
						"SELECT COUNT(*) FROM library_artist WHERE library_id = ? AND artist_id = ?",
						lib2.ID, artist.ID,
					).Scan(&count)
					Expect(err).ToNot(HaveOccurred())
					if count > 0 {
						jazzArtistNames = append(jazzArtistNames, artist.Name)
					}
				}

				Expect(rockArtistNames).To(ContainElements("The Beatles", "Led Zeppelin"))
				Expect(jazzArtistNames).To(ContainElements("Miles Davis", "John Coltrane"))

				// Artists should not be shared between libraries (except [Unknown Artist])
				for _, name := range rockArtistNames {
					if name != "[Unknown Artist]" {
						Expect(jazzArtistNames).ToNot(ContainElement(name))
					}
				}
			})

			It("should update library statistics correctly", func() {
				Expect(runScanner(ctx, true)).To(Succeed())

				// Check Rock library stats
				rockLib, err := ds.Library(ctx).Get(lib1.ID)
				Expect(err).ToNot(HaveOccurred())
				Expect(rockLib.TotalSongs).To(Equal(4))
				Expect(rockLib.TotalAlbums).To(Equal(2))
				Expect(rockLib.TotalArtists).To(Equal(3)) // The Beatles, Led Zeppelin, [Unknown Artist]
				Expect(rockLib.TotalFolders).To(Equal(2)) // Abbey Road, IV (only folders with audio files)

				// Check Jazz library stats
				jazzLib, err := ds.Library(ctx).Get(lib2.ID)
				Expect(err).ToNot(HaveOccurred())
				Expect(jazzLib.TotalSongs).To(Equal(4))
				Expect(jazzLib.TotalAlbums).To(Equal(2))
				Expect(jazzLib.TotalArtists).To(Equal(3)) // Miles Davis, John Coltrane, [Unknown Artist]
				Expect(jazzLib.TotalFolders).To(Equal(2)) // Kind of Blue, Giant Steps (only folders with audio files)
			})
		})

		When("libraries have different content", func() {
			It("should maintain separate statistics per library", func() {
				Expect(runScanner(ctx, true)).To(Succeed())

				// Verify rock library stats
				rockLib, err := ds.Library(ctx).Get(lib1.ID)
				Expect(err).ToNot(HaveOccurred())
				Expect(rockLib.TotalSongs).To(Equal(4))
				Expect(rockLib.TotalAlbums).To(Equal(2))

				// Verify jazz library stats
				jazzLib, err := ds.Library(ctx).Get(lib2.ID)
				Expect(err).ToNot(HaveOccurred())
				Expect(jazzLib.TotalSongs).To(Equal(4))
				Expect(jazzLib.TotalAlbums).To(Equal(2))

				// Verify that libraries don't interfere with each other
				rockFiles, err := ds.MediaFile(ctx).GetAll(model.QueryOptions{
					Filters: squirrel.Eq{"library_id": lib1.ID},
				})
				Expect(err).ToNot(HaveOccurred())
				Expect(rockFiles).To(HaveLen(4))

				jazzFiles, err := ds.MediaFile(ctx).GetAll(model.QueryOptions{
					Filters: squirrel.Eq{"library_id": lib2.ID},
				})
				Expect(err).ToNot(HaveOccurred())
				Expect(jazzFiles).To(HaveLen(4))
			})
		})

		When("verifying library isolation", func() {
			It("should keep library data completely separate", func() {
				Expect(runScanner(ctx, true)).To(Succeed())

				// Verify that rock library only contains rock content
				rockAlbums, err := ds.Album(ctx).GetAll(model.QueryOptions{
					Filters: squirrel.Eq{"library_id": lib1.ID},
				})
				Expect(err).ToNot(HaveOccurred())
				rockAlbumNames := slice.Map(rockAlbums, func(a model.Album) string { return a.Name })
				Expect(rockAlbumNames).To(ContainElements("Abbey Road", "IV"))
				Expect(rockAlbumNames).ToNot(ContainElements("Kind of Blue", "Giant Steps"))

				// Verify that jazz library only contains jazz content
				jazzAlbums, err := ds.Album(ctx).GetAll(model.QueryOptions{
					Filters: squirrel.Eq{"library_id": lib2.ID},
				})
				Expect(err).ToNot(HaveOccurred())
				jazzAlbumNames := slice.Map(jazzAlbums, func(a model.Album) string { return a.Name })
				Expect(jazzAlbumNames).To(ContainElements("Kind of Blue", "Giant Steps"))
				Expect(jazzAlbumNames).ToNot(ContainElements("Abbey Road", "IV"))
			})
		})

		When("same artist appears in different libraries", func() {
			It("should associate artist with both libraries correctly", func() {
				// Create libraries with Jeff Beck albums in both
				jeffRock := template(_t{"albumartist": "Jeff Beck", "album": "Truth", "year": 1968, "genre": "Rock"})
				jeffJazz := template(_t{"albumartist": "Jeff Beck", "album": "Blow by Blow", "year": 1975, "genre": "Jazz"})
				beatles := template(_t{"albumartist": "The Beatles", "album": "Abbey Road", "year": 1969, "genre": "Rock"})
				miles := template(_t{"albumartist": "Miles Davis", "album": "Kind of Blue", "year": 1959, "genre": "Jazz"})

				// Create rock library with Jeff Beck's Truth album
				_ = createFS("rock", fstest.MapFS{
					"The Beatles/Abbey Road/01 - Come Together.mp3": beatles(track(1, "Come Together")),
					"The Beatles/Abbey Road/02 - Something.mp3":     beatles(track(2, "Something")),
					"Jeff Beck/Truth/01 - Beck's Bolero.mp3":        jeffRock(track(1, "Beck's Bolero")),
					"Jeff Beck/Truth/02 - Ol' Man River.mp3":        jeffRock(track(2, "Ol' Man River")),
				})

				// Create jazz library with Jeff Beck's Blow by Blow album
				_ = createFS("jazz", fstest.MapFS{
					"Miles Davis/Kind of Blue/01 - So What.mp3":            miles(track(1, "So What")),
					"Miles Davis/Kind of Blue/02 - Freddie Freeloader.mp3": miles(track(2, "Freddie Freeloader")),
					"Jeff Beck/Blow by Blow/01 - You Know What I Mean.mp3": jeffJazz(track(1, "You Know What I Mean")),
					"Jeff Beck/Blow by Blow/02 - She's a Woman.mp3":        jeffJazz(track(2, "She's a Woman")),
				})

				Expect(runScanner(ctx, true)).To(Succeed())

				// Jeff Beck should be associated with both libraries
				var rockCount, jazzCount int64

				// Get Jeff Beck artist ID
				jeffArtists, err := ds.Artist(ctx).GetAll(model.QueryOptions{
					Filters: squirrel.Eq{"name": "Jeff Beck"},
				})
				Expect(err).ToNot(HaveOccurred())
				Expect(jeffArtists).To(HaveLen(1))
				jeffID := jeffArtists[0].ID

				// Check rock library association
				err = db.Db().QueryRow(
					"SELECT COUNT(*) FROM library_artist WHERE library_id = ? AND artist_id = ?",
					lib1.ID, jeffID,
				).Scan(&rockCount)
				Expect(err).ToNot(HaveOccurred())
				Expect(rockCount).To(Equal(int64(1)))

				// Check jazz library association
				err = db.Db().QueryRow(
					"SELECT COUNT(*) FROM library_artist WHERE library_id = ? AND artist_id = ?",
					lib2.ID, jeffID,
				).Scan(&jazzCount)
				Expect(err).ToNot(HaveOccurred())
				Expect(jazzCount).To(Equal(int64(1)))

				// Verify Jeff Beck albums are in correct libraries
				rockAlbums, err := ds.Album(ctx).GetAll(model.QueryOptions{
					Filters: squirrel.Eq{"library_id": lib1.ID, "album_artist": "Jeff Beck"},
				})
				Expect(err).ToNot(HaveOccurred())
				Expect(rockAlbums).To(HaveLen(1))
				Expect(rockAlbums[0].Name).To(Equal("Truth"))

				jazzAlbums, err := ds.Album(ctx).GetAll(model.QueryOptions{
					Filters: squirrel.Eq{"library_id": lib2.ID, "album_artist": "Jeff Beck"},
				})
				Expect(err).ToNot(HaveOccurred())
				Expect(jazzAlbums).To(HaveLen(1))
				Expect(jazzAlbums[0].Name).To(Equal("Blow by Blow"))
			})
		})
	})

	Context("Incremental Scan Behavior", func() {
		BeforeEach(func() {
			// Start with minimal content in both libraries
			rock := template(_t{"albumartist": "Queen", "album": "News of the World", "year": 1977, "genre": "Rock"})
			jazz := template(_t{"albumartist": "Bill Evans", "album": "Waltz for Debby", "year": 1961, "genre": "Jazz"})

			createFS("rock", fstest.MapFS{
				"Queen/News of the World/01 - We Will Rock You.mp3": rock(track(1, "We Will Rock You")),
			})

			createFS("jazz", fstest.MapFS{
				"Bill Evans/Waltz for Debby/01 - My Foolish Heart.mp3": jazz(track(1, "My Foolish Heart")),
			})
		})

		It("should handle incremental scans per library correctly", func() {
			// Initial full scan
			Expect(runScanner(ctx, true)).To(Succeed())

			// Verify initial state
			rockFiles, err := ds.MediaFile(ctx).GetAll(model.QueryOptions{
				Filters: squirrel.Eq{"library_id": lib1.ID},
			})
			Expect(err).ToNot(HaveOccurred())
			Expect(rockFiles).To(HaveLen(1))

			jazzFiles, err := ds.MediaFile(ctx).GetAll(model.QueryOptions{
				Filters: squirrel.Eq{"library_id": lib2.ID},
			})
			Expect(err).ToNot(HaveOccurred())
			Expect(jazzFiles).To(HaveLen(1))

			// Incremental scan should not duplicate existing files
			Expect(runScanner(ctx, false)).To(Succeed())

			// Verify counts remain the same
			rockFiles, err = ds.MediaFile(ctx).GetAll(model.QueryOptions{
				Filters: squirrel.Eq{"library_id": lib1.ID},
			})
			Expect(err).ToNot(HaveOccurred())
			Expect(rockFiles).To(HaveLen(1))

			jazzFiles, err = ds.MediaFile(ctx).GetAll(model.QueryOptions{
				Filters: squirrel.Eq{"library_id": lib2.ID},
			})
			Expect(err).ToNot(HaveOccurred())
			Expect(jazzFiles).To(HaveLen(1))
		})
	})

	Context("Missing Files Handling", func() {
		var rockFS storagetest.FakeFS

		BeforeEach(func() {
			rock := template(_t{"albumartist": "AC/DC", "album": "Back in Black", "year": 1980, "genre": "Rock"})

			rockFS = createFS("rock", fstest.MapFS{
				"AC-DC/Back in Black/01 - Hells Bells.mp3":     rock(track(1, "Hells Bells")),
				"AC-DC/Back in Black/02 - Shoot to Thrill.mp3": rock(track(2, "Shoot to Thrill")),
			})

			createFS("jazz", fstest.MapFS{
				"Herbie Hancock/Head Hunters/01 - Chameleon.mp3": template(_t{
					"albumartist": "Herbie Hancock", "album": "Head Hunters", "year": 1973, "genre": "Jazz",
				})(track(1, "Chameleon")),
			})
		})

		It("should mark missing files correctly per library", func() {
			// Initial scan
			Expect(runScanner(ctx, true)).To(Succeed())

			// Remove one file from rock library only
			rockFS.Remove("AC-DC/Back in Black/02 - Shoot to Thrill.mp3")

			// Rescan
			Expect(runScanner(ctx, false)).To(Succeed())

			// Check that only the rock library file is marked as missing
			missingRockFiles, err := ds.MediaFile(ctx).GetAll(model.QueryOptions{
				Filters: squirrel.And{
					squirrel.Eq{"library_id": lib1.ID},
					squirrel.Eq{"missing": true},
				},
			})
			Expect(err).ToNot(HaveOccurred())
			Expect(missingRockFiles).To(HaveLen(1))
			Expect(missingRockFiles[0].Title).To(Equal("Shoot to Thrill"))

			// Check that jazz library files are not affected
			missingJazzFiles, err := ds.MediaFile(ctx).GetAll(model.QueryOptions{
				Filters: squirrel.And{
					squirrel.Eq{"library_id": lib2.ID},
					squirrel.Eq{"missing": true},
				},
			})
			Expect(err).ToNot(HaveOccurred())
			Expect(missingJazzFiles).To(HaveLen(0))

			// Verify non-missing files
			presentRockFiles, err := ds.MediaFile(ctx).GetAll(model.QueryOptions{
				Filters: squirrel.And{
					squirrel.Eq{"library_id": lib1.ID},
					squirrel.Eq{"missing": false},
				},
			})
			Expect(err).ToNot(HaveOccurred())
			Expect(presentRockFiles).To(HaveLen(1))
			Expect(presentRockFiles[0].Title).To(Equal("Hells Bells"))
		})
	})

	Context("Error Handling - Multi-Library", func() {
		Context("Filesystem errors affecting one library", func() {
			var rockFS storagetest.FakeFS

			BeforeEach(func() {
				// Set up content for both libraries
				rock := template(_t{"albumartist": "AC/DC", "album": "Back in Black", "year": 1980, "genre": "Rock"})
				jazz := template(_t{"albumartist": "Miles Davis", "album": "Kind of Blue", "year": 1959, "genre": "Jazz"})

				rockFS = createFS("rock", fstest.MapFS{
					"AC-DC/Back in Black/01 - Hells Bells.mp3":     rock(track(1, "Hells Bells")),
					"AC-DC/Back in Black/02 - Shoot to Thrill.mp3": rock(track(2, "Shoot to Thrill")),
				})

				createFS("jazz", fstest.MapFS{
					"Miles Davis/Kind of Blue/01 - So What.mp3":            jazz(track(1, "So What")),
					"Miles Davis/Kind of Blue/02 - Freddie Freeloader.mp3": jazz(track(2, "Freddie Freeloader")),
				})
			})

			It("should not affect scanning of other libraries", func() {
				// Inject filesystem read error in rock library only
				rockFS.SetError("AC-DC/Back in Black/01 - Hells Bells.mp3", errors.New("filesystem read error"))

				// Scan should succeed overall and return warnings
				warnings, err := s.ScanAll(ctx, true)
				Expect(err).ToNot(HaveOccurred())
				Expect(warnings).ToNot(BeEmpty(), "Should have warnings for filesystem errors")

				// Jazz library should have been scanned successfully
				jazzFiles, err := ds.MediaFile(ctx).GetAll(model.QueryOptions{
					Filters: squirrel.Eq{"library_id": lib2.ID},
				})
				Expect(err).ToNot(HaveOccurred())
				Expect(jazzFiles).To(HaveLen(2))
				Expect(jazzFiles[0].Title).To(BeElementOf("So What", "Freddie Freeloader"))
				Expect(jazzFiles[1].Title).To(BeElementOf("So What", "Freddie Freeloader"))

				// Rock library may have partial content (depending on scanner implementation)
				rockFiles, err := ds.MediaFile(ctx).GetAll(model.QueryOptions{
					Filters: squirrel.Eq{"library_id": lib1.ID},
				})
				Expect(err).ToNot(HaveOccurred())
				// No specific expectation - some files may have been imported despite errors
				_ = rockFiles

				// Verify jazz library stats are correct
				jazzLib, err := ds.Library(ctx).Get(lib2.ID)
				Expect(err).ToNot(HaveOccurred())
				Expect(jazzLib.TotalSongs).To(Equal(2))

				// Error should be empty (warnings don't count as scan errors)
				lastError, err := ds.Property(ctx).DefaultGet(consts.LastScanErrorKey, "unset")
				Expect(err).ToNot(HaveOccurred())
				Expect(lastError).To(BeEmpty())
			})

			It("should continue with warnings for affected library", func() {
				// Inject read errors on multiple files in rock library
				rockFS.SetError("AC-DC/Back in Black/01 - Hells Bells.mp3", errors.New("read error 1"))
				rockFS.SetError("AC-DC/Back in Black/02 - Shoot to Thrill.mp3", errors.New("read error 2"))

				// Scan should complete with warnings for multiple filesystem errors
				warnings, err := s.ScanAll(ctx, true)
				Expect(err).ToNot(HaveOccurred())
				Expect(warnings).ToNot(BeEmpty(), "Should have warnings for multiple filesystem errors")

				// Jazz library should be completely unaffected
				jazzFiles, err := ds.MediaFile(ctx).GetAll(model.QueryOptions{
					Filters: squirrel.Eq{"library_id": lib2.ID},
				})
				Expect(err).ToNot(HaveOccurred())
				Expect(jazzFiles).To(HaveLen(2))

				// Jazz library statistics should be accurate
				jazzLib, err := ds.Library(ctx).Get(lib2.ID)
				Expect(err).ToNot(HaveOccurred())
				Expect(jazzLib.TotalSongs).To(Equal(2))
				Expect(jazzLib.TotalAlbums).To(Equal(1))

				// Error should be empty (warnings don't count as scan errors)
				lastError, err := ds.Property(ctx).DefaultGet(consts.LastScanErrorKey, "unset")
				Expect(err).ToNot(HaveOccurred())
				Expect(lastError).To(BeEmpty())
			})
		})

		Context("Database errors during multi-library scanning", func() {
			BeforeEach(func() {
				// Set up content for both libraries
				rock := template(_t{"albumartist": "Queen", "album": "News of the World", "year": 1977, "genre": "Rock"})
				jazz := template(_t{"albumartist": "Bill Evans", "album": "Waltz for Debby", "year": 1961, "genre": "Jazz"})

				createFS("rock", fstest.MapFS{
					"Queen/News of the World/01 - We Will Rock You.mp3": rock(track(1, "We Will Rock You")),
				})

				createFS("jazz", fstest.MapFS{
					"Bill Evans/Waltz for Debby/01 - My Foolish Heart.mp3": jazz(track(1, "My Foolish Heart")),
				})
			})

			It("should propagate database errors and stop scanning", func() {
				// Install mock repo that injects DB error
				mfRepo := &mockMediaFileRepo{
					MediaFileRepository:        ds.RealDS.MediaFile(ctx),
					GetMissingAndMatchingError: errors.New("database connection failed"),
				}
				ds.MockedMediaFile = mfRepo

				// Scan should return the database error
				Expect(runScanner(ctx, false)).To(MatchError(ContainSubstring("database connection failed")))

				// Error should be recorded in scanner properties
				lastError, err := ds.Property(ctx).DefaultGet(consts.LastScanErrorKey, "")
				Expect(err).ToNot(HaveOccurred())
				Expect(lastError).To(ContainSubstring("database connection failed"))
			})

			It("should preserve error information in scanner properties", func() {
				// Install mock repo that injects DB error
				mfRepo := &mockMediaFileRepo{
					MediaFileRepository:        ds.RealDS.MediaFile(ctx),
					GetMissingAndMatchingError: errors.New("critical database error"),
				}
				ds.MockedMediaFile = mfRepo

				// Attempt scan (should fail)
				Expect(runScanner(ctx, false)).To(HaveOccurred())

				// Check that error is recorded in scanner properties
				lastError, err := ds.Property(ctx).DefaultGet(consts.LastScanErrorKey, "")
				Expect(err).ToNot(HaveOccurred())
				Expect(lastError).To(ContainSubstring("critical database error"))

				// Scan type should still be recorded
				scanType, _ := ds.Property(ctx).DefaultGet(consts.LastScanTypeKey, "")
				Expect(scanType).To(BeElementOf("incremental", "quick"))
			})
		})

		Context("Mixed error scenarios", func() {
			var rockFS storagetest.FakeFS

			BeforeEach(func() {
				// Set up rock library with filesystem that can error
				rock := template(_t{"albumartist": "Metallica", "album": "Master of Puppets", "year": 1986, "genre": "Metal"})
				rockFS = createFS("rock", fstest.MapFS{
					"Metallica/Master of Puppets/01 - Battery.mp3":           rock(track(1, "Battery")),
					"Metallica/Master of Puppets/02 - Master of Puppets.mp3": rock(track(2, "Master of Puppets")),
				})

				// Set up jazz library normally
				jazz := template(_t{"albumartist": "Herbie Hancock", "album": "Head Hunters", "year": 1973, "genre": "Jazz"})
				createFS("jazz", fstest.MapFS{
					"Herbie Hancock/Head Hunters/01 - Chameleon.mp3": jazz(track(1, "Chameleon")),
|
||||
})
|
||||
})
|
||||
|
||||
It("should handle filesystem errors in one library while other succeeds", func() {
|
||||
// Inject filesystem error in rock library
|
||||
rockFS.SetError("Metallica/Master of Puppets/01 - Battery.mp3", errors.New("disk read error"))
|
||||
|
||||
// Scan should complete with warnings (not hard error)
|
||||
warnings, err := s.ScanAll(ctx, true)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(warnings).ToNot(BeEmpty(), "Should have warnings for filesystem error")
|
||||
|
||||
// Jazz library should scan completely successfully
|
||||
jazzFiles, err := ds.MediaFile(ctx).GetAll(model.QueryOptions{
|
||||
Filters: squirrel.Eq{"library_id": lib2.ID},
|
||||
})
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(jazzFiles).To(HaveLen(1))
|
||||
Expect(jazzFiles[0].Title).To(Equal("Chameleon"))
|
||||
|
||||
// Jazz library statistics should be accurate
|
||||
jazzLib, err := ds.Library(ctx).Get(lib2.ID)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(jazzLib.TotalSongs).To(Equal(1))
|
||||
Expect(jazzLib.TotalAlbums).To(Equal(1))
|
||||
|
||||
// Rock library may have partial content (depending on scanner implementation)
|
||||
rockFiles, err := ds.MediaFile(ctx).GetAll(model.QueryOptions{
|
||||
Filters: squirrel.Eq{"library_id": lib1.ID},
|
||||
})
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
// No specific expectation - some files may have been imported despite errors
|
||||
_ = rockFiles
|
||||
|
||||
// Error should be empty (warnings don't count as scan errors)
|
||||
lastError, err := ds.Property(ctx).DefaultGet(consts.LastScanErrorKey, "unset")
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(lastError).To(BeEmpty())
|
||||
})
|
||||
|
||||
It("should handle partial failures gracefully", func() {
|
||||
// Create a scenario where rock has filesystem issues and jazz has normal content
|
||||
rockFS.SetError("Metallica/Master of Puppets/01 - Battery.mp3", errors.New("file corruption"))
|
||||
|
||||
// Do an initial scan with filesystem error
|
||||
warnings, err := s.ScanAll(ctx, true)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(warnings).ToNot(BeEmpty(), "Should have warnings for file corruption")
|
||||
|
||||
// Verify that the working parts completed successfully
|
||||
jazzFiles, err := ds.MediaFile(ctx).GetAll(model.QueryOptions{
|
||||
Filters: squirrel.Eq{"library_id": lib2.ID},
|
||||
})
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(jazzFiles).To(HaveLen(1))
|
||||
|
||||
// Scanner properties should reflect successful completion despite warnings
|
||||
scanType, _ := ds.Property(ctx).DefaultGet(consts.LastScanTypeKey, "")
|
||||
Expect(scanType).To(Equal("full"))
|
||||
|
||||
// Start time should be recorded
|
||||
startTimeStr, _ := ds.Property(ctx).DefaultGet(consts.LastScanStartTimeKey, "")
|
||||
Expect(startTimeStr).ToNot(BeEmpty())
|
||||
|
||||
// Error should be empty (warnings don't count as scan errors)
|
||||
lastError, err := ds.Property(ctx).DefaultGet(consts.LastScanErrorKey, "unset")
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(lastError).To(BeEmpty())
|
||||
})
|
||||
})
|
||||
|
||||
Context("Error recovery in multi-library context", func() {
|
||||
It("should recover from previous library-specific errors", func() {
|
||||
// Set up initial content
|
||||
rock := template(_t{"albumartist": "Iron Maiden", "album": "The Number of the Beast", "year": 1982, "genre": "Metal"})
|
||||
jazz := template(_t{"albumartist": "John Coltrane", "album": "Giant Steps", "year": 1960, "genre": "Jazz"})
|
||||
|
||||
rockFS := createFS("rock", fstest.MapFS{
|
||||
"Iron Maiden/The Number of the Beast/01 - Invaders.mp3": rock(track(1, "Invaders")),
|
||||
})
|
||||
|
||||
createFS("jazz", fstest.MapFS{
|
||||
"John Coltrane/Giant Steps/01 - Giant Steps.mp3": jazz(track(1, "Giant Steps")),
|
||||
})
|
||||
|
||||
// First scan with filesystem error in rock
|
||||
rockFS.SetError("Iron Maiden/The Number of the Beast/01 - Invaders.mp3", errors.New("temporary disk error"))
|
||||
warnings, err := s.ScanAll(ctx, true)
|
||||
Expect(err).ToNot(HaveOccurred()) // Should succeed with warnings
|
||||
Expect(warnings).ToNot(BeEmpty(), "Should have warnings for temporary disk error")
|
||||
|
||||
// Clear the error and add more content - recreate the filesystem completely
|
||||
rockFS.ClearError("Iron Maiden/The Number of the Beast/01 - Invaders.mp3")
|
||||
|
||||
// Create a new filesystem with both files
|
||||
createFS("rock", fstest.MapFS{
|
||||
"Iron Maiden/The Number of the Beast/01 - Invaders.mp3": rock(track(1, "Invaders")),
|
||||
"Iron Maiden/The Number of the Beast/02 - Children of the Damned.mp3": rock(track(2, "Children of the Damned")),
|
||||
})
|
||||
|
||||
// Second scan should recover and import all rock content
|
||||
warnings, err = s.ScanAll(ctx, true)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(warnings).ToNot(BeEmpty(), "Should have warnings for temporary disk error")
|
||||
|
||||
// Verify both libraries now have content (at least jazz should work)
|
||||
rockFiles, err := ds.MediaFile(ctx).GetAll(model.QueryOptions{
|
||||
Filters: squirrel.Eq{"library_id": lib1.ID},
|
||||
})
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
// The scanner should recover and import both rock files
|
||||
Expect(len(rockFiles)).To(Equal(2))
|
||||
|
||||
jazzFiles, err := ds.MediaFile(ctx).GetAll(model.QueryOptions{
|
||||
Filters: squirrel.Eq{"library_id": lib2.ID},
|
||||
})
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(jazzFiles).To(HaveLen(1))
|
||||
|
||||
// Both libraries should have correct content counts
|
||||
rockLib, err := ds.Library(ctx).Get(lib1.ID)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(rockLib.TotalSongs).To(Equal(2))
|
||||
|
||||
jazzLib, err := ds.Library(ctx).Get(lib2.ID)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(jazzLib.TotalSongs).To(Equal(1))
|
||||
|
||||
// Error should be empty (successful recovery)
|
||||
lastError, err := ds.Property(ctx).DefaultGet(consts.LastScanErrorKey, "unset")
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(lastError).To(BeEmpty())
|
||||
})
|
||||
})
|
||||
})
|
||||
|
||||
Context("Scanner Properties", func() {
|
||||
It("should persist last scan type, start time and error properties", func() {
|
||||
// trivial FS setup
|
||||
rock := template(_t{"albumartist": "AC/DC", "album": "Back in Black", "year": 1980, "genre": "Rock"})
|
||||
_ = createFS("rock", fstest.MapFS{
|
||||
"AC-DC/Back in Black/01 - Hells Bells.mp3": rock(track(1, "Hells Bells")),
|
||||
})
|
||||
|
||||
// Run a full scan
|
||||
Expect(runScanner(ctx, true)).To(Succeed())
|
||||
|
||||
// Validate properties
|
||||
scanType, _ := ds.Property(ctx).DefaultGet(consts.LastScanTypeKey, "")
|
||||
Expect(scanType).To(Equal("full"))
|
||||
|
||||
startTimeStr, _ := ds.Property(ctx).DefaultGet(consts.LastScanStartTimeKey, "")
|
||||
Expect(startTimeStr).ToNot(BeEmpty())
|
||||
_, err := time.Parse(time.RFC3339, startTimeStr)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
lastError, err := ds.Property(ctx).DefaultGet(consts.LastScanErrorKey, "unset")
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(lastError).To(BeEmpty())
|
||||
})
|
||||
})
|
||||
})
|
||||
293  scanner/scanner_selective_test.go  Normal file
@@ -0,0 +1,293 @@
package scanner_test

import (
	"context"
	"path/filepath"
	"testing/fstest"

	"github.com/Masterminds/squirrel"
	"github.com/navidrome/navidrome/conf"
	"github.com/navidrome/navidrome/conf/configtest"
	"github.com/navidrome/navidrome/core"
	"github.com/navidrome/navidrome/core/artwork"
	"github.com/navidrome/navidrome/core/metrics"
	"github.com/navidrome/navidrome/core/storage/storagetest"
	"github.com/navidrome/navidrome/db"
	"github.com/navidrome/navidrome/log"
	"github.com/navidrome/navidrome/model"
	"github.com/navidrome/navidrome/model/request"
	"github.com/navidrome/navidrome/persistence"
	"github.com/navidrome/navidrome/scanner"
	"github.com/navidrome/navidrome/server/events"
	"github.com/navidrome/navidrome/tests"
	"github.com/navidrome/navidrome/utils/slice"
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

var _ = Describe("ScanFolders", Ordered, func() {
	var ctx context.Context
	var lib model.Library
	var ds model.DataStore
	var s model.Scanner
	var fsys storagetest.FakeFS

	BeforeAll(func() {
		ctx = request.WithUser(GinkgoT().Context(), model.User{ID: "123", IsAdmin: true})
		tmpDir := GinkgoT().TempDir()
		conf.Server.DbPath = filepath.Join(tmpDir, "test-selective-scan.db?_journal_mode=WAL")
		log.Warn("Using DB at " + conf.Server.DbPath)
		db.Db().SetMaxOpenConns(1)
	})

	BeforeEach(func() {
		DeferCleanup(configtest.SetupConfig())
		conf.Server.MusicFolder = "fake:///music"
		conf.Server.DevExternalScanner = false

		db.Init(ctx)
		DeferCleanup(func() {
			Expect(tests.ClearDB()).To(Succeed())
		})

		ds = persistence.New(db.Db())

		// Create the admin user in the database to match the context
		adminUser := model.User{
			ID:          "123",
			UserName:    "admin",
			Name:        "Admin User",
			IsAdmin:     true,
			NewPassword: "password",
		}
		Expect(ds.User(ctx).Put(&adminUser)).To(Succeed())

		s = scanner.New(ctx, ds, artwork.NoopCacheWarmer(), events.NoopBroker(),
			core.NewPlaylists(ds), metrics.NewNoopInstance())

		lib = model.Library{ID: 1, Name: "Fake Library", Path: "fake:///music"}
		Expect(ds.Library(ctx).Put(&lib)).To(Succeed())

		// Initialize fake filesystem
		fsys = storagetest.FakeFS{}
		storagetest.Register("fake", &fsys)
	})

	Describe("Adding tracks to the library", func() {
		It("scans specified folders recursively including all subdirectories", func() {
			rock := template(_t{"albumartist": "Rock Artist", "album": "Rock Album"})
			jazz := template(_t{"albumartist": "Jazz Artist", "album": "Jazz Album"})
			pop := template(_t{"albumartist": "Pop Artist", "album": "Pop Album"})
			createFS(fstest.MapFS{
				"rock/track1.mp3":        rock(track(1, "Rock Track 1")),
				"rock/track2.mp3":        rock(track(2, "Rock Track 2")),
				"rock/subdir/track3.mp3": rock(track(3, "Rock Track 3")),
				"jazz/track4.mp3":        jazz(track(1, "Jazz Track 1")),
				"jazz/subdir/track5.mp3": jazz(track(2, "Jazz Track 2")),
				"pop/track6.mp3":         pop(track(1, "Pop Track 1")),
			})

			// Scan only the "rock" and "jazz" folders (including their subdirectories)
			targets := []model.ScanTarget{
				{LibraryID: lib.ID, FolderPath: "rock"},
				{LibraryID: lib.ID, FolderPath: "jazz"},
			}

			warnings, err := s.ScanFolders(ctx, false, targets)
			Expect(err).ToNot(HaveOccurred())
			Expect(warnings).To(BeEmpty())

			// Verify all tracks in rock and jazz folders (including subdirectories) were imported
			allFiles, err := ds.MediaFile(ctx).GetAll()
			Expect(err).ToNot(HaveOccurred())

			// Should have 5 tracks (all rock and jazz tracks including subdirectories)
			Expect(allFiles).To(HaveLen(5))

			// Get the file paths
			paths := slice.Map(allFiles, func(mf model.MediaFile) string {
				return filepath.ToSlash(mf.Path)
			})

			// Verify the correct files were scanned (including subdirectories)
			Expect(paths).To(ContainElements(
				"rock/track1.mp3",
				"rock/track2.mp3",
				"rock/subdir/track3.mp3",
				"jazz/track4.mp3",
				"jazz/subdir/track5.mp3",
			))

			// Verify files in the pop folder were NOT scanned
			Expect(paths).ToNot(ContainElement("pop/track6.mp3"))
		})
	})

	Describe("Deleting folders", func() {
		Context("when a child folder is deleted", func() {
			var (
				revolver, help func(...map[string]any) *fstest.MapFile
				artistFolderID string
				album1FolderID string
				album2FolderID string
				album1TrackIDs []string
				album2TrackIDs []string
			)

			BeforeEach(func() {
				// Setup template functions for creating test files
				revolver = storagetest.Template(_t{"albumartist": "The Beatles", "album": "Revolver", "year": 1966})
				help = storagetest.Template(_t{"albumartist": "The Beatles", "album": "Help!", "year": 1965})

				// Initial filesystem with nested folders
				fsys.SetFiles(fstest.MapFS{
					"The Beatles/Revolver/01 - Taxman.mp3":        revolver(storagetest.Track(1, "Taxman")),
					"The Beatles/Revolver/02 - Eleanor Rigby.mp3": revolver(storagetest.Track(2, "Eleanor Rigby")),
					"The Beatles/Help!/01 - Help!.mp3":            help(storagetest.Track(1, "Help!")),
					"The Beatles/Help!/02 - The Night Before.mp3": help(storagetest.Track(2, "The Night Before")),
				})

				// First scan - import everything
				_, err := s.ScanAll(ctx, true)
				Expect(err).ToNot(HaveOccurred())

				// Verify initial state - all folders exist
				folders, err := ds.Folder(ctx).GetAll(model.QueryOptions{Filters: squirrel.Eq{"library_id": lib.ID}})
				Expect(err).ToNot(HaveOccurred())
				Expect(folders).To(HaveLen(4)) // root, Artist, Album1, Album2

				// Store folder IDs for later verification
				for _, f := range folders {
					switch f.Name {
					case "The Beatles":
						artistFolderID = f.ID
					case "Revolver":
						album1FolderID = f.ID
					case "Help!":
						album2FolderID = f.ID
					}
				}

				// Verify all tracks exist
				allTracks, err := ds.MediaFile(ctx).GetAll()
				Expect(err).ToNot(HaveOccurred())
				Expect(allTracks).To(HaveLen(4))

				// Store track IDs for later verification
				for _, t := range allTracks {
					if t.Album == "Revolver" {
						album1TrackIDs = append(album1TrackIDs, t.ID)
					} else if t.Album == "Help!" {
						album2TrackIDs = append(album2TrackIDs, t.ID)
					}
				}

				// Verify no tracks are missing initially
				for _, t := range allTracks {
					Expect(t.Missing).To(BeFalse())
				}
			})

			It("should mark child folder and its tracks as missing when parent is scanned", func() {
				// Delete the child folder (Help!) from the filesystem
				fsys.SetFiles(fstest.MapFS{
					"The Beatles/Revolver/01 - Taxman.mp3":        revolver(storagetest.Track(1, "Taxman")),
					"The Beatles/Revolver/02 - Eleanor Rigby.mp3": revolver(storagetest.Track(2, "Eleanor Rigby")),
					// "The Beatles/Help!" folder and its contents are DELETED
				})

				// Run selective scan on the parent folder (Artist)
				// This simulates what the watcher does when a child folder is deleted
				_, err := s.ScanFolders(ctx, false, []model.ScanTarget{
					{LibraryID: lib.ID, FolderPath: "The Beatles"},
				})
				Expect(err).ToNot(HaveOccurred())

				// Verify the deleted child folder is now marked as missing
				deletedFolder, err := ds.Folder(ctx).Get(album2FolderID)
				Expect(err).ToNot(HaveOccurred())
				Expect(deletedFolder.Missing).To(BeTrue(), "Deleted child folder should be marked as missing")

				// Verify the deleted folder's tracks are marked as missing
				for _, trackID := range album2TrackIDs {
					track, err := ds.MediaFile(ctx).Get(trackID)
					Expect(err).ToNot(HaveOccurred())
					Expect(track.Missing).To(BeTrue(), "Track in deleted folder should be marked as missing")
				}

				// Verify the parent folder is still present and not marked as missing
				parentFolder, err := ds.Folder(ctx).Get(artistFolderID)
				Expect(err).ToNot(HaveOccurred())
				Expect(parentFolder.Missing).To(BeFalse(), "Parent folder should not be marked as missing")

				// Verify the sibling folder and its tracks are still present and not missing
				siblingFolder, err := ds.Folder(ctx).Get(album1FolderID)
				Expect(err).ToNot(HaveOccurred())
				Expect(siblingFolder.Missing).To(BeFalse(), "Sibling folder should not be marked as missing")

				for _, trackID := range album1TrackIDs {
					track, err := ds.MediaFile(ctx).Get(trackID)
					Expect(err).ToNot(HaveOccurred())
					Expect(track.Missing).To(BeFalse(), "Track in sibling folder should not be marked as missing")
				}
			})

			It("should mark deeply nested child folders as missing", func() {
				// Add a deeply nested folder structure
				fsys.SetFiles(fstest.MapFS{
					"The Beatles/Revolver/01 - Taxman.mp3":               revolver(storagetest.Track(1, "Taxman")),
					"The Beatles/Revolver/02 - Eleanor Rigby.mp3":        revolver(storagetest.Track(2, "Eleanor Rigby")),
					"The Beatles/Help!/01 - Help!.mp3":                   help(storagetest.Track(1, "Help!")),
					"The Beatles/Help!/02 - The Night Before.mp3":        help(storagetest.Track(2, "The Night Before")),
					"The Beatles/Help!/Bonus/01 - Bonus Track.mp3":       help(storagetest.Track(99, "Bonus Track")),
					"The Beatles/Help!/Bonus/Nested/01 - Deep Track.mp3": help(storagetest.Track(100, "Deep Track")),
				})

				// Rescan to import the new nested structure
				_, err := s.ScanAll(ctx, true)
				Expect(err).ToNot(HaveOccurred())

				// Verify nested folders were created
				allFolders, err := ds.Folder(ctx).GetAll(model.QueryOptions{Filters: squirrel.Eq{"library_id": lib.ID}})
				Expect(err).ToNot(HaveOccurred())
				Expect(len(allFolders)).To(BeNumerically(">", 4), "Should have more folders with nested structure")

				// Now delete the entire Help! folder including nested children
				fsys.SetFiles(fstest.MapFS{
					"The Beatles/Revolver/01 - Taxman.mp3":        revolver(storagetest.Track(1, "Taxman")),
					"The Beatles/Revolver/02 - Eleanor Rigby.mp3": revolver(storagetest.Track(2, "Eleanor Rigby")),
					// All Help! subfolders are deleted
				})

				// Run selective scan on parent
				_, err = s.ScanFolders(ctx, false, []model.ScanTarget{
					{LibraryID: lib.ID, FolderPath: "The Beatles"},
				})
				Expect(err).ToNot(HaveOccurred())

				// Verify all Help! folders (including nested ones) are marked as missing
				missingFolders, err := ds.Folder(ctx).GetAll(model.QueryOptions{
					Filters: squirrel.And{
						squirrel.Eq{"library_id": lib.ID},
						squirrel.Eq{"missing": true},
					},
				})
				Expect(err).ToNot(HaveOccurred())
				Expect(len(missingFolders)).To(BeNumerically(">", 0), "At least one folder should be marked as missing")

				// Verify all tracks in deleted folders are marked as missing
				allTracks, err := ds.MediaFile(ctx).GetAll()
				Expect(err).ToNot(HaveOccurred())
				Expect(allTracks).To(HaveLen(6))

				for _, track := range allTracks {
					if track.Album == "Help!" {
						Expect(track.Missing).To(BeTrue(), "All tracks in deleted Help! folder should be marked as missing")
					} else if track.Album == "Revolver" {
						Expect(track.Missing).To(BeFalse(), "Tracks in Revolver folder should not be marked as missing")
					}
				}
			})
		})
	})
})
26  scanner/scanner_suite_test.go  Normal file
@@ -0,0 +1,26 @@
package scanner_test

import (
	"context"
	"testing"

	"github.com/navidrome/navidrome/db"
	"github.com/navidrome/navidrome/log"
	"github.com/navidrome/navidrome/tests"
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
	"go.uber.org/goleak"
)

func TestScanner(t *testing.T) {
	// Detect any goroutine leaks in the scanner code under test
	defer goleak.VerifyNone(t,
		goleak.IgnoreTopFunction("github.com/onsi/ginkgo/v2/internal/interrupt_handler.(*InterruptHandler).registerForInterrupts.func2"),
	)

	tests.Init(t, true)
	defer db.Close(context.Background())
	log.SetLevel(log.LevelFatal)
	RegisterFailHandler(Fail)
	RunSpecs(t, "Scanner Suite")
}
805  scanner/scanner_test.go  Normal file
@@ -0,0 +1,805 @@
package scanner_test
|
||||
|
||||
import (
|
||||
"context"
|
||||
"errors"
|
||||
"path/filepath"
|
||||
"testing/fstest"
|
||||
|
||||
"github.com/Masterminds/squirrel"
|
||||
"github.com/google/uuid"
|
||||
"github.com/navidrome/navidrome/conf"
|
||||
"github.com/navidrome/navidrome/conf/configtest"
|
||||
"github.com/navidrome/navidrome/consts"
|
||||
"github.com/navidrome/navidrome/core"
|
||||
"github.com/navidrome/navidrome/core/artwork"
|
||||
"github.com/navidrome/navidrome/core/metrics"
|
||||
"github.com/navidrome/navidrome/core/storage/storagetest"
|
||||
"github.com/navidrome/navidrome/db"
|
||||
"github.com/navidrome/navidrome/log"
|
||||
"github.com/navidrome/navidrome/model"
|
||||
"github.com/navidrome/navidrome/model/request"
|
||||
"github.com/navidrome/navidrome/persistence"
|
||||
"github.com/navidrome/navidrome/scanner"
|
||||
"github.com/navidrome/navidrome/server/events"
|
||||
"github.com/navidrome/navidrome/tests"
|
||||
"github.com/navidrome/navidrome/utils/slice"
|
||||
. "github.com/onsi/ginkgo/v2"
|
||||
. "github.com/onsi/gomega"
|
||||
)
|
||||
|
||||
// Easy aliases for the storagetest package
|
||||
type _t = map[string]any
|
||||
|
||||
var template = storagetest.Template
|
||||
var track = storagetest.Track
|
||||
|
||||
func createFS(files fstest.MapFS) storagetest.FakeFS {
|
||||
fs := storagetest.FakeFS{}
|
||||
fs.SetFiles(files)
|
||||
storagetest.Register("fake", &fs)
|
||||
return fs
|
||||
}
|
||||
|
||||
var _ = Describe("Scanner", Ordered, func() {
|
||||
var ctx context.Context
|
||||
var lib model.Library
|
||||
var ds *tests.MockDataStore
|
||||
var mfRepo *mockMediaFileRepo
|
||||
var s model.Scanner
|
||||
|
||||
BeforeAll(func() {
|
||||
ctx = request.WithUser(GinkgoT().Context(), model.User{ID: "123", IsAdmin: true})
|
||||
tmpDir := GinkgoT().TempDir()
|
||||
conf.Server.DbPath = filepath.Join(tmpDir, "test-scanner.db?_journal_mode=WAL")
|
||||
log.Warn("Using DB at " + conf.Server.DbPath)
|
||||
//conf.Server.DbPath = ":memory:"
|
||||
db.Db().SetMaxOpenConns(1)
|
||||
})
|
||||
|
||||
BeforeEach(func() {
|
||||
DeferCleanup(configtest.SetupConfig())
|
||||
conf.Server.MusicFolder = "fake:///music" // Set to match test library path
|
||||
conf.Server.DevExternalScanner = false
|
||||
|
||||
db.Init(ctx)
|
||||
DeferCleanup(func() {
|
||||
Expect(tests.ClearDB()).To(Succeed())
|
||||
})
|
||||
|
||||
ds = &tests.MockDataStore{RealDS: persistence.New(db.Db())}
|
||||
mfRepo = &mockMediaFileRepo{
|
||||
MediaFileRepository: ds.RealDS.MediaFile(ctx),
|
||||
}
|
||||
ds.MockedMediaFile = mfRepo
|
||||
|
||||
// Create the admin user in the database to match the context
|
||||
adminUser := model.User{
|
||||
ID: "123",
|
||||
UserName: "admin",
|
||||
Name: "Admin User",
|
||||
IsAdmin: true,
|
||||
NewPassword: "password",
|
||||
}
|
||||
Expect(ds.User(ctx).Put(&adminUser)).To(Succeed())
|
||||
|
||||
s = scanner.New(ctx, ds, artwork.NoopCacheWarmer(), events.NoopBroker(),
|
||||
core.NewPlaylists(ds), metrics.NewNoopInstance())
|
||||
|
||||
lib = model.Library{ID: 1, Name: "Fake Library", Path: "fake:///music"}
|
||||
Expect(ds.Library(ctx).Put(&lib)).To(Succeed())
|
||||
})
|
||||
|
||||
runScanner := func(ctx context.Context, fullScan bool) error {
|
||||
_, err := s.ScanAll(ctx, fullScan)
|
||||
return err
|
||||
}
|
||||
|
||||
Context("Simple library, 'artis/album/track - title.mp3'", func() {
|
||||
var help, revolver func(...map[string]any) *fstest.MapFile
|
||||
var fsys storagetest.FakeFS
|
||||
BeforeEach(func() {
|
||||
revolver = template(_t{"albumartist": "The Beatles", "album": "Revolver", "year": 1966})
|
||||
help = template(_t{"albumartist": "The Beatles", "album": "Help!", "year": 1965})
|
||||
fsys = createFS(fstest.MapFS{
|
||||
"The Beatles/Revolver/01 - Taxman.mp3": revolver(track(1, "Taxman")),
|
||||
"The Beatles/Revolver/02 - Eleanor Rigby.mp3": revolver(track(2, "Eleanor Rigby")),
|
||||
"The Beatles/Revolver/03 - I'm Only Sleeping.mp3": revolver(track(3, "I'm Only Sleeping")),
|
||||
"The Beatles/Revolver/04 - Love You To.mp3": revolver(track(4, "Love You To")),
|
||||
"The Beatles/Help!/01 - Help!.mp3": help(track(1, "Help!")),
|
||||
"The Beatles/Help!/02 - The Night Before.mp3": help(track(2, "The Night Before")),
|
||||
"The Beatles/Help!/03 - You've Got to Hide Your Love Away.mp3": help(track(3, "You've Got to Hide Your Love Away")),
|
||||
})
|
||||
})
|
||||
When("it is the first scan", func() {
|
||||
It("should import all folders", func() {
|
||||
Expect(runScanner(ctx, true)).To(Succeed())
|
||||
|
||||
folders, _ := ds.Folder(ctx).GetAll(model.QueryOptions{Filters: squirrel.Eq{"library_id": lib.ID}})
|
||||
paths := slice.Map(folders, func(f model.Folder) string { return f.Name })
|
||||
Expect(paths).To(SatisfyAll(
|
||||
HaveLen(4),
|
||||
ContainElements(".", "The Beatles", "Revolver", "Help!"),
|
||||
))
|
||||
})
|
||||
It("should import all mediafiles", func() {
|
||||
Expect(runScanner(ctx, true)).To(Succeed())
|
||||
|
||||
mfs, _ := ds.MediaFile(ctx).GetAll()
|
||||
paths := slice.Map(mfs, func(f model.MediaFile) string { return f.Title })
|
||||
Expect(paths).To(SatisfyAll(
|
||||
HaveLen(7),
|
||||
ContainElements(
|
||||
"Taxman", "Eleanor Rigby", "I'm Only Sleeping", "Love You To",
|
||||
"Help!", "The Night Before", "You've Got to Hide Your Love Away",
|
||||
),
|
||||
))
|
||||
})
|
||||
It("should import all albums", func() {
|
||||
Expect(runScanner(ctx, true)).To(Succeed())
|
||||
|
||||
albums, _ := ds.Album(ctx).GetAll(model.QueryOptions{Sort: "name"})
|
||||
Expect(albums).To(HaveLen(2))
|
||||
Expect(albums[0]).To(SatisfyAll(
|
||||
HaveField("Name", Equal("Help!")),
|
||||
HaveField("SongCount", Equal(3)),
|
||||
))
|
||||
Expect(albums[1]).To(SatisfyAll(
|
||||
HaveField("Name", Equal("Revolver")),
|
||||
HaveField("SongCount", Equal(4)),
|
||||
))
|
||||
})
|
||||
})
|
||||
When("a file was changed", func() {
|
||||
It("should update the media_file", func() {
|
||||
Expect(runScanner(ctx, true)).To(Succeed())
|
||||
|
||||
mf, err := ds.MediaFile(ctx).GetAll(model.QueryOptions{Filters: squirrel.Eq{"title": "Help!"}})
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(mf[0].Tags).ToNot(HaveKey("barcode"))
|
||||
|
||||
fsys.UpdateTags("The Beatles/Help!/01 - Help!.mp3", _t{"barcode": "123"})
|
||||
Expect(runScanner(ctx, true)).To(Succeed())
|
||||
|
||||
mf, err = ds.MediaFile(ctx).GetAll(model.QueryOptions{Filters: squirrel.Eq{"title": "Help!"}})
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(mf[0].Tags).To(HaveKeyWithValue(model.TagName("barcode"), []string{"123"}))
|
||||
})
|
||||
|
||||
It("should update the album", func() {
|
||||
Expect(runScanner(ctx, true)).To(Succeed())
|
||||
|
||||
albums, err := ds.Album(ctx).GetAll(model.QueryOptions{Filters: squirrel.Eq{"album.name": "Help!"}})
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(albums).ToNot(BeEmpty())
|
||||
Expect(albums[0].Participants.First(model.RoleProducer).Name).To(BeEmpty())
|
||||
Expect(albums[0].SongCount).To(Equal(3))
|
||||
|
||||
fsys.UpdateTags("The Beatles/Help!/01 - Help!.mp3", _t{"producer": "George Martin"})
|
||||
Expect(runScanner(ctx, false)).To(Succeed())
|
||||
|
||||
albums, err = ds.Album(ctx).GetAll(model.QueryOptions{Filters: squirrel.Eq{"album.name": "Help!"}})
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(albums[0].Participants.First(model.RoleProducer).Name).To(Equal("George Martin"))
|
||||
Expect(albums[0].SongCount).To(Equal(3))
|
||||
})
|
||||
})
|
||||
})
|
||||
|
||||
Context("Ignored entries", func() {
|
||||
BeforeEach(func() {
|
||||
revolver := template(_t{"albumartist": "The Beatles", "album": "Revolver", "year": 1966})
|
||||
createFS(fstest.MapFS{
|
||||
"The Beatles/Revolver/01 - Taxman.mp3": revolver(track(1, "Taxman")),
|
||||
"The Beatles/Revolver/._01 - Taxman.mp3": &fstest.MapFile{Data: []byte("garbage data")},
|
||||
})
|
||||
})
|
||||
|
||||
It("should not import the ignored file", func() {
|
||||
Expect(runScanner(ctx, true)).To(Succeed())
|
||||
|
||||
mfs, err := ds.MediaFile(ctx).GetAll()
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(mfs).To(HaveLen(1))
|
||||
for _, mf := range mfs {
|
||||
Expect(mf.Title).To(Equal("Taxman"))
|
||||
Expect(mf.Path).To(Equal("The Beatles/Revolver/01 - Taxman.mp3"))
|
||||
}
|
||||
})
|
||||
})
|
||||
|
||||
Context("Same album in two different folders", func() {
|
||||
BeforeEach(func() {
|
||||
revolver := template(_t{"albumartist": "The Beatles", "album": "Revolver", "year": 1966})
|
||||
createFS(fstest.MapFS{
|
||||
"The Beatles/Revolver/01 - Taxman.mp3": revolver(track(1, "Taxman")),
|
||||
"The Beatles/Revolver2/02 - Eleanor Rigby.mp3": revolver(track(2, "Eleanor Rigby")),
|
||||
})
|
||||
})
|
||||
|
||||
It("should import as one album", func() {
|
||||
Expect(runScanner(ctx, true)).To(Succeed())
|
||||
|
||||
albums, err := ds.Album(ctx).GetAll()
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(albums).To(HaveLen(1))
|
||||
|
||||
mfs, err := ds.MediaFile(ctx).GetAll()
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(mfs).To(HaveLen(2))
|
||||
for _, mf := range mfs {
|
||||
Expect(mf.AlbumID).To(Equal(albums[0].ID))
|
||||
}
|
||||
})
|
||||
})
|
||||
|
||||
Context("Same album, different release dates", func() {
BeforeEach(func() {
help := template(_t{"albumartist": "The Beatles", "album": "Help!", "releasedate": 1965})
help2 := template(_t{"albumartist": "The Beatles", "album": "Help!", "releasedate": 2000})
createFS(fstest.MapFS{
"The Beatles/Help!/01 - Help!.mp3": help(track(1, "Help!")),
"The Beatles/Help! (remaster)/01 - Help!.mp3": help2(track(1, "Help!")),
})
})

It("should import as two distinct albums", func() {
Expect(runScanner(ctx, true)).To(Succeed())

albums, err := ds.Album(ctx).GetAll(model.QueryOptions{Sort: "release_date"})
Expect(err).ToNot(HaveOccurred())
Expect(albums).To(HaveLen(2))
Expect(albums[0]).To(SatisfyAll(
HaveField("Name", Equal("Help!")),
HaveField("ReleaseDate", Equal("1965")),
))
Expect(albums[1]).To(SatisfyAll(
HaveField("Name", Equal("Help!")),
HaveField("ReleaseDate", Equal("2000")),
))
})
})

Describe("Library changes", func() {
var help, revolver func(...map[string]any) *fstest.MapFile
var fsys storagetest.FakeFS
var findByPath func(string) (*model.MediaFile, error)
var beatlesMBID = uuid.NewString()

BeforeEach(func() {
By("Having two MP3 albums")
beatles := _t{
"artist": "The Beatles",
"artistsort": "Beatles, The",
"musicbrainz_artistid": beatlesMBID,
}
help = template(beatles, _t{"album": "Help!", "year": 1965})
revolver = template(beatles, _t{"album": "Revolver", "year": 1966})
fsys = createFS(fstest.MapFS{
"The Beatles/Help!/01 - Help!.mp3": help(track(1, "Help!")),
"The Beatles/Help!/02 - The Night Before.mp3": help(track(2, "The Night Before")),
"The Beatles/Revolver/01 - Taxman.mp3": revolver(track(1, "Taxman")),
"The Beatles/Revolver/02 - Eleanor Rigby.mp3": revolver(track(2, "Eleanor Rigby")),
})

By("Doing a full scan")
Expect(runScanner(ctx, true)).To(Succeed())
Expect(ds.MediaFile(ctx).CountAll()).To(Equal(int64(4)))
findByPath = createFindByPath(ctx, ds)
})

It("adds new files to the library", func() {
fsys.Add("The Beatles/Revolver/03 - I'm Only Sleeping.mp3", revolver(track(3, "I'm Only Sleeping")))

Expect(runScanner(ctx, false)).To(Succeed())
Expect(ds.MediaFile(ctx).CountAll()).To(Equal(int64(5)))
mf, err := findByPath("The Beatles/Revolver/03 - I'm Only Sleeping.mp3")
Expect(err).ToNot(HaveOccurred())
Expect(mf.Title).To(Equal("I'm Only Sleeping"))
})

It("updates tags of a file in the library", func() {
fsys.UpdateTags("The Beatles/Revolver/02 - Eleanor Rigby.mp3", _t{"title": "Eleanor Rigby (remix)"})

Expect(runScanner(ctx, false)).To(Succeed())
Expect(ds.MediaFile(ctx).CountAll()).To(Equal(int64(4)))
mf, _ := findByPath("The Beatles/Revolver/02 - Eleanor Rigby.mp3")
Expect(mf.Title).To(Equal("Eleanor Rigby (remix)"))
})

It("upgrades file with same format in the library", func() {
fsys.Add("The Beatles/Revolver/01 - Taxman.mp3", revolver(track(1, "Taxman", _t{"bitrate": 640})))

Expect(runScanner(ctx, false)).To(Succeed())
Expect(ds.MediaFile(ctx).CountAll()).To(Equal(int64(4)))
mf, _ := findByPath("The Beatles/Revolver/01 - Taxman.mp3")
Expect(mf.BitRate).To(Equal(640))
})

It("detects a file was removed from the library", func() {
By("Removing a file")
fsys.Remove("The Beatles/Revolver/02 - Eleanor Rigby.mp3")

By("Rescanning the library")
Expect(runScanner(ctx, false)).To(Succeed())

By("Checking the file is marked as missing")
Expect(ds.MediaFile(ctx).CountAll(model.QueryOptions{
Filters: squirrel.Eq{"missing": false},
})).To(Equal(int64(3)))
mf, err := findByPath("The Beatles/Revolver/02 - Eleanor Rigby.mp3")
Expect(err).ToNot(HaveOccurred())
Expect(mf.Missing).To(BeTrue())
})

It("detects a file was moved to a different folder", func() {
By("Storing the original ID")
original, err := findByPath("The Beatles/Revolver/02 - Eleanor Rigby.mp3")
Expect(err).ToNot(HaveOccurred())
originalId := original.ID

By("Moving the file to a different folder")
fsys.Move("The Beatles/Revolver/02 - Eleanor Rigby.mp3", "The Beatles/Help!/02 - Eleanor Rigby.mp3")

By("Rescanning the library")
Expect(runScanner(ctx, false)).To(Succeed())

By("Checking the old file is not in the library")
Expect(ds.MediaFile(ctx).CountAll(model.QueryOptions{
Filters: squirrel.Eq{"missing": false},
})).To(Equal(int64(4)))
_, err = findByPath("The Beatles/Revolver/02 - Eleanor Rigby.mp3")
Expect(err).To(MatchError(model.ErrNotFound))

By("Checking the new file is in the library")
Expect(ds.MediaFile(ctx).CountAll(model.QueryOptions{
Filters: squirrel.Eq{"missing": true},
})).To(BeZero())
mf, err := findByPath("The Beatles/Help!/02 - Eleanor Rigby.mp3")
Expect(err).ToNot(HaveOccurred())
Expect(mf.Title).To(Equal("Eleanor Rigby"))
Expect(mf.Missing).To(BeFalse())

By("Checking the new file has the same ID as the original")
Expect(mf.ID).To(Equal(originalId))
})

It("detects a move after a scan is interrupted by an error", func() {
By("Moving the file to a different folder")
fsys.Move("The Beatles/Revolver/01 - Taxman.mp3", "The Beatles/Help!/01 - Taxman.mp3")

By("Interrupting the scan with an error before the move is processed")
mfRepo.GetMissingAndMatchingError = errors.New("I/O read error")
Expect(runScanner(ctx, false)).To(MatchError(ContainSubstring("I/O read error")))

By("Checking that both instances of the file are in the library")
Expect(ds.MediaFile(ctx).CountAll(model.QueryOptions{
Filters: squirrel.Eq{"title": "Taxman"},
})).To(Equal(int64(2)))

By("Rescanning the library without error")
mfRepo.GetMissingAndMatchingError = nil
Expect(runScanner(ctx, false)).To(Succeed())

By("Checking the old file is not in the library")
mfs, err := ds.MediaFile(ctx).GetAll(model.QueryOptions{
Filters: squirrel.Eq{"title": "Taxman"},
})
Expect(err).ToNot(HaveOccurred())
Expect(mfs).To(HaveLen(1))
Expect(mfs[0].Path).To(Equal("The Beatles/Help!/01 - Taxman.mp3"))
})

It("detects file format upgrades", func() {
By("Storing the original ID")
original, err := findByPath("The Beatles/Revolver/02 - Eleanor Rigby.mp3")
Expect(err).ToNot(HaveOccurred())
originalId := original.ID

By("Replacing the file with a different format")
fsys.Move("The Beatles/Revolver/02 - Eleanor Rigby.mp3", "The Beatles/Revolver/02 - Eleanor Rigby.flac")

By("Rescanning the library")
Expect(runScanner(ctx, false)).To(Succeed())

By("Checking the old file is not in the library")
Expect(ds.MediaFile(ctx).CountAll(model.QueryOptions{
Filters: squirrel.Eq{"missing": true},
})).To(BeZero())
_, err = findByPath("The Beatles/Revolver/02 - Eleanor Rigby.mp3")
Expect(err).To(MatchError(model.ErrNotFound))

By("Checking the new file is in the library")
Expect(ds.MediaFile(ctx).CountAll(model.QueryOptions{
Filters: squirrel.Eq{"missing": false},
})).To(Equal(int64(4)))
mf, err := findByPath("The Beatles/Revolver/02 - Eleanor Rigby.flac")
Expect(err).ToNot(HaveOccurred())
Expect(mf.Title).To(Equal("Eleanor Rigby"))
Expect(mf.Missing).To(BeFalse())

By("Checking the new file has the same ID as the original")
Expect(mf.ID).To(Equal(originalId))
})

It("detects old missing tracks being added back", func() {
By("Removing a file")
origFile := fsys.Remove("The Beatles/Revolver/02 - Eleanor Rigby.mp3")

By("Rescanning the library")
Expect(runScanner(ctx, false)).To(Succeed())

By("Checking the file is marked as missing")
Expect(ds.MediaFile(ctx).CountAll(model.QueryOptions{
Filters: squirrel.Eq{"missing": false},
})).To(Equal(int64(3)))
mf, err := findByPath("The Beatles/Revolver/02 - Eleanor Rigby.mp3")
Expect(err).ToNot(HaveOccurred())
Expect(mf.Missing).To(BeTrue())

By("Adding the file back")
fsys.Add("The Beatles/Revolver/02 - Eleanor Rigby.mp3", origFile)

By("Rescanning the library again")
Expect(runScanner(ctx, false)).To(Succeed())

By("Checking the file is not marked as missing")
Expect(ds.MediaFile(ctx).CountAll(model.QueryOptions{
Filters: squirrel.Eq{"missing": false},
})).To(Equal(int64(4)))
mf, err = findByPath("The Beatles/Revolver/02 - Eleanor Rigby.mp3")
Expect(err).ToNot(HaveOccurred())
Expect(mf.Missing).To(BeFalse())

By("Removing it again")
fsys.Remove("The Beatles/Revolver/02 - Eleanor Rigby.mp3")

By("Rescanning the library again")
Expect(runScanner(ctx, false)).To(Succeed())

By("Checking the file is marked as missing")
mf, err = findByPath("The Beatles/Revolver/02 - Eleanor Rigby.mp3")
Expect(err).ToNot(HaveOccurred())
Expect(mf.Missing).To(BeTrue())

By("Adding the file back in a different folder")
fsys.Add("The Beatles/Help!/02 - Eleanor Rigby.mp3", origFile)

By("Rescanning the library once more")
Expect(runScanner(ctx, false)).To(Succeed())

By("Checking the file was found in the new folder")
Expect(ds.MediaFile(ctx).CountAll(model.QueryOptions{
Filters: squirrel.Eq{"missing": false},
})).To(Equal(int64(4)))
mf, err = findByPath("The Beatles/Help!/02 - Eleanor Rigby.mp3")
Expect(err).ToNot(HaveOccurred())
Expect(mf.Missing).To(BeFalse())
})

It("marks tracks as missing when scanning a deleted folder with ScanFolders", func() {
By("Adding a third track to Revolver to have more test data")
fsys.Add("The Beatles/Revolver/03 - I'm Only Sleeping.mp3", revolver(track(3, "I'm Only Sleeping")))
Expect(runScanner(ctx, false)).To(Succeed())

By("Verifying initial state has 5 tracks")
Expect(ds.MediaFile(ctx).CountAll(model.QueryOptions{
Filters: squirrel.Eq{"missing": false},
})).To(Equal(int64(5)))

By("Removing the entire Revolver folder from the filesystem")
fsys.Remove("The Beatles/Revolver/01 - Taxman.mp3")
fsys.Remove("The Beatles/Revolver/02 - Eleanor Rigby.mp3")
fsys.Remove("The Beatles/Revolver/03 - I'm Only Sleeping.mp3")

By("Scanning the parent folder (simulating watcher behavior)")
targets := []model.ScanTarget{
{LibraryID: lib.ID, FolderPath: "The Beatles"},
}
_, err := s.ScanFolders(ctx, false, targets)
Expect(err).To(Succeed())

By("Checking all Revolver tracks are marked as missing")
mf, err := findByPath("The Beatles/Revolver/01 - Taxman.mp3")
Expect(err).ToNot(HaveOccurred())
Expect(mf.Missing).To(BeTrue())

mf, err = findByPath("The Beatles/Revolver/02 - Eleanor Rigby.mp3")
Expect(err).ToNot(HaveOccurred())
Expect(mf.Missing).To(BeTrue())

mf, err = findByPath("The Beatles/Revolver/03 - I'm Only Sleeping.mp3")
Expect(err).ToNot(HaveOccurred())
Expect(mf.Missing).To(BeTrue())

By("Checking the Help! tracks are not affected")
mf, err = findByPath("The Beatles/Help!/01 - Help!.mp3")
Expect(err).ToNot(HaveOccurred())
Expect(mf.Missing).To(BeFalse())

mf, err = findByPath("The Beatles/Help!/02 - The Night Before.mp3")
Expect(err).ToNot(HaveOccurred())
Expect(mf.Missing).To(BeFalse())

By("Verifying only 2 non-missing tracks remain (Help! tracks)")
Expect(ds.MediaFile(ctx).CountAll(model.QueryOptions{
Filters: squirrel.Eq{"missing": false},
})).To(Equal(int64(2)))
})

It("does not override artist fields when importing an undertagged file", func() {
By("Making sure the artist in the DB contains the MBID and sort name")
aa, err := ds.Artist(ctx).GetAll(model.QueryOptions{
Filters: squirrel.Eq{"name": "The Beatles"},
})
Expect(err).ToNot(HaveOccurred())
Expect(aa).To(HaveLen(1))
Expect(aa[0].Name).To(Equal("The Beatles"))
Expect(aa[0].MbzArtistID).To(Equal(beatlesMBID))
Expect(aa[0].SortArtistName).To(Equal("Beatles, The"))

By("Adding a new undertagged file (no MBID or sort name)")
newTrack := revolver(track(4, "Love You Too",
_t{"artist": "The Beatles", "musicbrainz_artistid": "", "artistsort": ""}),
)
fsys.Add("The Beatles/Revolver/04 - Love You Too.mp3", newTrack)

By("Doing a partial scan")
Expect(runScanner(ctx, false)).To(Succeed())

By("Asserting the MediaFile has the artist name, but not the MBID or sort name")
mf, err := findByPath("The Beatles/Revolver/04 - Love You Too.mp3")
Expect(err).ToNot(HaveOccurred())
Expect(mf.Title).To(Equal("Love You Too"))
Expect(mf.AlbumArtist).To(Equal("The Beatles"))
Expect(mf.MbzAlbumArtistID).To(BeEmpty())
Expect(mf.SortArtistName).To(BeEmpty())

By("Making sure the artist in the DB has not changed")
aa, err = ds.Artist(ctx).GetAll(model.QueryOptions{
Filters: squirrel.Eq{"name": "The Beatles"},
})
Expect(err).ToNot(HaveOccurred())
Expect(aa).To(HaveLen(1))
Expect(aa[0].Name).To(Equal("The Beatles"))
Expect(aa[0].MbzArtistID).To(Equal(beatlesMBID))
Expect(aa[0].SortArtistName).To(Equal("Beatles, The"))
})

Context("When PurgeMissing is configured", func() {
When("PurgeMissing is set to 'never'", func() {
BeforeEach(func() {
DeferCleanup(configtest.SetupConfig())
conf.Server.Scanner.PurgeMissing = consts.PurgeMissingNever
})

It("should mark files as missing but not delete them", func() {
By("Running initial scan")
Expect(runScanner(ctx, true)).To(Succeed())

By("Removing a file")
fsys.Remove("The Beatles/Revolver/02 - Eleanor Rigby.mp3")

By("Running another scan")
Expect(runScanner(ctx, true)).To(Succeed())

By("Checking files are marked as missing but not deleted")
count, err := ds.MediaFile(ctx).CountAll(model.QueryOptions{
Filters: squirrel.Eq{"missing": true},
})
Expect(err).ToNot(HaveOccurred())
Expect(count).To(Equal(int64(1)))

mf, err := findByPath("The Beatles/Revolver/02 - Eleanor Rigby.mp3")
Expect(err).ToNot(HaveOccurred())
Expect(mf.Missing).To(BeTrue())
})
})

When("PurgeMissing is set to 'always'", func() {
BeforeEach(func() {
conf.Server.Scanner.PurgeMissing = consts.PurgeMissingAlways
})

It("should purge missing files on any scan", func() {
By("Running initial scan")
Expect(runScanner(ctx, false)).To(Succeed())

By("Removing a file")
fsys.Remove("The Beatles/Revolver/02 - Eleanor Rigby.mp3")

By("Running an incremental scan")
Expect(runScanner(ctx, false)).To(Succeed())

By("Checking missing files are deleted")
count, err := ds.MediaFile(ctx).CountAll(model.QueryOptions{
Filters: squirrel.Eq{"missing": true},
})
Expect(err).ToNot(HaveOccurred())
Expect(count).To(BeZero())

_, err = findByPath("The Beatles/Revolver/02 - Eleanor Rigby.mp3")
Expect(err).To(MatchError(model.ErrNotFound))
})
})

When("PurgeMissing is set to 'full'", func() {
BeforeEach(func() {
conf.Server.Scanner.PurgeMissing = consts.PurgeMissingFull
})

It("should not purge missing files on incremental scans", func() {
By("Running initial scan")
Expect(runScanner(ctx, true)).To(Succeed())

By("Removing a file")
fsys.Remove("The Beatles/Revolver/02 - Eleanor Rigby.mp3")

By("Running an incremental scan")
Expect(runScanner(ctx, false)).To(Succeed())

By("Checking files are marked as missing but not deleted")
count, err := ds.MediaFile(ctx).CountAll(model.QueryOptions{
Filters: squirrel.Eq{"missing": true},
})
Expect(err).ToNot(HaveOccurred())
Expect(count).To(Equal(int64(1)))

mf, err := findByPath("The Beatles/Revolver/02 - Eleanor Rigby.mp3")
Expect(err).ToNot(HaveOccurred())
Expect(mf.Missing).To(BeTrue())
})

It("should purge missing files only on full scans", func() {
By("Running initial scan")
Expect(runScanner(ctx, true)).To(Succeed())

By("Removing a file")
fsys.Remove("The Beatles/Revolver/02 - Eleanor Rigby.mp3")

By("Running a full scan")
Expect(runScanner(ctx, true)).To(Succeed())

By("Checking missing files are deleted")
count, err := ds.MediaFile(ctx).CountAll(model.QueryOptions{
Filters: squirrel.Eq{"missing": true},
})
Expect(err).ToNot(HaveOccurred())
Expect(count).To(BeZero())

_, err = findByPath("The Beatles/Revolver/02 - Eleanor Rigby.mp3")
Expect(err).To(MatchError(model.ErrNotFound))
})
})
})
})

Describe("RefreshStats", func() {
var refreshStatsCalls []bool
var fsys storagetest.FakeFS
var help func(...map[string]any) *fstest.MapFile

BeforeEach(func() {
refreshStatsCalls = nil

// Create a mock artist repository that tracks RefreshStats calls
originalArtistRepo := ds.RealDS.Artist(ctx)
ds.MockedArtist = &testArtistRepo{
ArtistRepository: originalArtistRepo,
callTracker: &refreshStatsCalls,
}

// Create a simple filesystem for testing
help = template(_t{"albumartist": "The Beatles", "album": "Help!", "year": 1965})
fsys = createFS(fstest.MapFS{
"The Beatles/Help!/01 - Help!.mp3": help(track(1, "Help!")),
})
})

It("should call RefreshStats with allArtists=true for full scans", func() {
Expect(runScanner(ctx, true)).To(Succeed())

Expect(refreshStatsCalls).To(HaveLen(1))
Expect(refreshStatsCalls[0]).To(BeTrue(), "RefreshStats should be called with allArtists=true for full scans")
})

It("should call RefreshStats with allArtists=false for incremental scans", func() {
// First do a full scan to set up the data
Expect(runScanner(ctx, true)).To(Succeed())

// Reset the tracker to only track the incremental scan
refreshStatsCalls = nil

// Add a new file to trigger change detection
fsys.Add("The Beatles/Help!/02 - The Night Before.mp3", help(track(2, "The Night Before")))

// Do an incremental scan
Expect(runScanner(ctx, false)).To(Succeed())

Expect(refreshStatsCalls).To(HaveLen(1))
Expect(refreshStatsCalls[0]).To(BeFalse(), "RefreshStats should be called with allArtists=false for incremental scans")
})

It("should update artist stats during quick scans when new albums are added", func() {
// Don't use the mocked artist repo for this test - we need the real one
ds.MockedArtist = nil

By("Initial scan with one album")
Expect(runScanner(ctx, true)).To(Succeed())

// Verify initial artist stats - should have 1 album, 1 song
artists, err := ds.Artist(ctx).GetAll(model.QueryOptions{
Filters: squirrel.Eq{"name": "The Beatles"},
})
Expect(err).ToNot(HaveOccurred())
Expect(artists).To(HaveLen(1))
artist := artists[0]
Expect(artist.AlbumCount).To(Equal(1)) // 1 album
Expect(artist.SongCount).To(Equal(1)) // 1 song

By("Adding files to an existing directory during incremental scan")
// Add more files to the existing Help! album - this should trigger an artist stats update during the incremental scan
fsys.Add("The Beatles/Help!/02 - The Night Before.mp3", help(track(2, "The Night Before")))
fsys.Add("The Beatles/Help!/03 - You've Got to Hide Your Love Away.mp3", help(track(3, "You've Got to Hide Your Love Away")))

// Do a quick scan (incremental)
Expect(runScanner(ctx, false)).To(Succeed())

By("Verifying artist stats were updated correctly")
// Fetch the artist again to check updated stats
artists, err = ds.Artist(ctx).GetAll(model.QueryOptions{
Filters: squirrel.Eq{"name": "The Beatles"},
})
Expect(err).ToNot(HaveOccurred())
Expect(artists).To(HaveLen(1))
updatedArtist := artists[0]

// Should now have 1 album and 3 songs total
// This is the key assertion - artist stats are updated during quick scans
Expect(updatedArtist.AlbumCount).To(Equal(1)) // 1 album
Expect(updatedArtist.SongCount).To(Equal(3)) // 3 songs

// Also verify that role-specific stats are updated (albumartist role)
Expect(updatedArtist.Stats).To(HaveKey(model.RoleAlbumArtist))
albumArtistStats := updatedArtist.Stats[model.RoleAlbumArtist]
Expect(albumArtistStats.AlbumCount).To(Equal(1)) // 1 album
Expect(albumArtistStats.SongCount).To(Equal(3)) // 3 songs
})
})
})

func createFindByPath(ctx context.Context, ds model.DataStore) func(string) (*model.MediaFile, error) {
return func(path string) (*model.MediaFile, error) {
list, err := ds.MediaFile(ctx).FindByPaths([]string{path})
if err != nil {
return nil, err
}
if len(list) == 0 {
return nil, model.ErrNotFound
}
return &list[0], nil
}
}

type mockMediaFileRepo struct {
model.MediaFileRepository
GetMissingAndMatchingError error
}

func (m *mockMediaFileRepo) GetMissingAndMatching(libId int) (model.MediaFileCursor, error) {
if m.GetMissingAndMatchingError != nil {
return nil, m.GetMissingAndMatchingError
}
return m.MediaFileRepository.GetMissingAndMatching(libId)
}

type testArtistRepo struct {
model.ArtistRepository
callTracker *[]bool
}

func (m *testArtistRepo) RefreshStats(allArtists bool) (int64, error) {
*m.callTracker = append(*m.callTracker, allArtists)
return m.ArtistRepository.RefreshStats(allArtists)
}

scanner/walk_dir_tree.go (new file, 254 lines)
@@ -0,0 +1,254 @@
package scanner

import (
"context"
"io/fs"
"maps"
"path"
"slices"
"sort"
"strings"

"github.com/navidrome/navidrome/conf"
"github.com/navidrome/navidrome/log"
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/utils"
)

// walkDirTree recursively walks the directory tree starting from the given targetFolders.
// If no targetFolders are provided, it starts from the root folder (".").
// It returns a channel of folderEntry pointers representing each folder found.
func walkDirTree(ctx context.Context, job *scanJob, targetFolders ...string) (<-chan *folderEntry, error) {
results := make(chan *folderEntry)
folders := targetFolders
if len(targetFolders) == 0 {
// No specific folders provided, scan the root folder
folders = []string{"."}
}
go func() {
defer close(results)
for _, folderPath := range folders {
if utils.IsCtxDone(ctx) {
return
}

// Check if target folder exists before walking it.
// If it doesn't exist (e.g., deleted between watcher detection and scan execution),
// skip it so it remains in job.lastUpdates and gets handled in following steps
_, err := fs.Stat(job.fs, folderPath)
if err != nil {
log.Warn(ctx, "Scanner: Target folder does not exist.", "path", folderPath, err)
continue
}

// Create checker and push patterns from root to this folder
checker := newIgnoreChecker(job.fs)
err = checker.PushAllParents(ctx, folderPath)
if err != nil {
log.Error(ctx, "Scanner: Error pushing ignore patterns for target folder", "path", folderPath, err)
continue
}

// Recursively walk this folder and all its children
err = walkFolder(ctx, job, folderPath, checker, results)
if err != nil {
log.Error(ctx, "Scanner: Error walking target folder", "path", folderPath, err)
continue
}
}
log.Debug(ctx, "Scanner: Finished reading target folders", "lib", job.lib.Name, "path", job.lib.Path, "numFolders", job.numFolders.Load())
}()
return results, nil
}

func walkFolder(ctx context.Context, job *scanJob, currentFolder string, checker *IgnoreChecker, results chan<- *folderEntry) error {
// Push patterns for this folder onto the stack
_ = checker.Push(ctx, currentFolder)
defer checker.Pop() // Pop patterns when leaving this folder

folder, children, err := loadDir(ctx, job, currentFolder, checker)
if err != nil {
log.Warn(ctx, "Scanner: Error loading dir. Skipping", "path", currentFolder, err)
return nil
}
for _, c := range children {
err := walkFolder(ctx, job, c, checker, results)
if err != nil {
return err
}
}

dir := path.Clean(currentFolder)
log.Trace(ctx, "Scanner: Found directory", "path", dir, "audioFiles", maps.Keys(folder.audioFiles),
"images", maps.Keys(folder.imageFiles), "playlists", folder.numPlaylists, "imagesUpdatedAt", folder.imagesUpdatedAt,
"updTime", folder.updTime, "modTime", folder.modTime, "numChildren", len(children))
folder.path = dir
folder.elapsed.Start()

results <- folder

return nil
}

func loadDir(ctx context.Context, job *scanJob, dirPath string, checker *IgnoreChecker) (folder *folderEntry, children []string, err error) {
// Check if the directory exists before creating the folder entry.
// This is important to avoid removing the folder from lastUpdates if it doesn't exist
dirInfo, err := fs.Stat(job.fs, dirPath)
if err != nil {
log.Warn(ctx, "Scanner: Error stating dir", "path", dirPath, err)
return nil, nil, err
}

// Now that we know the folder exists, create the entry (which removes it from lastUpdates)
folder = job.createFolderEntry(dirPath)
folder.modTime = dirInfo.ModTime()

dir, err := job.fs.Open(dirPath)
if err != nil {
log.Warn(ctx, "Scanner: Error opening directory", "path", dirPath, err)
return folder, children, err
}
defer dir.Close()
dirFile, ok := dir.(fs.ReadDirFile)
if !ok {
log.Error(ctx, "Not a directory", "path", dirPath)
return folder, children, err
}

entries := fullReadDir(ctx, dirFile)
children = make([]string, 0, len(entries))
for _, entry := range entries {
entryPath := path.Join(dirPath, entry.Name())
if checker.ShouldIgnore(ctx, entryPath) {
log.Trace(ctx, "Scanner: Ignoring entry", "path", entryPath)
continue
}
if isEntryIgnored(entry.Name()) {
continue
}
if ctx.Err() != nil {
return folder, children, ctx.Err()
}
isDir, err := isDirOrSymlinkToDir(job.fs, dirPath, entry)
// Skip invalid symlinks
if err != nil {
log.Warn(ctx, "Scanner: Invalid symlink", "dir", entryPath, err)
continue
}
if isDir && !isDirIgnored(entry.Name()) && isDirReadable(ctx, job.fs, entryPath) {
children = append(children, entryPath)
folder.numSubFolders++
} else {
fileInfo, err := entry.Info()
if err != nil {
log.Warn(ctx, "Scanner: Error getting fileInfo", "name", entry.Name(), err)
return folder, children, err
}
if fileInfo.ModTime().After(folder.modTime) {
folder.modTime = fileInfo.ModTime()
}
switch {
case model.IsAudioFile(entry.Name()):
folder.audioFiles[entry.Name()] = entry
case model.IsValidPlaylist(entry.Name()):
folder.numPlaylists++
case model.IsImageFile(entry.Name()):
folder.imageFiles[entry.Name()] = entry
folder.imagesUpdatedAt = utils.TimeNewest(folder.imagesUpdatedAt, fileInfo.ModTime(), folder.modTime)
}
}
}
return folder, children, nil
}

// fullReadDir reads all files in the folder, skipping the ones with errors.
// It also detects when it is "stuck" with an error in the same directory over and over.
// In this case, it stops and returns whatever it was able to read until it got stuck.
// See discussion here: https://github.com/navidrome/navidrome/issues/1164#issuecomment-881922850
func fullReadDir(ctx context.Context, dir fs.ReadDirFile) []fs.DirEntry {
var allEntries []fs.DirEntry
var prevErrStr = ""
for {
if ctx.Err() != nil {
return nil
}
entries, err := dir.ReadDir(-1)
allEntries = append(allEntries, entries...)
if err == nil {
break
}
log.Warn(ctx, "Skipping DirEntry", err)
if prevErrStr == err.Error() {
log.Error(ctx, "Scanner: Duplicate DirEntry failure, bailing", err)
break
}
prevErrStr = err.Error()
}
sort.Slice(allEntries, func(i, j int) bool { return allEntries[i].Name() < allEntries[j].Name() })
return allEntries
}

// isDirOrSymlinkToDir returns true if and only if the dirEnt represents a file
// system directory, or a symbolic link to a directory. Note that if the dirEnt
// is not a directory but is a symbolic link, this method will resolve by
// sending a request to the operating system to follow the symbolic link.
// Originally copied from github.com/karrick/godirwalk, modified to use dirEntry for
// efficiency in go 1.16 and beyond
func isDirOrSymlinkToDir(fsys fs.FS, baseDir string, dirEnt fs.DirEntry) (bool, error) {
if dirEnt.IsDir() {
return true, nil
}
if dirEnt.Type()&fs.ModeSymlink == 0 {
return false, nil
}
// If symlinks are disabled, return false for symlinks
if !conf.Server.Scanner.FollowSymlinks {
return false, nil
}
// Does this symlink point to a directory?
fileInfo, err := fs.Stat(fsys, path.Join(baseDir, dirEnt.Name()))
if err != nil {
return false, err
}
return fileInfo.IsDir(), nil
}

// isDirReadable returns true if the directory at dirPath is readable
func isDirReadable(ctx context.Context, fsys fs.FS, dirPath string) bool {
dir, err := fsys.Open(dirPath)
if err != nil {
log.Warn(ctx, "Scanner: Skipping unreadable directory", "path", dirPath, err)
return false
}
err = dir.Close()
if err != nil {
log.Warn(ctx, "Scanner: Error closing directory", "path", dirPath, err)
}
return true
}

// List of special directories to ignore
|
||||
var ignoredDirs = []string{
|
||||
"$RECYCLE.BIN",
|
||||
"#snapshot",
|
||||
"@Recycle",
|
||||
"@Recently-Snapshot",
|
||||
".streams",
|
||||
"lost+found",
|
||||
}
|
||||
|
||||
// isDirIgnored returns true if the directory represented by dirEnt should be ignored
|
||||
func isDirIgnored(name string) bool {
|
||||
// allows Album folders for albums which eg start with ellipses
|
||||
if strings.HasPrefix(name, ".") && !strings.HasPrefix(name, "..") {
|
||||
return true
|
||||
}
|
||||
if slices.ContainsFunc(ignoredDirs, func(s string) bool { return strings.EqualFold(s, name) }) {
|
||||
return true
|
||||
}
|
||||
return false
|
||||
}
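The hidden-directory rule above is easy to misread: a single leading `.` hides a folder, but a `..` prefix is deliberately let through so album names that start with ellipses survive. A minimal standalone sketch of the same logic (`ignoredNames` and `shouldIgnoreDir` are abbreviated stand-ins for the package's `ignoredDirs` and `isDirIgnored`):

```go
package main

import (
	"fmt"
	"slices"
	"strings"
)

// Abbreviated stand-in for the package's ignoredDirs list.
var ignoredNames = []string{"$RECYCLE.BIN", "#snapshot", "lost+found"}

// shouldIgnoreDir mirrors isDirIgnored: hide dot-dirs, but let names
// starting with ".." (e.g. ellipses) through, then check the
// case-insensitive special list.
func shouldIgnoreDir(name string) bool {
	if strings.HasPrefix(name, ".") && !strings.HasPrefix(name, "..") {
		return true
	}
	return slices.ContainsFunc(ignoredNames, func(s string) bool {
		return strings.EqualFold(s, name)
	})
}

func main() {
	for _, n := range []string{".hidden", "...And Justice for All", "$Recycle.Bin", "Albums"} {
		fmt.Printf("%-25s ignored=%v\n", n, shouldIgnoreDir(n))
	}
}
```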

func isEntryIgnored(name string) bool {
	return strings.HasPrefix(name, ".") && !strings.HasPrefix(name, "..")
}
414
scanner/walk_dir_tree_test.go
Normal file
@@ -0,0 +1,414 @@
package scanner

import (
	"context"
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
	"testing/fstest"

	"github.com/navidrome/navidrome/conf"
	"github.com/navidrome/navidrome/conf/configtest"
	"github.com/navidrome/navidrome/core/storage"
	"github.com/navidrome/navidrome/model"
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
	"golang.org/x/sync/errgroup"
)

var _ = Describe("walk_dir_tree", func() {
	Describe("walkDirTree", func() {
		var (
			fsys storage.MusicFS
			job  *scanJob
			ctx  context.Context
		)

		Context("full library", func() {
			BeforeEach(func() {
				DeferCleanup(configtest.SetupConfig())
				ctx = GinkgoT().Context()
				fsys = &mockMusicFS{
					FS: fstest.MapFS{
						"root/a/.ndignore":       {Data: []byte("ignored/*")},
						"root/a/f1.mp3":          {},
						"root/a/f2.mp3":          {},
						"root/a/ignored/bad.mp3": {},
						"root/b/cover.jpg":       {},
						"root/c/f3":              {},
						"root/d":                 {},
						"root/d/.ndignore":       {},
						"root/d/f1.mp3":          {},
						"root/d/f2.mp3":          {},
						"root/d/f3.mp3":          {},
						"root/e/original/f1.mp3": {},
						"root/e/symlink":         {Mode: fs.ModeSymlink, Data: []byte("original")},
					},
				}
				job = &scanJob{
					fs:  fsys,
					lib: model.Library{Path: "/music"},
				}
			})

			// Helper function to call walkDirTree and collect folders from the results channel
			getFolders := func() map[string]*folderEntry {
				results, err := walkDirTree(ctx, job)
				Expect(err).ToNot(HaveOccurred())

				folders := map[string]*folderEntry{}
				g := errgroup.Group{}
				g.Go(func() error {
					for folder := range results {
						folders[folder.path] = folder
					}
					return nil
				})
				_ = g.Wait()
				return folders
			}

			DescribeTable("symlink handling",
				func(followSymlinks bool, expectedFolderCount int) {
					conf.Server.Scanner.FollowSymlinks = followSymlinks
					folders := getFolders()

					Expect(folders).To(HaveLen(expectedFolderCount + 2)) // +2 for `.` and `root`

					// Basic folder structure checks
					Expect(folders["root/a"].audioFiles).To(SatisfyAll(
						HaveLen(2),
						HaveKey("f1.mp3"),
						HaveKey("f2.mp3"),
					))
					Expect(folders["root/a"].imageFiles).To(BeEmpty())
					Expect(folders["root/b"].audioFiles).To(BeEmpty())
					Expect(folders["root/b"].imageFiles).To(SatisfyAll(
						HaveLen(1),
						HaveKey("cover.jpg"),
					))
					Expect(folders["root/c"].audioFiles).To(BeEmpty())
					Expect(folders["root/c"].imageFiles).To(BeEmpty())
					Expect(folders).ToNot(HaveKey("root/d"))

					// Symlink specific checks
					if followSymlinks {
						Expect(folders["root/e/symlink"].audioFiles).To(HaveLen(1))
					} else {
						Expect(folders).ToNot(HaveKey("root/e/symlink"))
					}
				},
				Entry("with symlinks enabled", true, 7),
				Entry("with symlinks disabled", false, 6),
			)
		})

		Context("with target folders", func() {
			BeforeEach(func() {
				DeferCleanup(configtest.SetupConfig())
				ctx = GinkgoT().Context()
				fsys = &mockMusicFS{
					FS: fstest.MapFS{
						"Artist/Album1/track1.mp3":      {},
						"Artist/Album1/track2.mp3":      {},
						"Artist/Album2/track1.mp3":      {},
						"Artist/Album2/track2.mp3":      {},
						"Artist/Album2/Sub/track3.mp3":  {},
						"OtherArtist/Album3/track1.mp3": {},
					},
				}
				job = &scanJob{
					fs:  fsys,
					lib: model.Library{Path: "/music"},
				}
			})

			It("should recursively walk all subdirectories of target folders", func() {
				results, err := walkDirTree(ctx, job, "Artist")
				Expect(err).ToNot(HaveOccurred())

				folders := map[string]*folderEntry{}
				g := errgroup.Group{}
				g.Go(func() error {
					for folder := range results {
						folders[folder.path] = folder
					}
					return nil
				})
				_ = g.Wait()

				// Should include the target folder and all its descendants
				Expect(folders).To(SatisfyAll(
					HaveKey("Artist"),
					HaveKey("Artist/Album1"),
					HaveKey("Artist/Album2"),
					HaveKey("Artist/Album2/Sub"),
				))

				// Should not include folders outside the target
				Expect(folders).ToNot(HaveKey("OtherArtist"))
				Expect(folders).ToNot(HaveKey("OtherArtist/Album3"))

				// Verify audio files are present
				Expect(folders["Artist/Album1"].audioFiles).To(HaveLen(2))
				Expect(folders["Artist/Album2"].audioFiles).To(HaveLen(2))
				Expect(folders["Artist/Album2/Sub"].audioFiles).To(HaveLen(1))
			})

			It("should handle multiple target folders", func() {
				results, err := walkDirTree(ctx, job, "Artist/Album1", "OtherArtist")
				Expect(err).ToNot(HaveOccurred())

				folders := map[string]*folderEntry{}
				g := errgroup.Group{}
				g.Go(func() error {
					for folder := range results {
						folders[folder.path] = folder
					}
					return nil
				})
				_ = g.Wait()

				// Should include both target folders and their descendants
				Expect(folders).To(SatisfyAll(
					HaveKey("Artist/Album1"),
					HaveKey("OtherArtist"),
					HaveKey("OtherArtist/Album3"),
				))

				// Should not include other folders
				Expect(folders).ToNot(HaveKey("Artist"))
				Expect(folders).ToNot(HaveKey("Artist/Album2"))
				Expect(folders).ToNot(HaveKey("Artist/Album2/Sub"))
			})

			It("should skip non-existent target folders and preserve them in lastUpdates", func() {
				// Setup job with lastUpdates for both existing and non-existing folders
				job.lastUpdates = map[string]model.FolderUpdateInfo{
					model.FolderID(job.lib, "Artist/Album1"):             {},
					model.FolderID(job.lib, "NonExistent/DeletedFolder"): {},
					model.FolderID(job.lib, "OtherArtist/Album3"):        {},
				}

				// Try to scan an existing folder and a non-existing folder
				results, err := walkDirTree(ctx, job, "Artist/Album1", "NonExistent/DeletedFolder")
				Expect(err).ToNot(HaveOccurred())

				// Collect results
				folders := map[string]struct{}{}
				for folder := range results {
					folders[folder.path] = struct{}{}
				}

				// Should only include the existing folder
				Expect(folders).To(HaveKey("Artist/Album1"))
				Expect(folders).ToNot(HaveKey("NonExistent/DeletedFolder"))

				// The non-existent folder should still be in lastUpdates (not removed by popLastUpdate)
				Expect(job.lastUpdates).To(HaveKey(model.FolderID(job.lib, "NonExistent/DeletedFolder")))

				// The existing folder should have been removed from lastUpdates
				Expect(job.lastUpdates).ToNot(HaveKey(model.FolderID(job.lib, "Artist/Album1")))

				// Folders not in targets should remain in lastUpdates
				Expect(job.lastUpdates).To(HaveKey(model.FolderID(job.lib, "OtherArtist/Album3")))
			})
		})
	})

	Describe("helper functions", func() {
		dir, _ := os.Getwd()
		fsys := os.DirFS(dir)
		baseDir := filepath.Join("tests", "fixtures")

		Describe("isDirOrSymlinkToDir", func() {
			BeforeEach(func() {
				DeferCleanup(configtest.SetupConfig())
			})

			Context("with symlinks enabled", func() {
				BeforeEach(func() {
					conf.Server.Scanner.FollowSymlinks = true
				})

				DescribeTable("returns expected result",
					func(dirName string, expected bool) {
						dirEntry := getDirEntry("tests/fixtures", dirName)
						Expect(isDirOrSymlinkToDir(fsys, baseDir, dirEntry)).To(Equal(expected))
					},
					Entry("normal dir", "empty_folder", true),
					Entry("symlink to dir", "symlink2dir", true),
					Entry("regular file", "test.mp3", false),
					Entry("symlink to file", "symlink", false),
				)
			})

			Context("with symlinks disabled", func() {
				BeforeEach(func() {
					conf.Server.Scanner.FollowSymlinks = false
				})

				DescribeTable("returns expected result",
					func(dirName string, expected bool) {
						dirEntry := getDirEntry("tests/fixtures", dirName)
						Expect(isDirOrSymlinkToDir(fsys, baseDir, dirEntry)).To(Equal(expected))
					},
					Entry("normal dir", "empty_folder", true),
					Entry("symlink to dir", "symlink2dir", false),
					Entry("regular file", "test.mp3", false),
					Entry("symlink to file", "symlink", false),
				)
			})
		})

		Describe("isDirIgnored", func() {
			DescribeTable("returns expected result",
				func(dirName string, expected bool) {
					Expect(isDirIgnored(dirName)).To(Equal(expected))
				},
				Entry("normal dir", "empty_folder", false),
				Entry("hidden dir", ".hidden_folder", true),
				Entry("dir starting with ellipsis", "...unhidden_folder", false),
				Entry("recycle bin", "$Recycle.Bin", true),
				Entry("snapshot dir", "#snapshot", true),
			)
		})

		Describe("fullReadDir", func() {
			var (
				fsys fakeFS
				ctx  context.Context
			)

			BeforeEach(func() {
				ctx = GinkgoT().Context()
				fsys = fakeFS{MapFS: fstest.MapFS{
					"root/a/f1": {},
					"root/b/f2": {},
					"root/c/f3": {},
				}}
			})

			DescribeTable("reading directory entries",
				func(failOn string, expectedErr error, expectedNames []string) {
					fsys.failOn = failOn
					fsys.err = expectedErr
					dir, _ := fsys.Open("root")
					entries := fullReadDir(ctx, dir.(fs.ReadDirFile))
					Expect(entries).To(HaveLen(len(expectedNames)))
					for i, name := range expectedNames {
						Expect(entries[i].Name()).To(Equal(name))
					}
				},
				Entry("reads all entries", "", nil, []string{"a", "b", "c"}),
				Entry("skips entries with permission error", "b", nil, []string{"a", "c"}),
				Entry("aborts on fs.ErrNotExist", "", fs.ErrNotExist, []string{}),
			)
		})
	})
})

type fakeFS struct {
	fstest.MapFS
	failOn string
	err    error
}

func (f *fakeFS) Open(name string) (fs.File, error) {
	dir, err := f.MapFS.Open(name)
	return &fakeDirFile{File: dir, fail: f.failOn, err: f.err}, err
}

type fakeDirFile struct {
	fs.File
	entries []fs.DirEntry
	pos     int
	fail    string
	err     error
}

// Only works with n == -1
func (fd *fakeDirFile) ReadDir(int) ([]fs.DirEntry, error) {
	if fd.err != nil {
		return nil, fd.err
	}
	if fd.entries == nil {
		fd.entries, _ = fd.File.(fs.ReadDirFile).ReadDir(-1)
	}
	var dirs []fs.DirEntry
	for {
		if fd.pos >= len(fd.entries) {
			break
		}
		e := fd.entries[fd.pos]
		fd.pos++
		if e.Name() == fd.fail {
			return dirs, &fs.PathError{Op: "lstat", Path: e.Name(), Err: fs.ErrPermission}
		}
		dirs = append(dirs, e)
	}
	return dirs, nil
}

func getDirEntry(baseDir, name string) os.DirEntry {
	dirEntries, _ := os.ReadDir(baseDir)
	for _, entry := range dirEntries {
		if entry.Name() == name {
			return entry
		}
	}
	panic(fmt.Sprintf("Could not find %s in %s", name, baseDir))
}

// mockMusicFS is a mock implementation of the MusicFS interface that supports symlinks
type mockMusicFS struct {
	storage.MusicFS
	fs.FS
}

// Open resolves symlinks
func (m *mockMusicFS) Open(name string) (fs.File, error) {
	f, err := m.FS.Open(name)
	if err != nil {
		return nil, err
	}

	info, err := f.Stat()
	if err != nil {
		f.Close()
		return nil, err
	}

	if info.Mode()&fs.ModeSymlink != 0 {
		// For symlinks, read the target path from the Data field
		target := string(m.FS.(fstest.MapFS)[name].Data)
		f.Close()
		return m.FS.Open(target)
	}

	return f, nil
}

// Stat uses Open to resolve symlinks
func (m *mockMusicFS) Stat(name string) (fs.FileInfo, error) {
	f, err := m.Open(name)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	return f.Stat()
}

// ReadDir uses Open to resolve symlinks
func (m *mockMusicFS) ReadDir(name string) ([]fs.DirEntry, error) {
	f, err := m.Open(name)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	if dirFile, ok := f.(fs.ReadDirFile); ok {
		return dirFile.ReadDir(-1)
	}
	return nil, fmt.Errorf("not a directory")
}
335
scanner/watcher.go
Normal file
@@ -0,0 +1,335 @@
package scanner

import (
	"context"
	"fmt"
	"io/fs"
	"path/filepath"
	"sync"
	"time"

	"github.com/navidrome/navidrome/conf"
	"github.com/navidrome/navidrome/core/storage"
	"github.com/navidrome/navidrome/log"
	"github.com/navidrome/navidrome/model"
	"github.com/navidrome/navidrome/utils/singleton"
)

type Watcher interface {
	Run(ctx context.Context) error
	Watch(ctx context.Context, lib *model.Library) error
	StopWatching(ctx context.Context, libraryID int) error
}

type watcher struct {
	mainCtx         context.Context
	ds              model.DataStore
	scanner         model.Scanner
	triggerWait     time.Duration
	watcherNotify   chan scanNotification
	libraryWatchers map[int]*libraryWatcherInstance
	mu              sync.RWMutex
}

type libraryWatcherInstance struct {
	library *model.Library
	cancel  context.CancelFunc
}

type scanNotification struct {
	Library    *model.Library
	FolderPath string
}

// GetWatcher returns the watcher singleton
func GetWatcher(ds model.DataStore, s model.Scanner) Watcher {
	return singleton.GetInstance(func() *watcher {
		return &watcher{
			ds:              ds,
			scanner:         s,
			triggerWait:     conf.Server.Scanner.WatcherWait,
			watcherNotify:   make(chan scanNotification, 1),
			libraryWatchers: make(map[int]*libraryWatcherInstance),
		}
	})
}

func (w *watcher) Run(ctx context.Context) error {
	// Keep the main context to be used in all watchers added later
	w.mainCtx = ctx

	// Start watchers for all existing libraries
	libs, err := w.ds.Library(ctx).GetAll()
	if err != nil {
		return fmt.Errorf("getting libraries: %w", err)
	}

	for _, lib := range libs {
		if err := w.Watch(ctx, &lib); err != nil {
			log.Warn(ctx, "Failed to start watcher for existing library", "libraryID", lib.ID, "name", lib.Name, "path", lib.Path, err)
		}
	}

	// Main scan triggering loop
	trigger := time.NewTimer(w.triggerWait)
	trigger.Stop()
	targets := make(map[model.ScanTarget]struct{})
	for {
		select {
		case <-trigger.C:
			log.Info("Watcher: Triggering scan for changed folders", "numTargets", len(targets))
			status, err := w.scanner.Status(ctx)
			if err != nil {
				log.Error(ctx, "Watcher: Error retrieving Scanner status", err)
				break
			}
			if status.Scanning {
				log.Debug(ctx, "Watcher: Already scanning, will retry later", "waitTime", w.triggerWait*3)
				trigger.Reset(w.triggerWait * 3)
				continue
			}

			// Convert targets map to slice
			targetSlice := make([]model.ScanTarget, 0, len(targets))
			for target := range targets {
				targetSlice = append(targetSlice, target)
			}

			// Clear targets for next batch
			targets = make(map[model.ScanTarget]struct{})

			go func() {
				var err error
				if conf.Server.DevSelectiveWatcher {
					_, err = w.scanner.ScanFolders(ctx, false, targetSlice)
				} else {
					_, err = w.scanner.ScanAll(ctx, false)
				}
				if err != nil {
					log.Error(ctx, "Watcher: Error scanning", err)
				} else {
					log.Info(ctx, "Watcher: Scan completed")
				}
			}()
		case <-ctx.Done():
			// Stop all library watchers
			w.mu.Lock()
			for libraryID, instance := range w.libraryWatchers {
				log.Debug(ctx, "Stopping library watcher due to context cancellation", "libraryID", libraryID)
				instance.cancel()
			}
			w.libraryWatchers = make(map[int]*libraryWatcherInstance)
			w.mu.Unlock()
			return nil
		case notification := <-w.watcherNotify:
			// Reset the trigger timer for debounce
			trigger.Reset(w.triggerWait)

			lib := notification.Library
			folderPath := notification.FolderPath

			// If already scheduled for scan, skip
			target := model.ScanTarget{LibraryID: lib.ID, FolderPath: folderPath}
			if _, exists := targets[target]; exists {
				continue
			}
			targets[target] = struct{}{}

			log.Debug(ctx, "Watcher: Detected changes. Waiting for more changes before triggering scan",
				"libraryID", lib.ID, "name", lib.Name, "path", lib.Path, "folderPath", folderPath)
		}
	}
}

func (w *watcher) Watch(ctx context.Context, lib *model.Library) error {
	w.mu.Lock()
	defer w.mu.Unlock()

	// Stop existing watcher if any
	if existingInstance, exists := w.libraryWatchers[lib.ID]; exists {
		log.Debug(ctx, "Stopping existing watcher before starting new one", "libraryID", lib.ID, "name", lib.Name)
		existingInstance.cancel()
	}

	// Start new watcher
	watcherCtx, cancel := context.WithCancel(w.mainCtx)
	instance := &libraryWatcherInstance{
		library: lib,
		cancel:  cancel,
	}

	w.libraryWatchers[lib.ID] = instance

	// Start watching in a goroutine
	go func() {
		defer func() {
			w.mu.Lock()
			if currentInstance, exists := w.libraryWatchers[lib.ID]; exists && currentInstance == instance {
				delete(w.libraryWatchers, lib.ID)
			}
			w.mu.Unlock()
		}()

		err := w.watchLibrary(watcherCtx, lib)
		if err != nil && watcherCtx.Err() == nil { // Only log error if not due to cancellation
			log.Error(ctx, "Watcher error", "libraryID", lib.ID, "name", lib.Name, "path", lib.Path, err)
		}
	}()

	log.Info(ctx, "Started watcher for library", "libraryID", lib.ID, "name", lib.Name, "path", lib.Path)
	return nil
}

func (w *watcher) StopWatching(ctx context.Context, libraryID int) error {
	w.mu.Lock()
	defer w.mu.Unlock()

	instance, exists := w.libraryWatchers[libraryID]
	if !exists {
		log.Debug(ctx, "No watcher found to stop", "libraryID", libraryID)
		return nil
	}

	instance.cancel()
	delete(w.libraryWatchers, libraryID)

	log.Info(ctx, "Stopped watcher for library", "libraryID", libraryID, "name", instance.library.Name)
	return nil
}

// watchLibrary implements the core watching logic for a single library (extracted from the old watchLib function)
func (w *watcher) watchLibrary(ctx context.Context, lib *model.Library) error {
	s, err := storage.For(lib.Path)
	if err != nil {
		return fmt.Errorf("creating storage: %w", err)
	}

	fsys, err := s.FS()
	if err != nil {
		return fmt.Errorf("getting FS: %w", err)
	}

	watcher, ok := s.(storage.Watcher)
	if !ok {
		log.Info(ctx, "Watcher not supported for storage type", "libraryID", lib.ID, "path", lib.Path)
		return nil
	}

	c, err := watcher.Start(ctx)
	if err != nil {
		return fmt.Errorf("starting watcher: %w", err)
	}

	absLibPath, err := filepath.Abs(lib.Path)
	if err != nil {
		return fmt.Errorf("converting to absolute path: %w", err)
	}

	log.Info(ctx, "Watcher started for library", "libraryID", lib.ID, "name", lib.Name, "path", lib.Path, "absoluteLibPath", absLibPath)

	return w.processLibraryEvents(ctx, lib, fsys, c, absLibPath)
}

// processLibraryEvents processes filesystem events for a library.
func (w *watcher) processLibraryEvents(ctx context.Context, lib *model.Library, fsys storage.MusicFS, events <-chan string, absLibPath string) error {
	for {
		select {
		case <-ctx.Done():
			log.Debug(ctx, "Watcher stopped due to context cancellation", "libraryID", lib.ID, "name", lib.Name)
			return nil
		case path := <-events:
			path, err := filepath.Rel(absLibPath, path)
			if err != nil {
				log.Error(ctx, "Error getting relative path", "libraryID", lib.ID, "absolutePath", absLibPath, "path", path, err)
				continue
			}

			if isIgnoredPath(ctx, fsys, path) {
				log.Trace(ctx, "Ignoring change", "libraryID", lib.ID, "path", path)
				continue
			}
			log.Trace(ctx, "Detected change", "libraryID", lib.ID, "path", path, "absoluteLibPath", absLibPath)

			// Check if the original path (before resolution) matches .ndignore patterns.
			// This is crucial for deleted folders - if a deleted folder matches .ndignore,
			// we should ignore it BEFORE resolveFolderPath walks up to the parent
			if w.shouldIgnoreFolderPath(ctx, fsys, path) {
				log.Debug(ctx, "Ignoring change matching .ndignore pattern", "libraryID", lib.ID, "path", path)
				continue
			}

			// Find the folder to scan - validate the path exists as a directory, walking up if needed
			folderPath := resolveFolderPath(fsys, path)
			// Double-check after resolution, in case the resolved path is different and also matches patterns
			if folderPath != path && w.shouldIgnoreFolderPath(ctx, fsys, folderPath) {
				log.Trace(ctx, "Ignoring change in folder matching .ndignore pattern", "libraryID", lib.ID, "folderPath", folderPath)
				continue
			}

			// Notify the main watcher of changes
			select {
			case w.watcherNotify <- scanNotification{Library: lib, FolderPath: folderPath}:
			default:
				// Channel is full, notification already pending
			}
		}
	}
}

// resolveFolderPath takes a path (which may be a file or directory) and returns
// the folder path to scan. If the path is a file, it walks up to find the parent
// directory. Returns an empty string if the path should scan the library root.
func resolveFolderPath(fsys fs.FS, path string) string {
	// Handle root paths immediately
	if path == "." || path == "" {
		return ""
	}

	folderPath := path
	for {
		info, err := fs.Stat(fsys, folderPath)
		if err == nil && info.IsDir() {
			// Found a valid directory
			return folderPath
		}
		if folderPath == "." || folderPath == "" {
			// Reached root, scan entire library
			return ""
		}
		// Walk up the tree
		dir, _ := filepath.Split(folderPath)
		if dir == "" || dir == "." {
			return ""
		}
		// Remove trailing slash
		folderPath = filepath.Clean(dir)
	}
}

// shouldIgnoreFolderPath checks if the given folderPath should be ignored based on .ndignore patterns
// in the library. It pushes all parent folders onto the IgnoreChecker stack before checking.
func (w *watcher) shouldIgnoreFolderPath(ctx context.Context, fsys storage.MusicFS, folderPath string) bool {
	checker := newIgnoreChecker(fsys)
	err := checker.PushAllParents(ctx, folderPath)
	if err != nil {
		log.Warn(ctx, "Watcher: Error pushing ignore patterns for folder", "path", folderPath, err)
	}
	return checker.ShouldIgnore(ctx, folderPath)
}

func isIgnoredPath(_ context.Context, _ fs.FS, path string) bool {
	baseDir, name := filepath.Split(path)
	switch {
	case model.IsAudioFile(path):
		return false
	case model.IsValidPlaylist(path):
		return false
	case model.IsImageFile(path):
		return false
	case name == ".DS_Store":
		return true
	}
	// As the event can be a deletion rather than a change, we cannot reliably tell whether
	// the path is a file or a directory. At this point we assume it is a directory; if it
	// is a file, it would be ignored anyway.
	return isDirIgnored(baseDir)
}

491
scanner/watcher_test.go
Normal file
@@ -0,0 +1,491 @@
|
||||
package scanner
|
||||
|
||||
import (
|
||||
"context"
|
||||
"io/fs"
|
||||
"path/filepath"
|
||||
"testing/fstest"
|
||||
"time"
|
||||
|
||||
"github.com/navidrome/navidrome/conf"
|
||||
"github.com/navidrome/navidrome/conf/configtest"
|
||||
"github.com/navidrome/navidrome/model"
|
||||
"github.com/navidrome/navidrome/tests"
|
||||
. "github.com/onsi/ginkgo/v2"
|
||||
. "github.com/onsi/gomega"
|
||||
)
|
||||
|
||||
var _ = Describe("Watcher", func() {
|
||||
var ctx context.Context
|
||||
var cancel context.CancelFunc
|
||||
var mockScanner *tests.MockScanner
|
||||
var mockDS *tests.MockDataStore
|
||||
var w *watcher
|
||||
var lib *model.Library
|
||||
|
||||
BeforeEach(func() {
|
||||
DeferCleanup(configtest.SetupConfig())
|
||||
conf.Server.Scanner.WatcherWait = 50 * time.Millisecond // Short wait for tests
|
||||
|
||||
ctx, cancel = context.WithCancel(context.Background())
|
||||
DeferCleanup(cancel)
|
||||
|
||||
lib = &model.Library{
|
||||
ID: 1,
|
||||
Name: "Test Library",
|
||||
Path: "/test/library",
|
||||
}
|
||||
|
||||
// Set up mocks
|
||||
mockScanner = tests.NewMockScanner()
|
||||
mockDS = &tests.MockDataStore{}
|
||||
mockLibRepo := &tests.MockLibraryRepo{}
|
||||
mockLibRepo.SetData(model.Libraries{*lib})
|
||||
mockDS.MockedLibrary = mockLibRepo
|
||||
|
||||
// Create a new watcher instance (not singleton) for testing
|
||||
w = &watcher{
|
||||
ds: mockDS,
|
||||
scanner: mockScanner,
|
||||
triggerWait: conf.Server.Scanner.WatcherWait,
|
||||
watcherNotify: make(chan scanNotification, 10),
|
||||
libraryWatchers: make(map[int]*libraryWatcherInstance),
|
||||
mainCtx: ctx,
|
||||
}
|
||||
})
|
||||
|
||||
Describe("Target Collection and Deduplication", func() {
|
||||
BeforeEach(func() {
|
||||
// Start watcher in background
|
||||
go func() {
|
||||
_ = w.Run(ctx)
|
||||
}()
|
||||
|
||||
// Give watcher time to initialize
|
||||
time.Sleep(10 * time.Millisecond)
|
||||
})
|
||||
|
||||
It("creates separate targets for different folders", func() {
|
||||
// Send notifications for different folders
|
||||
w.watcherNotify <- scanNotification{Library: lib, FolderPath: "artist1"}
|
||||
time.Sleep(10 * time.Millisecond)
|
||||
w.watcherNotify <- scanNotification{Library: lib, FolderPath: "artist2"}
|
||||
|
||||
// Wait for watcher to process and trigger scan
|
||||
Eventually(func() int {
|
||||
return mockScanner.GetScanFoldersCallCount()
|
||||
}, 200*time.Millisecond, 10*time.Millisecond).Should(Equal(1))
|
||||
|
||||
// Verify two targets
|
||||
calls := mockScanner.GetScanFoldersCalls()
|
||||
Expect(calls).To(HaveLen(1))
|
||||
Expect(calls[0].Targets).To(HaveLen(2))
|
||||
|
||||
// Extract folder paths
|
||||
folderPaths := make(map[string]bool)
|
||||
for _, target := range calls[0].Targets {
|
||||
Expect(target.LibraryID).To(Equal(1))
|
||||
folderPaths[target.FolderPath] = true
|
||||
}
|
||||
Expect(folderPaths).To(HaveKey("artist1"))
|
||||
Expect(folderPaths).To(HaveKey("artist2"))
|
||||
})
|
||||
|
||||
It("handles different folder paths correctly", func() {
|
||||
// Send notification for nested folder
|
||||
w.watcherNotify <- scanNotification{Library: lib, FolderPath: "artist1/album1"}
|
||||
|
||||
// Wait for watcher to process and trigger scan
|
||||
Eventually(func() int {
|
||||
return mockScanner.GetScanFoldersCallCount()
|
||||
}, 200*time.Millisecond, 10*time.Millisecond).Should(Equal(1))
|
||||
|
||||
// Verify the target
|
||||
calls := mockScanner.GetScanFoldersCalls()
|
||||
Expect(calls).To(HaveLen(1))
|
||||
Expect(calls[0].Targets).To(HaveLen(1))
|
||||
Expect(calls[0].Targets[0].FolderPath).To(Equal("artist1/album1"))
|
||||
})
|
||||
|
||||
It("deduplicates folder and file within same folder", func() {
|
||||
// Send notification for a folder
|
||||
w.watcherNotify <- scanNotification{Library: lib, FolderPath: "artist1/album1"}
|
||||
time.Sleep(10 * time.Millisecond)
|
||||
// Send notification for same folder (as if file change was detected there)
|
||||
// In practice, watchLibrary() would walk up from file path to folder
|
||||
w.watcherNotify <- scanNotification{Library: lib, FolderPath: "artist1/album1"}
|
||||
time.Sleep(10 * time.Millisecond)
|
||||
// Send another for same folder
|
||||
w.watcherNotify <- scanNotification{Library: lib, FolderPath: "artist1/album1"}
|
||||
|
||||
// Wait for watcher to process and trigger scan
|
||||
Eventually(func() int {
|
||||
return mockScanner.GetScanFoldersCallCount()
|
||||
}, 200*time.Millisecond, 10*time.Millisecond).Should(Equal(1))
|
||||
|
||||
// Verify only one target despite multiple file/folder changes
|
||||
calls := mockScanner.GetScanFoldersCalls()
|
||||
Expect(calls).To(HaveLen(1))
|
||||
Expect(calls[0].Targets).To(HaveLen(1))
|
||||
Expect(calls[0].Targets[0].FolderPath).To(Equal("artist1/album1"))
|
||||
})
|
||||
})
|
||||
|
||||
	Describe("Timer Behavior", func() {
		BeforeEach(func() {
			// Start watcher in background
			go func() {
				_ = w.Run(ctx)
			}()

			// Give watcher time to initialize
			time.Sleep(10 * time.Millisecond)
		})

		It("resets timer on each change (debouncing)", func() {
			// Send first notification
			w.watcherNotify <- scanNotification{Library: lib, FolderPath: "artist1"}

			// Wait a bit less than half the watcher wait time, to ensure the timer doesn't fire
			time.Sleep(20 * time.Millisecond)

			// No scan should have been triggered yet
			Expect(mockScanner.GetScanFoldersCallCount()).To(Equal(0))

			// Send another notification (resets the timer)
			w.watcherNotify <- scanNotification{Library: lib, FolderPath: "artist1"}

			// Wait a bit less than half the watcher wait time again
			time.Sleep(20 * time.Millisecond)

			// Still no scan
			Expect(mockScanner.GetScanFoldersCallCount()).To(Equal(0))

			// Wait for the full timer to expire after the last notification (plus margin)
			time.Sleep(60 * time.Millisecond)

			// Now the scan should have been triggered
			Eventually(func() int {
				return mockScanner.GetScanFoldersCallCount()
			}, 100*time.Millisecond, 10*time.Millisecond).Should(Equal(1))
		})

		It("triggers scan after quiet period", func() {
			// Send notification
			w.watcherNotify <- scanNotification{Library: lib, FolderPath: "artist1"}

			// No scan immediately
			Expect(mockScanner.GetScanFoldersCallCount()).To(Equal(0))

			// Wait for the quiet period
			Eventually(func() int {
				return mockScanner.GetScanFoldersCallCount()
			}, 200*time.Millisecond, 10*time.Millisecond).Should(Equal(1))
		})
	})

	Describe("Empty and Root Paths", func() {
		BeforeEach(func() {
			// Start watcher in background
			go func() {
				_ = w.Run(ctx)
			}()

			// Give watcher time to initialize
			time.Sleep(10 * time.Millisecond)
		})

		It("handles empty folder path (library root)", func() {
			// Send a notification with an empty folder path
			w.watcherNotify <- scanNotification{Library: lib, FolderPath: ""}

			// Wait for scan
			Eventually(func() int {
				return mockScanner.GetScanFoldersCallCount()
			}, 200*time.Millisecond, 10*time.Millisecond).Should(Equal(1))

			// Should scan the library root
			calls := mockScanner.GetScanFoldersCalls()
			Expect(calls).To(HaveLen(1))
			Expect(calls[0].Targets).To(HaveLen(1))
			Expect(calls[0].Targets[0].FolderPath).To(Equal(""))
		})

		It("deduplicates empty and dot paths", func() {
			// Send notifications with empty and dot paths
			w.watcherNotify <- scanNotification{Library: lib, FolderPath: ""}
			time.Sleep(10 * time.Millisecond)
			w.watcherNotify <- scanNotification{Library: lib, FolderPath: ""}

			// Wait for scan
			Eventually(func() int {
				return mockScanner.GetScanFoldersCallCount()
			}, 200*time.Millisecond, 10*time.Millisecond).Should(Equal(1))

			// Should have only one target
			calls := mockScanner.GetScanFoldersCalls()
			Expect(calls).To(HaveLen(1))
			Expect(calls[0].Targets).To(HaveLen(1))
		})
	})

	Describe("Multiple Libraries", func() {
		var lib2 *model.Library

		BeforeEach(func() {
			// Create second library
			lib2 = &model.Library{
				ID:   2,
				Name: "Test Library 2",
				Path: "/test/library2",
			}

			mockLibRepo := mockDS.MockedLibrary.(*tests.MockLibraryRepo)
			mockLibRepo.SetData(model.Libraries{*lib, *lib2})

			// Start watcher in background
			go func() {
				_ = w.Run(ctx)
			}()

			// Give watcher time to initialize
			time.Sleep(10 * time.Millisecond)
		})

		It("creates separate targets for different libraries", func() {
			// Send notifications for both libraries
			w.watcherNotify <- scanNotification{Library: lib, FolderPath: "artist1"}
			time.Sleep(10 * time.Millisecond)
			w.watcherNotify <- scanNotification{Library: lib2, FolderPath: "artist2"}

			// Wait for scan
			Eventually(func() int {
				return mockScanner.GetScanFoldersCallCount()
			}, 200*time.Millisecond, 10*time.Millisecond).Should(Equal(1))

			// Verify two targets for different libraries
			calls := mockScanner.GetScanFoldersCalls()
			Expect(calls).To(HaveLen(1))
			Expect(calls[0].Targets).To(HaveLen(2))

			// Verify the library IDs are different
			libraryIDs := make(map[int]bool)
			for _, target := range calls[0].Targets {
				libraryIDs[target.LibraryID] = true
			}
			Expect(libraryIDs).To(HaveKey(1))
			Expect(libraryIDs).To(HaveKey(2))
		})
	})

	Describe(".ndignore handling", func() {
		var ctx context.Context
		var cancel context.CancelFunc
		var w *watcher
		var mockFS *mockMusicFS
		var lib *model.Library
		var eventChan chan string
		var absLibPath string

		BeforeEach(func() {
			ctx, cancel = context.WithCancel(GinkgoT().Context())
			DeferCleanup(cancel)

			// Set up library
			var err error
			absLibPath, err = filepath.Abs(".")
			Expect(err).NotTo(HaveOccurred())

			lib = &model.Library{
				ID:   1,
				Name: "Test Library",
				Path: absLibPath,
			}

			// Create watcher with notification channel
			w = &watcher{
				watcherNotify: make(chan scanNotification, 10),
			}

			eventChan = make(chan string, 10)
		})

		// Helper to send an event - converts a relative path to an absolute one
		sendEvent := func(relativePath string) {
			path := filepath.Join(absLibPath, relativePath)
			eventChan <- path
		}

		// Helper to start the real event processing loop
		startEventProcessing := func() {
			go func() {
				defer GinkgoRecover()
				// Call the actual processLibraryEvents method - testing the real implementation!
				_ = w.processLibraryEvents(ctx, lib, mockFS, eventChan, absLibPath)
			}()
		}

		Context("when a folder matching .ndignore is deleted", func() {
			BeforeEach(func() {
				// Create a filesystem with .ndignore containing the _TEMP pattern.
				// The deleted folder (_TEMP) will NOT exist in the filesystem.
				mockFS = &mockMusicFS{
					FS: fstest.MapFS{
						"rock":                       &fstest.MapFile{Mode: fs.ModeDir},
						"rock/.ndignore":             &fstest.MapFile{Data: []byte("_TEMP\n")},
						"rock/valid_album":           &fstest.MapFile{Mode: fs.ModeDir},
						"rock/valid_album/track.mp3": &fstest.MapFile{Data: []byte("audio")},
					},
				}
			})

			It("should NOT send scan notification when deleted folder matches .ndignore", func() {
				startEventProcessing()

				// Simulate a deletion event for rock/_TEMP
				sendEvent("rock/_TEMP")

				// Wait a bit to ensure the event is processed
				time.Sleep(50 * time.Millisecond)

				// No notification should have been sent (check the output channel,
				// not eventChan, which is consumed by processLibraryEvents)
				Consistently(w.watcherNotify, 100*time.Millisecond).Should(BeEmpty())
			})

			It("should send scan notification for valid folder deletion", func() {
				startEventProcessing()

				// Simulate a deletion event for rock/other_folder (not in .ndignore and doesn't exist).
				// Since it doesn't exist in mockFS, resolveFolderPath will walk up to "rock".
				sendEvent("rock/other_folder")

				// Should receive a notification for the parent folder
				Eventually(w.watcherNotify, 200*time.Millisecond).Should(Receive(Equal(scanNotification{
					Library:    lib,
					FolderPath: "rock",
				})))
			})
		})

		Context("with nested folder patterns", func() {
			BeforeEach(func() {
				mockFS = &mockMusicFS{
					FS: fstest.MapFS{
						"music":             &fstest.MapFile{Mode: fs.ModeDir},
						"music/.ndignore":   &fstest.MapFile{Data: []byte("**/temp\n**/cache\n")},
						"music/rock":        &fstest.MapFile{Mode: fs.ModeDir},
						"music/rock/artist": &fstest.MapFile{Mode: fs.ModeDir},
					},
				}
			})

			It("should NOT send notification when nested ignored folder is deleted", func() {
				startEventProcessing()

				// Simulate deletion of music/rock/artist/temp (matches **/temp)
				sendEvent("music/rock/artist/temp")

				// Wait to ensure the event is processed
				time.Sleep(50 * time.Millisecond)

				// No notification should be sent
				Expect(w.watcherNotify).To(BeEmpty(), "Expected no scan notification for nested ignored folder")
			})

			It("should send notification for non-ignored nested folder", func() {
				startEventProcessing()

				// Simulate a change in music/rock/artist (doesn't match any pattern)
				sendEvent("music/rock/artist")

				// Should receive a notification
				Eventually(w.watcherNotify, 200*time.Millisecond).Should(Receive(Equal(scanNotification{
					Library:    lib,
					FolderPath: "music/rock/artist",
				})))
			})
		})

		Context("with file events in ignored folders", func() {
			BeforeEach(func() {
				mockFS = &mockMusicFS{
					FS: fstest.MapFS{
						"rock":           &fstest.MapFile{Mode: fs.ModeDir},
						"rock/.ndignore": &fstest.MapFile{Data: []byte("_TEMP\n")},
					},
				}
			})

			It("should NOT send notification for file changes in ignored folders", func() {
				startEventProcessing()

				// Simulate a file change in rock/_TEMP/file.mp3
				sendEvent("rock/_TEMP/file.mp3")

				// Wait to ensure the event is processed
				time.Sleep(50 * time.Millisecond)

				// No notification should be sent
				Expect(w.watcherNotify).To(BeEmpty(), "Expected no scan notification for file in ignored folder")
			})
		})
	})
})

var _ = Describe("resolveFolderPath", func() {
	var mockFS fs.FS

	BeforeEach(func() {
		// Create a mock filesystem with some directories and files
		mockFS = fstest.MapFS{
			"artist1":                   &fstest.MapFile{Mode: fs.ModeDir},
			"artist1/album1":            &fstest.MapFile{Mode: fs.ModeDir},
			"artist1/album1/track1.mp3": &fstest.MapFile{Data: []byte("audio")},
			"artist1/album1/track2.mp3": &fstest.MapFile{Data: []byte("audio")},
			"artist1/album2":            &fstest.MapFile{Mode: fs.ModeDir},
			"artist1/album2/song.flac":  &fstest.MapFile{Data: []byte("audio")},
			"artist2":                   &fstest.MapFile{Mode: fs.ModeDir},
			"artist2/cover.jpg":         &fstest.MapFile{Data: []byte("image")},
		}
	})

	It("returns directory path when given a directory", func() {
		result := resolveFolderPath(mockFS, "artist1/album1")
		Expect(result).To(Equal("artist1/album1"))
	})

	It("walks up to parent directory when given a file path", func() {
		result := resolveFolderPath(mockFS, "artist1/album1/track1.mp3")
		Expect(result).To(Equal("artist1/album1"))
	})

	It("walks up multiple levels if needed", func() {
		result := resolveFolderPath(mockFS, "artist1/album1/nonexistent/file.mp3")
		Expect(result).To(Equal("artist1/album1"))
	})

	It("returns empty string for non-existent paths at root", func() {
		result := resolveFolderPath(mockFS, "nonexistent/path/file.mp3")
		Expect(result).To(Equal(""))
	})

	It("returns empty string for dot path", func() {
		result := resolveFolderPath(mockFS, ".")
		Expect(result).To(Equal(""))
	})

	It("returns empty string for empty path", func() {
		result := resolveFolderPath(mockFS, "")
		Expect(result).To(Equal(""))
	})

	It("handles nested file paths correctly", func() {
		result := resolveFolderPath(mockFS, "artist1/album2/song.flac")
		Expect(result).To(Equal("artist1/album2"))
	})

	It("resolves to top-level directory", func() {
		result := resolveFolderPath(mockFS, "artist2/cover.jpg")
		Expect(result).To(Equal("artist2"))
	})
})