Merge pull request #11 from PiratesIRC/claude/selectable-channel-databases-011CV1BxggpwpUe3u86UDzvN

Add selectable channel databases to GUI
This commit is contained in:
Pirates IRC
2025-11-10 19:34:52 -06:00
committed by GitHub
4 changed files with 657 additions and 196 deletions

View File

@@ -15,6 +15,8 @@ Before installing or using this plugin, it is **highly recommended** that you cr
* **Advanced Fuzzy Matching**: Automatically finds and assigns streams to channels using an advanced fuzzy-matching engine (`fuzzy_matcher.py`).
* **Unlimited Stream Support**: Fetches and processes ALL available streams regardless of quantity (no 10,000 stream limit).
* **Enhanced OTA Callsign Matching**: Uses a robust `*_channels.json` database for superior callsign extraction and matching for Over-The-Air broadcast channels.
* **Selectable Channel Databases** *(NEW v0.5.0a)*: Enable or disable specific channel databases through the GUI settings.
* **Multi-Country Support** *(NEW v0.5.0a)*: Support for multiple country databases with automatic country code prefix handling (e.g., `CA:`, `UK `).
* **Multi-Stream Assignment**: Assigns **all** matching streams to each channel (e.g., 4K, FHD, HD versions), sorted by quality.
* **Quality Prioritization**: Sorts matched streams by quality (4K → FHD → HD → (H) → (F) → (D) → SD → Slow).
* **Channel Visibility Management**: Automatically enables/disables channels based on stream assignments and duplicate detection.
@@ -44,14 +46,22 @@ Before installing or using this plugin, it is **highly recommended** that you cr
Stream-Mapparr uses `*_channels.json` files to improve OTA (Over-The-Air) and cable channel matching. The plugin includes `US_channels.json` by default, but you can create additional database files for other countries or regions.
**NEW in v0.5.0a**: Channel databases are now **selectable within the GUI**! You can enable or disable specific databases in the plugin settings.
### Database File Format
Channel database files follow the naming pattern: `[COUNTRY_CODE]_channels.json` (e.g., `US_channels.json`, `CA_channels.json`, `UK_channels.json`)
#### Recommended Format (v0.5.0a+)
The recommended format includes metadata at the top level:
```json
{
"country_code": "CA",
"country_name": "Canada",
"version": "2025-11-10",
"channels": [
{
"channel_name": "CBC",
"category": "News",
@@ -67,11 +77,44 @@ Each file contains a JSON array of channel objects with three required fields:
"category": "Entertainment", "category": "Entertainment",
"type": "National" "type": "National"
} }
]
}
```
#### Legacy Format (Still Supported)
The legacy format uses a direct JSON array:
```json
[
{
"channel_name": "CBC",
"category": "News",
"type": "National"
},
{
"channel_name": "CTV",
"category": "Entertainment",
"type": "National"
}
]
```
**Note**: If using the legacy format without metadata, the database will be displayed in settings using the filename.
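Both formats can be loaded with a few lines of Python. The sketch below is an illustrative helper (not part of the plugin's API) showing how a loader might distinguish the recommended dict-with-metadata format from the legacy bare array, and how the GUI label described above would be derived:

```python
import json

def load_channel_db(path):
    """Load a *_channels.json file in either the recommended format
    (dict with metadata) or the legacy format (bare array).
    Returns (display_label, channels). Illustrative helper only."""
    with open(path, "r", encoding="utf-8") as f:
        data = json.load(f)
    if isinstance(data, dict) and "channels" in data:
        # Recommended format: label is "Country Name (vVersion)"
        name = data.get("country_name", path)
        version = data.get("version", "")
        label = f"{name} (v{version})" if version else name
        return label, data["channels"]
    if isinstance(data, list):
        # Legacy format: the filename doubles as the display label
        return path, data
    raise ValueError(f"Unrecognized database format in {path}")
```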
### Field Descriptions
#### Metadata Fields (Recommended Format Only)
| Field | Required | Description | Examples |
|:---|:---|:---|:---|
| **country_code** | Recommended | Two-letter ISO country code | `US`, `CA`, `UK`, `AU`, `DE` |
| **country_name** | Recommended | Full country/region name | `United States`, `Canada`, `United Kingdom` |
| **version** | Optional | Database version or date | `2025-11-10`, `1.0`, `v2` |
| **channels** | Yes | Array of channel objects | See below |
#### Channel Object Fields
| Field | Required | Description | Examples |
|:---|:---|:---|:---|
| **channel_name** | Yes | The channel name or callsign | `CBC`, `BBC One`, `WSBT`, `Sky Sports` |
@@ -113,10 +156,14 @@ Each file contains a JSON array of channel objects with three required fields:
```
* The plugin will automatically detect and use all `*_channels.json` files in the directory
### Example: Creating UK_channels.json (Recommended Format)
```json
{
"country_code": "UK",
"country_name": "United Kingdom",
"version": "2025-11-11",
"channels": [
{
"channel_name": "BBC One",
"category": "Entertainment",
@@ -142,29 +189,65 @@ Each file contains a JSON array of channel objects with three required fields:
"category": "Sports", "category": "Sports",
"type": "National" "type": "National"
} }
] ]
}
```
### Managing Channel Databases in the GUI
**NEW in v0.5.0a**: All channel databases are now manageable through the plugin settings!
1. **Viewing Available Databases**
* Navigate to **Plugins** → **Stream-Mapparr** → **Settings**
* Scroll to the **"📚 Channel Databases"** section
* All detected `*_channels.json` files will be listed with checkboxes
2. **Enabling/Disabling Databases**
* Check the box next to a database to enable it for matching
* Uncheck the box to disable it
* By default, only the **US** database is enabled
* If only one database exists, it will be enabled by default
3. **Database Labels**
* Databases using the **recommended format** show: `Country Name (vVersion)`
* Example: `Canada (v2025-11-10)`
* Databases using the **legacy format** show: `Filename`
* Example: `UK_channels.json`
4. **Country Code Prefix Handling**
* Stream names may be prefixed with country codes (e.g., `CA: CBC`, `UK BBC One`, `USA News`)
* The plugin automatically removes these prefixes during matching
* Supported formats: `CC:` or `CC ` (2-letter codes), `CCC:` or `CCC ` (3-letter codes)
* Smart detection avoids removing quality tags like HD, SD, UHD, FHD
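The prefix handling above can be sketched as a small standalone function. This is an illustrative reimplementation of the documented behavior (the helper name `strip_country_prefix` is not the plugin's API), assuming the quality-tag list shown in the plugin source:

```python
import re

# Quality tags that must survive prefix stripping (per the plugin's rules)
QUALITY_TAGS = {"HD", "SD", "FD", "UHD", "FHD"}

def strip_country_prefix(name):
    """Remove a leading 2-3 letter country code followed by ':' or a
    space (e.g. 'CA: CBC', 'UK BBC One'), but leave quality tags like
    'HD Movies' untouched. Illustrative sketch only."""
    m = re.match(r"^([A-Z]{2,3})[:\s]\s*", name)
    if m and m.group(1).upper() not in QUALITY_TAGS:
        return name[m.end():]
    return name
```

Note that any 2-3 letter uppercase word at the start is treated as a country code unless it is a known quality tag, so names like `ABC News` would also be stripped; this is why the feature is opt-in per database.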
### Tips for Better Matching
* Include all variations of channel names (e.g., `BBC 1`, `BBC One`, `BBC1`)
* Add both full names and abbreviations (e.g., `The Sports Network`, `TSN`)
* Include regional variants if applicable (e.g., `BBC One London`, `BBC One Scotland`)
* Use the exact callsigns for OTA broadcast stations
* Enable only the databases relevant to your region for better matching accuracy
* Use the recommended format with metadata for clearer identification in the GUI
* Test your database by enabling it in settings and checking the logs for matching activity
## Settings Reference
| Setting | Type | Default | Description |
|:---|:---|:---|:---|
| **Overwrite Existing Streams** | `boolean` | True | If enabled, removes all existing streams and replaces with matched streams |
| **Fuzzy Match Threshold** | `number` | 85 | Minimum similarity score (0-100) for fuzzy matching. Higher values require closer matches |
| **Dispatcharr URL** | `string` | - | Full URL of your Dispatcharr instance (e.g., `http://192.168.1.10:9191`) |
| **Dispatcharr Admin Username** | `string` | - | Username for API authentication |
| **Dispatcharr Admin Password** | `password` | - | Password for API authentication |
| **Profile Name** | `string` | - | Name of an existing Channel Profile to process (e.g., "Primary", "Sports") |
| **Channel Groups** | `string` | - | Comma-separated group names to process, or leave empty for all groups |
| **Ignore Tags** | `string` | - | Comma-separated tags to ignore during matching (e.g., `4K, [4K], [Dead]`) |
| **Ignore Quality Tags** | `boolean` | True | Remove quality-related patterns like [4K], HD, (SD) during matching |
| **Ignore Regional Tags** | `boolean` | True | Remove regional indicators like "East" during matching |
| **Ignore Geographic Tags** | `boolean` | True | Remove geographic prefixes like US:, CA:, UK: during matching |
| **Ignore Miscellaneous Tags** | `boolean` | True | Remove miscellaneous tags like (CX), (Backup) during matching |
| **Visible Channel Limit** | `number` | 1 | Number of channels per matching group that will be visible and have streams added |
| **Enable [Database]** *(v0.5.0a)* | `boolean` | US: True, Others: False | Enable or disable specific channel databases for matching |
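The per-database `Enable [Database]` toggles are stored as keys of the form `db_enabled_<COUNTRY_CODE>` in the settings dict. A minimal sketch of how those toggles are resolved against the defaults (US on, others off), assuming illustrative names rather than the plugin's exact API:

```python
def enabled_database_ids(databases, settings):
    """Return the ids of databases whose 'Enable [Database]' toggle
    is on. `databases` is a list of dicts with 'id' and 'default'
    keys; missing settings fall back to the default. Sketch only."""
    ids = []
    for db in databases:
        key = f"db_enabled_{db['id']}"
        if settings.get(key, db["default"]):
            ids.append(db["id"])
    return ids
```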
## Usage Guide

View File

@@ -0,0 +1,107 @@
{
"country_code": "CA",
"country_name": "Canada",
"version": "2025-11-11",
"channels": [
{
"channel_name": "CBC",
"category": "Entertainment",
"type": "National"
},
{
"channel_name": "CBC News Network",
"category": "News",
"type": "National"
},
{
"channel_name": "CTV",
"category": "Entertainment",
"type": "National"
},
{
"channel_name": "CTV News Channel",
"category": "News",
"type": "National"
},
{
"channel_name": "Global",
"category": "Entertainment",
"type": "National"
},
{
"channel_name": "Citytv",
"category": "Entertainment",
"type": "National"
},
{
"channel_name": "TSN",
"category": "Sports",
"type": "National"
},
{
"channel_name": "The Sports Network",
"category": "Sports",
"type": "National"
},
{
"channel_name": "Sportsnet",
"category": "Sports",
"type": "National"
},
{
"channel_name": "TVA",
"category": "Entertainment",
"type": "National"
},
{
"channel_name": "Ici Radio-Canada",
"category": "Entertainment",
"type": "National"
},
{
"channel_name": "CTV Comedy Channel",
"category": "Entertainment",
"type": "National"
},
{
"channel_name": "CTV Drama Channel",
"category": "Entertainment",
"type": "National"
},
{
"channel_name": "Discovery Channel",
"category": "Documentary",
"type": "National"
},
{
"channel_name": "History",
"category": "Documentary",
"type": "National"
},
{
"channel_name": "Food Network Canada",
"category": "Entertainment",
"type": "National"
},
{
"channel_name": "HGTV Canada",
"category": "Entertainment",
"type": "National"
},
{
"channel_name": "W Network",
"category": "Entertainment",
"type": "National"
},
{
"channel_name": "Showcase",
"category": "Entertainment",
"type": "National"
},
{
"channel_name": "Space",
"category": "Entertainment",
"type": "National"
}
]
}
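A database file like the one above can be sanity-checked before deployment. This standalone sketch applies the same structural rules the plugin's validation uses (a dict must carry a `channels` array; a bare array is the legacy format); the helper name `validate_channel_db` is illustrative, not the plugin's own function:

```python
import json

def validate_channel_db(path):
    """Return (ok, message) for a *_channels.json file, mirroring the
    structural checks described in the docs. Illustrative only."""
    try:
        with open(path, "r", encoding="utf-8") as f:
            data = json.load(f)
    except json.JSONDecodeError as e:
        return False, f"JSON error: {e}"
    if isinstance(data, dict):
        if "channels" not in data:
            return False, "missing 'channels' key"
        if not isinstance(data["channels"], list):
            return False, "'channels' must be an array"
        return True, f"{len(data['channels'])} channels"
    if isinstance(data, list):
        return True, f"{len(data)} channels (legacy format)"
    return False, "invalid format"
```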

View File

@@ -11,7 +11,7 @@ import logging
from glob import glob
# Version: YY.DDD.HHMM (Julian date format: Year.DayOfYear.Time)
__version__ = "25.314.1907"
# Setup logging
LOGGER = logging.getLogger("plugins.fuzzy_matcher")
@@ -193,7 +193,7 @@ class FuzzyMatcher:
return callsign
def normalize_name(self, name, user_ignored_tags=None, ignore_quality=True, ignore_regional=True,
ignore_geographic=True, ignore_misc=True, remove_cinemax=False, remove_country_prefix=False):
"""
Normalize channel or stream name for matching by removing tags, prefixes, and other noise.
@@ -205,6 +205,7 @@ class FuzzyMatcher:
ignore_geographic: If True, remove geographic prefix patterns (e.g., US:, USA)
ignore_misc: If True, remove miscellaneous patterns (e.g., (CX), (Backup), single-letter tags)
remove_cinemax: If True, remove "Cinemax" prefix (useful when channel name contains "max")
remove_country_prefix: If True, remove country code prefixes (e.g., CA:, UK , DE: ) from start of name
Returns:
Normalized name
@@ -215,6 +216,20 @@ class FuzzyMatcher:
# Remove leading parenthetical prefixes like (SP2), (D1), etc.
name = re.sub(r'^\([^\)]+\)\s*', '', name)
# Remove country code prefix if requested (e.g., "CA:", "UK ", "USA: ")
# This handles multi-country databases where streams may be prefixed with country codes
if remove_country_prefix:
# Known quality tags that should NOT be removed (to avoid false positives)
quality_tags = {'HD', 'SD', 'FD', 'UHD', 'FHD'}
# Check for a 2-3 letter prefix followed by a colon or whitespace at the start
prefix_match = re.match(r'^([A-Z]{2,3})[:\s]\s*', name)
if prefix_match:
prefix = prefix_match.group(1).upper()
# Only remove if it's NOT a quality tag
if prefix not in quality_tags:
name = name[len(prefix_match.group(0)):]
# Remove "Cinemax" prefix if requested (for channels containing "max") # Remove "Cinemax" prefix if requested (for channels containing "max")
if remove_cinemax: if remove_cinemax:
name = re.sub(r'\bCinemax\b\s*', '', name, flags=re.IGNORECASE) name = re.sub(r'\bCinemax\b\s*', '', name, flags=re.IGNORECASE)

View File

@@ -31,11 +31,14 @@ class Plugin:
"""Dispatcharr Stream-Mapparr Plugin""" """Dispatcharr Stream-Mapparr Plugin"""
name = "Stream-Mapparr" name = "Stream-Mapparr"
version = "0.5.0d" version = "0.5.0a"
description = "🎯 Automatically add matching streams to channels based on name similarity and quality precedence with enhanced fuzzy matching" description = "🎯 Automatically add matching streams to channels based on name similarity and quality precedence with enhanced fuzzy matching"
# Settings rendered by UI @property
fields = [ def fields(self):
"""Dynamically generate settings fields including channel database selection."""
# Static fields that are always present
static_fields = [
{
"id": "overwrite_streams",
"label": "🔄 Overwrite Existing Streams",
@@ -132,6 +135,46 @@ class Plugin:
},
]
# Add channel database section header
static_fields.append({
"id": "channel_databases_header",
"type": "info",
"label": "📚 Channel Databases",
})
# Dynamically add channel database enable/disable fields
try:
databases = self._get_channel_databases()
if databases:
for db_info in databases:
db_id = db_info['id']
db_label = db_info['label']
db_default = db_info['default']
static_fields.append({
"id": f"db_enabled_{db_id}",
"type": "boolean",
"label": f"Enable {db_label}",
"help_text": f"Enable or disable the {db_label} channel database for matching.",
"default": db_default
})
else:
static_fields.append({
"id": "no_databases_found",
"type": "info",
"label": "⚠️ No channel databases found. Place XX_channels.json files in the plugin directory.",
})
except Exception as e:
LOGGER.error(f"[Stream-Mapparr] Error loading channel databases for settings: {e}")
static_fields.append({
"id": "database_error",
"type": "info",
"label": f"⚠️ Error loading channel databases: {e}",
})
return static_fields
# Actions for Dispatcharr UI
actions = [
{
@@ -204,6 +247,68 @@ class Plugin:
LOGGER.info(f"[Stream-Mapparr] {self.name} Plugin v{self.version} initialized")
def _get_channel_databases(self):
"""
Scan for channel database files and return metadata for each.
Returns:
List of dicts with 'id', 'label', 'default', and 'file_path' keys
"""
plugin_dir = os.path.dirname(__file__)
databases = []
try:
from glob import glob
pattern = os.path.join(plugin_dir, '*_channels.json')
channel_files = sorted(glob(pattern))
for channel_file in channel_files:
try:
filename = os.path.basename(channel_file)
# Extract country code from filename (e.g., "US" from "US_channels.json")
country_code = filename.split('_')[0].upper()
# Try to read the file and extract metadata
with open(channel_file, 'r', encoding='utf-8') as f:
file_data = json.load(f)
# Check if it's the new format with metadata
if isinstance(file_data, dict) and 'country_code' in file_data:
country_name = file_data.get('country_name', filename)
version = file_data.get('version', '')
if version:
label = f"{country_name} (v{version})"
else:
label = country_name
else:
# Old format or missing metadata - use filename
label = filename
# Determine default value: US enabled by default, or if only one database, enable it
# We'll check the count later
default = (country_code == 'US')
databases.append({
'id': country_code,
'label': label,
'default': default,
'file_path': channel_file,
'filename': filename
})
except Exception as e:
LOGGER.warning(f"[Stream-Mapparr] Error reading database file {channel_file}: {e}")
continue
# If only one database exists, enable it by default
if len(databases) == 1:
databases[0]['default'] = True
except Exception as e:
LOGGER.error(f"[Stream-Mapparr] Error scanning for channel databases: {e}")
return databases
def _initialize_fuzzy_matcher(self, match_threshold=85):
"""Initialize the fuzzy matcher with configured threshold."""
if self.fuzzy_matcher is None:
@@ -445,7 +550,7 @@ class Plugin:
return tags
def _clean_channel_name(self, name, ignore_tags=None, ignore_quality=True, ignore_regional=True,
ignore_geographic=True, ignore_misc=True, remove_cinemax=False, remove_country_prefix=True):
"""
Remove brackets and their contents from channel name for matching, and remove ignore tags.
Uses fuzzy matcher's normalization if available, otherwise falls back to basic cleaning.
@@ -458,6 +563,7 @@ class Plugin:
ignore_geographic: If True, remove geographic prefix patterns (e.g., US:, USA)
ignore_misc: If True, remove miscellaneous patterns (e.g., (CX), (Backup), single-letter tags)
remove_cinemax: If True, remove "Cinemax" prefix (for streams when channel contains "max")
remove_country_prefix: If True, remove country code prefixes (e.g., CA:, UK ) from start of name
""" """
if self.fuzzy_matcher: if self.fuzzy_matcher:
# Use fuzzy matcher's normalization # Use fuzzy matcher's normalization
@@ -467,15 +573,27 @@ class Plugin:
ignore_regional=ignore_regional,
ignore_geographic=ignore_geographic,
ignore_misc=ignore_misc,
remove_cinemax=remove_cinemax,
remove_country_prefix=remove_country_prefix
)
# Fallback to basic cleaning
if ignore_tags is None:
ignore_tags = []
cleaned = name
# Remove country code prefix if requested
if remove_country_prefix:
quality_tags = {'HD', 'SD', 'FD', 'UHD', 'FHD'}
prefix_match = re.match(r'^([A-Z]{2,3})[:\s]\s*', cleaned)
if prefix_match:
prefix = prefix_match.group(1).upper()
if prefix not in quality_tags:
cleaned = cleaned[len(prefix_match.group(0)):]
# Remove anything in square brackets or parentheses at the end
cleaned = re.sub(r'\s*[\[\(][^\[\]\(\)]*[\]\)]\s*$', '', cleaned)
# Keep removing until no more brackets at the end
while True:
new_cleaned = re.sub(r'\s*[\[\(][^\[\]\(\)]*[\]\)]\s*$', '', cleaned)
@@ -545,30 +663,83 @@ class Plugin:
return sorted(streams, key=get_quality_index)
def _load_channels_data(self, logger, settings=None):
"""
Load channel data from enabled *_channels.json files.
Args:
logger: Logger instance
settings: Plugin settings dict (optional, for filtering by enabled databases)
Returns:
List of channel data from enabled databases
"""
plugin_dir = os.path.dirname(__file__)
channels_data = []
try:
# Get all available databases
databases = self._get_channel_databases()
if not databases:
logger.warning(f"[Stream-Mapparr] No *_channels.json files found in {plugin_dir}")
return channels_data
# Filter to only enabled databases
enabled_databases = []
for db_info in databases:
db_id = db_info['id']
setting_key = f"db_enabled_{db_id}"
# Check if this database is enabled in settings
if settings:
is_enabled = settings.get(setting_key, db_info['default'])
else:
# No settings provided, use default
is_enabled = db_info['default']
if is_enabled:
enabled_databases.append(db_info)
if not enabled_databases:
logger.warning("[Stream-Mapparr] No channel databases are enabled. Please enable at least one database in settings.")
return channels_data
# Load data from enabled databases
for db_info in enabled_databases:
channel_file = db_info['file_path']
db_label = db_info['label']
country_code = db_info['id']
try:
with open(channel_file, 'r', encoding='utf-8') as f:
file_data = json.load(f)
# Handle both old and new format
if isinstance(file_data, dict) and 'channels' in file_data:
# New format with metadata
channels_list = file_data['channels']
# Add country_code to each channel for prefix handling
for channel in channels_list:
channel['_country_code'] = country_code
elif isinstance(file_data, list):
# Old format - direct array
channels_list = file_data
# Add country_code to each channel for prefix handling
for channel in channels_list:
channel['_country_code'] = country_code
else:
logger.error(f"[Stream-Mapparr] Invalid format in {channel_file}")
continue
channels_data.extend(channels_list)
logger.info(f"[Stream-Mapparr] Loaded {len(channels_list)} channels from {db_label}")
except Exception as e:
logger.error(f"[Stream-Mapparr] Error loading {channel_file}: {e}")
logger.info(f"[Stream-Mapparr] Loaded total of {len(channels_data)} channels from {len(enabled_databases)} enabled database(s)")
except Exception as e:
logger.error(f"[Stream-Mapparr] Error loading channel data files: {e}")
@@ -595,7 +766,11 @@ class Plugin:
def _match_streams_to_channel(self, channel, all_streams, logger, ignore_tags=None,
ignore_quality=True, ignore_regional=True, ignore_geographic=True,
ignore_misc=True, channels_data=None):
"""Find matching streams for a channel using fuzzy matching when available.
Returns:
tuple: (matching_streams, cleaned_channel_name, cleaned_stream_names, match_reason, database_used)
"""
if ignore_tags is None:
ignore_tags = []
if channels_data is None:
@@ -606,6 +781,9 @@ class Plugin:
# Get channel info from JSON
channel_info = self._get_channel_info_from_json(channel_name, channels_data, logger)
# Determine which database was used (if any)
database_used = channel_info.get('_country_code', 'N/A') if channel_info else 'N/A'
# Check if channel name contains "max" (case insensitive) - used for Cinemax handling
channel_has_max = 'max' in channel_name.lower()
@@ -645,7 +823,7 @@ class Plugin:
) for s in sorted_streams]
match_reason = "Callsign match"
return sorted_streams, cleaned_channel_name, cleaned_stream_names, match_reason, database_used
else:
logger.info(f"[Stream-Mapparr] No callsign matches found for {callsign}")
# Fall through to fuzzy matching
@@ -693,11 +871,11 @@ class Plugin:
) for s in sorted_streams]
match_reason = f"Fuzzy match ({match_type}, score: {score})"
return sorted_streams, cleaned_channel_name, cleaned_stream_names, match_reason, database_used
# No fuzzy match found
logger.info(f"[Stream-Mapparr] No fuzzy match found for channel: {channel_name}")
return [], cleaned_channel_name, [], "No fuzzy match", database_used
# Fallback to basic substring matching if fuzzy matcher unavailable
logger.info(f"[Stream-Mapparr] Using basic substring matching for channel: {channel_name}")
@@ -705,7 +883,7 @@ class Plugin:
if not all_streams:
logger.warning("[Stream-Mapparr] No streams available for matching!")
return [], cleaned_channel_name, [], "No streams available", database_used
# Try exact channel name matching from JSON first
if channel_info and channel_info.get('channel_name'):
@@ -732,7 +910,7 @@ class Plugin:
) for s in sorted_streams]
match_reason = "Exact match (channels.json)"
return sorted_streams, cleaned_channel_name, cleaned_stream_names, match_reason, database_used
# Fallback to basic substring matching
for stream in all_streams:
@@ -755,10 +933,10 @@ class Plugin:
) for s in sorted_streams]
match_reason = "Basic substring match"
return sorted_streams, cleaned_channel_name, cleaned_stream_names, match_reason, database_used
# No match found
return [], cleaned_channel_name, [], "No match", database_used
def _get_channel_info_from_json(self, channel_name, channels_data, logger):
"""Find channel info from channels.json by matching channel name."""
@@ -953,24 +1131,75 @@ class Plugin:
self._initialize_fuzzy_matcher(match_threshold)
if self.fuzzy_matcher:
validation_results.append(f"✅ Fuzzy Matcher: Initialized (threshold: {match_threshold})")
else:
validation_results.append("⚠️ Fuzzy Matcher: WARNING - Could not initialize (will use fallback matching)")
except Exception as e:
validation_results.append(f"⚠️ Fuzzy Matcher: WARNING - {str(e)} (will use fallback matching)")
# 7. Validate Channel Databases
logger.info("[Stream-Mapparr] Validating channel databases...")
try:
databases = self._get_channel_databases()
if not databases:
validation_results.append("❌ Channel Databases: FAILED - No *_channels.json files found in plugin directory")
has_errors = True
else:
# Check which databases are enabled
enabled_databases = []
invalid_databases = []
for db_info in databases:
db_id = db_info['id']
setting_key = f"db_enabled_{db_id}"
is_enabled = settings.get(setting_key, db_info['default'])
if is_enabled:
# Validate JSON format
try:
with open(db_info['file_path'], 'r', encoding='utf-8') as f:
file_data = json.load(f)
# Check format
if isinstance(file_data, dict):
if 'channels' not in file_data:
invalid_databases.append(f"{db_info['label']} (missing 'channels' key)")
elif not isinstance(file_data['channels'], list):
invalid_databases.append(f"{db_info['label']} ('channels' must be an array)")
else:
enabled_databases.append(db_info['label'])
elif isinstance(file_data, list):
enabled_databases.append(db_info['label'])
else:
invalid_databases.append(f"{db_info['label']} (invalid format)")
except json.JSONDecodeError as e:
invalid_databases.append(f"{db_info['label']} (JSON error: {str(e)[:50]})")
except Exception as e:
invalid_databases.append(f"{db_info['label']} (error: {str(e)[:50]})")
if invalid_databases:
validation_results.append(f"❌ Channel Databases: FAILED - Invalid database(s): {', '.join(invalid_databases)}")
has_errors = True
elif not enabled_databases:
validation_results.append("❌ Channel Databases: FAILED - No databases enabled. Enable at least one database in settings.")
has_errors = True
else:
validation_results.append(f"✅ Channel Databases: {len(enabled_databases)} enabled")
except Exception as e:
validation_results.append(f"❌ Channel Databases: FAILED - {str(e)}")
has_errors = True
# 8. Check other settings
overwrite_streams = settings.get('overwrite_streams', True) overwrite_streams = settings.get('overwrite_streams', True)
if isinstance(overwrite_streams, str): if isinstance(overwrite_streams, str):
overwrite_streams = overwrite_streams.lower() in ('true', 'yes', '1') overwrite_streams = overwrite_streams.lower() in ('true', 'yes', '1')
validation_results.append(f" Overwrite Existing Streams: {'Enabled' if overwrite_streams else 'Disabled'}")
ignore_tags_str = settings.get("ignore_tags", "").strip() ignore_tags_str = settings.get("ignore_tags", "").strip()
if ignore_tags_str: if ignore_tags_str:
ignore_tags = self._parse_tags(ignore_tags_str) ignore_tags = self._parse_tags(ignore_tags_str)
validation_results.append(f" Ignore Tags: {len(ignore_tags)} tag(s) configured: {', '.join(repr(tag) for tag in ignore_tags)}") validation_results.append(f" {len(ignore_tags)} ignore tag(s) configured")
else:
validation_results.append(" Ignore Tags: None configured")
# Return validation results # Return validation results
return has_errors, validation_results, token return has_errors, validation_results, token
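Reviewer note: the hunk above accepts two database file shapes — a dict with a `'channels'` array, or a bare top-level list. A minimal standalone sketch of that shape check (the helper name `check_database_shape` is illustrative, not part of the plugin):

```python
import json

def check_database_shape(data):
    """Mirror the two *_channels.json shapes accepted by the validation hunk:
    {"channels": [...]} or a bare top-level list. Returns (ok, reason)."""
    if isinstance(data, dict):
        if 'channels' not in data:
            return False, "missing 'channels' key"
        if not isinstance(data['channels'], list):
            return False, "'channels' must be an array"
        return True, "ok"
    if isinstance(data, list):
        return True, "ok"
    return False, "invalid format"

# Both accepted forms parse cleanly from JSON text:
ok, reason = check_database_shape(json.loads('{"channels": []}'))
```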
@@ -991,8 +1220,14 @@ class Plugin:
             message += "\n\nPlease fix the errors above before proceeding."
             return {"status": "error", "message": message}
         else:
-            message = "All settings validated successfully!\n\n" + "\n".join(validation_results)
-            message += "\n\nYou can now proceed with 'Load/Process Channels'."
+            # Condensed success message - only show key items
+            success_items = [item for item in validation_results if item.startswith("✅")]
+            info_items = [item for item in validation_results if item.startswith("ℹ️")]
+            message = "Settings validated! " + " | ".join(success_items)
+            if info_items:
+                message += "\n" + " | ".join(info_items)
+            message += "\n\nReady to proceed with 'Load/Process Channels'."
             return {"status": "success", "message": message}
     def load_process_channels_action(self, settings, logger):
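Reviewer note: the rendered diff dropped the prefix characters inside the `startswith(...)` filters; `"✅"` and `"ℹ️"` are assumed here from the surrounding status messages. A sketch of the condensed-summary logic under that assumption:

```python
def build_success_message(validation_results):
    # Hypothetical standalone version of the condensed-summary branch:
    # "✅" lines are joined on one row, "ℹ️" detail lines on the next,
    # and warnings/errors are omitted from the success summary.
    success_items = [i for i in validation_results if i.startswith("✅")]
    info_items = [i for i in validation_results if i.startswith("ℹ️")]
    message = "Settings validated! " + " | ".join(success_items)
    if info_items:
        message += "\n" + " | ".join(info_items)
    message += "\n\nReady to proceed with 'Load/Process Channels'."
    return message
```

This keeps the success popup to two short rows instead of echoing every validation line.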
@@ -1251,6 +1486,21 @@ class Plugin:
             overwrite_streams = overwrite_streams.lower() in ('true', 'yes', '1')
         fuzzy_match_threshold = settings.get('fuzzy_match_threshold', 85)
+        # Get enabled databases
+        try:
+            databases = self._get_channel_databases()
+            enabled_dbs = []
+            for db_info in databases:
+                db_id = db_info['id']
+                setting_key = f"db_enabled_{db_id}"
+                is_enabled = settings.get(setting_key, db_info['default'])
+                if is_enabled:
+                    enabled_dbs.append(db_info['label'])
+            db_info_str = ', '.join(enabled_dbs) if enabled_dbs else 'None'
+        except Exception:
+            db_info_str = 'Unknown'
         # Build header lines
         header_lines = [
             f"# Stream-Mapparr Export",
@@ -1264,6 +1514,7 @@ class Plugin:
             f"# Channel Groups: {', '.join(selected_groups) if selected_groups else 'All groups'}",
             f"# Ignore Tags: {', '.join(ignore_tags) if ignore_tags else 'None'}",
             f"# Visible Channel Limit: {visible_channel_limit}",
+            f"# Channel Databases Loaded: {db_info_str}",
             f"#",
             f"# Statistics:",
             f"# Total Visible Channels: {total_visible_channels}",
@@ -1313,7 +1564,7 @@ class Plugin:
         logger.info("[Stream-Mapparr] Settings validated successfully, proceeding with preview...")
         # Load channel data from channels.json
-        channels_data = self._load_channels_data(logger)
+        channels_data = self._load_channels_data(logger, settings)
         # Load processed data
         with open(self.processed_data_file, 'r') as f:
@@ -1377,7 +1628,7 @@ class Plugin:
             sorted_channels = self._sort_channels_by_priority(group_channels)
             # Match streams for this channel group (using first channel as representative)
-            matched_streams, cleaned_channel_name, cleaned_stream_names, match_reason = self._match_streams_to_channel(
+            matched_streams, cleaned_channel_name, cleaned_stream_names, match_reason, database_used = self._match_streams_to_channel(
                 sorted_channels[0], streams, logger, ignore_tags,
                 ignore_quality, ignore_regional, ignore_geographic, ignore_misc,
                 channels_data
@@ -1398,6 +1649,7 @@ class Plugin:
                     "stream_names": [s['name'] for s in matched_streams],
                     "stream_names_cleaned": cleaned_stream_names,
                     "match_reason": match_reason,
+                    "database_used": database_used,
                     "will_update": True
                 }
                 all_matches.append(match_info)
@@ -1419,6 +1671,7 @@ class Plugin:
                     "stream_names": [s['name'] for s in matched_streams],
                     "stream_names_cleaned": cleaned_stream_names,
                     "match_reason": f"Skipped (exceeds limit of {visible_channel_limit})",
+                    "database_used": database_used,
                     "will_update": False
                 }
                 all_matches.append(match_info)
@@ -1455,6 +1708,7 @@ class Plugin:
                 'channel_number',
                 'matched_streams',
                 'match_reason',
+                'database_used',
                 'stream_names'
             ]
             writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
@@ -1469,6 +1723,7 @@ class Plugin:
                     'channel_number': match.get('channel_number', 'N/A'),
                     'matched_streams': match['matched_streams'],
                     'match_reason': match['match_reason'],
+                    'database_used': match['database_used'],
                     'stream_names': '; '.join(match['stream_names'])  # Show all streams
                 })
@@ -1524,7 +1779,7 @@ class Plugin:
             return {"status": "error", "message": error}
         # Load channel data from channels.json
-        channels_data = self._load_channels_data(logger)
+        channels_data = self._load_channels_data(logger, settings)
         # Load processed data
         with open(self.processed_data_file, 'r') as f:
@@ -1596,7 +1851,7 @@ class Plugin:
             sorted_channels = self._sort_channels_by_priority(group_channels)
             # Match streams for this channel group
-            matched_streams, cleaned_channel_name, cleaned_stream_names, match_reason = self._match_streams_to_channel(
+            matched_streams, cleaned_channel_name, cleaned_stream_names, match_reason, database_used = self._match_streams_to_channel(
                 sorted_channels[0], streams, logger, ignore_tags,
                 ignore_quality, ignore_regional, ignore_geographic, ignore_misc,
                 channels_data
@@ -1650,7 +1905,8 @@ class Plugin:
                 update_details.append({
                     'channel_name': channel_name,
                     'stream_names': stream_names_list,
-                    'matched_streams': len(matched_streams)
+                    'matched_streams': len(matched_streams),
+                    'database_used': database_used
                 })
             if overwrite_streams:
@@ -1711,7 +1967,7 @@ class Plugin:
                 csvfile.write(header_comment)
                 # Write CSV data
-                fieldnames = ['channel_name', 'stream_names', 'matched_streams']
+                fieldnames = ['channel_name', 'stream_names', 'matched_streams', 'database_used']
                 writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
                 writer.writeheader()
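Reviewer note: the final hunk extends the export CSV with a `database_used` column. A self-contained sketch of the resulting file layout (the row values are made up for illustration):

```python
import csv
import io

# Field names match the updated writer in the diff; the row is illustrative.
rows = [
    {"channel_name": "CNN", "stream_names": "CNN HD; CNN FHD",
     "matched_streams": 2, "database_used": "US Channels"},
]

buf = io.StringIO()
writer = csv.DictWriter(
    buf,
    fieldnames=["channel_name", "stream_names", "matched_streams", "database_used"],
)
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

Note that multiple stream names stay joined with `'; '` in a single cell, so the new column does not disturb existing consumers that split on commas per row.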