
Building a Headless YouTube Playlist Generator with OAuth and Quota Management

Originally published: 11 Sep 2025

TL;DR: I built a CLI tool that creates YouTube playlists from your subscriptions without a frontend. It uses OAuth, respects API quota, handles caching and retries, and works great as a cron job. Source code on GitHub.

As someone subscribed to too many YouTube channels, I wanted a simple way to keep up with new content while I worked—a reliable, personal channel of new(ish) videos from the creators I actually care about. YouTube’s feed couldn’t keep up, and manual playlisting was a tedious mess. So I built a headless CLI tool that turns my recent subscriptions into an actual playlist, with intelligent caching and quota-aware design.

The Problem: YouTube’s Subscription Overload

I feel like my kids reflect the modern YouTube user: subscribed to dozens or hundreds of channels. The platform’s algorithm-driven feed doesn’t reliably surface all new content, especially from smaller creators. I needed a system that could pull recent uploads from every channel I’m subscribed to, collect them into a single playlist, and do it automatically on a schedule without blowing through API limits.

Understanding YouTube’s Data API v3

The YouTube Data API v3 is powerful but comes with a strict quota: by default, each project gets 10,000 quota units per day, and every operation consumes some of that budget.

Key Endpoints and Their Costs

{
    "subscriptions.list": 1, // Get user subscriptions
    "channels.list": 1, // Get channel details
    "playlistItems.list": 1, // List playlist contents
    "videos.list": 1, // Get video metadata
    "playlistItems.insert": 50, // Add video to playlist
    "search.list": 100 // Search for videos (expensive!)
}

Here’s the problem: naive implementations burn through that 10,000 limit fast. My first version hit the quota in about 20 minutes.
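
To put that in perspective, here’s a rough back-of-the-envelope calculation (the channel and video counts are illustrative, not measured from my account):

DAILY_QUOTA = 10_000

# Naive approach: one search.list call per channel to find recent uploads
naive_cost = 100 * 100  # 100 channels × 100 units = 10,000 units (the whole daily budget)

# Uploads-playlist approach (described in the next section), all 1-unit reads
smarter_cost = (
    2      # subscriptions.list: 100 channels / 50 per page
    + 2    # channels.list: 100 channel IDs / 50 per batch
    + 100  # playlistItems.list: one call per uploads playlist
    + 10   # videos.list: ~500 recent videos / 50 per batch
)          # ≈ 114 units for the same information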

The Expensive Operations

Using search.list to find recent videos? That’s 100 units per call. Unsustainable. I needed a smarter approach:

  1. Use subscriptions.list to get all subscribed channels (1 unit per ~50 channels)
  2. Extract each channel’s “uploads playlist ID” via channels.list (1 unit per ~50 channels)
  3. Fetch recent videos from uploads playlists via playlistItems.list (1 unit per ~50 videos)
  4. Batch video details through videos.list (1 unit per ~50 videos)

This approach reduced quota consumption by approximately 98% compared to search-based methods.
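
Here’s a condensed sketch of that four-step pipeline, assuming an authenticated youtube client from google-api-python-client (the helper below is illustrative, not the exact code from the repo):

from typing import Dict, List


def recent_uploads(youtube, max_videos_per_channel: int = 5) -> List[Dict]:
    """Collect recent uploads from every subscribed channel using only 1-unit calls."""
    # 1. All subscribed channel IDs (1 unit per page of 50)
    channel_ids, page_token = [], None
    while True:
        resp = youtube.subscriptions().list(
            part="snippet", mine=True, maxResults=50, pageToken=page_token
        ).execute()
        channel_ids += [i["snippet"]["resourceId"]["channelId"] for i in resp["items"]]
        page_token = resp.get("nextPageToken")
        if not page_token:
            break

    # 2. Each channel's "uploads" playlist ID (1 unit per batch of 50 channels)
    uploads_ids = []
    for i in range(0, len(channel_ids), 50):
        resp = youtube.channels().list(
            part="contentDetails", id=",".join(channel_ids[i:i + 50])
        ).execute()
        uploads_ids += [
            c["contentDetails"]["relatedPlaylists"]["uploads"] for c in resp["items"]
        ]

    # 3. Recent videos from each uploads playlist (1 unit per playlist)
    video_ids = []
    for playlist_id in uploads_ids:
        resp = youtube.playlistItems().list(
            part="contentDetails", playlistId=playlist_id,
            maxResults=max_videos_per_channel,
        ).execute()
        video_ids += [v["contentDetails"]["videoId"] for v in resp["items"]]

    # 4. Video metadata in batches of 50 (1 unit per batch)
    videos = []
    for i in range(0, len(video_ids), 50):
        resp = youtube.videos().list(
            part="snippet,contentDetails", id=",".join(video_ids[i:i + 50])
        ).execute()
        videos += resp["items"]
    return videos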

OAuth Flow with Refresh Tokens and Caching

YouTube API access requires OAuth 2.0 authentication. For a headless tool that runs on cron schedules, the authentication flow needed to be robust and automatic.

Initial Authentication

The first-time setup requires user interaction:

import os
import pickle

from google.auth.exceptions import RefreshError
from google.auth.transport.requests import Request

def get_authenticated_service():
    creds = None

    # Load existing token if available (TOKEN_FILE and logger are module-level)
    if os.path.exists(TOKEN_FILE):
        with open(TOKEN_FILE, "rb") as token:
            creds = pickle.load(token)

    # Refresh expired tokens automatically
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            try:
                creds.refresh(Request())
                logger.info("Token refreshed successfully")
            except RefreshError:
                creds = None  # Force new auth flow

Handling Refresh Token Expiration

Google’s refresh tokens can expire after extended periods of inactivity or if the user changes their password. The authentication module gracefully handles this:

# If refresh fails, fall back to the interactive flow
# (InstalledAppFlow comes from google_auth_oauthlib.flow)
if not creds:
    flow = InstalledAppFlow.from_client_secrets_file(
        CREDENTIALS_FILE, SCOPES
    )
    creds = flow.run_local_server(port=0)

# Always save credentials for future runs
with open(TOKEN_FILE, "wb") as token:
    pickle.dump(creds, token)
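
The snippets above elide the final step, but with google-api-python-client the standard pattern is to wrap the credentials in a service object:

from googleapiclient.discovery import build

# Hand back a YouTube Data API v3 client built from the (possibly refreshed) credentials
return build("youtube", "v3", credentials=creds)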

Security Considerations

The implementation stores OAuth tokens locally using Python’s pickle module, which leaves the credentials unencrypted on disk. For production deployments, consider restricting the token file’s permissions, encrypting it at rest, or keeping credentials in a dedicated secrets manager instead.
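
As a minimal hardening step (a sketch, not necessarily what the repo does), you can at least restrict the token file to the current user when saving it:

import os
import pickle

def save_token(creds, token_file: str) -> None:
    """Persist OAuth credentials and keep the file readable by the owner only."""
    with open(token_file, "wb") as token:
        pickle.dump(creds, token)
    os.chmod(token_file, 0o600)  # rw for owner, no access for group/others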

Intelligent Quota Management

With only 10,000 quota units per day, every API call needed to be optimized and tracked.

Real-Time Quota Tracking

Every API call is automatically instrumented:

def track_api_call(method_name: str) -> None:
    """Track an API call for quota analysis."""
    global api_call_counter
    api_call_counter[method_name] = api_call_counter.get(method_name, 0) + 1
    logger.debug(f"API call tracked: {method_name} (count: {api_call_counter[method_name]})")

This creates a real-time log of API usage patterns.
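
At the end of a run, that counter can be turned into the cost report shown later in this post. A minimal sketch, assuming a QUOTA_COSTS table that mirrors the unit costs listed above:

QUOTA_COSTS = {
    "subscriptions.list": 1,
    "channels.list": 1,
    "playlistItems.list": 1,
    "videos.list": 1,
    "playlists.list": 1,
    "playlistItems.insert": 50,
    "search.list": 100,
}

def report_quota_usage() -> int:
    """Log per-method usage and return the estimated total units consumed."""
    total = 0
    logger.info("📊 Quota Usage Analysis:")
    for method, calls in sorted(api_call_counter.items()):
        cost = QUOTA_COSTS.get(method, 1)
        units = calls * cost
        total += units
        logger.info(f"{method:<25}: {calls:>3} calls × {cost:>2} units = {units:>4} units")
    logger.info(f"🔢 Total Estimated Usage: {total} / 10000 units")
    return total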

Batching Strategy

The most significant optimization was implementing proper batching:

def _get_videos_details(self, video_ids: List[str]) -> Dict[str, Dict[str, Any]]:
    """Fetch video details for up to 50 video IDs per API call."""
    details = {}
    batch_size = 50

    for i in range(0, len(video_ids), batch_size):
        batch = video_ids[i:i + batch_size]

        # Single API call for up to 50 videos
        request = self.service.videos().list(
            part="contentDetails,snippet",
            id=",".join(batch)
        )
        response = request.execute()
        track_api_call("videos.list")

        # Process batch results...

This approach reduced video detail fetching from potentially hundreds of API calls to just a few batched requests.

Caching Strategy

Multiple caching layers to avoid redundant API calls:

  1. Playlist Content Cache: Existing playlist contents are cached for 12 hours
  2. Processed Video Cache: Maintains a persistent list of previously processed videos
  3. Duplicate Detection: Pre-filters videos already in the playlist before attempting insertion

def fetch_existing_playlist_items(self, playlist_id: str) -> Set[str]:
    """Fetch playlist contents with a 12-hour TTL disk cache."""
    cache_file = os.path.join(
        self.data_dir, "playlist_cache",
        f"existing_playlist_items_{playlist_id}.json",
    )

    # Serve from cache while it's still fresh
    if os.path.exists(cache_file):
        cache_age = time.time() - os.path.getmtime(cache_file)
        if cache_age < 12 * 3600:  # 12 hours
            with open(cache_file) as f:
                return set(json.load(f))

    # Fresh fetch on cache miss/expiry
    # ... API call and cache update
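
The update side of that cache is the usual pattern: after a fresh fetch, write the video IDs back to disk so the file’s mtime restarts the 12-hour clock. Roughly (a sketch, not the repo’s exact code):

import json
import os

def _write_playlist_cache(cache_file: str, video_ids: set) -> None:
    """Persist the playlist's video IDs; the file's mtime doubles as the TTL clock."""
    os.makedirs(os.path.dirname(cache_file), exist_ok=True)
    with open(cache_file, "w") as f:
        json.dump(sorted(video_ids), f)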

Retry Logic and CLI Design

Error Handling

YouTube API calls fail for all sorts of reasons: quota exceeded, network timeouts, temporary service hiccups. Here’s how I handle it:

def _get_videos_details_batch(self, video_ids: List[str]) -> Dict[str, Dict[str, Any]]:
    # `request` is the batched videos().list call built as in the previous snippet;
    # `batch_details` holds the parsed results for this batch.
    max_retries = 1
    for attempt in range(max_retries + 1):
        try:
            response = request.execute()
            track_api_call("videos.list")
            return batch_details

        except HttpError as e:
            if e.resp.status == 403 and "quotaExceeded" in str(e):
                self.quota_exceeded = True
                logger.error("YouTube API quota exceeded. Try again after 12AM Pacific Time.")
                return {}
            elif attempt < max_retries:
                logger.warning(f"API error (attempt {attempt + 1}/{max_retries + 1}): {e}")
                time.sleep(1)  # Brief delay before retry
                continue

When quota limits are hit, the system gracefully terminates and provides clear instructions to the user.
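
The quota_exceeded flag is what makes that graceful termination possible: the main loop can check it and stop issuing requests instead of hammering an API that will keep returning 403s. Conceptually (video_batches and _process_batch are placeholder names):

for batch in video_batches:
    if self.quota_exceeded:
        logger.error("Stopping early: daily quota exhausted, resume after the Pacific-midnight reset.")
        break
    self._process_batch(batch)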

CLI Flags and User Experience

For a headless tool, the command-line interface needed to be both powerful and intuitive:

# Basic operation
python -m yt_sub_playlist

# Development and testing
python -m yt_sub_playlist --dry-run --verbose

# Production with limits
python -m yt_sub_playlist --limit 20 --report output.csv

Key UX decisions: a --dry-run mode for safe experimentation, --verbose logging for debugging, a --limit flag to cap how many videos (and how much quota) a single run can consume, and a --report flag that writes a CSV audit trail of everything added.
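
Wiring those flags up is plain argparse; a condensed sketch of the interface (flag names match the commands above, help text and defaults are illustrative):

import argparse

def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(prog="yt_sub_playlist")
    parser.add_argument("--dry-run", action="store_true",
                        help="Resolve everything but don't insert any videos")
    parser.add_argument("--verbose", action="store_true",
                        help="Enable debug-level logging")
    parser.add_argument("--limit", type=int, default=None,
                        help="Cap the number of videos added per run")
    parser.add_argument("--report", metavar="FILE",
                        help="Write a CSV report of added videos")
    return parser.parse_args()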

Shell Script Wrappers

To simplify common operations, the project includes convenience scripts:

#!/bin/bash
# dryrun.sh - Safe testing
cd "$(dirname "$0")/../.." || exit 1
python -m yt_sub_playlist --dry-run --verbose --report "reports/dryrun_$(date +%Y%m%d_%H%M%S).csv"

#!/bin/bash
# run.sh - Production execution
cd "$(dirname "$0")/../.." || exit 1
python -m yt_sub_playlist --report "reports/videos_added_$(date +%Y%m%d_%H%M%S).csv"

These scripts handle common patterns and provide consistent logging/reporting.

What I Learned

1. Quota is a Design Constraint, Not an Afterthought

Most database apps don’t worry about read quotas. API-based tools? Quota is the constraint. Every feature decision needs to consider it.

You can’t optimize what you can’t measure — so I tracked quota from day one.

2. Caching Isn’t About Speed Here

With API quotas, caching is about feasibility. A cache miss doesn’t just slow things down — it could blow your entire daily quota.

Design your cache invalidation around API economics, not just data freshness.

3. Surface Platform Errors Clearly

YouTube’s quota errors tell you exactly when to retry: “wait until 12AM Pacific.” The app surfaces this immediately:

if e.resp.status == 403 and "quotaExceeded" in str(e):
    self.quota_exceeded = True
    logger.error("YouTube API quota exceeded. Try again after 12AM Pacific Time.")

4. Batching Changes Everything

Individual API calls vs. batched? Often a 50x difference in quota consumption: fetching details for 250 videos one at a time costs 250 videos.list calls, while batching 50 IDs per call costs 5. Always batch when you can.

API patterns that work fine for small datasets can be catastrophically expensive at scale.

5. Logs Are Your UI

Without a GUI, logs and reports are the interface. Invest in readable, structured log output, per-run CSV reports, and a quota summary you can actually act on.

Real-World Performance

After optimization, a typical run uses only a small fraction of the daily quota.

A sample quota report:

(Yes, I asked AI to find me emojis to add! I like them in reports and logs.)

📊 Quota Usage Analysis:
channels.list            :  15 calls ×  1 units =   15 units
subscriptions.list       :   3 calls ×  1 units =    3 units
playlistItems.list       :  12 calls ×  1 units =   12 units
videos.list              :   8 calls ×  1 units =    8 units
playlistItems.insert     :  45 calls × 50 units = 2250 units
playlists.list           :   1 calls ×  1 units =    1 units

🔢 Total Estimated Usage: 2289 / 10000 units

The Result

The system runs on cron, processing hundreds of subscription channels while staying well within YouTube’s quota limits. It has been running for months now, automatically maintaining a playlist of recent content filtered by my preferences.


Contributing and Source Code

The complete source code is available on GitHub: yt-sub-playlist

Contributions are welcome.


The main takeaway? With careful API design and quota management, you can build powerful automation tools that work within platform constraints. Treat API quotas as a fundamental architectural constraint from the start, not something you bolt on later.

If you’re building your own quota-limited API tools: track everything, cache aggressively, and batch operations whenever possible.

