DevOps · 11 min read

How to Run Cron Jobs in Docker Containers

Docker containers are designed around a single foreground process: when that process exits, the container stops. Cron daemons, by contrast, expect to run forever in the background. This fundamental mismatch causes most of the problems developers face when trying to schedule tasks in containers. Here is how to solve it.

The Challenge: Why Cron in Docker is Hard

Docker containers follow the "one process per container" philosophy. The container starts, runs its main process (PID 1), and stops when that process exits. The traditional Unix cron daemon doesn't fit this model well for several reasons:

  • Environment variables are invisible. Cron starts a fresh shell for each job, which does not inherit the environment variables Docker passes to the container. Your DATABASE_URL is set in the container environment, but the cron job cannot see it.
  • Logs go to a black hole. By default, cron tries to send output via the local mail system, which doesn't exist in a minimal container. Output from your jobs simply disappears.
  • Signal handling breaks. Docker sends SIGTERM to PID 1 during shutdown. If cron is PID 1, it may not properly forward that signal to running jobs, leading to data corruption or incomplete operations.
  • No failure visibility. If a cron job fails inside a container, the container keeps running. Docker health checks see a healthy container even though your critical scheduled task is broken.

Understanding these problems is essential before choosing a solution. Let's look at the three main approaches, from simplest to most robust.
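The environment problem is easy to see without cron at all: env -i, which starts a command with an empty environment, reproduces what a cron-launched job experiences (a sketch; the variable name is just an example):

```shell
# A variable exported in the container environment:
export DATABASE_URL="postgresql://user:pass@db:5432/myapp"

# A normal child process inherits it:
/bin/sh -c 'echo "inherited: ${DATABASE_URL:-unset}"'

# A cron-style empty environment does not:
env -i /bin/sh -c 'echo "cron-style: ${DATABASE_URL:-unset}"'
# prints "cron-style: unset"
```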

Approach 1: Cron Daemon Inside the Container

The most straightforward approach is to install cron in your Docker image and run it as the main process. Here is a complete, working Dockerfile:

FROM python:3.12-slim

# Install cron
RUN apt-get update && apt-get install -y cron && \
    rm -rf /var/lib/apt/lists/*

# Copy your application
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

# Create the crontab in /etc/cron.d
# Note: /etc/cron.d entries require a user field, and the file must
# end with a newline (echo provides one)
RUN echo "*/15 * * * * root cd /app && /usr/local/bin/python sync_data.py >> /proc/1/fd/1 2>&1" > /etc/cron.d/app-cron && \
    echo "0 2 * * * root cd /app && /usr/local/bin/python backup.py >> /proc/1/fd/1 2>&1" >> /etc/cron.d/app-cron && \
    chmod 0644 /etc/cron.d/app-cron

# Create an entrypoint that dumps env vars for cron, then starts cron
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

ENTRYPOINT ["/entrypoint.sh"]

The entrypoint script solves the environment variable problem by writing them to a file that cron can source:

#!/bin/bash
# entrypoint.sh

# Dump environment variables to /etc/environment.
# On Debian-based images, cron's PAM stack (pam_env) reads this file,
# so jobs see these values; crontab entries can also source it explicitly.
printenv > /etc/environment

# Start cron in the foreground
# -f keeps it as PID 1 so Docker can manage the lifecycle
exec cron -f

Note the >> /proc/1/fd/1 2>&1 redirect in the crontab. This sends output to PID 1's stdout, which Docker captures as container logs. Without this, you will see nothing in docker logs.

Limitation

This approach ties your cron schedule to the Docker image. Changing a schedule requires rebuilding and redeploying the container. For dynamic schedules, consider an external scheduler.

Approach 2: Entrypoint Script with Sleep Loop

If you only need a simple schedule (like "run every N minutes") and want to avoid the cron daemon entirely, a shell loop works:

#!/bin/bash
# run-scheduled.sh

# As PID 1, bash ignores SIGTERM by default; trap it so
# `docker stop` terminates the loop promptly
trap 'echo "Received SIGTERM, exiting"; exit 0' TERM

echo "Starting scheduled task. Interval: ${INTERVAL:-900} seconds"

while true; do
  echo "[$(date -Iseconds)] Running task..."

  # Run your actual task
  python /app/sync_data.py
  EXIT_CODE=$?

  if [ $EXIT_CODE -ne 0 ]; then
    echo "[$(date -Iseconds)] Task failed with exit code $EXIT_CODE"
  else
    echo "[$(date -Iseconds)] Task completed successfully"
  fi

  # Sleep in the background and wait, so the TERM trap fires
  # immediately instead of after the sleep finishes
  sleep "${INTERVAL:-900}" &
  wait $!
done
The matching Dockerfile:

FROM python:3.12-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

ENV INTERVAL=900

CMD ["/bin/bash", "/app/run-scheduled.sh"]

The advantages: environment variables work naturally, logs go to stdout, and a simple TERM trap covers signal handling. The downsides: you can only do fixed intervals, not calendar expressions like "weekdays at 9 AM," and the interval drifts over time because it counts from when the task finishes, not from a fixed clock.
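If drift matters, you can sleep until the next wall-clock multiple of the interval instead of sleeping a fixed duration after each run. A minimal sketch of the arithmetic (the function name is illustrative):

```shell
INTERVAL=${INTERVAL:-900}

# Seconds remaining until the next wall-clock multiple of INTERVAL,
# e.g. :00/:15/:30/:45 for a 900-second interval
seconds_until_boundary() {
  local now=$1
  echo $(( INTERVAL - now % INTERVAL ))
}

# In the loop, replace `sleep ${INTERVAL:-900}` with:
#   sleep "$(seconds_until_boundary "$(date +%s)")"
```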

Approach 3: Supercronic (Drop-in Replacement)

Supercronic is a cron replacement built specifically for containers. It solves every problem with traditional cron in Docker:

  • Logs to stdout/stderr automatically (no mail daemon needed)
  • Inherits environment variables from the parent process
  • Handles SIGTERM properly for graceful shutdown
  • Supports full cron expression syntax including seconds
  • Single static binary with no dependencies

A minimal Dockerfile that installs supercronic:

FROM python:3.12-slim

# Install supercronic
ARG SUPERCRONIC_VERSION=v0.2.33
ARG SUPERCRONIC_ARCH=linux-amd64
RUN apt-get update && apt-get install -y curl && \
    curl -fsSLo /usr/local/bin/supercronic \
      https://github.com/aptible/supercronic/releases/download/${SUPERCRONIC_VERSION}/supercronic-${SUPERCRONIC_ARCH} && \
    chmod +x /usr/local/bin/supercronic && \
    apt-get purge -y curl && apt-get autoremove -y && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

# Copy the crontab file into the image
COPY crontab /app/crontab

CMD ["supercronic", "/app/crontab"]

Your crontab file uses the same syntax you already know:

# /app/crontab
# Sync data every 15 minutes
*/15 * * * * cd /app && python sync_data.py

# Daily backup at 2 AM
0 2 * * * cd /app && python backup.py

# Weekly cleanup on Sunday at 3 AM
0 3 * * 0 cd /app && python cleanup.py

Supercronic is the recommended approach for production Docker deployments. It is well-maintained, handles edge cases properly, and works with Alpine, Debian, and Ubuntu base images.

Docker Compose Scheduled Services

In a docker-compose setup, the cleanest pattern is a dedicated cron service that shares the same image as your application but runs supercronic instead of your web server:

# docker-compose.yml
services:
  app:
    build: .
    ports:
      - "8080:8080"
    environment:
      - DATABASE_URL=postgresql://user:pass@db:5432/myapp
      - REDIS_URL=redis://redis:6379
    depends_on:
      - db
      - redis

  scheduler:
    build: .
    command: ["supercronic", "/app/crontab"]
    environment:
      - DATABASE_URL=postgresql://user:pass@db:5432/myapp
      - REDIS_URL=redis://redis:6379
    depends_on:
      - db
      - redis
    restart: unless-stopped
    deploy:
      resources:
        limits:
          cpus: "0.5"
          memory: 256M
    healthcheck:
      test: ["CMD", "pgrep", "supercronic"]
      interval: 30s
      timeout: 5s
      retries: 3

  db:
    image: postgres:16-alpine
    volumes:
      - pgdata:/var/lib/postgresql/data

  redis:
    image: redis:7-alpine

volumes:
  pgdata:

This approach keeps your web service and scheduler separate. The scheduler container has access to the same code and environment variables, but runs cron jobs instead of serving HTTP requests. If the scheduler crashes, Docker Compose restarts it without affecting your web application.
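One refinement: the duplicated environment blocks can be factored out with standard YAML anchors, which Compose supports (a sketch; top-level keys starting with x- are extension fields that Compose itself ignores):

```yaml
x-common-env: &common-env
  DATABASE_URL: postgresql://user:pass@db:5432/myapp
  REDIS_URL: redis://redis:6379

services:
  app:
    build: .
    environment: *common-env

  scheduler:
    build: .
    command: ["supercronic", "/app/crontab"]
    environment: *common-env
```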

Logging Cron Output Properly

The number one mistake with Docker cron jobs is losing log output. Here is a comparison of logging strategies:

| Method | Works with | Pros | Cons |
|---|---|---|---|
| >> /proc/1/fd/1 2>&1 | Traditional cron | Shows in docker logs | Brittle; requires PID 1 awareness |
| Supercronic | Supercronic | Automatic stdout, structured | Extra binary to install |
| Log to file + volume | Any approach | Persistent, rotatable | Needs log rotation; not in docker logs |
| Syslog driver | Docker daemon | Centralized; works with ELK/Loki | More complex setup |

For most setups, supercronic's automatic stdout logging combined with a Docker logging driver (like json-file with rotation) is the best balance of simplicity and reliability.
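Rotation for the json-file driver is configured per service in Compose; a sketch of the relevant options (the size and file counts are arbitrary examples):

```yaml
  scheduler:
    build: .
    command: ["supercronic", "/app/crontab"]
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
```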

Health Checks for Cron Containers

A running cron process does not mean your jobs are succeeding. Here is a pattern that exposes job health through Docker's health check mechanism:

#!/bin/bash
# healthcheck.sh

# Check 1: Is the cron process running?
pgrep supercronic > /dev/null || exit 1

# Check 2: Has any job run recently?
# Each job writes a timestamp file after completion
HEARTBEAT_FILE="/tmp/cron-heartbeat"

if [ ! -f "$HEARTBEAT_FILE" ]; then
  # No heartbeat yet — allow 5 minutes for first run
  # /proc/1/cmdline's mtime approximates the container's start time
  CONTAINER_AGE=$(( $(date +%s) - $(stat -c %Y /proc/1/cmdline) ))
  if [ $CONTAINER_AGE -gt 300 ]; then
    echo "No heartbeat after 5 minutes"
    exit 1
  fi
  exit 0
fi

# Check that heartbeat is less than 20 minutes old
LAST_BEAT=$(cat "$HEARTBEAT_FILE")
NOW=$(date +%s)
AGE=$(( NOW - LAST_BEAT ))

if [ $AGE -gt 1200 ]; then
  echo "Last heartbeat was $AGE seconds ago"
  exit 1
fi

exit 0

In your cron jobs, add a heartbeat at the end of each run:

*/15 * * * * cd /app && python sync_data.py && date +%s > /tmp/cron-heartbeat
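To use this script in place of the bare pgrep test, point the container's healthcheck at it (assuming the script is copied into the image at /app/healthcheck.sh and marked executable):

```yaml
  scheduler:
    healthcheck:
      test: ["CMD", "/app/healthcheck.sh"]
      interval: 60s
      timeout: 10s
      retries: 3
      start_period: 30s
```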

Environment Variables and Secrets

This is where most Docker cron setups break. The traditional cron daemon starts jobs in a minimal environment that does not include Docker's environment variables. Here are the three reliable solutions, depending on your approach:

Supercronic (recommended)

Supercronic inherits environment variables from its parent process. Environment variables set by Docker (-e flags or env_file) are automatically available in your cron jobs. No extra work needed.

Traditional cron with /etc/environment

# In your entrypoint.sh, before starting cron:
# (quote values so spaces survive sourcing; values containing
# double quotes would still need escaping)
printenv | sed 's/^\([^=]*\)=\(.*\)$/export \1="\2"/' > /etc/environment

# In your crontab, source it:
*/15 * * * * . /etc/environment; cd /app && python sync_data.py

Docker secrets (Swarm or Compose)

If you use Docker Swarm secrets or Compose secrets, they appear as files in /run/secrets/. Read them in your scripts instead of relying on environment variables:

# In your Python script:
import pathlib

db_password = pathlib.Path("/run/secrets/db_password").read_text().strip()

Skip the Docker cron complexity

If your scheduled task is an HTTP endpoint, CronJobPro calls it externally with built-in retries, logging, and alerts. No container configuration needed.


Best Practices

After years of running cron jobs in Docker across production environments, these practices consistently prevent the most common failures:

  1. Use supercronic over traditional cron. It was designed for containers and eliminates the environment variable and logging problems by design, not by workaround.
  2. Separate your web and cron containers. Run the same image with different commands. This prevents a stuck cron job from affecting your web server and lets you scale them independently.
  3. Set resource limits. A runaway cron job that consumes all available memory will kill your container (and potentially your host). Use Docker's memory and CPU limits.
  4. Add timeouts to your scripts. Use timeout 300 python script.py in your crontab to prevent jobs from hanging indefinitely.
  5. Implement locking for overlapping jobs. If a job runs longer than the schedule interval, use flock or a Redis-based lock to prevent concurrent runs.
  6. Use restart policies wisely. Set restart: unless-stopped for cron containers so they recover from crashes but stop cleanly during deployments.
  7. Pin your base image versions. Use python:3.12-slim, not python:latest. A surprise base image update can break your cron jobs silently.
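The timeout and locking practices combine naturally on a single crontab line; a sketch (the lock path and timeout value are illustrative):

```
# Skip this run if the previous one is still holding the lock,
# and kill the job if it runs longer than 300 seconds
*/15 * * * * flock -n /tmp/sync.lock timeout 300 python /app/sync_data.py
```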

When to Use an External Scheduler

Running cron inside Docker works for batch processing that needs direct access to your application code, database connections, or local filesystem. But if your scheduled task boils down to making an HTTP request, an external scheduler is simpler, more reliable, and easier to monitor.

Consider an external scheduler like CronJobPro when:

  • Your task is triggered by calling a URL (like /api/cleanup or /webhooks/daily-sync)
  • You need email or Slack alerts when a job fails, without setting up Prometheus and Alertmanager
  • You want to see a visual history of every execution with response codes, timing, and response bodies
  • You run on serverless or PaaS (Vercel, Railway, Fly.io) where there is no persistent container to run cron in
  • You need schedules that non-technical team members can view and modify from a dashboard

Many teams use a hybrid approach: Docker cron (or Kubernetes CronJobs) for heavy batch processing, and CronJobPro for HTTP-triggered tasks that benefit from external monitoring. The cron expression generator works for both Docker crontab files and CronJobPro schedules.

Quick Reference: Choosing Your Approach

| Scenario | Recommended approach |
|---|---|
| Simple interval, one task | Sleep loop or external scheduler |
| Multiple tasks, complex schedules | Supercronic in a dedicated container |
| HTTP endpoint triggers | External scheduler (CronJobPro) |
| Kubernetes cluster | K8s CronJob for batch, external for HTTP |
| Serverless / PaaS | External scheduler (no persistent container) |

Need reliable scheduling without container complexity?

CronJobPro calls your HTTP endpoints on schedule with automatic retries, monitoring, and alerts. No Docker configuration required.