
Jupyter on Debian Trixie

A Debian Trixie-based container image that runs Jupyter Lab as a non-root jupyter user, suitable for Kubernetes deployment.

Features

  • Based on Debian Trixie (slim variant)
  • Runs as non-root user jupyter (UID/GID 1000) with sudo permissions
  • CUDA Toolkit 12.6 from NVIDIA's Debian 13 repository
  • PyTorch with CUDA support
  • Jupyter Lab pre-configured
  • Common data science packages included (numpy, pandas, matplotlib, scipy, scikit-learn, seaborn)
  • Health check endpoint configured
  • Ready for Kubernetes deployment with GPU support

Building the Image

docker build -t jupyter-trixie:latest .

Running Locally

Basic run (CPU only):

docker run -p 8888:8888 jupyter-trixie:latest

With GPU support (requires NVIDIA Container Toolkit):

# Docker
docker run --gpus all -p 8888:8888 jupyter-trixie:latest

# Podman
podman run --device nvidia.com/gpu=all -p 8888:8888 jupyter-trixie:latest

Access Jupyter Lab at: http://localhost:8888

Testing CUDA

To verify GPU availability, run the included test script:

# Docker
docker run --gpus all jupyter-trixie:latest python3 /home/jupyter/test_cuda.py

# Podman
podman run --device nvidia.com/gpu=all jupyter-trixie:latest python3 /home/jupyter/test_cuda.py

Configuration

  • Port: 8888
  • User: jupyter (UID: 1000, GID: 1000) with sudo access
  • CUDA: Version 12.6 from NVIDIA Debian 13 repository
  • Working Directory: /home/jupyter/notebooks
  • Token/Password: Disabled by default (configure for production use)
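Since authentication is disabled by default, the sketch below shows one way to require a token at runtime. The `--ServerApp.token` option is standard Jupyter Server configuration, but whether this image's entrypoint forwards an overridden command is an assumption here — adjust to match entrypoint.sh.

```shell
# Hypothetical: require a token at runtime. Assumes the entrypoint
# lets the container command be overridden with `jupyter lab ...`.
docker run -p 8888:8888 jupyter-trixie:latest \
  jupyter lab --ServerApp.token='change-me'
```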

GPU Support

This container includes CUDA Toolkit 12.6 from NVIDIA's official repository and uses the host's NVIDIA drivers via the NVIDIA Container Toolkit. PyTorch will automatically detect and use available GPUs when the container is run with GPU access.
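The automatic GPU detection described above follows the usual PyTorch device-selection pattern, sketched here as a generic illustration — this snippet is not code shipped in the image.

```python
import torch

# Use the GPU when the container was started with GPU access
# (e.g. --gpus all), otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Any tensor created on `device` runs there transparently.
x = torch.ones(2, 2, device=device)
print(f"Running on {device.type}, sum = {x.sum().item()}")
```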

Security Notes

For production use, especially in Kubernetes:

  1. Enable authentication by setting a token or password
  2. Use HTTPS/TLS termination at the ingress level
  3. Mount persistent volumes for notebook storage
  4. Consider removing sudo access or restricting it as needed
  5. Ensure proper GPU resource limits in Kubernetes pod specs
  6. Apply appropriate network policies
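For point 5, a minimal pod spec requesting one GPU via the NVIDIA device plugin might look like the sketch below; the pod and container names are placeholders, not part of this repo.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: jupyter-trixie        # placeholder name
spec:
  containers:
    - name: jupyter
      image: jupyter-trixie:latest
      ports:
        - containerPort: 8888
      resources:
        limits:
          nvidia.com/gpu: 1   # requires the NVIDIA device plugin on the node
```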

Customization

To add more Python packages, modify requirements.txt and rebuild the image.
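For example, appending a package name to requirements.txt and rebuilding picks it up on the next run (the package chosen here is purely illustrative):

```shell
# Hypothetical: add a package, then rebuild the image
echo 'polars' >> requirements.txt
docker build -t jupyter-trixie:latest .
```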