# Jupyter on Debian Trixie

A Debian Trixie-based Docker container running Jupyter Lab as a non-root `jupyter` user, suitable for Kubernetes deployment.
## Features
- Based on Debian Trixie (slim variant)
- Runs as non-root user `jupyter` (UID/GID 1000) with sudo permissions
- CUDA Toolkit 12.6 from NVIDIA's Debian 13 repository
- PyTorch with CUDA support
- Jupyter Lab pre-configured
- Common data science packages included (numpy, pandas, matplotlib, scipy, scikit-learn, seaborn)
- Health check endpoint configured
- Ready for Kubernetes deployment with GPU support
## Building the Image

```shell
docker build -t jupyter-trixie:latest .
```
## Running Locally

Basic run (CPU only):

```shell
docker run -p 8888:8888 jupyter-trixie:latest
```

With GPU support (requires the NVIDIA Container Toolkit):

```shell
# Docker
docker run --gpus all -p 8888:8888 jupyter-trixie:latest

# Podman
podman run --device nvidia.com/gpu=all -p 8888:8888 jupyter-trixie:latest
```

Access Jupyter Lab at http://localhost:8888.
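Notebooks created inside the container are lost when it is removed unless you mount a volume. A minimal sketch, mounting a host folder over the `/home/jupyter/notebooks` working directory listed under Configuration below:

```shell
# Create a host folder and mount it over the container's working
# directory so notebooks survive container restarts.
NOTEBOOK_DIR="$PWD/notebooks"
mkdir -p "$NOTEBOOK_DIR"
# Runs detached; skipped gracefully on machines without Docker.
command -v docker >/dev/null && docker run -d -p 8888:8888 \
  -v "$NOTEBOOK_DIR":/home/jupyter/notebooks \
  jupyter-trixie:latest \
  || echo "docker not found; skipping"
```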
## Testing CUDA

To verify GPU availability, run the included test script:

```shell
# Docker
docker run --gpus all jupyter-trixie:latest python3 /home/jupyter/test_cuda.py

# Podman
podman run --device nvidia.com/gpu=all jupyter-trixie:latest python3 /home/jupyter/test_cuda.py
```
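For a quicker check without the test script, a one-line PyTorch probe works too (the same idea as `test_cuda.py`, whose exact contents are not shown here):

```shell
# One-line probe: prints True when PyTorch can reach a GPU.
PROBE='import torch; print("CUDA available:", torch.cuda.is_available())'
command -v docker >/dev/null && \
  docker run --gpus all jupyter-trixie:latest python3 -c "$PROBE" \
  || echo "docker not found or no GPU; skipping"
```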
## Configuration

- Port: 8888
- User: `jupyter` (UID 1000, GID 1000) with sudo access
- CUDA: Version 12.6 from NVIDIA's Debian 13 repository
- Working Directory: `/home/jupyter/notebooks`
- Token/Password: Disabled by default (configure for production use)
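For example, one way to turn authentication back on is to pass a token to Jupyter at startup. This is a sketch that assumes `entrypoint.sh` forwards extra container arguments to `jupyter lab`; check `entrypoint.sh` and adjust if it does not:

```shell
# Generate a random 48-character token and hand it to Jupyter Lab.
TOKEN="$(openssl rand -hex 24)"
echo "Jupyter token: $TOKEN"
# Assumes the image's entrypoint forwards these args to `jupyter lab`.
command -v docker >/dev/null && docker run -d -p 8888:8888 \
  jupyter-trixie:latest --ServerApp.token="$TOKEN" \
  || echo "docker not found; skipping"
```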
## GPU Support
This container includes CUDA Toolkit 12.6 from NVIDIA's official repository and uses the host's NVIDIA drivers via the NVIDIA Container Toolkit. PyTorch will automatically detect and use available GPUs when the container is run with GPU access.
## Security Notes
For production use, especially in Kubernetes:
- Enable authentication by setting a token or password
- Use HTTPS/TLS termination at the ingress level
- Mount persistent volumes for notebook storage
- Consider removing sudo access or restricting it as needed
- Ensure proper GPU resource limits in Kubernetes pod specs
- Apply appropriate network policies
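A GPU resource limit in a Kubernetes pod spec might look like the following sketch (the pod and image names are placeholders; your registry path will differ):

```yaml
# Sketch: a pod requesting one NVIDIA GPU via the device plugin.
apiVersion: v1
kind: Pod
metadata:
  name: jupyter-trixie
spec:
  containers:
    - name: jupyter
      image: jupyter-trixie:latest
      ports:
        - containerPort: 8888
      resources:
        limits:
          nvidia.com/gpu: 1
```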
## Customization
To add more Python packages, modify requirements.txt and rebuild the image.
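For example, appending a package (here `polars`, as an arbitrary illustration) and rebuilding bakes it into the image:

```shell
# Add a package to requirements.txt, then rebuild the image.
echo "polars" >> requirements.txt
command -v docker >/dev/null && docker build -t jupyter-trixie:latest . \
  || echo "docker not found; skipping build"
```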