Living Document: This post serves as my personal knowledge base for JupyterLab. It is updated regularly with new configurations, extension compatibilities, and deployment scripts.

1. Infrastructure & Deployment

Moving from “just running a script” to a production-ready environment.

1.1 The “Speed” Stack: Conda + uv (v4.5.0)

Using uv inside Conda provides a significant speed boost for dependency resolution.

# 1. Create isolated environment  
conda create -n my_env python=3.11

# 2. Install Core (Pinning version 4.5.0)  
conda install -c conda-forge jupyterlab=4.5.0 uv=0.9.10

# 3. Prepare Data Directory (Adjust path as needed)  
mkdir -p /run/media/MyThings/ServiceData/jupyterlab_rgsoft-data  
# Note: 777 is world-writable; on shared hosts prefer 755 plus chown to the service user  
chmod -R 777 /run/media/MyThings/ServiceData/jupyterlab_rgsoft-data
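Once the environment exists, day-to-day installs can go through uv instead of plain pip. A minimal sketch (package names here are illustrative, not part of the pinned stack):

```shell
# Activate the env so uv targets its site-packages
conda activate my_env

# uv resolves and installs into the active environment's Python
uv pip install pandas matplotlib

# Preview a resolution without touching the env
uv pip install "scikit-learn>=1.4" --dry-run
```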

1.2 Configuration as Code

Instead of passing a dozen flags on the command line every time, generate a jupyter_lab_config.py and manage settings as code.

Step 1: Generate the file

jupyter lab --generate-config

Step 2: Edit ~/.jupyter/jupyter_lab_config.py

Map the CLI arguments to this Python config:

c = get_config()

# --- Network & Security ---  
c.ServerApp.ip = '0.0.0.0'  
c.ServerApp.port = 8888  
c.ServerApp.allow_remote_access = True

# Security: Disable tokens for internal trusted networks (Use with caution)  
c.ServerApp.token = ''  
c.ServerApp.password = ''  
c.ServerApp.disable_check_xsrf = True

# Content Security Policy (Allow embedding in iFrames)  
c.ServerApp.tornado_settings = {  
    'headers': {  
        'Content-Security-Policy': "frame-ancestors *"  
    }  
}  
c.ServerApp.trust_xheaders = True

# --- Paths & Browser ---  
c.ServerApp.root_dir = '/run/media/MyThings/ServiceData/jupyterlab_rgsoft-data'  
c.ServerApp.default_url = '/tree'  
c.ServerApp.open_browser = False
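To confirm the file is actually being picked up, Jupyter's traitlets-based apps can print their search paths and effective configuration (assuming the config lives in the default ~/.jupyter location):

```shell
# Show where Jupyter looks for config files
jupyter --paths

# Print the merged, effective configuration (traitlets feature)
jupyter lab --show-config
```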

1.3 Service Daemonization (Systemd)

To ensure JupyterLab starts on boot and restarts on failure, run it as a systemd service.

File: /etc/systemd/system/jupyterlab.service

[Unit]  
Description=JupyterLab Service  
After=network.target

[Service]  
# Running as root matches the /root/.jupyter config path below,  
# but a dedicated non-root user is safer in production  
User=root  
Group=root

# Path to your working directory  
WorkingDirectory=/run/media/MyThings/ServiceData/jupyterlab_rgsoft-data

# Point to the python executable inside your conda env  
ExecStart=/path/to/miniconda3/envs/my_env/bin/python -m jupyter lab --config=/root/.jupyter/jupyter_lab_config.py

# Auto-restart if it crashes  
Restart=always  
RestartSec=10

[Install]  
WantedBy=multi-user.target

Commands:

sudo systemctl daemon-reload  
sudo systemctl enable jupyterlab  
sudo systemctl start jupyterlab
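After starting the unit, a few follow-up commands help verify that the service is healthy and that the restart policy actually works:

```shell
# Current state and most recent log lines
sudo systemctl status jupyterlab

# Follow the service log live (useful after a crash or restart)
sudo journalctl -u jupyterlab -f

# Simulate a failure to confirm Restart=always kicks in
sudo systemctl kill jupyterlab
sleep 15 && systemctl is-active jupyterlab
```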

2. Kernel Management

By default, JupyterLab only “sees” the environment it is installed in. Here is how to expose other Conda environments.

2.1 Approach A: nb_conda_kernels (Automatic)

The nb_conda_kernels extension automatically detects any Conda environment that has ipykernel installed.

  1. Install the extension in Jupyter’s env:
conda install -n my_env nb_conda_kernels
  2. Install a kernel in each target env:
conda install -n target_env_name ipykernel

2.2 Approach B: Manual Registration (Cleanest)

Use this approach to keep your launcher clean and expose only specific production environments.

# 1. Activate the target env  
conda activate target_env

# 2. Install ipykernel  
pip install ipykernel

# 3. Register it to Jupyter  
python -m ipykernel install --user --name=my_project_env --display-name "Python (My Project)"
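Registration can be verified, and later undone, with the kernelspec CLI:

```shell
# List registered kernels; my_project_env should appear
jupyter kernelspec list

# Remove the kernel later without touching the env itself
jupyter kernelspec uninstall my_project_env
```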

3. Multimedia & Interactivity

JupyterLab uses IPython.display to render rich media.

3.1 Audio Playback

Scenario 1: Standard File Playback

Works if the file is within the Jupyter root directory.

from IPython.display import Audio  
Audio("assets/music.mp3")

Scenario 2: Binary Stream (Permission/Path Issues)

Use this if the file resides outside the root_dir or on a mounted drive that static file serving cannot reach.

# Read file into memory first  
with open("/abs/path/to/external/music.mp3", "rb") as f:  
    audio_data = f.read()

Audio(audio_data)

Scenario 3: NumPy Arrays (AI Generation)

Ideal for verifying audio processing or generation models (e.g., Librosa output).

import numpy as np  
from IPython.display import Audio

sr = 44100  
t = np.linspace(0, 1, sr)  
data = np.sin(2 * np.pi * 440 * t) # 440Hz Sine Wave

Audio(data, rate=sr)
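Building on the sine-wave example, a slightly richer sketch: summing several sines gives a chord, and a short fade-out avoids the audible click a raw sine produces when it stops mid-cycle. The frequencies, duration, and fade length below are illustrative choices, not anything the document prescribes:

```python
import numpy as np

sr = 22050          # sample rate (Hz); lower rate keeps the array small
dur = 0.5           # seconds
t = np.linspace(0, dur, int(sr * dur), endpoint=False)

# A simple A-major chord: sum three sine waves, then normalize to [-1, 1]
chord = sum(np.sin(2 * np.pi * f * t) for f in (440.0, 554.37, 659.25))
chord /= np.abs(chord).max()

# Linear fade-out over the last 50 ms to avoid a click at the end
fade_len = int(0.05 * sr)
chord[-fade_len:] *= np.linspace(1.0, 0.0, fade_len)

# In a notebook cell, hand the array to IPython as before:
#   from IPython.display import Audio
#   Audio(chord, rate=sr)
```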

3.2 Video

from IPython.display import Video  
# embed=True allows the video to persist in the notebook file  
Video("simulation_result.mp4", embed=True)

4. Extensions Ecosystem

4.1 The Data Science Stack

Versions verified with uv. The --dry-run flag resolves dependencies without installing anything; rerun the same command without it to perform the actual install.

py3.11 + jupyterlab 4.5

uv pip install voila==0.5.11 --dry-run
uv pip install ipywidgets numpy pandas matplotlib scikit-learn pyarrow fastparquet
  • Mito (Excel-like Sheet):
uv pip install mitosheet==0.2.52 mito-ai==0.1.52 --dry-run
  • Elyra (Visual Pipelines):
uv pip install "elyra[all]==4.0.0" protobuf==3.20.3 --dry-run

4.2 Dependency Conflict: Elyra vs. jupyter-ai

⚠️ WARNING: As of Nov 2025, there is a hard conflict between Elyra and Jupyter-AI.

  • Issue: Elyra 4.x strictly requires protobuf==3.20.3.
  • Conflict: Modern jupyter-ai versions require newer Protobuf versions.
  • Resolution:
    1. Prioritize Elyra: Do not install jupyter-ai. Elyra has some built-in AI snippets.
    2. Prioritize Jupyter-AI: You must uninstall Elyra.
    3. Forced Coexistence (Not Recommended): You must downgrade jupyter-ai significantly:
    uv pip install jupyter-ai==1.15.0 jupyter-ai-magics==2.15.0 pydantic==2.12.4 langchain-core==0.1.52 langchain-community==0.0.38 langchain-openai==0.1.7 langchain-nvidia-ai-endpoints==0.0.17 langchain-anthropic==0.1.13 langchain-google-genai==1.0.4 protobuf==3.20.3 --dry-run 
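Before committing to any of the three options, the conflict can be reproduced non-destructively with the resolver, and a completed install can be sanity-checked afterwards (a sketch, using the versions pinned above):

```shell
# Dry-run both stacks together to watch the resolver fail fast
uv pip install "elyra[all]==4.0.0" jupyter-ai --dry-run

# After a real install, verify no installed package has broken requirements
uv pip check
```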
    

5. Alternative: Docker Stacks

For quick migration or testing without polluting the host system.

Official Docs: Jupyter Docker Stacks

Equivalent Run Command: Maps the same data folder and port as the Conda setup above.

docker run -d \
  -p 8888:8888 \
  -v /run/media/MyThings/ServiceData/jupyterlab_rgsoft-data:/home/jovyan/work \
  --name jupyter_lab \
  --restart always \
  quay.io/jupyter/datascience-notebook:latest \
  start-notebook.py --NotebookApp.token='' --NotebookApp.password=''
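A couple of follow-up commands for the container route (the container name matches the run command above; the pyarrow install is just an example):

```shell
# Tail the server log (startup messages, warnings)
docker logs -f jupyter_lab

# Install an extra package inside the running container
docker exec jupyter_lab pip install --quiet pyarrow

# Note: such changes die with the container; bake a custom image to keep them
```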