Setting up a reliable deep learning environment requires tight alignment between GPU hardware, kernel modules, and NVIDIA’s software stack. This guide documents a known-stable configuration for RTX 3060 (LHR) on Ubuntu 22.04 LTS using CUDA 11.6 and cuDNN 8.8.
🧩 System Baseline & Prerequisites #
Confirm the GPU model and ensure the system sees the device correctly.
lspci -vnn | grep VGA
# Expected: NVIDIA Corporation GA106 [GeForce RTX 3060 Lite Hash Rate]
This guide assumes:
- Ubuntu 22.04 LTS (standard kernel)
- A clean system, or one with all previous NVIDIA drivers fully purged
- Secure Boot disabled (recommended for simplicity)
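To confirm Secure Boot really is off before proceeding, mokutil can report the current state (install the mokutil package if it is missing):
mokutil --sb-state
# Expected: SecureBoot disabled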
🚫 Disable the Nouveau Driver #
The open-source nouveau driver conflicts with NVIDIA’s proprietary driver and does not support CUDA.
Create a blacklist file:
sudo vim /etc/modprobe.d/blacklist-nouveau.conf
Add the following:
blacklist nouveau
options nouveau modeset=0
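If you would rather skip the editor, the same file can be written in a single step with tee:
sudo tee /etc/modprobe.d/blacklist-nouveau.conf > /dev/null <<'EOF'
blacklist nouveau
options nouveau modeset=0
EOF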
Update initramfs and reboot:
sudo update-initramfs -u
sudo reboot
Verify Nouveau is disabled after reboot:
lsmod | grep nouveau
# No output means success
🧠 Install NVIDIA Driver (510 Series) #
CUDA 11.6 requires the NVIDIA 510 driver branch (510.39.01 or newer), a stable, long-lived series well suited to this card.
Check the recommended driver:
ubuntu-drivers devices
Install the driver:
sudo apt install nvidia-driver-510 -y
sudo reboot
Verify installation:
nvidia-smi
You should see:
- GPU: RTX 3060
- Driver Version: 510.xx
- CUDA Version: 11.6 (the highest CUDA runtime this driver supports, not proof that a toolkit is installed)
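For a script-friendly check, nvidia-smi can also print just the fields of interest:
nvidia-smi --query-gpu=name,driver_version --format=csv,noheader
# Expected: NVIDIA GeForce RTX 3060, 510.xx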
⚙️ Install CUDA Toolkit 11.6 #
The CUDA Toolkit provides nvcc, runtime libraries, and developer tools.
Download the local runfile from NVIDIA’s CUDA archive to avoid apt conflicts.
Run the installer:
sudo sh cuda_11.6.0_510.39.01_linux.run
During installation:
- Uncheck NVIDIA Driver (already installed)
- Install Toolkit only
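The same selection can be made non-interactively: the runfile installer accepts flags that skip the menu and install only the toolkit (confirm the exact flags with --help for your runfile version):
sudo sh cuda_11.6.0_510.39.01_linux.run --silent --toolkit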
Environment Variables #
Append to ~/.bashrc:
export PATH=/usr/local/cuda-11.6/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-11.6/lib64:$LD_LIBRARY_PATH
Apply changes:
source ~/.bashrc
nvcc --version
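Beyond nvcc --version, compiling and running a trivial kernel confirms that the toolkit and driver work together (the file name and location here are arbitrary):
cat > /tmp/hello.cu <<'EOF'
#include <cstdio>
__global__ void hello() { printf("Hello from the GPU\n"); }
int main() { hello<<<1, 1>>>(); cudaDeviceSynchronize(); return 0; }
EOF
nvcc /tmp/hello.cu -o /tmp/hello && /tmp/hello
# Expected: Hello from the GPU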
🧠 Install cuDNN 8.8 #
cuDNN accelerates deep learning primitives for frameworks like PyTorch and TensorFlow.
Install via NVIDIA’s local repository package for Ubuntu 22.04:
sudo dpkg -i cudnn-local-repo-ubuntu2204-8.8.1.3_1.0-1_amd64.deb
sudo cp /var/cudnn-local-repo-*/cudnn-local-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get install libcudnn8 libcudnn8-dev libcudnn8-samples
Verify headers and libraries:
ls /usr/include/cudnn*.h
ls /usr/lib/x86_64-linux-gnu/libcudnn*
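To confirm the exact cuDNN release, read the version macros from the header shipped by libcudnn8-dev (cuDNN 8.x keeps them in cudnn_version.h):
grep -A 2 '#define CUDNN_MAJOR' /usr/include/cudnn_version.h
# Expected: CUDNN_MAJOR 8, CUDNN_MINOR 8, CUDNN_PATCHLEVEL 1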
🚀 Validation & Performance Test #
Use a minimal Numba CUDA kernel to confirm GPU execution (requires the numba package, e.g. pip install numba).
import numpy as np
from numba import cuda

@cuda.jit
def increment_kernel(arr):
    idx = cuda.grid(1)
    if idx < arr.size:
        arr[idx] += 1

data = np.zeros(10_000_000, dtype=np.int32)
d_data = cuda.to_device(data)

# Launch enough blocks to cover every element; repeat so GPU activity is visible in nvidia-smi
threads_per_block = 256
blocks_per_grid = (data.size + threads_per_block - 1) // threads_per_block
for _ in range(1000):
    increment_kernel[blocks_per_grid, threads_per_block](d_data)
cuda.synchronize()

# Copy back and confirm every element was incremented on the device
assert (d_data.copy_to_host() == 1000).all()
print("GPU kernel executed correctly")
While the script runs, check GPU activity from a second terminal:
nvidia-smi
You should see Python consuming GPU memory and compute.
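If a deep learning framework is already installed, a one-liner can confirm that it sees both the GPU and cuDNN. For example, with a CUDA-enabled PyTorch build (installing PyTorch itself is outside the scope of this guide):
python3 -c "import torch; print(torch.cuda.get_device_name(0), torch.version.cuda, torch.backends.cudnn.version())"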
🛠️ Troubleshooting FAQ #
| Issue | Resolution |
|---|---|
| Broken packages after install | Purge old drivers: sudo apt remove --purge 'nvidia*' (fuller sketch below) |
| nvidia-driver-530-open installed | Avoid the open-kernel drivers; use nvidia-driver-510 |
| DKMS build failure | Ensure matching kernel headers are installed (linux-headers-$(uname -r)) |
| nvvp display error | Requires a local desktop session or X forwarding (ssh -X) |
| CUDA not found | Recheck PATH and LD_LIBRARY_PATH |
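A fuller version of the purge from the first row (the quotes stop the shell from expanding the glob against local files; review apt's removal list before confirming):
sudo apt remove --purge 'nvidia*'
sudo apt autoremove
sudo reboot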
🧩 Final Notes #
This RTX 3060 + CUDA 11.6 + cuDNN 8.8 stack is a conservative, production-proven configuration. While newer CUDA releases exist, this pairing prioritizes driver stability, framework compatibility, and reproducibility, making it well-suited for long-running training workloads on Ubuntu 22.04.