A standard operating procedure for adding new SSDs to a Linux server, including UUID configuration, fstab dump/pass mechanics, and an explanation of storage capacity discrepancies (TB vs GiB) and ext4 reserved blocks.
This is a technical memorandum documenting the process of adding a new SSD to a Linux server. It covers the standard operating procedure (SOP) for partitioning and mounting, followed by a deep dive into the mechanics of fstab, unit conversion, and filesystem reservation strategies.
Critical Safety Warning
WARNING: Never rely on device names like /dev/sdb or /dev/sdc alone. These identifiers can change between reboots if hardware configurations change.
Before performing any destructive operations (wipefs, gdisk, mkfs), always cross-reference the device using lsblk -f.
Checklist:
- Mount Points: Ensure the target is NOT mounted on critical paths like /, /boot, or /boot/efi.
- Capacity: Verify the size matches your expectation (e.g., a “1TB” drive should appear as roughly 931G). A quick check covering both items is sketched below.
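Using /dev/sdx as a placeholder for your target device, a minimal check looks like:
# Show size, filesystem signatures, and current mount points for the disk and its partitions
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/sdx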
SOP: Add New Disk (ext4)
This workflow assumes a new unformatted disk or a disk to be repurposed.
1. Identify and Unmount
First, identify the target disk and ensure it is not currently in use.
lsblk -f
# If mounted, unmount it first:
sudo umount /dev/sdx*
2. Wipe Partition Table
Remove all existing filesystem, raid, or partition table signatures to ensure a clean slate.
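If you want to see what is currently on the disk before erasing it, running wipefs with no options only lists the detected signatures and writes nothing:
# Read-only: list existing filesystem/RAID/partition-table signatures
sudo wipefs /dev/sdx
Then remove all of them with -a: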
sudo wipefs -a /dev/sdx
3. Create New GPT Partition
Use gdisk to create a standard GPT partition table.
sudo gdisk /dev/sdx
Interactive keystroke sequence:
- o -> Y: Create a new empty GPT partition table.
- n: Add a new partition.
- 1: Partition number (default).
- Enter: First sector (default start).
- Enter: Last sector (default end/full size).
- w -> Y: Write changes to disk and exit.
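For automation, the same result can be achieved non-interactively with sgdisk (shipped alongside gdisk); a sketch assuming a single partition spanning the whole disk:
# Clear the partition table, then create one Linux filesystem partition (type 8300) using the full disk
sudo sgdisk --clear --new=1:0:0 --typecode=1:8300 /dev/sdx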
4. Format Filesystem
Format the new partition (sdx1) as ext4.
sudo mkfs.ext4 /dev/sdx1
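Optionally, a filesystem label and a smaller root-block reservation can be set at format time; both -L and -m are standard mkfs.ext4 flags (the reservation is discussed in the deep dive below):
# -L assigns a human-readable label; -m 1 reserves 1% for root instead of the default 5%
sudo mkfs.ext4 -L MyNewStorage -m 1 /dev/sdx1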
5. Create Mount Point
Create the directory where the drive will be accessed.
sudo mkdir /mnt/MyNewStorage
6. Persistent Mounting (UUID)
Retrieve the universally unique identifier (UUID) to ensure stable mounting across reboots.
sudo blkid /dev/sdx1
# Example Output: UUID="550e8400-e29b-41d4-a716-446655440000"
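For scripting, the UUID can also be captured directly; a sketch using standard blkid options:
# Print only the UUID value of the new partition and store it in a shell variable
NEW_UUID=$(sudo blkid -s UUID -o value /dev/sdx1)
echo "$NEW_UUID"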
Edit /etc/fstab:
sudo vim /etc/fstab
Add the following line:
UUID="<YOUR-UUID-HERE>" /mnt/MyNewStorage ext4 defaults 0 2
7. Verification
Test the configuration without rebooting to prevent boot failures due to typos.
sudo mount -a
df -h /mnt/MyNewStorage
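If mount -a completes without errors, the entry is valid. Two optional follow-up checks (findmnt is part of util-linux; the daemon-reload step applies to systemd-based distributions, which generate mount units from fstab):
# Confirm source, target, filesystem type, and options of the new mount
findmnt /mnt/MyNewStorage
# Reload systemd's view of fstab after editing it
sudo systemctl daemon-reload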
Technical Deep Dive
1. Understanding fstab Configuration
The last two fields in an fstab entry (0 2) are often copied without understanding. They control dump and pass.
- Field 5: <dump> (backup flag)
- 0 (Disabled): Most common. Tells the dump backup utility to ignore this filesystem.
- 1 (Enabled): Rarely used in modern setups.
- Field 6: <pass> (fsck order)
- Controls the order in which fsck checks filesystems at boot.
- 0: Do not check. (Use for swap, removable media, or non-critical partitions).
- 1: Highest priority. Reserved strictly for the root (/) filesystem.
- 2: Secondary priority. These partitions are checked after the root filesystem has been verified (entries sharing pass 2 on different physical disks may be checked in parallel).
Why 0 2?
Using 2 ensures that if the server suffers an unclean shutdown (power loss), the system will automatically attempt to repair filesystem inconsistencies on this data drive during the next boot, ensuring data integrity.
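As an illustration (the UUIDs are placeholders), a typical fstab pairs pass values with roles like this:
# <file system>    <mount point>      <type> <options> <dump> <pass>
UUID=<root-uuid>   /                  ext4   defaults   0      1
UUID=<data-uuid>   /mnt/MyNewStorage  ext4   defaults   0      2
UUID=<swap-uuid>   none               swap   sw         0      0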
2. The “Missing” Storage: Decimal vs. Binary
A common confusion is why a “1TB” drive appears as only “931G” in Linux.
- Manufacturers (Decimal / Base-10): 1TB = 1,000,000,000,000 bytes
- Operating Systems (Binary / Base-2): 1GiB = 1024^3 bytes = 1,073,741,824 bytes
Linux tools like lsblk and df -h report sizes using binary prefixes (GiB), hence the discrepancy: 1,000,000,000,000 bytes ÷ 1024^3 ≈ 931.3 GiB, which is displayed as 931G.
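A quick back-of-the-envelope check of the conversion (assuming bc is installed):
# 1 TB (decimal) expressed in GiB (binary)
echo "scale=1; 1000000000000 / (1024^3)" | bc
# Output: 931.3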
3. ext4 Default 5% Root Reservation
Even after accounting for unit conversion, a 931G volume might show only ~885G as “Available”. This is due to ext4 reserved blocks.
By default, mkfs.ext4 reserves 5% of the total blocks for the super-user (root).
Purpose: This is a safety mechanism designed for the system drive (/). If a user or runaway process fills the disk to 100%, the system could become unresponsive (unable to write logs or create temp files). The 5% buffer ensures that root can still log in to fix the issue.
Optimization for Data Drives: For a purely data storage drive (non-boot), 5% can be excessive (e.g., 50GB on a 1TB drive).
Note: While keeping the default is safer, you can reduce this reservation to 1% or 0% using tune2fs if you strictly need the space:
# Reduce reservation to 1%
sudo tune2fs -m 1 /dev/sdx1
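To confirm the change (or to check the current reservation before deciding), the ext4 superblock can be inspected with tune2fs:
# Show the reserved block count from the superblock
sudo tune2fs -l /dev/sdx1 | grep -i "reserved block count"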