The YY3588 development board provides a PCIe 3.0 x4 interface, which accepts an M.2 M-Key 2280 SSD supporting the NVMe protocol, as shown in the figure:
*(figure: M.2 NVMe SSD installed in the PCIe 3.0 x4 slot)*
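Before mounting, it is worth confirming that the kernel has detected the SSD. A quick check (device names may differ depending on the drive):

```bash
### The NVMe controller should appear in the PCI device list
$ lspci | grep -i 'non-volatile'
### The block device node should exist
$ ls /dev/nvme*
```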
### 2.1 Manual Mounting

```bash
### View the disk
$ sudo fdisk -l
...
Disk /dev/nvme0n1: 476.94 GiB, 512110190592 bytes, 1000215216 sectors
Disk model: CF600 512GB
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x3a6ba98f

Device         Boot Start        End    Sectors   Size Id Type
/dev/nvme0n1p1      2048 1000215215 1000213168 476.9G 83 Linux
...

### Mount
$ sudo mkdir -p /mnt/ssd
$ sudo mount /dev/nvme0n1p1 /mnt/ssd
```
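To detach the disk again (for example before powering off and removing it), unmount it first:

```bash
### Unmount when the disk is no longer needed
$ sudo umount /mnt/ssd
```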
### 2.2 Automatic Mounting
#### 2.2.1 Get the UUID of the Device
```bash
### Check the UUID of the Device
$ sudo blkid /dev/nvme0n1
$
### If no UUID is output, the disk has not been partitioned yet (if a UUID is shown, jump directly to `Check the device UUID again`)
### Check the partition status
$ sudo fdisk -l /dev/nvme0n1
Disk /dev/nvme0n1: 476.94 GiB, 512110190592 bytes, 1000215216 sectors
Disk model: CF600 512GB
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
### If the output ends here, with no partition such as /dev/nvme0n1p1 listed, the disk has not been partitioned
### Create a partition (here we create a primary partition for demonstration)
$ sudo fdisk /dev/nvme0n1
Welcome to fdisk (util-linux 2.38.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table.
Created a new DOS (MBR) disklabel with disk identifier 0x3a6ba98f.
Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-1000215215, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-1000215215, default 1000215215):
Created a new partition 1 of type 'Linux' and of size 476.9 GiB.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
### Format the newly created partition as ext4
$ sudo mkfs.ext4 /dev/nvme0n1p1
mke2fs 1.47.0 (5-Feb-2023)
Discarding device blocks: done
Creating filesystem with 125026646 4k blocks and 31260672 inodes
Filesystem UUID: d068a5fd-b7bb-4f80-86dd-aa6a036f86d9
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
        2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616,
        78675968, 102400000

Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done
### Check the device UUID again
$ sudo blkid /dev/nvme0n1p1
/dev/nvme0n1p1: UUID="d068a5fd-b7bb-4f80-86dd-aa6a036f86d9" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="3a6ba98f-01"
```

#### 2.2.2 Modify the /etc/fstab File

```bash
### Append the following mount entry to /etc/fstab, then check the file
$ cat /etc/fstab
...
UUID="d068a5fd-b7bb-4f80-86dd-aa6a036f86d9" /mnt/ssd ext4 defaults 0 2
### Mount
$ sudo mount -a
mount: (hint) your fstab has been modified, but systemd still uses
the old version; use 'systemctl daemon-reload' to reload.
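### Reload systemd's view of fstab, as the hint suggests (assuming a systemd-based image)
$ sudo systemctl daemon-reload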
### Verify (you can see that it has been mounted successfully)
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
...
/dev/nvme0n1p1 469G 28K 445G 1% /mnt/ssd
$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
...
nvme0n1 259:0 0 476.9G 0 disk
└─nvme0n1p1 259:1 0 476.9G 0 part /mnt/ssd
$ cat /proc/mounts | grep nvme
/dev/nvme0n1p1 /mnt/ssd ext4 rw,relatime 0 0
### Restart, then re-run df -h to verify that the partition is mounted automatically
$ sudo reboot
```
## 3 Read and Write Speed Test

The theoretical bandwidth of each PCIe generation, per slot width, is:

| Protocol\Slot | x1 | x2 | x4 | x8 | x16 |
| --- | --- | --- | --- | --- | --- |
| PCIe 1.0 | 250MB/s | 500MB/s | 1GB/s | 2GB/s | 4GB/s |
| PCIe 2.0 | 500MB/s | 1GB/s | 2GB/s | 4GB/s | 8GB/s |
| PCIe 3.0 | 1GB/s | 2GB/s | 4GB/s | 8GB/s | 16GB/s |
| PCIe 4.0 | 2GB/s | 4GB/s | 8GB/s | 16GB/s | 32GB/s |
From the table, we can see that the theoretical transfer rate of PCIe 3.0 x4 is up to 4 GB/s.
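This matches the per-lane arithmetic: PCIe 3.0 signals at 8 GT/s with 128b/130b encoding, i.e. 8 GT/s × 128/130 ÷ 8 bits ≈ 0.985 GB/s per lane, so four lanes give roughly 3.94 GB/s before protocol overhead.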
### 3.1 Use `dd` command for read and write test

```bash
### Write 1GB of data to the disk
### Warning: this writes raw data to /dev/nvme0n1p1 and destroys the filesystem
### on it; run such tests before formatting, or re-create the filesystem afterwards
$ sudo dd if=/dev/zero of=/dev/nvme0n1p1 bs=1M count=1024 oflag=direct
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.930404 s, 1.2 GB/s
### Read 1GB file from disk
$ sudo dd if=/dev/nvme0n1p1 of=/dev/null bs=1M count=1024 iflag=direct
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.513896 s, 2.1 GB/s
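### oflag=direct / iflag=direct bypass the page cache, so the figures above reflect
### the disk itself; without them, dd would mostly measure RAM cache speed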
```

### 3.2 Use `fio` command for read and write test

```bash
### Note: as with dd, writing to /dev/nvme0n1p1 directly destroys any filesystem on it
$ sudo apt update
$ sudo apt install fio
### Sequential write performance test
$ sudo fio --name=seqwrite --ioengine=sync --rw=write --bs=1M --numjobs=1 --size=10G --runtime=60s --time_based --output-format=normal --direct=1 --filename=/dev/nvme0n1p1
seqwrite: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=sync, iodepth=1
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][w=1265MiB/s][w=1265 IOPS][eta 00m:00s]
seqwrite: (groupid=0, jobs=1): err= 0: pid=4154: Tue Dec 10 03:34:39 2024
  write: IOPS=1273, BW=1273MiB/s (1335MB/s)(74.6GiB/60001msec); 0 zone resets
    clat (usec): min=545, max=3320, avg=742.53, stdev=127.38
     lat (usec): min=566, max=3353, avg=780.76, stdev=127.60
    clat percentiles (usec):
     |  1.00th=[  594],  5.00th=[  619], 10.00th=[  627], 20.00th=[  644],
     | 30.00th=[  652], 40.00th=[  660], 50.00th=[  676], 60.00th=[  717],
     | 70.00th=[  775], 80.00th=[  906], 90.00th=[  938], 95.00th=[  979],
     | 99.00th=[ 1057], 99.50th=[ 1074], 99.90th=[ 1139], 99.95th=[ 1172],
     | 99.99th=[ 2008]
   bw (  MiB/s): min= 1239, max= 1301, per=100.00%, avg=1274.90, stdev=10.18, samples=119
   iops        : min= 1239, max= 1301, avg=1274.77, stdev=10.25, samples=119
  lat (usec)   : 750=67.43%, 1000=28.95%
  lat (msec)   : 2=3.60%, 4=0.01%
  cpu          : usr=5.89%, sys=18.73%, ctx=76982, majf=0, minf=12
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,76397,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=1273MiB/s (1335MB/s), 1273MiB/s-1273MiB/s (1335MB/s-1335MB/s), io=74.6GiB (80.1GB), run=60001-60001msec

Disk stats (read/write):
  nvme0n1: ios=32/610111, merge=0/0, ticks=3/215014, in_queue=215016, util=100.00%

### Sequential read performance test
$ sudo fio --name=seqread --ioengine=sync --rw=read --bs=1M --numjobs=1 --size=10G --runtime=60s --time_based --output-format=normal --direct=1 --filename=/dev/nvme0n1p1
seqread: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=sync, iodepth=1
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [R(1)][100.0%][r=2134MiB/s][r=2134 IOPS][eta 00m:00s]
seqread: (groupid=0, jobs=1): err= 0: pid=4193: Tue Dec 10 03:36:20 2024
  read: IOPS=2110, BW=2110MiB/s (2213MB/s)(124GiB/60001msec)
    clat (usec): min=362, max=2523, avg=470.19, stdev=42.26
     lat (usec): min=362, max=2526, avg=470.56, stdev=42.35
    clat percentiles (usec):
     |  1.00th=[  396],  5.00th=[  412], 10.00th=[  420], 20.00th=[  445],
     | 30.00th=[  457], 40.00th=[  465], 50.00th=[  465], 60.00th=[  474],
     | 70.00th=[  478], 80.00th=[  494], 90.00th=[  510], 95.00th=[  537],
     | 99.00th=[  594], 99.50th=[  635], 99.90th=[  807], 99.95th=[  848],
     | 99.99th=[ 1020]
   bw (  MiB/s): min= 1962, max= 2305, per=100.00%, avg=2113.48, stdev=96.21, samples=119
   iops        : min= 1962, max= 2305, avg=2113.33, stdev=96.24, samples=119
  lat (usec)   : 500=84.38%, 750=15.38%, 1000=0.22%
  lat (msec)   : 2=0.01%, 4=0.01%
  cpu          : usr=1.29%, sys=29.90%, ctx=127638, majf=0, minf=266
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=126620,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=2110MiB/s (2213MB/s), 2110MiB/s-2110MiB/s (2213MB/s-2213MB/s), io=124GiB (133GB), run=60001-60001msec

Disk stats (read/write):
nvme0n1: ios=1011295/0, merge=0/0, ticks=219401/0, in_queue=219401, util=100.00%
### Random write performance test (the following test results are omitted)
$ sudo fio --name=randwrite --ioengine=sync --rw=randwrite --bs=4k --numjobs=8 --size=10G --runtime=60s --time_based --output-format=normal --direct=1 --filename=/dev/nvme0n1p1
### Random read performance test
$ sudo fio --name=randread --ioengine=sync --rw=randread --bs=4k --numjobs=8 --size=10G --runtime=60s --time_based --output-format=normal --direct=1 --filename=/dev/nvme0n1p1
### Test multithreaded performance
$ sudo fio --name=multiwrite --ioengine=sync --rw=write --bs=1M --numjobs=16 --size=10G --runtime=60s --time_based --output-format=normal --direct=1 --filename=/dev/nvme0n1p1
### Mixed read and write test (50% read, 50% write)
$ sudo fio --name=mixedreadwrite --ioengine=sync --rw=randrw --rwmixread=50 --bs=4k --numjobs=4 --size=10G --runtime=60s --time_based --output-format=normal --direct=1 --filename=/dev/nvme0n1p1
```

- `--filename=/dev/nvme0n1p1`: specifies the test target as the block device /dev/nvme0n1p1.
- `--direct=1`: enables direct I/O, bypassing the file system cache and interacting directly with the disk.
- `--size=10G`: sets the size of the tested region to 10GB.

### 3.3 Use `hdparm` command for read and write test

```bash
$ sudo apt install hdparm
### Test sequential read performance
$ sudo hdparm -t --direct /dev/nvme0n1p1
### Test cache read performance
$ sudo hdparm -T /dev/nvme0n1p1
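### -t reads from the device and bypasses the cache (disk throughput);
### -T reads from the Linux buffer cache (memory throughput, a useful baseline)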
```

### 3.4 Use `ioping` command for read and write test

```bash
$ sudo apt install ioping
### Test disk read latency (the default ioping mode)
$ sudo ioping -c 10 /dev/nvme0n1p1
### Test disk write latency (-W must be given three times to allow writes to a raw device)
$ sudo ioping -c 10 -WWW /dev/nvme0n1p1
### Test continuous I/O performance
$ sudo ioping -c 1000 /dev/nvme0n1p1
```
### 3.5 Use `stress-ng` tool for read and write test
```bash
$ sudo apt install stress-ng
### stress-ng's hdd stressor exercises temporary files, so point --temp-path at the
### mounted SSD rather than at the raw device
### Test the sequential write performance of the disk
$ sudo stress-ng --hdd 1 --hdd-opts wr-seq --hdd-write-size 1M --timeout 60s --temp-path /mnt/ssd --metrics-brief
### Test the sequential read performance of the disk
$ sudo stress-ng --hdd 1 --hdd-opts rd-seq --timeout 60s --temp-path /mnt/ssd --metrics-brief
### Test the random write performance of the disk
$ sudo stress-ng --hdd 8 --hdd-opts wr-rnd --hdd-write-size 4k --timeout 60s --temp-path /mnt/ssd --metrics-brief
### Test the random read performance of the disk
$ sudo stress-ng --hdd 8 --hdd-opts rd-rnd --timeout 60s --temp-path /mnt/ssd --metrics-brief
### Test mixed load (random read and write)
$ sudo stress-ng --hdd 8 --hdd-opts wr-rnd,rd-rnd --hdd-write-size 4k --timeout 60s --temp-path /mnt/ssd --metrics-brief
### Test concurrent I/O performance of multiple workers
$ sudo stress-ng --hdd 16 --hdd-opts wr-seq --hdd-write-size 1M --timeout 60s --temp-path /mnt/ssd --metrics-brief
```

### 3.6 Use `bonnie++` tool for read and write test

```bash
$ sudo apt install bonnie++
### bonnie++ always runs its full suite in one pass: sequential write, sequential
### read, random seeks, and file creation/deletion tests
### -d test directory, -s test size, -r RAM size, -n number of files for the
### creation test, -u user to run as (required when invoked via sudo)
$ sudo bonnie++ -d /mnt/ssd -s 10g -r 5g -n 128 -u root
```
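bonnie++ also prints a machine-readable CSV line at the end of each run; the `bon_csv2html` converter shipped in the same package can turn it into a report (a sketch, assuming the default Debian packaging):

```bash
### Convert the CSV summary (last output line) into an HTML report
$ sudo bonnie++ -d /mnt/ssd -s 10g -r 5g -u root | tail -n 1 | bon_csv2html > result.html
```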
A small script, run periodically from crontab, can watch the remaining space on the SSD partition:

```bash
#!/bin/bash
# Extract the remaining space of the SSD partition (df reports sizes in KB)
disk_size=$(df /dev/nvme0n1p1 | awk 'NR==2{print $4}')
# Extract the free memory, also in KB
mem_size=$(free | awk '/^Mem/{print $4}')
# Warn when free disk space drops below ~500MB and free memory below ~1GB
if [ "$disk_size" -lt 512000 ] && [ "$mem_size" -lt 1024000 ]
then
    echo "Insufficient resources" | mail -s Warning mailbox
fi
```

Add the script to crontab so that it executes every ten minutes:

```bash
*/10 * * * * /path/to/nameOfScript
```
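To put the job in place, make the script executable and add the line with `crontab -e` (the path below is illustrative; use wherever you saved the script):

```bash
### Hypothetical install location
$ sudo chmod +x /usr/local/bin/check_ssd.sh
$ crontab -e   # then append the */10 line shown above
```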