Clarification: Samsung 990 PRO 4TB - Model MZ-V9P4T0B/AM
SSD Compatibility List (will be continuously updated)
Re: SSD Compatibility List (will be continuously updated)
Hi,
I confirm the correct operation of the following SSDs and memory in an F4-424:
Crucial SSDx2 1TB CT1000P3SSD8
Crucial DDR5-4800 SODIMM 1.1V VL40 16 GB CTC16G48C40S5
Re: SSD Compatibility List (will be continuously updated)
Hi,
I've been running these two NVMe drives for a week without issue:
- Silicon Power A60 256GB - SP256GBP34A60M28
- Silicon Power US75 1TB - SP01KGBP44US7505
TNAS F4-424 Pro
3X WD Ultrastar DC HC520 12T (RAIDZ1)
1X SP 1TB NVME (Cache)
1X SP 256GB NVME (OS)
Re: SSD Compatibility List (will be continuously updated)
TEAMGROUP MP33PRO is NOT recommended
These are slow and randomly fail when installed in the F8 SSD Plus
Re: SSD Compatibility List (will be continuously updated)
I am glad to see SSD models confirmed to work by other users; it gives everyone more choices.
- crisisacting
- Silver Member
- Posts: 462
- Joined: 20 Jan 2022, 16:42
Re: SSD Compatibility List (will be continuously updated)
In which slot was it installed?
The first two slots would go through an ASMedia controller which may be the cause of those issues.
Because the MP33PRO is DRAMless, it might require being right off the PCH to perform properly or more in line with how it's advertised.
Re: SSD Compatibility List (will be continuously updated)
I was using 4x Lexar NQ790 4TB (LNQ790X004T-RNNNG) and ran them for a while, and they seemed to work fine (though I never checked dmesg for errors, and I didn't push the system very hard). Then I mounted 2x 4TB WD Blue SN5000 and added one to the RAID, and I got a ton of I/O errors, with multiple NQ790s failing. I lost the volume, and it took me days and some Linux skills to try to repair it; I even moved the drives to another F8 and tested, and it wouldn't work there either. (I managed to repair most of it, and I had backups of everything except the F8 settings, but I was OK with setting those up again.)
In the end I removed all the drives; I now have 4x Samsung 990 Pro 4TB (MZ-V9P4T0BW) + 4x WD Blue SN5000 4TB (WDS400T4B0E-00BKY0).
I actually added just the 4 Samsungs first, then installed the system, added one WD Blue SN5000, removed it and added the 2nd WD Blue SN5000, did a repair using the newly added drive, and finally added back the WD Blue I had taken out, just to check that there were no timeouts or I/O errors. So for me these have worked fine.
FYI, you can open the terminal and type:
dmesg
..to see if you have I/O errors on your NVMe drives. (This isn't reported in the web UI, but it's good to know whether you have timeouts in communication with the drives.)
If it's bad, the end of the log will look something like this:
[ 1595.974973] nvme nvme2: I/O 522 (I/O Cmd) QID 5 timeout, aborting
[ 1595.975010] nvme nvme2: I/O 523 (I/O Cmd) QID 5 timeout, aborting
[ 1595.975013] nvme nvme2: I/O 524 (I/O Cmd) QID 5 timeout, aborting
[ 1598.867285] nvme nvme2: Abort status: 0x0
[ 1599.367027] nvme nvme2: I/O 641 (I/O Cmd) QID 5 timeout, aborting
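If the log is long, you can filter it down to just the NVMe timeout/abort lines. This is just my own grep pattern, not anything built into TOS, so adjust it as needed:

```shell
# Show only NVMe-related timeout and abort lines from the kernel log.
# (May need root to read the kernel ring buffer.)
dmesg | grep -E 'nvme.*(timeout|Abort status)'
```

No output from this means no timeouts have been logged since boot.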
Here are a couple of other commands that were useful (paste into the terminal with Ctrl+Shift+V; stop a running command with Ctrl+C):
iostat -x 2
..this will refresh every 2 seconds, showing read/write speeds, wait times, and lots of other useful info while rebuilding, redistributing, or just using the system.
Another one I used to see progress of repair was:
watch -n1 cat /proc/mdstat
..this will show the progress, the repair speed, and one very useful thing: how long it will take. You can use this when reshaping, repairing, or adding storage. Here is an example:
Every 1.0s: cat /proc/mdstat f8plus: Mon Apr 7 10:54:22 2025
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md0 : active raid5 sdze4[6] sdzc4[0] sdzf4[5] sdzb4[3] sdza4[2] sdzd4[1]
15583299328 blocks super 1.2 level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
[===>.................] reshape = 18.4% (718511872/3895824832) finish=266.3min speed=198839K/sec
md8 : active raid1 sdzb3[0] sdza3[1]
1997824 blocks super 1.2 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md9 : active raid1 sdzb2[2] sdza2[1]
7995392 blocks super 1.2 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
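If you only want the time estimate, you can pull the finish= field out of /proc/mdstat directly. This is a hypothetical one-liner of mine, not a TOS feature:

```shell
# Print only the estimated-completion field(s) from /proc/mdstat,
# e.g. "finish=266.3min" while a reshape or repair is running.
grep -o 'finish=[0-9.]*min' /proc/mdstat
```

It prints nothing when no rebuild/reshape is in progress.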
I hope this helps someone.
Last edited by fredisco on 08 Apr 2025, 01:44, edited 1 time in total.
- FredMutter
- Posts: 64
- Joined: 17 Jan 2025, 20:11

Re: SSD Compatibility List (will be continuously updated)
Interesting!
I get the following messages.
Command: dmesg | grep nvme
Returns:
[ 0.334936] nvme 0000:03:00.0: platform quirk: setting simple suspend
[ 0.334980] nvme nvme0: pci function 0000:03:00.0
[ 0.335001] nvme 0000:04:00.0: platform quirk: setting simple suspend
[ 0.335030] nvme nvme1: pci function 0000:04:00.0
[ 0.362540] nvme nvme0: missing or invalid SUBNQN field.
[ 0.363531] nvme nvme1: missing or invalid SUBNQN field.
[ 0.367466] nvme nvme0: allocated 32 MiB host memory buffer.
[ 0.368611] nvme nvme1: allocated 32 MiB host memory buffer.
[ 0.405580] nvme nvme0: 4/0/0 default/read/poll queues
[ 0.406588] nvme nvme1: 4/0/0 default/read/poll queues
[ 0.408360] nvme nvme0: Ignoring bogus Namespace Identifiers
[ 0.409378] nvme nvme1: Ignoring bogus Namespace Identifiers
I will try to get more info about the "missing or invalid SUBNQN field" and "Ignoring bogus Namespace Identifiers" messages.
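If nvme-cli is available (an assumption; I don't know whether TOS ships it by default), you can read the subsystem NQN the controller actually reports. As far as I understand, an empty or malformed value here is what triggers the SUBNQN warning, and the kernel then falls back to a generated name, so the message is usually harmless:

```shell
# Print the subsystem NQN field reported by the first NVMe controller.
# (nvme-cli's id-ctrl output includes a "subnqn" line; needs root.)
nvme id-ctrl /dev/nvme0 | grep -i subnqn
```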
Model: F2-424 with TOS 6.0.783
you can discuss with me in French, German and English
Re: SSD Compatibility List (will be continuously updated)
Hello, just wondering if it's okay to use 2x Crucial P3 2280 500GB on an F4-424 Pro. I'm guessing one will be used for the OS and the second for caching. Is 500GB each sufficient? Where should I install the apps?
Re: SSD Compatibility List (will be continuously updated)
I have been running 2 of those in Traid on a basic F2-424 (for OS/apps only) for several months. They work.
F5-221 TOS6.0.794 - 4x4TB Traid (TNAS UPS Server, broken)
F2-424 TOS7.0.0392 {BETA} - 2x500GB nvme (P3) Traid, 2x6T HDD Traid
F2-221 TOS7.0.0364 - 1x3TB Ext4, 1x4TB Ext4 [Test system]
Gremlin is in 'Listening' mode