Help: Reboot causes initialization

Initialization of newly purchased TNAS or re-installation of your TNAS
dean wu
Posts: 3
Joined: 20 Apr 2020, 10:47

Help: Reboot causes initialization

Post by dean wu »

My F4-421 has 4 x 2TB SSDs configured as a RAID 5 array. Everything was normal until yesterday, when I found I couldn't create folders from the app or from my computer's file browser. Reading files was still fine, though, and I could log in to the web interface. But after I rebooted the NAS from the web console, it went straight to the initialization interface!! :o I tried restarting the NAS again, but no luck.

If someone knows how this can be resolved, particularly what I can do to avoid data loss, please reply.
Really appreciate your help!
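
(Note for anyone who lands here with the same symptom: if the TNAS is still reachable over SSH, it is worth confirming whether the RAID array and the data volume still exist before re-initializing anything. On the stock TOS layout the data array is typically /dev/md0 with an LVM volume vg0-lv0 on top, so a quick check might look like the following; none of these commands write to the disks:

cat /proc/mdstat                # is the data array (md0) assembled and clean?
mdadm -D /dev/md0               # array state and member disks
lsblk                           # partitions, md devices and the LVM volume on top
blkid /dev/mapper/vg0-lv0       # does the data volume still report a filesystem type?

If the array shows [UUUU] and the volume still reports a filesystem, the data is most likely still on the disks even though TOS asks to initialize.)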
TMSupport
TerraMaster Team
Posts: 2314
Joined: 13 Dec 2019, 15:15

Re: Help: Reboot causes initialization

Post by TMSupport »

We have received your email at support(a)terra-master.com. It may require a remote session to check and mount the RAID. Please confirm a time for the remote session by email.
To contact our team, please send an email to the following addresses; remember to replace (at) with @
Technical team: support(at)terra-master.com (for technical support)
Service team: service(at)terra-master.com (for purchasing, return, replacement, RMA service)
dean wu
Posts: 3
Joined: 20 Apr 2020, 10:47

Re: Help: Reboot causes initialization

Post by dean wu »

I'm available from 8am - 9pm (UTC+8). Can someone help to check it?

A local TerraMaster support engineer just checked my NAS, but it's not fixed. According to the engineer, the issue can't be fixed, and I need to find a professional recovery company to try to recover my data. That's really unbelievable to me. How could a NAS be so fragile? I did nothing more than a reboot. Even though I couldn't create files before the reboot, I could still read all the existing files and folders. Now, after a reboot, it seems my data is gone. How could that be possible??

Below is the log from the engineer's check:
****************************************************************************************************************
Welcome to Tnas!
[root@TNAS421 ~]#
[root@TNAS421 ~]#
[root@TNAS421 ~]# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active raid5 sda4[0] sde4[3] sdc4[2] sdb4[1]
5615849472 blocks super 1.2 level 5, 128k chunk, algorithm 2 [4/4] [UUUU]
bitmap: 0/14 pages [0KB], 65536KB chunk

md8 : active raid1 sde3[74] sdc3[73] sdb3[72] sda3[0]
999872 blocks super 1.2 [72/4] [UUUU____________________________________________________________________]

md9 : active raid1 sde2[73] sdc2[72] sdb2[1] sda2[0]
1998848 blocks super 1.2 [72/4] [UUUU____________________________________________________________________]

unused devices: <none>
[root@TNAS421 ~]#
[root@TNAS421 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 1.8T 0 disk
|-sda1 8:1 0 285M 0 part
|-sda2 8:2 0 1.9G 0 part
| `-md9 9:9 0 1.9G 0 raid1 /
|-sda3 8:3 0 977M 0 part
| `-md8 9:8 0 976.4M 0 raid1
`-sda4 8:4 0 1.8T 0 part
`-md0 9:0 0 5.2T 0 raid5
`-vg0-lv0 251:0 0 5.2T 0 lvm
sdb 8:16 0 1.8T 0 disk
|-sdb1 8:17 0 285M 0 part
|-sdb2 8:18 0 1.9G 0 part
| `-md9 9:9 0 1.9G 0 raid1 /
|-sdb3 8:19 0 977M 0 part
| `-md8 9:8 0 976.4M 0 raid1
`-sdb4 8:20 0 1.8T 0 part
`-md0 9:0 0 5.2T 0 raid5
`-vg0-lv0 251:0 0 5.2T 0 lvm
sdc 8:32 0 1.8T 0 disk
|-sdc1 8:33 0 285M 0 part
|-sdc2 8:34 0 1.9G 0 part
| `-md9 9:9 0 1.9G 0 raid1 /
|-sdc3 8:35 0 977M 0 part
| `-md8 9:8 0 976.4M 0 raid1
`-sdc4 8:36 0 1.8T 0 part
`-md0 9:0 0 5.2T 0 raid5
`-vg0-lv0 251:0 0 5.2T 0 lvm
sde 8:64 0 1.8T 0 disk
|-sde1 8:65 0 285M 0 part
|-sde2 8:66 0 1.9G 0 part
| `-md9 9:9 0 1.9G 0 raid1 /
|-sde3 8:67 0 977M 0 part
| `-md8 9:8 0 976.4M 0 raid1
`-sde4 8:68 0 1.8T 0 part
`-md0 9:0 0 5.2T 0 raid5
`-vg0-lv0 251:0 0 5.2T 0 lvm
[root@TNAS421 ~]# blkid
/dev/md0: UUID="Qc1P2Z-bt2w-lj8i-MuWc-Ca9o-Y7i8-9MwV9h" TYPE="LVM2_member"
/dev/mapper/vg0-lv0: UUID="7b638180-3e6e-49d6-9613-eae60b48a4fb" UUID_SUB="2c85b2e9-bcb3-4a5c-ae6f-451c9e1c22cf" TYPE="btrfs"
/dev/sda1: LABEL="UTOSDISK-X86-S64" UUID="e6ab5ccd-65da-47a6-a9d9-01acead13499" TYPE="ext4" PARTLABEL="primary" PARTUUID="d474263e-6b1b-4380-84e6-edd49d4e638a"
/dev/sda2: UUID="201aeaa5-ae0b-24d8-68df-1a29dc5a3a66" UUID_SUB="8c9dfc66-2871-775b-520e-09107ddfef59" LABEL="TNAS:UTOSCORE-X86-S64" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="d769f0de-483a-4ff6-96f9-edd729411f51"
/dev/sda3: UUID="e6e152d4-5b20-0eb1-f09f-02a37996490a" UUID_SUB="6d5acde3-774e-12dd-f916-a2fa5f019b8f" LABEL="TNAS-00F4BC:UTOSSWAP-X86-S64" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="09b33dd1-b972-4e43-90e0-936a04b75841"
/dev/sda4: UUID="11e33f52-2003-6ac7-a8be-cea057d582d2" UUID_SUB="299a9f59-948f-fd9c-9096-56ff65cac792" LABEL="TNAS-00F4BC:UTOSUSER-X86-S64" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="1ff2c3b8-c957-4d95-bc01-03d597dbb75e"
/dev/sdb1: LABEL="UTOSDISK-X86-S64" UUID="43ddf15e-0211-4ed0-9f66-e7a692a79af8" TYPE="ext4" PARTLABEL="primary" PARTUUID="e3f6a6bf-f3bd-4f8f-aacb-1204d01ee536"
/dev/sdb2: UUID="201aeaa5-ae0b-24d8-68df-1a29dc5a3a66" UUID_SUB="622ecf6d-48bb-f741-b2e9-06e8c0d11096" LABEL="TNAS:UTOSCORE-X86-S64" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="7caa99dc-bb3a-4e0f-809a-f45f2f1df496"
/dev/sdb3: UUID="e6e152d4-5b20-0eb1-f09f-02a37996490a" UUID_SUB="743f05c0-f01f-f627-1a9f-a486091087ba" LABEL="TNAS-00F4BC:UTOSSWAP-X86-S64" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="0ae0d86e-1134-4bcb-8055-a09b80e50581"
/dev/sdb4: UUID="11e33f52-2003-6ac7-a8be-cea057d582d2" UUID_SUB="de3d3131-5cb3-fe2c-c721-0da8a0e27292" LABEL="TNAS-00F4BC:UTOSUSER-X86-S64" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="98f579b0-ff9a-4387-88ba-3341be923a1f"
/dev/sdc1: LABEL="UTOSDISK-X86-S64" UUID="f6acc233-9a19-40cf-8b99-81e03c4fee11" TYPE="ext4" PARTLABEL="primary" PARTUUID="d32f82fc-3300-41d3-973d-db2de68c4254"
/dev/sdc2: UUID="201aeaa5-ae0b-24d8-68df-1a29dc5a3a66" UUID_SUB="e2ea2fc5-7727-1ffb-08cf-a34d7bbda03e" LABEL="TNAS:UTOSCORE-X86-S64" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="0b111df6-794d-4dfe-b332-b4dbd26b479f"
/dev/sdc3: UUID="e6e152d4-5b20-0eb1-f09f-02a37996490a" UUID_SUB="73f3a3cc-bb82-f5ee-f0b1-9d666fea1168" LABEL="TNAS-00F4BC:UTOSSWAP-X86-S64" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="07f4a9e4-13e0-4e65-913c-11b81dad5d99"
/dev/sdc4: UUID="11e33f52-2003-6ac7-a8be-cea057d582d2" UUID_SUB="b4be4558-f070-7363-d197-214792f783af" LABEL="TNAS-00F4BC:UTOSUSER-X86-S64" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="e9474a11-7b3e-4dd4-867c-44e75b0893ec"
/dev/sde1: LABEL="UTOSDISK-X86-S64" UUID="40b6991b-f82b-48a5-baa6-ede0682d214f" TYPE="ext4" PARTLABEL="primary" PARTUUID="abf70ea6-dbef-43fd-ae55-c12d274b3e69"
/dev/sde2: UUID="201aeaa5-ae0b-24d8-68df-1a29dc5a3a66" UUID_SUB="67b816f1-d442-7b41-7caf-10f10e83beae" LABEL="TNAS:UTOSCORE-X86-S64" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="c29135df-8beb-4685-a429-4328f7323e2f"
/dev/sde3: UUID="e6e152d4-5b20-0eb1-f09f-02a37996490a" UUID_SUB="b6da9198-4b69-2a51-d541-091b71366bc0" LABEL="TNAS-00F4BC:UTOSSWAP-X86-S64" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="da83097b-c678-4377-8299-48dc20127797"
/dev/sde4: UUID="11e33f52-2003-6ac7-a8be-cea057d582d2" UUID_SUB="33de3b5a-03a2-232a-f5af-e8b08360110c" LABEL="TNAS-00F4BC:UTOSUSER-X86-S64" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="f2158daf-430c-4f36-8627-db86b4c14384"
/dev/md9: UUID="6de6860f-4a9b-4acd-bd73-13dcc60d4360" TYPE="ext4"
/dev/md8: UUID="41e08155-0a7c-4bcd-b931-b829b9ff985f" TYPE="swap"
[root@TNAS421 ~]#
[root@TNAS421 ~]#
[root@TNAS421 ~]# b
-bash: b: command not found
[root@TNAS421 ~]# btrfs
btrfs btrfs-image btrfsck
btrfs-convert btrfs-map-logical btrfstune
btrfs-debug-tree btrfs-select-super
btrfs-find-root btrfs-zero-log
[root@TNAS421 ~]# btrfs
btrfs btrfs-image btrfsck
btrfs-convert btrfs-map-logical btrfstune
btrfs-debug-tree btrfs-select-super
btrfs-find-root btrfs-zero-log
[root@TNAS421 ~]# btrfs
btrfs btrfs-image btrfsck
btrfs-convert btrfs-map-logical btrfstune
btrfs-debug-tree btrfs-select-super
btrfs-find-root btrfs-zero-log
[root@TNAS421 ~]# btrfs
btrfs btrfs-image btrfsck
btrfs-convert btrfs-map-logical btrfstune
btrfs-debug-tree btrfs-select-super
btrfs-find-root btrfs-zero-log
[root@TNAS421 ~]# mount
/dev/md9 on / type ext4 (rw,relatime,data=ordered)
devtmpfs on /dev type devtmpfs (rw,relatime,size=3886280k,nr_inodes=971570,mode=755)
proc on /proc type proc (rw,relatime)
devpts on /dev/pts type devpts (rw,relatime,gid=5,mode=620,ptmxmode=666)
tmpfs on /tmp type tmpfs (rw,relatime)
tmpfs on /run type tmpfs (ro,relatime,mode=755)
sysfs on /sys type sysfs (rw,relatime)
tmpfs on /opt/var type tmpfs (rw,relatime)
nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
[root@TNAS421 ~]#
[root@TNAS421 ~]#
Last edited by dean wu on 20 Apr 2020, 18:03, edited 1 time in total.
dean wu
Posts: 3
Joined: 20 Apr 2020, 10:47

Re: Help: Reboot causes initialization

Post by dean wu »

[root@TNAS421 ~]# df -h
Filesystem Size Used Available Use% Mounted on
/dev/md9 1.8G 505.8M 1.2G 28% /
devtmpfs 3.7G 0 3.7G 0% /dev
tmpfs 3.7G 400.0K 3.7G 0% /tmp
tmpfs 3.7G 232.0K 3.7G 0% /run
tmpfs 3.7G 2.6M 3.7G 0% /opt/var
[root@TNAS421 ~]# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Thu Nov 14 21:11:11 2019
Raid Level : raid5
Array Size : 5615849472 (5355.69 GiB 5750.63 GB)
Used Dev Size : 1871949824 (1785.23 GiB 1916.88 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Wed Apr 15 03:11:49 2020
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 128K

Name : TNAS-00F4BC:UTOSUSER-X86-S64
UUID : 11e33f52:20036ac7:a8becea0:57d582d2
Events : 14

Number Major Minor RaidDevice State
0 8 4 0 active sync /dev/sda4
1 8 20 1 active sync /dev/sdb4
2 8 36 2 active sync /dev/sdc4
3 8 68 3 active sync /dev/sde4
[root@TNAS421 ~]#
[root@TNAS421 ~]# mdadm -S /dev/md0
mdadm: Cannot get exclusive access to /dev/md0:Perhaps a running process, mounted filesystem or active volume group?
[root@TNAS421 ~]# mdadm -S /dev/
Display all 181 possibilities? (y or n)
[root@TNAS421 ~]# mdadm -S /dev/md
md0 md8 md9
[root@TNAS421 ~]# mdadm -S /dev/md0
mdadm: Cannot get exclusive access to /dev/md0:Perhaps a running process, mounted filesystem or active volume group?
[root@TNAS421 ~]# bl
blkdeactivate blkid blkmapd blkrawverify blkzone
blkdiscard blkiomon blkparse blktrace blockdev
[root@TNAS421 ~]# bl
blkdeactivate blkid blkmapd blkrawverify blkzone
blkdiscard blkiomon blkparse blktrace blockdev
[root@TNAS421 ~]# bl
blkdeactivate blkid blkmapd blkrawverify blkzone
blkdiscard blkiomon blkparse blktrace blockdev
[root@TNAS421 ~]# blk
blkdeactivate blkid blkmapd blkrawverify blkzone
blkdiscard blkiomon blkparse blktrace
[root@TNAS421 ~]# blkid
/dev/md0: UUID="Qc1P2Z-bt2w-lj8i-MuWc-Ca9o-Y7i8-9MwV9h" TYPE="LVM2_member"
/dev/mapper/vg0-lv0: UUID="7b638180-3e6e-49d6-9613-eae60b48a4fb" UUID_SUB="2c85b2e9-bcb3-4a5c-ae6f-451c9e1c22cf" TYPE="btrfs"
/dev/sda1: LABEL="UTOSDISK-X86-S64" UUID="e6ab5ccd-65da-47a6-a9d9-01acead13499" TYPE="ext4" PARTLABEL="primary" PARTUUID="d474263e-6b1b-4380-84e6-edd49d4e638a"
/dev/sda2: UUID="201aeaa5-ae0b-24d8-68df-1a29dc5a3a66" UUID_SUB="8c9dfc66-2871-775b-520e-09107ddfef59" LABEL="TNAS:UTOSCORE-X86-S64" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="d769f0de-483a-4ff6-96f9-edd729411f51"
/dev/sda3: UUID="e6e152d4-5b20-0eb1-f09f-02a37996490a" UUID_SUB="6d5acde3-774e-12dd-f916-a2fa5f019b8f" LABEL="TNAS-00F4BC:UTOSSWAP-X86-S64" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="09b33dd1-b972-4e43-90e0-936a04b75841"
/dev/sda4: UUID="11e33f52-2003-6ac7-a8be-cea057d582d2" UUID_SUB="299a9f59-948f-fd9c-9096-56ff65cac792" LABEL="TNAS-00F4BC:UTOSUSER-X86-S64" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="1ff2c3b8-c957-4d95-bc01-03d597dbb75e"
/dev/sdb1: LABEL="UTOSDISK-X86-S64" UUID="43ddf15e-0211-4ed0-9f66-e7a692a79af8" TYPE="ext4" PARTLABEL="primary" PARTUUID="e3f6a6bf-f3bd-4f8f-aacb-1204d01ee536"
/dev/sdb2: UUID="201aeaa5-ae0b-24d8-68df-1a29dc5a3a66" UUID_SUB="622ecf6d-48bb-f741-b2e9-06e8c0d11096" LABEL="TNAS:UTOSCORE-X86-S64" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="7caa99dc-bb3a-4e0f-809a-f45f2f1df496"
/dev/sdb3: UUID="e6e152d4-5b20-0eb1-f09f-02a37996490a" UUID_SUB="743f05c0-f01f-f627-1a9f-a486091087ba" LABEL="TNAS-00F4BC:UTOSSWAP-X86-S64" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="0ae0d86e-1134-4bcb-8055-a09b80e50581"
/dev/sdb4: UUID="11e33f52-2003-6ac7-a8be-cea057d582d2" UUID_SUB="de3d3131-5cb3-fe2c-c721-0da8a0e27292" LABEL="TNAS-00F4BC:UTOSUSER-X86-S64" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="98f579b0-ff9a-4387-88ba-3341be923a1f"
/dev/sdc1: LABEL="UTOSDISK-X86-S64" UUID="f6acc233-9a19-40cf-8b99-81e03c4fee11" TYPE="ext4" PARTLABEL="primary" PARTUUID="d32f82fc-3300-41d3-973d-db2de68c4254"
/dev/sdc2: UUID="201aeaa5-ae0b-24d8-68df-1a29dc5a3a66" UUID_SUB="e2ea2fc5-7727-1ffb-08cf-a34d7bbda03e" LABEL="TNAS:UTOSCORE-X86-S64" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="0b111df6-794d-4dfe-b332-b4dbd26b479f"
/dev/sdc3: UUID="e6e152d4-5b20-0eb1-f09f-02a37996490a" UUID_SUB="73f3a3cc-bb82-f5ee-f0b1-9d666fea1168" LABEL="TNAS-00F4BC:UTOSSWAP-X86-S64" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="07f4a9e4-13e0-4e65-913c-11b81dad5d99"
/dev/sdc4: UUID="11e33f52-2003-6ac7-a8be-cea057d582d2" UUID_SUB="b4be4558-f070-7363-d197-214792f783af" LABEL="TNAS-00F4BC:UTOSUSER-X86-S64" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="e9474a11-7b3e-4dd4-867c-44e75b0893ec"
/dev/sde1: LABEL="UTOSDISK-X86-S64" UUID="40b6991b-f82b-48a5-baa6-ede0682d214f" TYPE="ext4" PARTLABEL="primary" PARTUUID="abf70ea6-dbef-43fd-ae55-c12d274b3e69"
/dev/sde2: UUID="201aeaa5-ae0b-24d8-68df-1a29dc5a3a66" UUID_SUB="67b816f1-d442-7b41-7caf-10f10e83beae" LABEL="TNAS:UTOSCORE-X86-S64" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="c29135df-8beb-4685-a429-4328f7323e2f"
/dev/sde3: UUID="e6e152d4-5b20-0eb1-f09f-02a37996490a" UUID_SUB="b6da9198-4b69-2a51-d541-091b71366bc0" LABEL="TNAS-00F4BC:UTOSSWAP-X86-S64" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="da83097b-c678-4377-8299-48dc20127797"
/dev/sde4: UUID="11e33f52-2003-6ac7-a8be-cea057d582d2" UUID_SUB="33de3b5a-03a2-232a-f5af-e8b08360110c" LABEL="TNAS-00F4BC:UTOSUSER-X86-S64" TYPE="linux_raid_member" PARTLABEL="primary" PARTUUID="f2158daf-430c-4f36-8627-db86b4c14384"
/dev/md9: UUID="6de6860f-4a9b-4acd-bd73-13dcc60d4360" TYPE="ext4"
/dev/md8: UUID="41e08155-0a7c-4bcd-b931-b829b9ff985f" TYPE="swap"
[root@TNAS421 ~]#
[root@TNAS421 ~]#
[root@TNAS421 ~]# mount
/dev/md9 on / type ext4 (rw,relatime,data=ordered)
devtmpfs on /dev type devtmpfs (rw,relatime,size=3886280k,nr_inodes=971570,mode=755)
proc on /proc type proc (rw,relatime)
devpts on /dev/pts type devpts (rw,relatime,gid=5,mode=620,ptmxmode=666)
tmpfs on /tmp type tmpfs (rw,relatime)
tmpfs on /run type tmpfs (ro,relatime,mode=755)
sysfs on /sys type sysfs (rw,relatime)
tmpfs on /opt/var type tmpfs (rw,relatime)
nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
[root@TNAS421 ~]#
[root@TNAS421 ~]# df -h
Filesystem Size Used Available Use% Mounted on
/dev/md9 1.8G 505.8M 1.2G 28% /
devtmpfs 3.7G 0 3.7G 0% /dev
tmpfs 3.7G 400.0K 3.7G 0% /tmp
tmpfs 3.7G 232.0K 3.7G 0% /run
tmpfs 3.7G 2.6M 3.7G 0% /opt/var
[root@TNAS421 ~]#
[root@TNAS421 ~]# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active raid5 sda4[0] sde4[3] sdc4[2] sdb4[1]
5615849472 blocks super 1.2 level 5, 128k chunk, algorithm 2 [4/4] [UUUU]
bitmap: 0/14 pages [0KB], 65536KB chunk

md8 : active raid1 sde3[74] sdc3[73] sdb3[72] sda3[0]
999872 blocks super 1.2 [72/4] [UUUU____________________________________________________________________]

md9 : active raid1 sde2[73] sdc2[72] sdb2[1] sda2[0]
1998848 blocks super 1.2 [72/4] [UUUU____________________________________________________________________]

unused devices: <none>
[root@TNAS421 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/md0 vg0 lvm2 a-- 5.23t 0
[root@TNAS421 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
vg0 1 1 0 wz--n- 5.23t 0
[root@TNAS421 ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lv0 vg0 -wi-a----- 5.23t
[root@TNAS421 ~]# blkid /dev/mapper/vg0-lv0
/dev/mapper/vg0-lv0: UUID="7b638180-3e6e-49d6-9613-eae60b48a4fb" UUID_SUB="2c85b2e9-bcb3-4a5c-ae6f-451c9e1c22cf" TYPE="btrfs"
[root@TNAS421 ~]# mount /dev/mapper/vg0-lv0 /mnt/md0/
mount: /mnt/md0: wrong fs type, bad option, bad superblock on /dev/mapper/vg0-lv0, missing codepage or helper program, or other error.
[root@TNAS421 ~]#
[root@TNAS421 ~]# mount /dev/mapper/vg0-lv0 /mnt/md0
mount: /mnt/md0: wrong fs type, bad option, bad superblock on /dev/mapper/vg0-lv0, missing codepage or helper program, or other error.
[root@TNAS421 ~]#
[root@TNAS421 ~]#
[root@TNAS421 ~]#
[root@TNAS421 ~]# blkid /dev/mapper/vg0-lv0
/dev/mapper/vg0-lv0: UUID="7b638180-3e6e-49d6-9613-eae60b48a4fb" UUID_SUB="2c85b2e9-bcb3-4a5c-ae6f-451c9e1c22cf" TYPE="btrfs"
[root@TNAS421 ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lv0 vg0 -wi-a----- 5.23t
[root@TNAS421 ~]# lvdisplay
--- Logical volume ---
LV Path /dev/vg0/lv0
LV Name lv0
VG Name vg0
LV UUID 3s7l2D-JbXb-Lfee-NaOm-s6T8-nDVt-fhT9S5
LV Write Access read/write
LV Creation host, time TNAS-00F4BC, 2019-11-15 11:25:32 +0800
LV Status available
# open 0
LV Size 5.23 TiB
Current LE 1371056
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 1536
Block device 251:0

[root@TNAS421 ~]# vgdisplay
--- Volume group ---
VG Name vg0
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 5
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 5.23 TiB
PE Size 4.00 MiB
Total PE 1371056
Alloc PE / Size 1371056 / 5.23 TiB
Free PE / Size 0 / 0
VG UUID A2alvk-KmSa-69TJ-Z8nR-EYHx-vCu0-BY0iF7

[root@TNAS421 ~]# pvdisplay
--- Physical volume ---
PV Name /dev/md0
VG Name vg0
PV Size 5.23 TiB / not usable 3.62 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 1371056
Free PE 0
Allocated PE 1371056
PV UUID Qc1P2Z-bt2w-lj8i-MuWc-Ca9o-Y7i8-9MwV9h

[root@TNAS421 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/md0 vg0 lvm2 a-- 5.23t 0
[root@TNAS421 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
vg0 1 1 0 wz--n- 5.23t 0
[root@TNAS421 ~]# blkid /dev/mapper/vg0-lv0
/dev/mapper/vg0-lv0: UUID="7b638180-3e6e-49d6-9613-eae60b48a4fb" UUID_SUB="2c85b2e9-bcb3-4a5c-ae6f-451c9e1c22cf" TYPE="btrfs"
[root@TNAS421 ~]# btrfs-
btrfs-convert btrfs-find-root btrfs-map-logical btrfs-zero-log
btrfs-debug-tree btrfs-image btrfs-select-super
[root@TNAS421 ~]# btrfs-
btrfs-convert btrfs-find-root btrfs-map-logical btrfs-zero-log
btrfs-debug-tree btrfs-image btrfs-select-super
[root@TNAS421 ~]# btrfsck -h
btrfs check: invalid option -- 'h'
usage: btrfs check [options] <device>

Check structural integrity of a filesystem (unmounted).

Check structural integrity of an unmounted filesystem. Verify internal
trees' consistency and item connectivity. In the repair mode try to
fix the problems found.
WARNING: the repair mode is considered dangerous

-s|--super <superblock> use this superblock copy
-b|--backup use the first valid backup root copy
--force skip mount checks, repair is not possible
--repair try to repair the filesystem
--readonly run in read-only mode (default)
--init-csum-tree create a new CRC tree
--init-extent-tree create a new extent tree
--mode <MODE> allows choice of memory/IO trade-offs
where MODE is one of:
original - read inodes and extents to memory (requires
more memory, does less IO)
lowmem - try to use less memory but read blocks again
when needed
--check-data-csum verify checksums of data blocks
-Q|--qgroup-report print a report on qgroup consistency
-E|--subvol-extents <subvolid>
print subvolume extents and sharing state
-r|--tree-root <bytenr> use the given bytenr for the tree root
--chunk-root <bytenr> use the given bytenr for the chunk tree root
-p|--progress indicate progress
--clear-space-cache v1|v2 clear space cache for v1 or v2

[root@TNAS421 ~]# btrfsck --repair /dev/mapper/vg0-lv0
enabling repair mode
couldn't open RDWR because of unsupported option features (3).
ERROR: cannot open file system
[root@TNAS421 ~]#
[root@TNAS421 ~]#
[root@TNAS421 ~]#
[root@TNAS421 ~]# df -h
Filesystem Size Used Available Use% Mounted on
/dev/md9 1.8G 505.8M 1.2G 28% /
devtmpfs 3.7G 0 3.7G 0% /dev
tmpfs 3.7G 400.0K 3.7G 0% /tmp
tmpfs 3.7G 232.0K 3.7G 0% /run
tmpfs 3.7G 2.6M 3.7G 0% /opt/var
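
For reference (none of this was tried on this unit): the "couldn't open RDWR because of unsupported option features" error from btrfsck usually means the btrfs tools shipped with TOS are older than the feature set of the filesystem, so the repair mode cannot even open it; it is not proof that the data is gone. Before any write-mode repair, the usual read-only attempts look roughly like the lines below. The mount options depend on the kernel version, /mnt/md0 is the mount point already used above, and the restore target path is just a placeholder for any other storage with enough free space:

dmesg | tail -n 50                                          # the failed mount above should have left a btrfs error explaining why
mount -o ro,usebackuproot /dev/mapper/vg0-lv0 /mnt/md0      # read-only mount from a backup tree root (older kernels use -o ro,recovery)
btrfs restore -v /dev/mapper/vg0-lv0 /path/to/other/storage # copy files out without mounting and without writing to the volume

If the tools on the TNAS are too old for any of this, a common route is to move the disks to a recent Linux machine, reassemble the array with mdadm --assemble --scan, activate the volume with vgchange -ay vg0, and repeat the read-only steps there with up-to-date btrfs-progs. Avoid btrfs-zero-log and --repair until everything recoverable has been copied off.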
powerQ
Posts: 65
Joined: 03 Dec 2019, 19:06

Re: Help: Reboot causes initialization

Post by powerQ »

It seems your file system crashed.
When the file system crashes, your drives or RAID array cannot be rebuilt. A crashed file system does not mean the data is lost; your data may still be on your disks. Do not try to rescue the data on your own; go to a professional service and ask for data recovery.
There are many possible reasons for such a file system crash: system failure, unexpected power failure, an abnormal power-off or reboot, or disk damage.
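Whether or not the disks end up at a recovery service, it is common practice to image each member disk first so the originals are never written to again. A sketch using GNU ddrescue (it is likely not installed on the TNAS itself; the device name and image paths are placeholders, and the destination needs at least as much free space as the disk):

ddrescue -n /dev/sda /backup/sda.img /backup/sda.map   # repeat for each member disk (sdb, sdc, sde in the log above)

Any recovery attempt can then work against the images, or copies of them, instead of the original disks.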
F4-221 TOS 5.1.34 (SAMSUNG 250 SSD x1, WD Red 8TB x 1, Single drive)
F2-423 TOS 5.1.34 RAID1(12TB IronWolf x 2)
danielshaw007
Posts: 2
Joined: 24 Jun 2020, 19:26

Re: Help: Reboot causes initialization

Post by danielshaw007 »

This happened to me today after 24 hrs of copying customer data to it!

All was fine: I was able to browse it in Windows and read/write, but it would not let me log in.

I shut it down with the button at the front, powered it up again, and it wants to initialise again! Nightmare!
TMSupport
TerraMaster Team
Posts: 2314
Joined: 13 Dec 2019, 15:15

Re: Help: Reboot causes initialization

Post by TMSupport »

What is your HDD model number, and which RAID mode did you create?
To contact our team, please send an email to the following addresses; remember to replace (at) with @
Technical team: support(at)terra-master.com (for technical support)
Service team: service(at)terra-master.com (for purchasing, return, replacement, RMA service)
johnmanager
Posts: 1
Joined: 02 Jul 2020, 20:41

Re: Help: Reboot causes initialization

Post by johnmanager »

This has just happened to me too.

My F2-210 was working fine. I had transferred about 200GB of files to it. I had to bounce a network device yesterday, and when I tried to connect to the NAS today it said it was offline. The lights on the front panel suggested it was up and running.

I went through a power cycle and it now wants to re-initialise!!!!!!!
TMSupport
TerraMaster Team
Posts: 2314
Joined: 13 Dec 2019, 15:15

Re: Help: Reboot causes initialization

Post by TMSupport »

by johnmanager » Yesterday, 20:46

This has just happened to me too.

My F2-210 was working fine. I had transferred about 200GB of files to it. I had to bounce a network device yesterday, and when I tried to connect to the NAS today it said it was offline. The lights on the front panel suggested it was up and running.

I went through a power cycle and it now wants to re-initialise!!!!!!!
Please advise your HDD model number and the RAID mode you created. You can send your information to support(a)terra-master.com to request a remote session. We will check whether the cause is a corrupted file system or an unmounted HDD.
To contact our team, please send an email to the following addresses; remember to replace (at) with @
Technical team: support(at)terra-master.com (for technical support)
Service team: service(at)terra-master.com (for purchasing, return, replacement, RMA service)