1
QNAP NAS Data Recovery
  • Charley, Yufan, Alan
  • Note: This guide applies to all TS/SS series NAS
    except the TS-401T and TS-411U

2
Agenda
  • How does the QNAP NAS RAID work?
  • NAS is OK but data cannot be accessed
  • raidtab is broken or missing
  • Check the RAID settings and configure the right raidtab
  • HDDs have no partitions
  • Use parted to recreate the partitions
  • Partitions have no MD superblock
  • mdadm -CfR --assume-clean
  • RAID array can't be assembled or status is inactive
  • Check the above and make sure every disk in the RAID exists
  • RAID array can't be mounted
  • e2fsck, e2fsck -b
  • Able to mount the RAID but data has disappeared
  • umount and e2fsck; if that does not work, try data recovery
  • RAID is degraded, read-only
  • Back up the data, then mdadm -CfR; if that does not work, recreate the RAID
  • NAS fails
  • Mount HDD(s) with another QNAP NAS (System Migration)
  • Mount HDD(s) with a PC (R-Studio / ext3/4 reader, 3rd-party tools)

3
How does the QNAP NAS RAID work?
  • Please check the following link for the complete tutorial:
  •           https://docs.google.com/document/d/1VmIHqIOrBG7s0ymqn46eDK1TmXwCJx685cpWMwF42KA/edit
  • The above guide includes all the procedures our NAS uses to create a RAID volume.
  • Prerequisite: download the losetup utility to the NAS.
  •         ftp://csdread:csdread@ftp.qnap.com/NAS/utility/losetup-arm.tar
  •         ftp://csdread:csdread@ftp.qnap.com/NAS/utility/losetup-x86.tar
  • After downloading, use tar -xf to extract it and run it. This utility creates virtual disks to simulate the disks used in the above tutorial.
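  • A minimal sketch of creating one virtual disk with the extracted losetup binary (the image path and size here are assumptions, not part of the tutorial):
  • dd if=/dev/zero of=/mnt/HDA_ROOT/vdisk1.img bs=1M count=0 seek=2048   # create a 2 GB sparse image file
  • ./losetup /dev/loop0 /mnt/HDA_ROOT/vdisk1.img                         # attach it as /dev/loop0
  • ./losetup -a                                                          # list the attached loop devices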

4
Introduction to the mdadm command
  • mdadm -E /dev/sda3  -> tells whether the partition is an md member disk
  • mdadm -Af /dev/md0 /dev/sd[a-d]3  -> assembles the available md member disks into the RAID array
  • --------------------------------------------------
  • mdadm -CfR -l5 -n8 --assume-clean /dev/md0 /dev/sd[a-h]3  -> overwrites the md superblock on each disk
  • -CfR: force-create the RAID array
  • -l5: RAID-5 array
  • -n8: number of member disks (8)
  • --assume-clean: skip the initial data-partition sync

5
Introduction to two scripts
  • config_util
    Usage: config_util <input>
        input 0: Check if any HD exists.
        input 1: Mirror the ROOT partition.
        input 2: Mirror the swap space (not yet).
        input 4: Mirror the RFS_EXT partition.
  • >> Usually we run config_util 1 to get md9 ready.
  • storage_boot_init
    Usage: storage_boot_init <phase>
        phase 1: Mount the ROOT partition.
        phase 2: Mount the DATA partition, create storage.conf and refresh disks.
        phase 3: Create_Disk_Storage_Conf.
  • >> Usually we run storage_boot_init 1 to mount md9.

6
RAID Issue - raidtab is broken
  • raidtab is used to check whether a disk is in a RAID group or is single, and to show the RAID information on the web UI.
  • If a disk is in a RAID but the web UI shows it as single, or the RAID information differs from the actual on-disk RAID data (checked with mdadm -E), then the raidtab is probably corrupt. You then need to manually edit the raidtab file so it matches the actual RAID status (a quick check is sketched at the end of this slide).
  • Check the following slides for raidtab contents
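  • A quick way to compare the on-disk metadata with raidtab before editing (a sketch; the raidtab path is an assumption and may differ by firmware):
  • mdadm -E /dev/sda3 | grep -E "Raid Level|Raid Devices"   # actual RAID level and member count on disk
  • cat /etc/raidtab                                         # what the NAS believes the RAID layout is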

7
RAID Issue - raidtab is broken
  • RAID-0 (striping)
  • raiddev /dev/md0
  •         raid-level      0
  •         nr-raid-disks   2
  •         nr-spare-disks  0
  •         chunk-size      4
  •         persistent-superblock   1
  •         device  /dev/sda3
  •         raid-disk       0
  •         device  /dev/sdb3
  •         raid-disk       1
  • Single
  • No raidtab

8
raidtab for RAID-1 and JBOD
  • RAID-1 Mirror
  • raiddev /dev/md0
  •         raid-level      1
  •         nr-raid-disks   2
  •         nr-spare-disks  0
  •         chunk-size      4
  •         persistent-superblock   1
  •         device  /dev/sda3
  •         raid-disk       0
  •         device  /dev/sdb3
  •         raid-disk       1
  • JBOD Linear
  • raiddev /dev/md0
  •         raid-level      linear
  •         nr-raid-disks   3
  •         nr-spare-disks  0
  •         chunk-size      4
  •         persistent-superblock   1
  •         device  /dev/sda3
  •         raid-disk       0
  •         device  /dev/sdb3
  •         raid-disk       1
  •         device  /dev/sdc3
  •         raid-disk       2

9
raidtab for RAID-5 and RAID-5 + hot spare
  • RAID-5
  • raiddev /dev/md0
  •         raid-level      5
  •         nr-raid-disks   3
  •         nr-spare-disks  0
  •         chunk-size      4
  •         persistent-superblock   1
  •         device  /dev/sda3
  •         raid-disk       0
  •         device  /dev/sdb3
  •         raid-disk       1
  •         device  /dev/sdc3
  •         raid-disk       2
  • RAID-5 Hot spare
  • raiddev /dev/md0
  •         raid-level      5
  •         nr-raid-disks   3
  •         nr-spare-disks  1
  •         chunk-size      4
  •         persistent-superblock   1
  •         device  /dev/sda3
  •         raid-disk       0
  •         device  /dev/sdb3
  •         raid-disk       1
  •         device  /dev/sdc3
  •         raid-disk       2
  •         device  /dev/sdd3
  •         spare-disk      0

10
raidtab for RAID-5 + global spare and RAID-6
  • RAID-5 Global Spare
  • The raidtab is the same as for RAID-5
  • In uLinux.conf, add a line if the global spare disk is disk 4:
  • [Storage]
  • GLOBAL_SPARE_DRIVE_4 = TRUE
  • RAID-6
  • raiddev /dev/md0
  •         raid-level      6
  •         nr-raid-disks   4
  •         nr-spare-disks  0
  •         chunk-size      4
  •         persistent-superblock   1
  •         device  /dev/sda3
  •         raid-disk       0
  •         device  /dev/sdb3
  •         raid-disk       1
  •         device  /dev/sdc3
  •         raid-disk       2
  •         device  /dev/sdd3
  •         raid-disk       3

11
raidtab for RAID-10
  • RAID-10
  • raiddev /dev/md0
  •         raid-level      10
  •         nr-raid-disks   4
  •         nr-spare-disks  0
  •         chunk-size      4
  •         persistent-superblock   1
  •         device  /dev/sda3
  •         raid-disk       0
  •         device  /dev/sdb3
  •         raid-disk       1
  •         device  /dev/sdc3
  •         raid-disk       2
  •         device  /dev/sdd3
  •         raid-disk       3

12
RAID fail - HDDs have no partitions
  • When you use the following command to check the HDD, there is no partition or only one partition.
  • parted /dev/sdx print
  • The following is a sample.
  • blkid     (this command shows all partitions on the NAS)
  • Note: fdisk -l cannot show the correct partition table for 3 TB HDDs

13
RAID fail - HDDs have no partitions (cont.)
  • The following tool (x86 only) can help us calculate the correct partition sizes according to the HDD size. Please save it on your NAS (x86 models) and make sure the file size is 10,086 bytes.
  • ftp://csdread:csdread@ftp.qnap.com/NAS/utility/Create_Partitions
  • 1. Get every disk size.
  • cat /sys/block/sda/size
  • 625142448
  • 2. Get the disk partition list. It should contain
    4 partitions if normal.
  • parted /dev/sda print
  • Model: Seagate ST3320620AS (scsi)
  • Disk /dev/sda: 320GB
  • Sector size (logical/physical): 512B/512B
  • Partition Table: msdos
  • Number  Start   End     Size   Type     File system     Flags
  •  1      32.3kB  543MB   543MB  primary  ext3            boot
  •  2      543MB   1086MB  543MB  primary  linux-swap(v1)
  •  3      1086MB  320GB   318GB  primary  ext3
  •  4      320GB   320GB   510MB  primary  ext3
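  • To list the size (in 512-byte sectors) of every installed disk at once, a small sketch:
  • for d in /sys/block/sd[a-z]; do echo "$d: $(cat $d/size)"; done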

14
RAID fail - HDDs have no partitions (cont.) 
  • 3. Run the tool on your NAS to get the recovery commands.
  • Create_Partitions /dev/sda 625142448
  • /dev/sda size 305245
  • disk_size=625142448
  • /usr/sbin/parted /dev/sda -s mkpart primary 40s
    1060289s
  • /usr/sbin/parted /dev/sda -s mkpart primary
    1060296s 2120579s
  • /usr/sbin/parted /dev/sda -s mkpart primary
    2120584s 624125249s
  • /usr/sbin/parted /dev/sda -s mkpart primary
    624125256s 625121279s
  • If the disk contains no partitions, run all 4 commands.
  • If the disk contains only 1 partition, run the last 3 commands.
  • If the disk contains only 2 partitions, run the last 2 commands.
  • If the disk contains only 3 partitions, run the last command.
  • 4. Run the above partition commands depending on how many partitions already exist (see the sketch at the end of this slide for counting them).
  • 5. Check the disk partitions after recovery; the disk should now contain 4 partitions.
  • parted /dev/sda print
  • Model: Seagate ST3320620AS (scsi)
  • Disk /dev/sda: 320GB
  • Sector size (logical/physical): 512B/512B
  • Partition Table: msdos
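  • To count how many partitions a disk already has before choosing which commands to run, a sketch:
  • parted /dev/sda -s print | grep -cE "^ *[0-9]+ "   # counts the numbered partition rows in the output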

15
RAID fail - Partitions have no md superblock
  • If one or all HDD partitions are lost, or the partitions have no md superblock for an unknown reason, use the mdadm -CfR command to recreate the RAID.
  •          mdadm -CfR --assume-clean /dev/md0 -l 5 -n 4 /dev/sda3...
  • Note:
  • Make sure the disks are in the correct sequence. Use "mdadm -E" or check raidtab to confirm.
  • If one of the disks is missing or has a problem, replace that disk with "missing". For example:
  • mdadm -CfR --assume-clean /dev/md0 -l 5 -n 4
    /dev/sda3 missing /dev/sdc3 /dev/sdd3
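  • After recreating the array, verify it before mounting (a sketch):
  • cat /proc/mdstat                                                   # array should be listed and active
  • mdadm -D /dev/md0 | grep -E "State|Active Devices|Failed Devices"  # summary of the array health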

16
RAID fail - RAID can't be assembled or status is
inactive
  1. Check the partitions and the md superblock status
  2. Check whether any RAID disk is missing or faulty
  3. Use "mdadm -CfR --assume-clean" to recreate the RAID
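  • A few quick triage commands for an inactive array (a sketch):
  • cat /proc/mdstat                                 # current array status
  • mdadm -E /dev/sda3 | grep -E "State|Events"      # member state and event counter
  • parted /dev/sda print                            # confirm the partitions still exist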

17
No md0 for the array: manually create md0 with mdadm -CfR

18
RAID fail - Can't be mounted, status unmounted
  • 1. Make sure the RAID status is active (more /proc/mdstat)
  • 2. Try mounting manually:
  •     mount /dev/md0 /share/MD0_DATA -t ext3
  •     mount /dev/md0 /share/MD0_DATA -t ext4
  •     mount /dev/md0 /share/MD0_DATA -o ro (read only)
  • 3. Use e2fsck / e2fsck_64 to check:
  •     e2fsck -ay /dev/md0 (auto and continue with yes)
  • 4. If there are many errors during the check, there may not be enough memory; create more swap space using the procedure on the next slide.

19
RAID fail - Can't be mounted, status unmounted (cont.)
  • Use the following commands to create more swap space
  • more /proc/mdstat
  • .......
  • md8 : active raid1 sdh2[2](S) sdg2[3](S) sdf2[4](S) sde2[5](S) sdd2[6](S) sdc2[7](S) sdb2[1] sda2[0]
  •       530048 blocks [2/2] [UU]
  • ..........
  • swapoff /dev/md8
  • mdadm -S /dev/md8
  • mdadm: stopped /dev/md8
  • mkswap /dev/sda2
  • Setting up swapspace version 1, size = 542859 kB
  • no label, UUID=7194e0a9-be7a-43ac-829f-fd2d55e07d62
  • mkswap /dev/sdb2
  • Setting up swapspace version 1, size = 542859 kB
  • no label, UUID=0af8fcdd-8ed1-4fca-8f53-0349d86f9474
  • mkswap /dev/sdc2
  • Setting up swapspace version 1, size = 542859 kB
  • no label, UUID=f40bd836-3798-4c71-b8ff-9c1e9fbff6bf
  • mkswap /dev/sdd2
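  • After running mkswap on each swap partition, re-enable the swap so the extra space is actually used (a sketch of the follow-up step, not shown on the original slide):
  • swapon /dev/sda2
  • swapon /dev/sdb2
  • swapon /dev/sdc2
  • swapon /dev/sdd2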

20
RAID fail - Can't be mounted, status unmounted (cont.)
  • If there is no file system superblock or the check fails, you can try a backup superblock.
  • 1. Use the following command to find the backup superblock locations
  •      /usr/local/sbin/dumpe2fs /dev/md0 | grep superblock
  •     Sample output
  •     Primary superblock at 0, Group descriptors at
    1-6
  •     Backup superblock at 32768, Group descriptors
    at 32769-32774
  •     Backup superblock at 98304, Group descriptors
    at 98305-98310
  •     Further backup superblocks at 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 102400000, 214990848, 512000000, 550731776, 644972544
  • 2. Now check and repair a Linux file system using
    alternate superblock 32768
  •      e2fsck -b 32768 /dev/md0
  •     Sample output
  •     fsck 1.40.2 (12-Jul-2007)
  •     e2fsck 1.40.2 (12-Jul-2007)
  •     /dev/sda2 was not cleanly unmounted, check
    forced.
  •     Pass 1: Checking inodes, blocks, and sizes
  •     .......

21
RAID fail - Able to mount but data has disappeared
  • If the mount is OK but the data has disappeared, unmount the RAID and run e2fsck again (you can try a backup superblock)
  • If it still fails, try a data recovery program (photorec, R-Studio) or contact a data recovery company

22
RAID fail - RAID is degraded, read-only
  • In degraded, read-only status, more disks have failed than the RAID can tolerate. Help the user check which disks are faulty if the web UI isn't helpful
  •         - Check klog or dmesg to find the faulty disks (see the sketch below)
  • Ask the user to back up the data first
  • If the disks look OK, after the backup try "mdadm -CfR --assume-clean" to recreate the RAID
  • If the above doesn't work, recreate the RAID
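  • A sketch of scanning the kernel log for failing disks:
  • dmesg | grep -iE "i/o error|medium error|ata[0-9]+"   # look for read/write errors tied to a specific drive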

23
Degraded mode (read only) Failed drive (X)

24
NAS fail - Mount HDD(s) with another QNAP NAS
  • The user can plug the HDD(s) into another NAS of the same model to access the data
  • The user can plug the HDD(s) into a NAS of a different model and access the data by performing a system migration
  • http://docs.qnap.com/nas/en/index.html?system_migration.htm
  • Note: the TS-101/201/109/209/409/409U series doesn't support system migration
  • Since the firmware is also stored on the HDD(s), its firmware version may differ from the firmware on the NAS. A firmware upgrade may be required after the above operation

25
NAS fail - Access HDD(s) data with a Windows PC
  • For single or RAID-1 HDDs, the user can plug one of the HDDs into a PC (via USB, SATA or eSATA) and access the data through 3rd-party software (Ext2Fsd, Explore2fs, etc.). Check the following for details.
  • http://www.soluvas.com/read-browse-explore-open-ext2-ext3-ext4-partition-filesystem-from-windows-7/
  • Note: file/folder names are in Unicode (UTF-8)
  • The TS-109/209 use a non-standard ext3, so the QNAP live CD is needed to access the data
  • Procedures: ftp://csdread:csdread@ftp.qnap.com/NAS/live_cd/TS109-209_data_recovery_with_Live_CD.pdf
  • Live CD ISO: ftp://csdread:csdread@ftp.qnap.com/NAS/live_cd/Data_Recover_live-cd_2009-01-15_TS109-209.iso
  • For other RAID configurations, the user can use R-Studio to mount the RAID and access the data. Check the following link for a RAID-0/5 example:
  •         https://docs.google.com/open?id=0B8u8qWRYVhv0ZTk4OTEzYWQtY2ZiOC00NmZjLWE1OWUtNTJhNDE3OGQ5ZDYw

26
NAS cannot boot correctly with HDD installed
  • The NAS cannot boot correctly with the HDDs installed, but without the HDDs the NAS boots without any problem. This can be caused by a faulty HDD, I/O errors on some blocks of an HDD, or corrupt configuration/system files.
  • If the user wants to quickly access the data, we can try the following procedure:
  • 1. Power on the NAS without the HDDs installed
  • 2. Hot-plug the HDDs into the NAS
  • 3. Assemble the RAID (see the sketch below)
  • 4. Copy the data with WinSCP or back it up to an external drive
  • NOTE: ARM-based NAS doesn't support SFTP when booted without HDDs. You have to connect an external drive for the backup. See the following slide for the procedure to mount an NTFS external drive
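  • Step 3 ("Assemble the RAID") in practice might look like the following sketch (assumes a 4-disk array on partition 3 of each drive):
  • mdadm -Af /dev/md0 /dev/sd[abcd]3        # force-assemble the existing array
  • mkdir -p /share/MD0_DATA                 # mount point may not exist when booted without HDDs
  • mount /dev/md0 /share/MD0_DATA -t ext4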

27
Mount NTFS/HPFS volume on ARM-based NAS when booted without HDD
  • The following is the procedure to mount an NTFS/HPFS volume on an ARM-platform NAS booted without an HDD.
  • 1. Download the following two files to the NAS.
  • ftp://csdread:csdread@ftp.qnap.com/NAS/temp/nls_utf8.ko
  • ftp://csdread:csdread@ftp.qnap.com/NAS/temp/ufsd.ko
  • 2. Put nls_utf8.ko in /lib/modules/others
  • 3. Put ufsd.ko in /lib/modules/misc
  • 4. insmod nls_utf8.ko and insmod ufsd.ko
  • 5. mount -t ufsd /dev/sdya1 /share/esata -o iocharset=utf8,dmask=0000,fmask=0111,force
  • NOTE: If the disk is larger than 2 TB it may use GPT, and the 1st partition may be a reserved partition, so we have to mount the 2nd partition.
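  • To check whether the external disk uses GPT before choosing which partition to mount, a sketch (the device name follows the slide above):
  • parted /dev/sdya print | grep "Partition Table"   # reports msdos or gpt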

28
Data deleted accidentally by user/administrator
  • 1. User deletes folders / files
  • - Use photorec/R-Studio/data recovery software to recover the data. Check the following link for using R-Studio:
  • https://docs.google.com/open?id=0B8u8qWRYVhv0ZTk4OTEzYWQtY2ZiOC00NmZjLWE1OWUtNTJhNDE3OGQ5ZDYw
  • 2. User removes the RAID volume
  • - See the next slide
  • 3. User formats the RAID volume
  • - Use photorec/data recovery software to recover the data
  • 4. User performs Restore to Factory Default
  • - It will format the RAID and reset all settings; same as 3.
  • 5. User removes HDD(s) and causes the RAID volume to fail
  • - "mdadm -CfR --assume-clean" should work

29
User removes the RAID volume
  • more /proc/mdstat
  •     Check if the RAID is really removed
  • mdadm -E /dev/sda3
  •     Check if the MD superblock is really removed
  • mdadm -CfR --assume-clean /dev/md0 -l 5 -n 3 /dev/sda3 /dev/sdb3 /dev/sdc3
  •     Create the RAID; this assumes a 3-HDD RAID-5
  • e2fsck -y /dev/md0
  •     Check the file system, assuming "yes" to all questions. On 64-bit models, use e2fsck_64
  • mount /dev/md0 /share/MD0_DATA -t ext4
  •     Mount the RAID back
  • vi raidtab
  •     Manually recreate the raid table
  • rm /etc/storage.conf
  •     Refresh the web UI volume display
  • reboot
  •     The removed network share(s) need to be added back after the reboot
  • Note: Only the TS-x79 and TS-809 series and D510/D525 models with 5 or more bays support the 64-bit commands.