Tuesday, 23 December 2014

How to Configure RAID Level 5 in CentOS 6.4 / RHEL 6.4



RAID Level 5 Configuration
To configure RAID level 5 you need at least three partitions of the same size. Here I create one extended partition, then three logical partitions inside it, and finally change each logical partition's type to 'fd' (Linux raid autodetect).
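Before partitioning, it helps to know how much usable space to expect: RAID 5 stores one member's worth of parity, so usable capacity is (members − 1) × smallest member size. A quick shell sketch, using the per-member size that mdadm reports later in this walkthrough (substitute your own partition size):

```shell
# RAID 5 usable capacity = (members - 1) x smallest member size.
# member_kib is the per-member size from this walkthrough's mdadm output.
members=3
member_kib=24098304
usable_kib=$(( (members - 1) * member_kib ))
echo "$usable_kib KiB usable"   # 48196608 KiB, i.e. ~46 GiB
```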

[root@amir /]# fdisk -l
Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        2550    20480000   83  Linux
/dev/sda2            2550        2805     2048000   82  Linux swap / Solaris

Creating the Extended Partition
[root@amir /]# fdisk /dev/sda
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
e
Partition number (1-4): 3
First cylinder (2805-26108, default 2805):
Using default value 2805
Last cylinder, +cylinders or +size{K,M,G} (2805-26108, default 26108):
Using default value 26108

Command (m for help): p
Disk /dev/sda: 214.7 GB, 214748364800 bytes
255 heads, 63 sectors/track, 26108 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0001bba2

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        2550    20480000   83  Linux
/dev/sda2            2550        2805     2048000   82  Linux swap / Solaris
/dev/sda3            2805       26108   187183486    5  Extended  {created new partition}
Command (m for help): w
The partition table has been altered!

[root@amir /]# init 6     {reboot so the kernel re-reads the new partition table}

Creating 3 Logical Partitions inside the Extended Partition
[root@amir /]# fdisk -cu /dev/sda
Command (m for help): p
Disk /dev/sda: 214.7 GB, 214748364800 bytes
255 heads, 63 sectors/track, 26108 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0001bba2

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        2550    20480000   83  Linux
/dev/sda2            2550        2805     2048000   82  Linux swap / Solaris
/dev/sda3            2805       26108   187183486    5  Extended

Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (2805-26108, default 2805):
Using default value 2805
Last cylinder, +cylinders or +size{K,M,G} (2805-26108, default 26108): 5805

Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (5806-26108, default 5806):
Using default value 5806
Last cylinder, +cylinders or +size{K,M,G} (5806-26108, default 26108): 8806

Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (8807-26108, default 8807):
Using default value 8807
Last cylinder, +cylinders or +size{K,M,G} (8807-26108, default 26108): 11807
Command (m for help): p

Disk /dev/sda: 214.7 GB, 214748364800 bytes
255 heads, 63 sectors/track, 26108 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0001bba2

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        2550    20480000   83  Linux
/dev/sda2            2550        2805     2048000   82  Linux swap / Solaris
/dev/sda3            2805       26108   187183486    5  Extended
/dev/sda5            2805        5805    24099607   83  Linux     #created logical partition
/dev/sda6            5806        8806    24105501   83  Linux     #created logical partition
/dev/sda7            8807       11807    24105501   83  Linux     #created logical partition


Command (m for help): t                                # t changes a partition's id
Partition number (1-7): 5                              # partition number
Hex code (type L to list codes): fd                    # fd = Linux raid autodetect
Changed system type of partition 5 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-7): 6
Hex code (type L to list codes): fd
Changed system type of partition 6 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-7): 7
Hex code (type L to list codes): fd
Changed system type of partition 7 to fd (Linux raid autodetect)

Command (m for help): p
Disk /dev/sda: 214.7 GB, 214748364800 bytes
255 heads, 63 sectors/track, 26108 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0001bba2

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        2550    20480000   83  Linux
/dev/sda2            2550        2805     2048000   82  Linux swap / Solaris
/dev/sda3            2805       26108   187183486    5  Extended
/dev/sda5            2805        5805    24099607   fd  Linux raid autodetect
/dev/sda6            5806        8806    24105501   fd  Linux raid autodetect
/dev/sda7            8807       11807    24105501   fd  Linux raid autodetect
Command (m for help): w
The partition table has been altered!
[root@amir /]# fdisk -l                                           {check the partition status again}
Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        2550    20480000   83  Linux
/dev/sda2            2550        2805     2048000   82  Linux swap / Solaris
/dev/sda3            2805       26108   187183486    5  Extended
/dev/sda5            2805        5805    24099607   fd  Linux raid autodetect
/dev/sda6            5806        8806    24105501   fd  Linux raid autodetect
/dev/sda7            8807       11807    24105501   fd  Linux raid autodetect


[root@amir /]# reboot
[root@amir /]# partprobe /dev/sda5
[root@amir /]# partprobe /dev/sda6
[root@amir /]# partprobe /dev/sda7

Creating RAID Level 5
Note: md0, md1, md2, ... are RAID device names.
[root@amir /]# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda{5,6,7}
mdadm: largest drive (/dev/sda7) exceeds size (24098304K) by more than 1%
Continue creating array?
Continue creating array? (y/n) y [press y]
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

Check Raid Status
[root@amir /]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sda7[3] sda6[1] sda5[0]            {the array is running; initial sync in progress}
      48196608 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      [>....................]  recovery =  0.6% (160332/24098304) finish=37.3min speed=10688K/sec
     
unused devices: <none>
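Instead of re-reading the whole of /proc/mdstat, a small pipeline can extract just the recovery percentage. The snapshot string below is copied from the output above so the sketch runs anywhere; on the live system you would read /proc/mdstat itself:

```shell
# Extract the rebuild percentage from mdstat-style output (a sketch;
# parses a captured snapshot instead of the live /proc/mdstat).
snapshot='      [>....................]  recovery =  0.6% (160332/24098304) finish=37.3min speed=10688K/sec'
pct=$(printf '%s\n' "$snapshot" | sed -n 's/.*recovery = *\([0-9.]*\)%.*/\1/p')
echo "rebuild at ${pct}%"   # rebuild at 0.6%
```

On a live box, `watch -n 5 cat /proc/mdstat` is the usual way to follow a rebuild interactively.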

[root@amir /]# mkfs.ext4 /dev/md0                  {format /dev/md0 as ext4}
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
3014656 inodes, 12049152 blocks
602457 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
368 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624, 11239424

Writing inode tables: done                          
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 24 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

[root@amir /]# mkdir /Raid5

[root@amir /]# df -h
[root@amir /]# mount /dev/md0 /Raid5/
[root@amir /]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              20G  3.0G   16G  17% /
tmpfs                 250M  264K  250M   1% /dev/shm
/dev/md0               46G   52M   43G   1% /Raid5  {raid partition}

[root@amir /]# vim /etc/fstab              {edit for a permanent mount}
 

/dev/md0            /Raid5           ext4                defaults        0 0
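A typo in /etc/fstab can leave the system in single-user mode at the next boot, so it is worth checking that the new line has all six fields and then testing it with `mount -a` before rebooting. A minimal field check (the line below is the entry from this walkthrough):

```shell
# Sanity-check the fstab entry: a valid line has exactly six fields
# (device, mount point, fstype, options, dump, pass).
line='/dev/md0            /Raid5           ext4                defaults        0 0'
fields=$(printf '%s\n' "$line" | awk '{print NF}')
echo "$fields fields"   # 6 fields
```

After saving, `umount /Raid5 && mount -a` confirms the entry mounts cleanly.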

[root@amir /]# vim /etc/mdadm.conf
 

Add the following line so the RAID device is activated automatically at boot:
ARRAY /dev/md0 level=raid5 num-devices=3 devices=/dev/sda5,/dev/sda6,/dev/sda7
  
            OR

[root@amir /]# mdadm --detail --scan >> /etc/mdadm.conf
[root@amir /]# cat /etc/mdadm.conf 
ARRAY /dev/md0 metadata=1.2 spares=1 name=amir.server.com:0 UUID=5a879bae:78809be1:8da3


[root@amir /]# mdadm --detail /dev/md0      {check the RAID device status}
/dev/md0:
        Version : 1.2
  Creation Time : Wed Jul 24 17:52:58 2013
     Raid Level : raid5
     Array Size : 48196608 (45.96 GiB 49.35 GB)
  Used Dev Size : 24098304 (22.98 GiB 24.68 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Wed Jul 24 18:14:12 2013
          State : clean, degraded, recovering
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 44% complete   {when this reaches 100% the rebuild is done}
           Name : amir.server.com:0  (local to host amir.server.com)
           UUID : 5a879bae:78809be1:8da3ad40:1ab922de
         Events : 20

    Number   Major   Minor   RaidDevice State
       0       8        5        0      active sync   /dev/sda5
       1       8        6        1      active sync   /dev/sda6
       3       8        7        2      spare rebuilding   /dev/sda7  {spare rebuilding now}
 
After about 15 minutes, check the RAID device status again:
[root@amir /]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Jul 24 17:52:58 2013
     Raid Level : raid5
     Array Size : 48196608 (45.96 GiB 49.35 GB)
  Used Dev Size : 24098304 (22.98 GiB 24.68 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Wed Jul 24 18:33:57 2013
          State : clean
 Active Devices : 3  {all three devices active}
Working Devices : 3  {all three devices working}
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : amir.server.com:0  (local to host amir.server.com)
           UUID : 5a879bae:78809be1:8da3ad40:1ab922de
         Events : 44

    Number   Major   Minor   RaidDevice State
       0       8        5        0      active sync   /dev/sda5 [working]
       1       8        6        1      active sync   /dev/sda6 [working]
       3       8        7        2      active sync   /dev/sda7 [working]


Stopping the RAID Array
[root@amir /]# umount /Raid5
[root@amir /]# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
[root@amir /]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]  {the array is stopped}
unused devices: <none>

Starting the RAID Array
    (assuming it is stopped)
[root@amir /]# mdadm --assemble --scan
mdadm: /dev/md0 has been started with 3 drives.
[root@amir /]# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sda5[0] sda7[3] sda6[1]                        {the array is running}
      48196608 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]    
unused devices: <none>


RAID 5 Configuration: Removing/Adding Partitions and Disk-Failure Testing (Part 3)
Note: Now I will simulate a disk failure. If one member (/dev/sda5) fails, all data survives on /dev/sda6 and /dev/sda7; if two members fail, the data is completely lost.

[root@amir /]# cd /Raid5/
[root@station1 Raid5]# mkdir testing
[root@station1 Raid5]# ls
testing
[root@station1 Raid5]# mdadm /dev/md0 --fail /dev/sda5
mdadm: set /dev/sda5 faulty in /dev/md0
[root@station1 Raid5]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Jul 24 17:52:58 2013
     Raid Level : raid5
     Array Size : 48196608 (45.96 GiB 49.35 GB)
  Used Dev Size : 24098304 (22.98 GiB 24.68 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Wed Jul 24 18:56:15 2013
          State : clean, degraded
 Active Devices : 2
Working Devices : 2   {two working devices}
 Failed Devices : 1   {one failed device}
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : amir.server.com:0  (local to host amir.server.com)
           UUID : 5a879bae:78809be1:8da3ad40:1ab922de
         Events : 45

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8        6        1      active sync   /dev/sda6
       3       8        7        2      active sync   /dev/sda7

       0       8        5        -      faulty spare   /dev/sda5  {sda5 is faulty}
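Below, the faulty member is cleared by a reboot and reassembly, but the usual in-place sequence is to remove the failed device from the array and then add it (or a replacement partition) back. A sketch, using the device names from this walkthrough:

```shell
# Remove the member previously marked faulty, then re-add it;
# mdadm starts rebuilding parity onto it automatically.
mdadm /dev/md0 --remove /dev/sda5
mdadm /dev/md0 --add /dev/sda5
```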
Note: Now reboot the machine and check the data in the /Raid5 directory; if the data is still there, the degraded RAID 5 array is working.
[root@amir /]# reboot
[root@amir /]# umount /Raid5/
[root@amir /]# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
[root@amir /]# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4]
unused devices: <none>
[root@amir /]# mdadm --assemble --scan
mdadm: /dev/md0 has been started with 2 drives (out of 3).
[root@amir /]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sda6[1] sda7[3]
      48196608 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
     
unused devices: <none>
[root@amir /]# cd /Raid5/
[root@station1 Raid5]# ls
testing              {the degraded array still serves the data}

Adding the RAID Partition /dev/sda5 Back
[root@station1 Raid5]# cd
[root@amir /]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Jul 24 17:52:58 2013
     Raid Level : raid5
     Array Size : 48196608 (45.96 GiB 49.35 GB)
  Used Dev Size : 24098304 (22.98 GiB 24.68 GB)
   Raid Devices : 3
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Wed Jul 24 19:08:40 2013
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : amir.server.com:0  (local to host amir.server.com)
           UUID : 5a879bae:78809be1:8da3ad40:1ab922de
         Events : 57

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8        6        1      active sync   /dev/sda6
       3       8        7        2      active sync   /dev/sda7
[root@amir /]# mdadm /dev/md0 --add /dev/sda5
mdadm: added /dev/sda5
[root@amir /]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Jul 24 17:52:58 2013
     Raid Level : raid5
     Array Size : 48196608 (45.96 GiB 49.35 GB)
  Used Dev Size : 24098304 (22.98 GiB 24.68 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Wed Jul 24 19:13:47 2013
          State : clean, degraded, recovering
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1  {/dev/sda5 is rebuilding}

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 2% complete
           Name : amir.server.com:0  (local to host amir.server.com)
           UUID : 5a879bae:78809be1:8da3ad40:1ab922de
         Events : 61

    Number   Major   Minor   RaidDevice State
       4       8        5        0      spare rebuilding   /dev/sda5  {added partition}
       1       8        6        1      active sync   /dev/sda6
       3       8        7        2      active sync   /dev/sda7

[root@amir /]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Jul 24 17:52:58 2013
     Raid Level : raid5
     Array Size : 48196608 (45.96 GiB 49.35 GB)
  Used Dev Size : 24098304 (22.98 GiB 24.68 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Wed Jul 24 19:47:39 2013
          State : clean
 Active Devices : 3  {it's working fine}
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : amir.server.com:0  (local to host amir.server.com)
           UUID : 5a879bae:78809be1:8da3ad40:1ab922de
         Events : 124

    Number   Major   Minor   RaidDevice State
       4       8        5        0      active sync   /dev/sda5
       1       8        6        1      active sync   /dev/sda6
       3       8        7        2      active sync   /dev/sda7
[root@amir /]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sda5[4] sda6[1] sda7[3]
      48196608 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
     
unused devices: <none>
[root@amir /]# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
[root@amir /]# mdadm --assemble --scan
mdadm: /dev/md0 has been started with 3 drives.

###################### Now we will fail two RAID partitions; in this case all data is destroyed ######################
[root@amir /]# mdadm /dev/md0 --fail /dev/sda6
mdadm: set /dev/sda6 faulty in /dev/md0
[root@amir /]# mdadm /dev/md0 --fail /dev/sda7
mdadm: set /dev/sda7 faulty in /dev/md0
[root@amir /]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sda5[4] sda7[3](F)
      48196608 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/1] [U__]
     
unused devices: <none>
[root@amir /]# mdadm –stop /dev/md0
mdadm: stopped /dev/md0
[root@amir /]# mdadm –assemble –scan
mdadm: /dev/md0 assembled from 1 drive - not enough to start the array.
[root@amir /]# mdadm --detail /dev/md0
mdadm: md device /dev/md0 does not appear to be active.
[root@amir /]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : inactive sda5[4](S) sda7[3](S) sda6[1](S)
      97955309 blocks super 1.2
     
unused devices: <none>
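With two members failed the array cannot be started again. If you are finished experimenting and want to reuse the partitions, wipe the md superblocks so the old metadata does not resurface at the next boot. A sketch (destructive; device names as in this walkthrough):

```shell
# Tear down the dead array and erase RAID metadata from each member.
# WARNING: --zero-superblock is destructive.
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda5 /dev/sda6 /dev/sda7
```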


THE END
