[rlug] How do I reactivate a RAID 5 with mdadm?
Paul Lacatus (Personal)
paul at paul-lacatus.ro
Sat Dec 30 13:05:01 EET 2017
I reassembled the array as /dev/md0. At first it refused to take
/dev/sda1 back: with --re-add it would not accept it, but with --add it
did, and it is now rebuilding:
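The reassembly step itself is not shown above; with the old array stuck inactive, it was presumably something along these lines (a sketch only, assuming the member names from the listing below; `--run` is needed to start a degraded array):

```shell
# Stop the stale array that was auto-assembled as inactive
mdadm --stop /dev/md127

# Reassemble from the three members that stayed in sync; --run
# starts the array even though it is degraded (3 of 4 devices)
mdadm --assemble --run /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1
```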
[root@datavault ~]# mdadm /dev/md0 --add /dev/sda1
mdadm: added /dev/sda1
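While the resync runs, progress and the estimated finish time can be followed in the kernel's md status file:

```shell
# One-off snapshot: the recovery line shows percent done and ETA
cat /proc/mdstat

# Or refresh every 5 seconds until the recovery line disappears
watch -n 5 cat /proc/mdstat
```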
[root@datavault ~]# mdadm -D /dev/md0
/dev/md0:
Version : 0.90
Creation Time : Sat Dec 18 20:07:41 2010
Raid Level : raid5
Array Size : 2930279808 (2794.53 GiB 3000.61 GB)
Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Sat Dec 30 13:00:12 2017
State : clean, degraded, recovering
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 64K
Consistency Policy : resync
Rebuild Status : 0% complete
UUID : 188f7506:c03bd2ac:cdc3d78b:f6534f77
Events : 0.14849
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
2 8 49 2 active sync /dev/sdd1
4 8 1 3 spare rebuilding /dev/sda1
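A plausible reason --re-add was refused: these are version 0.90 superblocks with no write-intent bitmap, and the Events counters in the --examine output quoted below show /dev/sda1 stopped at 14838 while the surviving members reached 14847, so mdadm cannot replay just the missed writes and has to rebuild the disk as a spare. A minimal sketch of that check:

```shell
# Events counters copied from the mdadm --examine output below
events_sda1=14838   # the disk that dropped out
events_sdd1=14847   # a member that stayed in the array

# Without a bitmap, --re-add only works while the counters still
# match; a lower count means the superblock is stale
if [ "$events_sda1" -lt "$events_sdd1" ]; then
    verdict="stale: full resync needed"
else
    verdict="in sync: --re-add possible"
fi
echo "/dev/sda1 is $verdict"
```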
On 30-Dec-17 12:47, Paul Lacatus (Personal) wrote:
> The power supply in my home file server died. Until I figured out
> that the PSU was to blame (it only acted up when all the HDDs were
> powered), it also took down CentOS 6.9, which kept giving a "kernel
> panic - not tainted". I took the opportunity to reinstall with
> CentOS 7.4.
>
> The thing is that the array, formerly /dev/md127, is now inactive:
>
>> [root@datavault ~]# mdadm --detail /dev/md127
>> /dev/md127:
>> Version : 0.90
>> Raid Level : raid0
>> Total Devices : 4
>> Preferred Minor : 0
>> Persistence : Superblock is persistent
>>
>> State : inactive
>>
>> UUID : 188f7506:c03bd2ac:cdc3d78b:f6534f77
>> Events : 0.14838
>>
>> Number Major Minor RaidDevice
>>
>> - 8 1 - /dev/sda1
>> - 8 17 - /dev/sdb1
>> - 8 33 - /dev/sdc1
>> - 8 49 - /dev/sdd1
>
> Examined individually, the data on each disk looks fine, but the
> array will not start because of the checks performed. Do I have to
> recreate the array, or can it be restarted?
>
>> cat raid.status
>> /dev/sda1:
>> Magic : a92b4efc
>> Version : 0.90.00
>> UUID : 188f7506:c03bd2ac:cdc3d78b:f6534f77
>> Creation Time : Sat Dec 18 20:07:41 2010
>> Raid Level : raid5
>> Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
>> Array Size : 2930279808 (2794.53 GiB 3000.61 GB)
>> Raid Devices : 4
>> Total Devices : 4
>> Preferred Minor : 127
>>
>> Update Time : Thu Dec 28 11:38:37 2017
>> State : clean
>> Active Devices : 4
>> Working Devices : 4
>> Failed Devices : 0
>> Spare Devices : 0
>> Checksum : 279913d8 - correct
>> Events : 14838
>>
>> Layout : left-symmetric
>> Chunk Size : 64K
>>
>> Number Major Minor RaidDevice State
>> this 3 8 1 3 active sync /dev/sda1
>>
>> 0 0 8 17 0 active sync /dev/sdb1
>> 1 1 8 33 1 active sync /dev/sdc1
>> 2 2 8 49 2 active sync /dev/sdd1
>> 3 3 8 1 3 active sync /dev/sda1
>> /dev/sdb1:
>> Magic : a92b4efc
>> Version : 0.90.00
>> UUID : 188f7506:c03bd2ac:cdc3d78b:f6534f77
>> Creation Time : Sat Dec 18 20:07:41 2010
>> Raid Level : raid5
>> Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
>> Array Size : 2930279808 (2794.53 GiB 3000.61 GB)
>> Raid Devices : 4
>> Total Devices : 3
>> Preferred Minor : 127
>>
>> Update Time : Sat Dec 30 10:19:54 2017
>> State : clean
>> Active Devices : 3
>> Working Devices : 3
>> Failed Devices : 1
>> Spare Devices : 0
>> Checksum : 279ba477 - correct
>> Events : 14846
>>
>> Layout : left-symmetric
>> Chunk Size : 64K
>>
>> Number Major Minor RaidDevice State
>> this 0 8 17 0 active sync /dev/sdb1
>>
>> 0 0 8 17 0 active sync /dev/sdb1
>> 1 1 8 33 1 active sync /dev/sdc1
>> 2 2 8 49 2 active sync /dev/sdd1
>> 3 3 0 0 3 faulty removed
>> /dev/sdc1:
>> Magic : a92b4efc
>> Version : 0.90.00
>> UUID : 188f7506:c03bd2ac:cdc3d78b:f6534f77
>> Creation Time : Sat Dec 18 20:07:41 2010
>> Raid Level : raid5
>> Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
>> Array Size : 2930279808 (2794.53 GiB 3000.61 GB)
>> Raid Devices : 4
>> Total Devices : 3
>> Preferred Minor : 127
>>
>> Update Time : Sat Dec 30 10:20:32 2017
>> State : active
>> Active Devices : 3
>> Working Devices : 3
>> Failed Devices : 1
>> Spare Devices : 0
>> Checksum : 279b6ab1 - correct
>> Events : 14847
>>
>> Layout : left-symmetric
>> Chunk Size : 64K
>>
>> Number Major Minor RaidDevice State
>> this 1 8 33 1 active sync /dev/sdc1
>>
>> 0 0 8 17 0 active sync /dev/sdb1
>> 1 1 8 33 1 active sync /dev/sdc1
>> 2 2 8 49 2 active sync /dev/sdd1
>> 3 3 0 0 3 faulty removed
>> /dev/sdd1:
>> Magic : a92b4efc
>> Version : 0.90.00
>> UUID : 188f7506:c03bd2ac:cdc3d78b:f6534f77
>> Creation Time : Sat Dec 18 20:07:41 2010
>> Raid Level : raid5
>> Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
>> Array Size : 2930279808 (2794.53 GiB 3000.61 GB)
>> Raid Devices : 4
>> Total Devices : 3
>> Preferred Minor : 127
>>
>> Update Time : Sat Dec 30 10:20:32 2017
>> State : active
>> Active Devices : 3
>> Working Devices : 3
>> Failed Devices : 1
>> Spare Devices : 0
>> Checksum : 279b6ac3 - correct
>> Events : 14847
>>
>> Layout : left-symmetric
>> Chunk Size : 64K
>>
>> Number Major Minor RaidDevice State
>> this 2 8 49 2 active sync /dev/sdd1
>>
>> 0 0 8 17 0 active sync /dev/sdb1
>> 1 1 8 33 1 active sync /dev/sdc1
>> 2 2 8 49 2 active sync /dev/sdd1
>> 3 3 0 0 3 faulty removed
>> [root@datavault ~]#
>
>
>
> _______________________________________________
> RLUG mailing list
> RLUG at lists.lug.ro
> http://lists.lug.ro/mailman/listinfo/rlug_lists.lug.ro