Bug 21735

Summary: mdadm RAID volume names change depending on the command used to display them
Product: Mageia
Reporter: peter lawford <petlaw726>
Component: RPM Packages
Assignee: Mageia Bug Squad <bugsquad>
Status: RESOLVED INVALID
QA Contact:
Severity: normal
Priority: Normal
CC: davidwhodgins
Version: 6
Target Milestone: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Source RPM: mdadm-4.0-1.mga6
CVE:
Status comment:

Description peter lawford 2017-09-18 16:38:34 CEST
Description of problem:
The names of the RAID volumes are not persistent and change depending on which command is used to display them:

[root@magaux alain4]# mdadm --detail --scan -v
ARRAY /dev/md127 level=raid0 num-devices=6 metadata=1.2 name=swap:127 UUID=72b3d8de:8f9fe8d1:3b3f1675:8249dd9d
   devices=/dev/sde3,/dev/sdf3,/dev/sdg3,/dev/sdh4,/dev/sdi3,/dev/sdj3
ARRAY /dev/md126 level=raid6 num-devices=6 metadata=1.2 name=stock:80 UUID=5c975406:2510297d:dd872bf3:733648b6
   devices=/dev/sde8,/dev/sdf8,/dev/sdg8,/dev/sdh5,/dev/sdi8,/dev/sdj8
ARRAY /dev/md125 level=raid5 num-devices=4 metadata=0.90 UUID=30b3de45:455285d8:0037686d:9fe88fa7
   devices=/dev/sda5,/dev/sdb5,/dev/sdc5,/dev/sdd5
ARRAY /dev/md124 level=raid5 num-devices=4 metadata=0.90 UUID=6d8457a0:23583d9e:217904e6:da58c8b3
   devices=/dev/sda9,/dev/sdb9,/dev/sdc9,/dev/sdd9

[root@magaux alain4]# mdadm --examine --scan -v
ARRAY /dev/md/127  level=raid0 metadata=1.2 num-devices=6 UUID=72b3d8de:8f9fe8d1:3b3f1675:8249dd9d name=swap:127
   devices=/dev/sde3,/dev/sdf3,/dev/sdg3,/dev/sdh4,/dev/sdi3,/dev/sdj3
ARRAY /dev/md/80  level=raid6 metadata=1.2 num-devices=6 UUID=5c975406:2510297d:dd872bf3:733648b6 name=stock:80
   devices=/dev/sde8,/dev/sdf8,/dev/sdg8,/dev/sdh5,/dev/sdi8,/dev/sdj8
ARRAY /dev/md5 level=raid5 num-devices=4 UUID=30b3de45:455285d8:0037686d:9fe88fa7
   devices=/dev/sda5,/dev/sdb5,/dev/sdc5,/dev/sdd5
ARRAY /dev/md124 level=raid5 num-devices=4 UUID=6d8457a0:23583d9e:217904e6:da58c8b3
   devices=/dev/sda9,/dev/sdb9,/dev/sdc9,/dev/sdd9

[root@magaux alain4]# cat /proc/mdstat 
Personalities : [raid0] [raid6] [raid5] [raid4] 
md124 : active raid5 sdd9[3] sda9[0] sdc9[2] sdb9[1]
      928256256 blocks level 5, 128k chunk, algorithm 2 [4/4] [UUUU]
      
md125 : active raid5 sdd5[3] sdc5[2] sda5[0] sdb5[1]
      125906112 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      
md126 : active raid6 sdf8[1] sde8[0] sdh5[6] sdg8[2] sdj8[5] sdi8[4]
      3273324544 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
      bitmap: 0/7 pages [0KB], 65536KB chunk

md127 : active raid0 sdf3[1] sde3[0] sdj3[4] sdi3[3] sdg3[2] sdh4[5]
      6257664 blocks super 1.2 512k chunks
      
unused devices: <none>

This may lead to confusion between the different RAID volumes.
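One common way to keep array names consistent across commands and reboots is to pin each array by UUID in /etc/mdadm.conf. A minimal sketch, using only the UUIDs from the scan output above (the md device names chosen here are illustrative, not prescribed):

```
# /etc/mdadm.conf -- sketch: pinning arrays by UUID gives each one a
# stable name regardless of which scan command is used to display it.
# Only the UUIDs below come from the output above; the /dev/mdNNN
# names are illustrative.
ARRAY /dev/md127 UUID=72b3d8de:8f9fe8d1:3b3f1675:8249dd9d
ARRAY /dev/md126 UUID=5c975406:2510297d:dd872bf3:733648b6
ARRAY /dev/md125 UUID=30b3de45:455285d8:0037686d:9fe88fa7
ARRAY /dev/md124 UUID=6d8457a0:23583d9e:217904e6:da58c8b3
```

After editing mdadm.conf, the initrd typically has to be regenerated (e.g. with dracut -f) so that the names also apply to arrays assembled at boot.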

Comment 1 peter lawford 2017-09-19 01:21:13 CEST
Sorry, I think I was wrong and it's not a bug, since the options --examine and --detail refer to different things: --detail reports the running array as assembled by the kernel, while --examine reads the metadata stored in the superblocks of the component devices.
Comment 2 Dave Hodgins 2017-09-19 03:51:08 CEST
Thanks Peter. Closing the bug as invalid.

Status: NEW => RESOLVED
CC: (none) => davidwhodgins
Resolution: (none) => INVALID