Bug 22704 - since the last update of mageia6, attempting to mount a device based on a level 6 mdadm raid volume blocks the system
Status: RESOLVED FIXED
Alias: None
Product: Mageia
Classification: Unclassified
Component: RPM Packages
Version: 6
Hardware: x86_64 Linux
Priority: Normal
Severity: major
Target Milestone: ---
Assignee: Kernel and Drivers maintainers
QA Contact:
URL:
Whiteboard:
Keywords:
Depends on: 22731
Blocks:
Reported: 2018-03-04 23:33 CET by peter lawford
Modified: 2018-03-19 13:24 CET
CC: 1 user

See Also:
Source RPM:
CVE:
Status comment:


Attachments
return of dmesg (104.44 KB, text/plain)
2018-03-05 12:49 CET, peter lawford
return of journalctl -b -0 (335.51 KB, text/plain)
2018-03-05 12:50 CET, peter lawford

Description peter lawford 2018-03-04 23:33:10 CET
Description of problem:
I have 4 mdadm-raid volumes:

[alain4@mag6 ~]$ cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] 
md126 : active raid5 sdd9[4] sda9[0] sdc9[2] sdb9[1]
      927862272 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/3 pages [0KB], 65536KB chunk

md125 : active raid5 sdd5[4] sda5[0] sdc5[2] sdb5[1]
      125807616 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      
md127 : active raid6 sdf8[1] sde8[0] sdi8[4] sdj8[5] sdg8[7] sdh5[6]
      3273324544 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
      bitmap: 0/7 pages [0KB], 65536KB chunk

md123 : active raid5 sdc3[2] sdd3[4] sdb3[1] sda3[0]
      94291968 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      
unused devices: <none>

As you can see, three are level 5 volumes and one (md127) is a level 6 volume.
The latter is clean and active:

[root@mag6 alain4]# mdadm --detail /dev/md127
/dev/md127:
        Version : 1.2
  Creation Time : Fri Apr 24 22:43:57 2015
     Raid Level : raid6
     Array Size : 3273324544 (3121.69 GiB 3351.88 GB)
  Used Dev Size : 818331136 (780.42 GiB 837.97 GB)
   Raid Devices : 6
  Total Devices : 6
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sun Mar  4 22:52:54 2018
          State : clean 
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : mageia:stock
           UUID : 5c975406:2510297d:dd872bf3:733648b6
         Events : 14867

    Number   Major   Minor   RaidDevice State
       0       8       72        0      active sync   /dev/sde8
       1       8       88        1      active sync   /dev/sdf8
       7       8      104        2      active sync   /dev/sdg8
       6       8      117        3      active sync   /dev/sdh5
       4       8      136        4      active sync   /dev/sdi8
       5       8      152        5      active sync   /dev/sdj8
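The healthy state shown above can also be checked non-interactively. Below is a minimal sketch, assuming only the /proc/mdstat format shown above; `check_mdstat` is a hypothetical helper name, not an existing tool. It flags any array whose member-status bracket contains an "_" (a failed device):

```shell
# Hypothetical helper: read /proc/mdstat-formatted text on stdin and print
# "ok <md>" or "DEGRADED <md>" for each array, based on the [UUUU]-style
# status bracket at the end of each array's status line.
check_mdstat() {
  awk '
    /^md/ { dev = $1 }                 # remember the current array name
    /\[[U_]+\]$/ {                     # status line ends with e.g. [UUUU] or [UU_U]
      if ($NF ~ /_/) print "DEGRADED " dev
      else           print "ok " dev
    }'
}

# Usage (on a live system):
#   check_mdstat < /proc/mdstat
```

On the output above, all four arrays would print as "ok".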

md127 backs a VG called vgstock where I store all my data.
Below is the content of /etc/fstab:

[alain4@mag6 ~]$ cat /etc/fstab
/dev/vgmag6/lvrootmag6 / ext4 acl,relatime 1 1
/dev/vgmag6/lvbootmag6 /boot ext4 acl,relatime 1 2
/dev/vgmag6/lvhomemag6 /home ext4 acl,relatime 1 2
/dev/vghome/lvscratch /home/alain4/Téléchargements ext4 relatime 1 2
/dev/vghome/lvhomedoc /home/alain4/Documents ext4 relatime 1 2
/dev/vghome/lvhomemusic /home/alain4/Musique ext4 users,noauto 1 2
/dev/sr0 /media/cdrom auto umask=0,users,iocharset=utf8,noauto,ro,exec 0 0
/dev/sr1 /media/cdrom2 auto umask=0,users,iocharset=utf8,noauto,ro,exec 0 0
/dev/vgremote1/lvstockr1 /home/alain4/mnt/stockremote1 ext4 users,noauto,nofail 0 0
/dev/vgremote1/lvhdbckr1 /home/alain4/mnt/homedocbackupremote1 ext4 users,noauto,nofail 0 0
/dev/vgremote/lvstockr /home/alain4/mnt/stockremote ext4 users,noauto,nofail 0 0
/dev/vgremote/lvhdbckr /home/alain4/mnt/homedocbackupremote ext4 users,noauto,nofail 0 0
/dev/vgremote/lvstockmusicr /home/alain4/mnt/stockmusicremote ext4 users,noauto,nofail 0 0
/dev/vgstock/lvhomedocbackup /home/alain4/mnt/homedocbackup ext4 users,noauto 1 2
/dev/vgstock/lvstockmusic /home/alain4/mnt/stockmusic ext4 users,noauto 1 2
/dev/vgstock/lvstock /home/alain4/mnt/stock ext4 users,noauto 1 2
/dev/vgstock/lvtempo /mnt/temp ext4 noauto 1 2
none /proc proc defaults 0 0
none /tmp tmpfs defaults 0 0
# entry for swap: /dev/sde2
UUID=6014701d-de12-47ec-a588-61c334e4e8f3 swap swap defaults 0 0

/dev/vghome/lvhome{doc,music} are backed by md126 (level 5) and can be mounted and unmounted without any problem (lvhomedoc is automatically mounted at boot).

In contrast, if I mount /dev/vgstock/lvhomedocbackup (backed by md127, level 6):

[alain4@mag6 ~]$ mount mnt/homedocbackup

the following command:

[alain4@mag6 ~]$ ls mnt/homedocbackup

gives no output and blocks the system; the same happens if I mount as root:

[root@mag6 alain4]# mount /dev/vgstock/lvhomedocbackup /home/alain4/mnt/homedocbackup

The kill command is ineffective. After typing:

[root@mag6 alain4]# shutdown -r now   (or shutdown -h now)

the splash screen begins with the line:

"A stop job is running for Disk Manager"

and ends with the line:

"systemd-journald[858]: Failed to send WATCHDOG=1 notification message: Transport endpoint is not connected"

repeated indefinitely. The only way to reboot or power off is to press the corresponding button on the case.

Sorry for this very long message. I can't individually test which package of the update (more than 71 packages) is the culprit.

Comment 1 peter lawford 2018-03-05 00:01:46 CET
I wish to add that if mageia6 is unable to manage level 6 RAID volumes, that makes it unusable for commercial servers, which make heavy use of level 6 RAID.
Comment 2 Thomas Backlund 2018-03-05 10:27:24 CET
raid6 should be supported without problems, I've been using that myself...

When you do the:
mount mnt/homedocbackup
ls mnt/homedocbackup

Can you check whether there is any report in the logs about what fails, either with dmesg or journalctl?

You can also capture the log (as root) with:

 journalctl -b -0 >bug22704.log

and attach the "bug22704.log" to this report so I can see if there is info about what fails...

Assignee: bugsquad => kernel
CC: (none) => tmb

Comment 3 peter lawford 2018-03-05 12:49:46 CET
Created attachment 10024 [details]
return of dmesg
Comment 4 peter lawford 2018-03-05 12:50:47 CET
Created attachment 10025 [details]
return of journalctl -b -0
Comment 5 peter lawford 2018-03-05 12:55:19 CET
(In reply to Thomas Backlund from comment #2)
> raid6 should be supported without problems, I've been using that myself...
> [...]

Attached are the files you asked for; see just before 12:40.
It's very strange, because mounting /dev/vgstock/lvdump causes no problem:

[root@mag6 alain4]# mount /dev/vgstock/lvdump /mnt/dump
[root@mag6 alain4]# ls /mnt/dump
archivesdump/  bootmga4-64_9     homemag6_0     homemga5-64_0  mdvfree64_1    rootmga4-64_7  usrmagaux_0   usrmga5-64_1  varmga4-64_2  varmga5-64_5
bootmag6_0     bootmga5-64_0     homemagaux_0   homemga5-64_1  one_0          rootmga4-64_8  usrmagaux_1   usrmga5-64_2  varmga4-64_3  varmga5-64_6
bootmagaux_0   bootmga5-64_1     homemagaux_1   homemga5-64_2  one_9          rootmga4-64_9  usrmga4-64_0  usrmga5-64_3  varmga4-64_4  varmga5-64_7
bootmagaux_1   bootmga5-64_2     homemga4-64_0  homemga5-64_3  rootmag6_0     rootmga5-64_0  usrmga4-64_1  usrmga5-64_4  varmga4-64_5  varmga5-64_8
bootmga4-64_0  bootmga5-64_3     homemga4-64_1  homemga5-64_4  rootmagaux_0   rootmga5-64_1  usrmga4-64_2  usrmga5-64_5  varmga4-64_6  varmga5-64_9
bootmga4-64_1  bootmga5-64_4     homemga4-64_2  homemga5-64_5  rootmagaux_1   rootmga5-64_2  usrmga4-64_3  usrmga5-64_6  varmga4-64_7
bootmga4-64_2  bootmga5-64_5     homemga4-64_3  homemga5-64_6  rootmga4-64_0  rootmga5-64_3  usrmga4-64_4  usrmga5-64_7  varmga4-64_8
bootmga4-64_3  bootmga5-64_6     homemga4-64_4  homemga5-64_7  rootmga4-64_1  rootmga5-64_4  usrmga4-64_5  usrmga5-64_8  varmga4-64_9
bootmga4-64_4  bootmga5-64_7     homemga4-64_5  homemga5-64_8  rootmga4-64_2  rootmga5-64_5  usrmga4-64_6  usrmga5-64_9  varmga5-64_0
bootmga4-64_5  bootmga5-64_8     homemga4-64_6  homemga5-64_9  rootmga4-64_3  rootmga5-64_6  usrmga4-64_7  varmagaux_0   varmga5-64_1
bootmga4-64_6  bootmga5-64_9     homemga4-64_7  lost+found/    rootmga4-64_4  rootmga5-64_7  usrmga4-64_8  varmagaux_1   varmga5-64_2
bootmga4-64_7  configbackup/     homemga4-64_8  mdvfree32_0    rootmga4-64_5  rootmga5-64_8  usrmga4-64_9  varmga4-64_0  varmga5-64_3
bootmga4-64_8  dumpdates.backup  homemga4-64_9  mdvfree64_0    rootmga4-64_6  rootmga5-64_9  usrmga5-64_0  varmga4-64_1  varmga5-64_4
[root@mag6 alain4]# umount /dev/vgstock/lvdump

and after:

[alain4@mag6 ~]$ mount mnt/homedocbackup
[alain4@mag6 ~]$ ls mnt/homedocbackup

with no output from the last command (it hangs)
Comment 6 peter lawford 2018-03-05 13:09:23 CET
(In reply to Thomas Backlund from comment #2)
> raid6 should be supported without problems, I've been using that myself...
> [...]

I wish to add that BEFORE the last updates, the command "mount mnt/homedocbackup" worked fine, and not only on mageia6 but also on mageia[12345].
Fortunately, I have not yet updated my other mageia6, which lets me perform all backup operations.
Comment 7 Thomas Backlund 2018-03-05 14:07:11 CET
ok,
looking at your logs, I note you have one WDC Black 1TB disk that either has a cabling issue or might be failing:

This one:
mars 05 12:31:18 mag6 kernel: ata15: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
mars 05 12:31:18 mag6 kernel: ata15.00: ATA-8: WDC WD1002FAEX-00Z3A0, 05.01D05, max UDMA/133
mars 05 12:31:18 mag6 kernel: ata15.00: 1953525168 sectors, multi 0: LBA48 NCQ (depth 31/32), AA
mars 05 12:31:18 mag6 kernel: ata15.00: configured for UDMA/133

Then it hits:
mars 05 12:36:14 mag6 kernel: ata15.00: exception Emask 0x10 SAct 0x80 SErr 0x400000 action 0x6 frozen
mars 05 12:36:14 mag6 kernel: ata15.00: irq_stat 0x08000000, interface fatal error
mars 05 12:36:14 mag6 kernel: ata15: SError: { Handshk }
mars 05 12:36:14 mag6 kernel: ata15.00: failed command: WRITE FPDMA QUEUED
mars 05 12:36:14 mag6 kernel: ata15.00: cmd 61/04:38:4b:db:de/00:00:12:00:00/40 tag 7 ncq dma 2048 out
                                       res 40/00:3c:4b:db:de/00:00:12:00:00/40 Emask 0x10 (ATA bus error)
mars 05 12:36:14 mag6 kernel: ata15.00: status: { DRDY }
mars 05 12:36:14 mag6 kernel: ata15: hard resetting link
mars 05 12:36:15 mag6 kernel: ata15: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
mars 05 12:36:15 mag6 kernel: ata15.00: configured for UDMA/133
mars 05 12:36:15 mag6 kernel: ata15: EH complete



and since that 1TB drive is one of the raid5 drives, you then get:

[  418.062080] ------------[ cut here ]------------
[  418.062093] WARNING: CPU: 6 PID: 0 at kernel/rcu/tree.c:2725 rcu_process_callbacks+0x4d6/0x4f0
[  418.062095] Modules linked in: ipt_IFWLOG ipt_psd xt_set ip_set_hash_ip ip_set xt_recent iptable_nat nf_nat_ipv4 xt_comment ipt_REJECT nf_reject_ipv4 xt_addrtype bridge stp llc xt_mark iptable_mangle xt_tcpudp xt_CT iptable_raw xt_multiport nf_conntrack_ipv4 nf_defrag_ipv4 xt_conntrack xt_NFLOG nfnetlink_log xt_LOG nf_log_ipv4 nf_log_common nf_nat_tftp nf_nat_snmp_basic nf_conntrack_snmp nf_nat_sip nf_nat_pptp nf_nat_proto_gre nf_nat_irc nf_nat_h323 nf_nat_ftp nf_nat_amanda ts_kmp nf_conntrack_amanda nf_nat nf_conntrack_sane nf_conntrack_tftp nf_conntrack_sip nf_conntrack_pptp nf_conntrack_proto_gre nf_conntrack_netlink nfnetlink nf_conntrack_netbios_ns nf_conntrack_broadcast nf_conntrack_irc nf_conntrack_h323 nf_conntrack_ftp nf_conntrack iptable_filter ip_tables x_tables af_packet binfmt_misc msr
[  418.062142]  vboxnetadp(O) vboxnetflt(O) vboxdrv(O) it87 hwmon_vid capi kernelcapi iTCO_wdt iTCO_vendor_support gpio_ich uvcvideo videobuf2_vmalloc videobuf2_memops videobuf2_v4l2 videobuf2_core intel_powerclamp videodev snd_usb_audio media snd_usbmidi_lib snd_rawmidi snd_seq_device coretemp kvm_intel kvm irqbypass usblp crc32c_intel intel_cstate intel_uncore input_leds i2c_i801 snd_hda_codec_realtek snd_hda_codec_generic snd_hda_intel snd_hda_codec snd_hda_core snd_hwdep snd_pcm snd_timer snd r8169 soundcore mii shpchp lpc_ich i7core_edac acpi_cpufreq evdev sch_fq_codel ipv6 crc_ccitt autofs4 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx hid_generic usbhid hid uas usb_storage uhci_hcd serio_raw xhci_pci xhci_hcd firewire_ohci firewire_core ehci_pci ehci_hcd crc_itu_t sr_mod
[  418.062174]  usbcore usb_common nouveau button video mxm_wmi wmi i2c_algo_bit drm_kms_helper ttm drm dm_mirror dm_region_hash dm_log dm_mod ide_pci_generic jmicron ide_core ata_generic pata_acpi sata_sil pata_jmicron
[  418.062184] CPU: 6 PID: 0 Comm: swapper/6 Tainted: P          IO    4.14.20-server-1.mga6 #1
[  418.062186] Hardware name: Gigabyte Technology Co., Ltd. X58A-UD7/X58A-UD7, BIOS FB 08/24/2010
[  418.062187] task: ffff9ccb43b23780 task.stack: ffffbb67831b0000
[  418.062189] RIP: 0010:rcu_process_callbacks+0x4d6/0x4f0
[  418.062191] RSP: 0018:ffff9ccb47383f10 EFLAGS: 00010002
[  418.062192] RAX: 0000000000000000 RBX: ffff9ccb473a3180 RCX: 00000001802a0019
[  418.062193] RDX: ffffffffffffd801 RSI: ffff9ccb47383f20 RDI: ffff9ccb473a31b8
[  418.062195] RBP: ffffffffa6250380 R08: 0000000043414301 R09: 00000001802a0019
[  418.062196] R10: ffff9ccb47383e30 R11: 0000000000000000 R12: ffff9ccb473a31b8
[  418.062197] R13: ffff9ccb43b23780 R14: ffffffffffffffff R15: 7fffffffffffffff
[  418.062198] FS:  0000000000000000(0000) GS:ffff9ccb47380000(0000) knlGS:0000000000000000
[  418.062200] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  418.062201] CR2: 00007f8540ed9010 CR3: 000000040020a000 CR4: 00000000000006e0
[  418.062202] Call Trace:
[  418.062205]  <IRQ>
[  418.062209]  ? rebalance_domains+0x106/0x2b0
[  418.062213]  __do_softirq+0xf5/0x295
[  418.062216]  irq_exit+0xae/0xb0
[  418.062218]  smp_apic_timer_interrupt+0x70/0x130
[  418.062220]  apic_timer_interrupt+0x7d/0x90
[  418.062221]  </IRQ>
[  418.062224] RIP: 0010:cpuidle_enter_state+0xa1/0x300
[  418.062225] RSP: 0018:ffffbb67831b3ea8 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff10
[  418.062226] RAX: ffff9ccb473a2480 RBX: 0000000000000004 RCX: 000000000000001f
[  418.062228] RDX: 0000000000000000 RSI: 0000000027863959 RDI: 0000000000000000
[  418.062229] RBP: ffffffffa62b8960 R08: 0000000000000101 R09: 0000000000000018
[  418.062230] R10: 0000000000000874 R11: 0000000000000b85 R12: ffff9ccb473aadc0
[  418.062231] R13: ffffffffa62b8af8 R14: 000000615669e291 R15: 000000615670cd24
[  418.062234]  ? cpuidle_enter_state+0x92/0x300
[  418.062236]  do_idle+0x185/0x1e0
[  418.062238]  cpu_startup_entry+0x6f/0x80
[  418.062241]  start_secondary+0x1a9/0x200
[  418.062244]  secondary_startup_64+0xa5/0xb0
[  418.062245] Code: 17 01 0f 8f 80 fd ff ff 48 8b 15 f6 25 17 01 48 89 93 b0 00 00 00 e9 6d fd ff ff 4c 89 f6 4c 89 e7 e8 5f 73 72 00 e9 eb fb ff ff <0f> 0b e9 9e fd ff ff 0f 0b e9 9d fc ff ff e8 e7 4f f9 ff 0f 1f 
[  418.062266] ---[ end trace 28a4b4c46c01f182 ]---


and then a loop of:
INFO: task md127_raid6:1287 blocked for more than 120 seconds.

which explains why you can't access it...

So I suggest you check that drive with smartctl from smartmontools in case the disk is reporting errors...

Also, you can try booting an older kernel to see if the 4.14.20 kernel is the one giving you trouble (if it's not the hardware failing).
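As a sketch of the smartctl check suggested above (assumptions: `smart_verdict` is a hypothetical helper name; the md127 member disks are the ones listed in this report and will differ on other systems; the health line format is the one `smartctl -H` prints for ATA disks, as seen later in this thread):

```shell
# Hypothetical helper: extract the PASSED/FAILED verdict from the
# "SMART overall-health self-assessment test result:" line of smartctl output.
smart_verdict() {
  awk -F': *' '/overall-health self-assessment test result/ { print $2 }'
}

# Usage (as root, on a live system; member disks of md127 per this report):
#   for d in /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj; do
#     printf '%s: ' "$d"
#     smartctl -H "$d" | smart_verdict
#   done
```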
Comment 8 Thomas Backlund 2018-03-05 14:20:34 CET
Actually, come to think of it...

a better kernel to test is 4.14.24-1, which I have in Core Updates Testing.

I have backported a couple of upstream fixes for the SCSI/block layer that address issues with RCU calls in the SCSI layer introduced in upstream 4.14.20.
Comment 9 peter lawford 2018-03-05 15:08:05 CET
(In reply to Thomas Backlund from comment #7)
> looking at your logs, I note you have one WDC Black 1TB disk that either
> has a cabling issue or might be failing:
> [...]
> and since that 1TB drive is one of the raid5 drives, you then get:
> [...]
> So I suggest you check that drive with smartctl from smartmontools in case
> the disk is reporting errors...
> [...]

The WDC Black 1TB disk you mention is a drive of the raid6 array, not the raid5.
Below is the list of my disks:

[alain4@mag6 ~]$ lsblk -S |sort
NAME HCTL       TYPE VENDOR   MODEL             REV TRAN
sda  0:0:0:0    disk ATA      WDC WD5001AALS-0 3B01 sata
sdb  1:0:0:0    disk ATA      ST3500320AS      SD1A sata
sdc  2:0:0:0    disk ATA      ST3500320AS      SD15 sata
sdd  3:0:0:0    disk ATA      WDC WD5001AALS-0 0K05 sata
sde  4:0:0:0    disk ATA      ST31000528AS     CC38 sata
sdf  5:0:0:0    disk ATA      ST31000528AS     CC44 sata
sdg  6:0:0:0    disk ATA      ST31000528AS     CC44 sata
sdh  7:0:0:0    disk ATA      WDC WD1003FZEX-0 1A01 sata
sdi  14:0:0:0   disk ATA      WDC WD1002FAEX-0 1D05 sata
sdj  15:0:0:0   disk ATA      WDC WD1002FAEX-0 1D05 sata
sdk  22:0:0:0   disk TOSHIBA  External USB 3.0 0    usb
sdl  23:0:0:0   disk WD       10EARS External  1.75 usb
sr0  20:0:0:0   rom  PIONEER  BD-ROM  BDC-202  1.01 sata
sr1  21:0:0:0   rom  SONY     DVD RW DRU-865S  1.61 sata

As you can see, I have 3 WDC Black 1TB drives in md127: sd[hij].
And indeed:

[root@mag6 alain4]# smartctl -a /dev/sdi
smartctl 6.5 2016-05-07 r4318 [x86_64-linux-4.14.20-server-1.mga6] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

gives no further output; but the day before yesterday it returned normally, and the bug already existed.
Furthermore (see comment 5), the command

[root@mag6 alain4]# mount /dev/vgstock/lvdump /mnt/dump

normally works. I don't see why it works while, if I replace lvdump with lvhomedocbackup, it no longer works.

Last minute: I've just retried mount /dev/vgstock/lvdump /mnt/dump, and now it has no output either.

I am immediately switching to the non-updated mageia6 and will keep you informed.
Since it's a raid6, I'll isolate /dev/sdi8. More from me very soon.
Comment 10 peter lawford 2018-03-05 15:29:43 CET
(In reply to Thomas Backlund from comment #7)
> looking at your logs, I note you have one WDC Black 1TB disk that either
> has a cabling issue or might be failing:
> [...]
> So I suggest you check that drive with smartctl from smartmontools in case
> the disk is reporting errors...
> [...]


I am back, but on the non-updated mageia6 "magaux" (for "mageia auxiliary", though in fact it is my main mageia).
See this:

[root@magaux alain4]# smartctl -a /dev/sdi
smartctl 6.5 2016-05-07 r4318 [x86_64-linux-4.14.18-server-1.mga6] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Caviar Black
Device Model:     WDC WD1002FAEX-00Z3A0
Serial Number:    WD-WMATR0911245
LU WWN Device Id: 5 0014ee 2b19bb328
Firmware Version: 05.01D05
User Capacity:    1 000 204 886 016 bytes [1,00 TB]
Sector Size:      512 bytes logical/physical
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ATA8-ACS (minor revision not indicated)
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Mon Mar  5 15:18:13 2018 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x82) Offline data collection activity
                                        was completed without error.
                                        Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever 
                                        been run.
Total time to complete Offline 
data collection:                (17280) seconds.
Offline data collection
capabilities:                    (0x7b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.                                                                                 
                                        Selective Self-test supported.                                                                                  
SMART capabilities:            (0x0003) Saves SMART data before entering                                                                                
                                        power-saving mode.                                                                                              
                                        Supports SMART auto save timer.                                                                                 
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine 
recommended polling time:        (   2) minutes.
Extended self-test routine
recommended polling time:        ( 200) minutes.
Conveyance self-test routine
recommended polling time:        (   5) minutes.
SCT capabilities:              (0x3037) SCT Status supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
  3 Spin_Up_Time            0x0027   176   174   021    Pre-fail  Always       -       4166
  4 Start_Stop_Count        0x0032   099   099   000    Old_age   Always       -       1650
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   100   253   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   078   078   000    Old_age   Always       -       16188
 10 Spin_Retry_Count        0x0032   100   100   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   100   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   099   099   000    Old_age   Always       -       1569
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       194
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       1455
194 Temperature_Celsius     0x0022   118   105   000    Old_age   Always       -       29
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       12
200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age   Offline      -       0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed without error       00%      2108         -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
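The attribute table above can be scanned mechanically for the fields most often tied to failing media or bad cabling. A minimal sketch; the list of attributes to flag and the "any nonzero raw value is worth a look" rule are my own assumptions, not official thresholds:

```shell
# Scan a saved `smartctl -a` report and flag the attributes most often
# associated with failing media (reallocated/pending/uncorrectable sectors)
# or cabling problems (UDMA CRC errors). Nonzero raw value => flag it.
scan_smart_report() {
  awk '
    $2 ~ /^(Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable|UDMA_CRC_Error_Count)$/ {
      if ($NF + 0 > 0) printf "%s: raw=%s\n", $2, $NF
    }
  ' "$1"
}

# Example against the values pasted in this comment:
cat > /tmp/smart_sample.txt <<'EOF'
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       12
EOF

scan_smart_report /tmp/smart_sample.txt
```

On this report only UDMA_CRC_Error_Count is nonzero (raw=12); CRC errors are counted on the SATA link, which is why they usually point at cables or connectors rather than the platters.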


I think a hardware issue wouldn't depend on which system is used

and here the various mount commands work normally
go figure!
Comment 11 peter lawford 2018-03-05 15:31:34 CET
(In reply to Thomas Backlund from comment #8)
> Actually come to think of it...
> 
> one better kernel to test is the 4.14.24-1 that I have in Core Updates
> Testing.
> 
> I have backported a couple of upstream fixes to scsi block layer that
> addresses issues with RCU calls in the scsi layer that landed in upstream 4.14.20

I look forward to the release of kernel 4.14.24-1!
Comment 12 Thomas Backlund 2018-03-05 16:20:50 CET
(In reply to peter lawford from comment #9)
> 
> the disk WDC black 1Tb you tell is a drive of raid6 and not raid5
> here below, the list of my disk:
> 

Yeah, that was a typo... I meant raid6 since that was the one you have trouble with...


Ok, looking at the smart data the disk seems ok, so you might want to check the cabling to that drive...


(In reply to peter lawford from comment #11)
> 
> I look forward to the release of kernel 4.14.24-1!


You can test/use it already by doing (as root):

urpmi.update ""

urpmi --media Testing kernel-server-latest kernel-server-devel-latest
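After rebooting into the test kernel, a quick sanity check can confirm what is actually running. A sketch; the `kernel-server*` query pattern is an assumption based on the package names above, and on the test kernel `uname -r` should contain 4.14.24:

```shell
# Confirm which kernel actually booted.
uname -r

# List the installed kernel-server packages, if rpm is available.
if command -v rpm >/dev/null 2>&1; then
  rpm -qa 'kernel-server*' | sort
fi
```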
Comment 13 peter lawford 2018-03-06 12:49:46 CET
(In reply to Thomas Backlund from comment #12)
> (In reply to peter lawford from comment #9)
> > 
> > the disk WDC black 1Tb you tell is a drive of raid6 and not raid5
> > here below, the list of my disk:
> > 
> 
> Yeah, that was a typo... I meant raid6 since that was the one you have
> trouble with...
> 
> 
> Ok, looking at the smart data the disk seems ok, so you might want to check
> the cabling to that drive...
> 
> 
> (In reply to peter lawford from comment #11)
> > 
> > I look forward to the release of kernel 4.14.24-1!
> 
> 
> You can test/use it already by doing (as root):
> 
> urpmi.update ""
> 
> urpmi --media Testing kernel-server-latest kernel-server-devel-latest

I have installed 

kernel-desktop-4.14.24-1.mga6-1-1.mga6.x86_64.rpm
kernel-desktop-devel-4.14.24-1.mga6-1-1.mga6.x86_64.rpm
kernel-desktop-devel-latest-4.14.24-1.mga6.x86_64.rpm
kernel-desktop-latest-4.14.24-1.mga6.x86_64.rpm

and everything seems to work normally, especially 
[alain4@mag6 ~]$ mount mnt/homedocbackup
[alain4@mag6 ~]$ ls mnt/homedocbackup
[alain4@mag6 ~]$ umount mnt/homedocbackup

work without issue, the same for

[root@mag6 alain4]# mount /dev/vgstock/lvdump /mnt/dump
[root@mag6 alain4]# ls /mnt/dump
[root@mag6 alain4]# umount /dev/vgstock/lvdump

[root@mag6 alain4]# smartctl -a /dev/sd[abcdefghij]
returns normally

of course I haven't yet tested ALL the applications on my system, but it's a good start

I'm awaiting the release of the corresponding nvidia drivers and dkms, and of the packages:

virtualbox-kernel-4.14.24-<desktop,server>-(latest)
vboxadditions-kernel-4.14.20-<desktop,server>-(latest)

best regards
Comment 14 peter lawford 2018-03-11 13:59:58 CET
(In reply to Thomas Backlund from comment #12)
> (In reply to peter lawford from comment #9)
> > 
> > the disk WDC black 1Tb you tell is a drive of raid6 and not raid5
> > here below, the list of my disk:
> > 
> 
> Yeah, that was a typo... I meant raid6 since that was the one you have
> trouble with...
> 
> 
> Ok, looking at the smart data the disk seems ok, so you might want to check
> the cabling to that drive...
> 
> 
> (In reply to peter lawford from comment #11)
> > 
> > I look forward to the release of kernel 4.14.24-1!
> 
> 
> You can test/use it already by doing (as root):
> 
> urpmi.update ""
> 
> urpmi --media Testing kernel-server-latest kernel-server-devel-latest

this bug doesn't happen with kernel-desktop-4.14.25 either; I haven't yet tried kernel-server-4.14.25
Comment 15 Thomas Backlund 2018-03-11 14:05:40 CET
(In reply to peter lawford from comment #14)

> this bug doesn't happen with kernel-desktop-4.14.25 either; I haven't yet tried
> kernel-server-4.14.25


Yeah, thanks for confirming the fix still works :)

Since I have the fix in 4.14.24, I also have it in 4.14.25, which is currently going through QA validation in bug 22731; if there is no problem with it, it will get validated and pushed to the mirrors soon-ish.

Depends on: (none) => 22731

Comment 16 Thomas Backlund 2018-03-19 13:24:20 CET
An update for this issue has been pushed to the Mageia Updates repository.

https://advisories.mageia.org/MGASA-2018-0172.html

Status: NEW => RESOLVED
Resolution: (none) => FIXED
