Bug 27251 - Docker does not run, gets error "cgroup mountpoint does not exist"
Summary: Docker does not run, gets error "cgroup mountpoint does not exist"
Status: RESOLVED FIXED
Alias: None
Product: Mageia
Classification: Unclassified
Component: RPM Packages
Version: 8
Hardware: All Linux
Priority: High
Severity: major
Target Milestone: ---
Assignee: QA Team
QA Contact:
URL:
Whiteboard: MGA8-64-OK
Keywords: advisory, validated_update
Depends on:
Blocks:
 
Reported: 2020-09-06 00:02 CEST by Mike Crecelius
Modified: 2021-06-13 23:34 CEST
18 users

See Also:
Source RPM: docker-19.03.11-2.mga8.src.rpm
CVE:
Status comment:


Attachments

Description Mike Crecelius 2020-09-06 00:02:29 CEST
Description of problem: Running Docker gives an error
docker: Error response from daemon: cgroups: cgroup mountpoint does not exist: unknown.
ERRO[0001] error waiting for container: context canceled


Version-Release number of selected component (if applicable):
Testing with Mageia 8 Beta 1 with Docker version 19.03.0-dev, build 42e35e6
installed from the Mageia repositories.

How reproducible:  always

Steps to Reproduce:
1. Install Docker and docker-containerd from the Cauldron repositories.
2. As root, run: docker run hello-world

There is a workaround found on various other sites:
mkdir /sys/fs/cgroup/systemd
mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd
Comment 1 Aurelien Oudelet 2020-09-06 00:14:07 CEST
Hi, thanks for reporting this bug.
This is strange behavior: 

ll /sys/fs/cgroup/
-r--r--r--  1 root root 0 sept.  5 20:46 cgroup.controllers
-rw-r--r--  1 root root 0 sept.  6 00:06 cgroup.max.depth
-rw-r--r--  1 root root 0 sept.  6 00:06 cgroup.max.descendants
-rw-r--r--  1 root root 0 sept.  5 20:46 cgroup.procs
-r--r--r--  1 root root 0 sept.  6 00:06 cgroup.stat
-rw-r--r--  1 root root 0 sept.  5 23:56 cgroup.subtree_control
-rw-r--r--  1 root root 0 sept.  6 00:06 cgroup.threads
-r--r--r--  1 root root 0 sept.  6 00:06 cpuset.cpus.effective
-r--r--r--  1 root root 0 sept.  6 00:06 cpuset.mems.effective
-r--r--r--  1 root root 0 sept.  6 00:06 cpu.stat
drwxr-xr-x  2 root root 0 sept.  5 23:13 init.scope/
-r--r--r--  1 root root 0 sept.  6 00:06 memory.stat
drwxr-xr-x 89 root root 0 sept.  6 00:02 system.slice/
drwxr-xr-x  3 root root 0 sept.  5 23:13 user.slice/

It seems these are all systemd-related entries. Should they be mounted in /sys/fs/cgroup/systemd instead, for Docker compatibility?

Is this a Docker bug or a wrong systemd config?

As there is no maintainer for the systemd package, I added the committers in CC.
(Packagers: Please set the status to 'assigned' if you are working on it)

CC: (none) => pkg-bugs, smelror
Assignee: bugsquad => bruno.cornec
Source RPM: (none) => docker-19.03.11-2.mga8.src.rpm systemd-246.4-1.mga8.src.rpm
Keywords: (none) => Triaged

Aurelien Oudelet 2020-09-07 13:38:13 CEST

Target Milestone: --- => Mageia 8

Comment 2 papoteur 2020-12-24 17:05:49 CET
Hello,
I confirm this problem.
It seems that systemd in Mageia 7 creates /sys/fs/cgroup/systemd, but Cauldron no longer does.
In Mageia 7:

ll /sys/fs/cgroup/systemd
total 0
-rw-r--r--  1 root root 0 déc.  24 16:56 cgroup.clone_children
-rw-r--r--  1 root root 0 déc.  24 16:56 cgroup.procs
-r--r--r--  1 root root 0 déc.  24 16:56 cgroup.sane_behavior
drwxr-xr-x  2 root root 0 déc.  24 16:56 init.scope/
-rw-r--r--  1 root root 0 déc.  24 16:56 notify_on_release
-rw-r--r--  1 root root 0 déc.  24 16:56 release_agent
drwxr-xr-x 57 root root 0 déc.  24 16:56 system.slice/
-rw-r--r--  1 root root 0 déc.  24 16:56 tasks
drwxr-xr-x  3 root root 0 déc.  24 16:56 user.slice/

Adding joequant in CC

CC: (none) => joequant, yves.brungard_mageia

Comment 3 Joseph Wang 2020-12-25 06:21:12 CET
I've been using the workaround.  The issue is that the kernel and cgroups moved things around.  It's really something that upstream should address, but I haven't kept track of what's going on there.

Not sure what the right solution is to make it easy on users.

CC: (none) => joequant

Comment 4 Samuel Verschelde 2021-01-06 12:56:24 CET
I can confirm the issue.

Likely this bug : https://github.com/docker/for-linux/issues/219

See also https://www.linuxuprising.com/2019/11/how-to-install-and-use-docker-on-fedora.html regarding docker and compatibility with cgroups2 (not sure what version of cgroups we have).
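For reference, a generic way to check which cgroup version a system uses (not taken from this report) is to look at the filesystem type mounted on /sys/fs/cgroup:

$ stat -fc %T /sys/fs/cgroup

This prints "cgroup2fs" on a system running the unified cgroup v2 hierarchy and "tmpfs" on a legacy cgroup v1 layout; "mount | grep cgroup" gives the same information.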

Above references suggest upgrading docker could solve the issue.

I'm raising severity and priority here because a non-functional docker out of the box should be avoided.

Severity: minor => major
Priority: Normal => High

david Cossé 2021-02-28 21:24:25 CET

CC: (none) => saveurlinux

Comment 5 david Cossé 2021-03-16 11:23:40 CET
I agree with you ;)
The workaround works temporarily but needs to be redone after each reboot.
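For anyone who needs the mkdir/mount workaround to survive reboots while waiting for the updated package, here is a minimal, untested sketch of a oneshot systemd unit (the unit name and paths are purely illustrative; nothing like this is shipped by a Mageia package):

```
# /etc/systemd/system/cgroup-systemd-workaround.service
[Unit]
Description=Mount legacy name=systemd cgroup hierarchy (Docker 19.03 workaround)
Before=docker.service
ConditionPathExists=!/sys/fs/cgroup/systemd

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/mkdir -p /sys/fs/cgroup/systemd
ExecStart=/usr/bin/mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd

[Install]
WantedBy=multi-user.target
```

Enable it with "systemctl daemon-reload && systemctl enable --now cgroup-systemd-workaround.service". The real fix remains the docker 20.10 update discussed below.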
Comment 6 eric gerbier 2021-03-22 14:35:56 CET
I have the same problem with LXC containers, and passing the "systemd.unified_cgroup_hierarchy=0" option to the kernel also solves it.
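For those going the kernel-option route, a sketch of making it persistent on a GRUB2 setup (assuming the usual /etc/default/grub plus grub2-mkconfig layout; adjust to your bootloader):

# add systemd.unified_cgroup_hierarchy=0 to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
$ sudo reboot

Note that this switches the whole system back to cgroup v1, so it is a trade-off rather than a fix.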

CC: (none) => eric.gerbier

Morgan Leijström 2021-03-24 13:16:58 CET

CC: (none) => fri

Comment 7 Matthieu Duchemin 2021-03-26 09:18:27 CET
To have support for cgroup2 in docker we need:
- Docker 20
- containerd 1.4
- runc 1.0-rc91.

$ runc -v                                                                                                                                                                                                                                                      
runc version 1.0.0-rc92
commit: ff819c7e9184c13b7c2607fe6c30ae19403a7aff
spec: 1.0.2-dev

$ rpm -qa |grep containerd                                                                                                                                                                                                                                     
docker-containerd-1.4.3-2.mga8

$ docker info | grep "Server Version"                                                                                                                                                                                                                          
 Server Version: 19.03.15

=> docker needs to be upgraded to version 20 or above.

ref:
- https://github.com/moby/moby/issues/40360#issuecomment-699867043
- the PR for cgroup2 support https://github.com/moby/moby/pull/40174

CC: (none) => alkahan

Comment 8 Bruno Cornec 2021-03-26 09:57:36 CET
I'm working on packaging docker 20.10 at the moment but am encountering issues (cf. the dev ML). I should be able to push it during this weekend, as I made progress yesterday.

Status: NEW => ASSIGNED
CC: (none) => bruno

Comment 9 Bruno Cornec 2021-03-27 03:57:25 CET
docker 20.10.5 is on its way to the buildsystem for mga9 and mga8 (update_testing).

Works here on mga8
Comment 10 david Cossé 2021-03-29 21:10:20 CEST
I've tested this update, it works!
Thanks for your work
Comment 11 david Cossé 2021-03-30 20:57:33 CEST
However, when shutting down Mageia, it takes a very long time to stop the docker service.
Comment 12 Bronto Brontkevich 2021-04-01 11:08:57 CEST
I installed docker 20 from the testing repo.
It seems to have some problems. I created a pgadmin4 container:
docker run --name pgadmin4 -p 5050:80 -e PGADMIN_DEFAULT_EMAIL={my_maile} -e PGADMIN_DEFAULT_PASSWORD={my_password} -d dpage/pgadmin4

Then I open "localhost:5050" in the browser, but the browser does not open it. I suppose something is wrong at Docker's network level.

CC: (none) => bronta

Comment 13 Bruno Cornec 2021-04-01 11:30:40 CEST
Not necessarily:

docker logs 685982d5ca41
NOTE: Configuring authentication for SERVER mode.

[2021-04-01 09:22:30 +0000] [1] [INFO] Starting gunicorn 20.0.4
[2021-04-01 09:22:30 +0000] [1] [ERROR] Retrying in 1 second.
[2021-04-01 09:22:31 +0000] [1] [ERROR] Retrying in 1 second.
[2021-04-01 09:22:32 +0000] [1] [ERROR] Retrying in 1 second.
[2021-04-01 09:22:33 +0000] [1] [ERROR] Retrying in 1 second.
[2021-04-01 09:22:34 +0000] [1] [ERROR] Retrying in 1 second.
[2021-04-01 09:22:35 +0000] [1] [ERROR] Can't connect to ('::', 80)


docker exec -ti 685982d5ca41 id
uid=5050(pgadmin) gid=5050(pgadmin)


It seems the process inside the container is not run as root, which prevents it from binding to a privileged port. At least that's what I get from a quick test here, and for me that isn't a bug per se. But I don't really know how this image works, so I'm guessing here.
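If the image indeed runs its service as a non-root user, one illustrative workaround (assuming the PGADMIN_LISTEN_PORT variable documented for the upstream dpage/pgadmin4 image; check the image documentation) is to have it listen on an unprivileged port and map that instead:

docker run --name pgadmin4 -p 5050:8080 -e PGADMIN_LISTEN_PORT=8080 \
  -e PGADMIN_DEFAULT_EMAIL={my_email} -e PGADMIN_DEFAULT_PASSWORD={my_password} \
  -d dpage/pgadmin4

This is only a guess at a workaround; it does not explain why the same image bound to port 80 fine under docker 18.09.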
Comment 14 Bronto Brontkevich 2021-04-01 14:16:29 CEST
I used this image (dpage/pgadmin4) several times in Mageia 7 and in Mageia 8 in podman. It worked fine!
Comment 15 Bronto Brontkevich 2021-04-01 14:24:44 CEST
I have a Mageia 7 installation which I use as my work environment (I'm a Java/Scala developer) and which I still cannot migrate to Mageia 8. I installed this image there a few seconds ago with exactly the same command as I gave. It opens in the browser!
Comment 16 Bronto Brontkevich 2021-04-09 10:53:25 CEST
I want to correct my previous statement.
The version of docker in my Mageia 7 installation is "18.09.0-dev". So the problem with the pgadmin container is apparently present in docker 20.10 and does not exist in docker 18.09.
Any news or predictions?
Comment 17 david Cossé 2021-04-22 22:20:41 CEST
Docker failed to shut down gracefully.
avril 22 22:07:08 linux.local dockerd[7476]: time="2021-04-22T22:07:08.221225039+02:00" level=info msg="Processing signal 'terminated'"
avril 22 22:07:08 linux.local dockerd[7476]: time="2021-04-22T22:07:08.221452931+02:00" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = transport is closing" module=libcontainerd namespace=plugins.moby
avril 22 22:07:08 linux.local dockerd[7476]: time="2021-04-22T22:07:08.221493032+02:00" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=plugins.moby
avril 22 22:07:08 linux.local dockerd[7476]: time="2021-04-22T22:07:08.221537831+02:00" level=info msg="blockingPicker: the picked transport is not ready, loop back to repick" module=grpc
avril 22 22:07:08 linux.local dockerd[7476]: time="2021-04-22T22:07:08.221712519+02:00" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = transport is closing" module=libcontainerd namespace=moby
avril 22 22:07:08 linux.local dockerd[7476]: time="2021-04-22T22:07:08.221744762+02:00" level=info msg="Waiting for containerd to be ready to restart event processing" module=libcontainerd namespace=moby
avril 22 22:07:08 linux.local systemd[1]: Stopping Docker Application Container Engine...
avril 22 22:07:09 linux.local dockerd[7476]: time="2021-04-22T22:07:09.221868680+02:00" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix:///var/run/docker/containerd/containerd.sock: timeout\". Reconnecting..." module=grpc
avril 22 22:07:09 linux.local dockerd[7476]: time="2021-04-22T22:07:09.221891376+02:00" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix:///var/run/docker/containerd/containerd.sock: timeout\". Reconnecting..." module=grpc
avril 22 22:07:09 linux.local dockerd[7476]: time="2021-04-22T22:07:09.221914375+02:00" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: error while dialing: dial unix:///var/run/docker/containerd/containerd.sock: timeout\". Reconnecting..." module=grpc
avril 22 22:07:10 linux.local dockerd[7476]: time="2021-04-22T22:07:10.221873914+02:00" level=info msg="Container failed to stop after sending signal 2 to the process, force killing"
avril 22 22:07:10 linux.local dockerd[7476]: time="2021-04-22T22:07:10.222079541+02:00" level=error msg="failed to shut down container" container=f4f0ea22f72f8bf8c407dc418b6045daa0fef9ba12a938ef2229db36a751a923 error="Failed to stop container f4f0ea22f72f8bf8c407dc418b6045daa0fef9ba12a938ef2229db36a751a923 with error: Cannot kill container f4f0ea22f72f8bf8c407dc418b6045daa0fef9ba12a938ef2229db36a751a923: connection error: desc = \"transport: Error while dialing dial unix:///var/run/docker/containerd/containerd.sock: timeout\": unavailable"
avril 22 22:07:10 linux.local dockerd[7476]: time="2021-04-22T22:07:10.222990218+02:00" level=warning msg="Error while testing if containerd API is ready" error="rpc error: code = Canceled desc = grpc: the client connection is closing"
avril 22 22:07:10 linux.local dockerd[7476]: time="2021-04-22T22:07:10.223307953+02:00" level=info msg="Daemon shutdown complete"
avril 22 22:07:10 linux.local dockerd[7476]: time="2021-04-22T22:07:10.223354770+02:00" level=warning msg="Error while testing if containerd API is ready" error="rpc error: code = Canceled desc = latest balancer error: connection error: desc = \"transport: Error while dialing dial unix:///var/run/docker/containerd/containerd.sock: timeout\""
avril 22 22:07:10 linux.local dockerd[7476]: time="2021-04-22T22:07:10.223376407+02:00" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
avril 22 22:07:10 linux.local dockerd[7476]: time="2021-04-22T22:07:10.223388178+02:00" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
avril 22 22:07:10 linux.local dockerd[7476]: time="2021-04-22T22:07:10.223368363+02:00" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
avril 22 22:08:40 linux.local systemd[1]: docker.service: State 'final-sigterm' timed out. Killing.
avril 22 22:08:40 linux.local systemd[1]: docker.service: Killing process 7691 (containerd-shim) with signal SIGKILL.
avril 22 22:08:40 linux.local systemd[1]: docker.service: Killing process 7697 (containerd-shim) with signal SIGKILL.
avril 22 22:08:40 linux.local systemd[1]: docker.service: Killing process 7699 (n/a) with signal SIGKILL.
avril 22 22:08:40 linux.local systemd[1]: docker.service: Failed with result 'timeout'.
avril 22 22:08:40 linux.local systemd[1]: Stopped Docker Application Container Engine.
avril 22 22:08:40 linux.local systemd[1]: docker.service: Consumed 1.122s CPU time.
Comment 19 Denis Bitouzé 2021-05-29 09:34:06 CEST
(In reply to Bruno Cornec from comment #9)
> docker 20.10.5 on its way to the buildsystem for mga9 and mga8
> (update_testing).
> 
> Works here on mga8

Where can I find it for mga8? I enabled the "Tainted Updates Testing" repo, but only docker 19.03.15-1.mga8 is provided.

CC: (none) => dbitouze

Comment 20 Morgan Leijström 2021-05-29 10:32:55 CEST
Version 20.10.5-1.mga8 is in Core Updates Testing
Rolf Pedersen 2021-05-29 14:53:18 CEST

CC: (none) => rolfpedersen

Comment 21 Denis Bitouzé 2021-05-29 17:09:00 CEST
(In reply to Morgan Leijström from comment #20)
> Version 20.10.5-1.mga8 is in Core Updates Testing

OK, thanks.
Comment 22 Denis Bitouzé 2021-06-08 23:14:38 CEST
(In reply to Morgan Leijström from comment #20)
> Version 20.10.5-1.mga8 is in Core Updates Testing

I tried on another computer. The file urpmi.cfg contains:

```
Core\ Updates\ Testing\ (distrib5)  {
  key-ids: 80420f66
  mirrorlist: $MIRRORLIST
  update
  with-dir: media/core/updates_testing
}
```

so I actually enabled Core Updates Testing but only docker 19.03.15-1.mga8 is available:

```
# urpmi docker
The package docker-19.03.15-1.mga8.x86_64 is already installed
```
Comment 23 Dave Hodgins 2021-06-09 00:28:41 CEST
It can be downloaded manually from a more up-to-date mirror such as
http://mirror.math.princeton.edu/pub/mageia/distrib/8/x86_64/media/core/updates_testing/docker-20.10.5-1.mga8.x86_64.rpm
and then installed with "urpmi ./docker-20.10.5-1.mga8.x86_64.rpm"

"urpmq --list-url" will show which mirror is being selected based on the
mirrolist.

https://mirrors.mageia.org/status shows the status of the various mirrors.

If you'd like to switch to a specific mirror, see
https://wiki.mageia.org/en/Installing_and_removing_software#Adding_a_specific_Media_Mirror

CC: (none) => davidwhodgins

Comment 24 Denis Bitouzé 2021-06-09 06:48:40 CEST
(In reply to Dave Hodgins from comment #23)

Does the trick, thanks!
Comment 25 Thomas Backlund 2021-06-09 19:05:07 CEST
@Bruno, do you intend to assign this one to QA or not?
Comment 26 Rolf Pedersen 2021-06-09 21:00:42 CEST
[rolf@z170i ~]$ cat /etc/release
Mageia release 8 (Official) for x86_64
[rolf@z170i ~]$ rpm -q docker docker-containerd opencontainers-runc
docker-20.10.5-1.mga8
docker-containerd-1.4.3-2.mga8
opencontainers-runc-1.0.0-0.rc92.7.dev.gitff819c7.mga8
[rolf@z170i ~]$

In MGA8, in order to continue to run motioneye by Calin Crisan https://github.com/ccrisan/motioneye 
(not yet fully migrated to Python3), I've had to use docker, which is unfamiliar to me and not my first choice, but it is the first thing I could get to work with my limited capabilities:

docker run --network=host ccrisan/motioneye:master-amd64

Apart from having to import camera settings and re-link the proper localtime zoneinfo timezone file after every reboot/restart, I encountered the initial showstopper error reported in this BR and discovered the workaround, also needed every restart:

sudo mkdir /sys/fs/cgroup/systemd
sudo mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd

After installing docker-20.10.5-1.mga8 from updates/testing, 

[rolf@z170i ~]$ mount | grep cgroup
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
[rolf@z170i ~]$

and my motioneye container loads without the previous interventions (mkdir; mount). Tested once; not crazy about re-doing the other workarounds unnecessarily.
Thanks.
Comment 27 Bruno Cornec 2021-06-10 02:14:14 CEST
(In reply to Thomas Backlund from comment #25)
> @Bruno, do you intend to assign this one to QA or not?

Yes, I was waiting for some confirmation or refutation of the fix. Done now.

Assignee: bruno.cornec => qa-bugs

Comment 28 Len Lawrence 2021-06-10 09:40:18 CEST
Starting to test this for mga8, but I immediately ran into the problem of a missing cgroup group. Having totally forgotten about /sbin/groupadd, I used drakconf to add it, which resulted in a group id > 1000. With the correct /sys/fs directories in place and cgroup mounted, the tests succeeded, but should the user be responsible for adding cgroup? It has been noted that the correct setup was automatic in mga7.
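Side note on the group id: when a system group is wanted, groupadd's -r option allocates a GID from the system range rather than the >= 1000 user range, e.g.:

# groupadd -r cgroup
# getent group cgroup

Whether a "cgroup" group is needed at all is a separate question; see comment 32 below.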

# mount | grep cgroup
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,relatime,name=systemd)

CC: (none) => tarazed25

Comment 29 Len Lawrence 2021-06-10 10:52:30 CEST
mga8, x64

Before the update, I installed docker.
The daemon starts OK but the hello-world test fails.

$ docker version | grep Version
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.40/version": dial unix /var/run/docker.sock: connect: permission denied
 Version:           19.03.0-dev

No cgroup group, so I added it and assigned the user to the docker and cgroup groups.
Created cgroup directories /sys/fs/cgroup and /sys/fs/cgroup/systemd.
From comment 26:
$ sudo mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd

Updated docker from testing and docker-containerd, restarted daemon.
$ docker version | grep Version
 Version:           unknown-version
  Version:          library-import
  Version:          
  Version:          1.0.0-rc92
  Version:          0.19.0
$ docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
$ docker run -it ubuntu bash
root@1281140e2948:/# exit
$ docker ps -a
CONTAINER ID   IMAGE         COMMAND    CREATED             STATUS                         PORTS     NAMES
1281140e2948   ubuntu        "bash"     4 minutes ago       Exited (100) 18 seconds ago              flamboyant_raman
b66ecf9d7119   hello-world   "/hello"   5 minutes ago       Exited (0) 5 minutes ago                 blissful_engelbart
da43a300c972   ubuntu        "bash"     About an hour ago   Exited (0) 59 minutes ago                elastic_leavitt
6076c3d316a8   hello-world   "/hello"   About an hour ago   Exited (0) About an hour ago             awesome_morse
1793765e65f9   hello-world   "/hello"   2 hours ago         Created                                  focused_bohr
$ docker rm 1793765e65f9
1793765e65f9
$ docker inspect "blissful_engelbart" | grep Address
            "LinkLocalIPv6Address": "",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "GlobalIPv6Address": "",
            "IPAddress": "",
            "MacAddress": "",
                    "IPAddress": "",
                    "GlobalIPv6Address": "",
                    "MacAddress": "",
$ docker pull fedora
Using default tag: latest
latest: Pulling from library/fedora
b1495d80d526: Pull complete 
Digest: sha256:f534c437436eb44b7ac73646e642732fc055a75d84f900f07c3bbaa392007810
Status: Downloaded newer image for fedora:latest
docker.io/library/fedora:latest
$ docker run -ti fedora:latest /bin/bash
[root@7f0bfa98a83c /]# dnf install lua
....
Installing:
 lua           x86_64           5.4.3-1.fc34            updates           189 k
....
Installed:
  lua-5.4.3-1.fc34.x86_64                                                       
Complete!
[root@7f0bfa98a83c /]# lua
Lua 5.4.3  Copyright (C) 1994-2021 Lua.org, PUC-Rio
print( "This is lua calling from fedora" )
This is lua calling from fedora
> ^D
[root@7f0bfa98a83c /]#exit

Cleaned up by repeating
$ docker rm <ID>
$ docker ps -a
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

This is enough considering that these tests have been run with the same packages in bug 29003.

Whiteboard: (none) => MGA8-64-OK

Comment 30 Len Lawrence 2021-06-10 10:55:29 CEST
Scrub comment 29. It subverted the whole point of the update by manual intervention.
Comment 31 Len Lawrence 2021-06-10 10:59:07 CEST
Was unable to remove the cgroup branch from /sys/fs/.
Retrying this update on another partition.

Whiteboard: MGA8-64-OK => (none)

Comment 32 Rolf Pedersen 2021-06-10 12:14:02 CEST
(In reply to Len Lawrence from comment #30)
> Scrub comment 29.  Subverted the whole point of the update by manaual
> intervention.

Your comments unearthed buried memories, victims of my "frantic hammering" IT method, of an additional intervention of my own that could be relevant to my success story:

[rolf@z170i ~]$ history | grep usermod
  236  sudo usermod -aG docker $(whoami)

and, now:

[rolf@z170i ~]$ grep docker /etc/group
docker:x:482:rolf

however:

[rolf@z170i ~]$ grep cgroup /etc/group
[rolf@z170i ~]$
=> no cgroup group
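Worth noting for anyone hitting the "permission denied ... docker.sock" error from comment 29: group membership added with "usermod -aG docker" only takes effect in new login sessions. A quick check, and a way to pick up the group in the current shell without logging out (generic shadow-utils behaviour, not specific to this update):

$ id -nG | grep -w docker
$ newgrp docker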

(In reply to Len Lawrence from comment #28)
...
> 
> # mount | grep cgroup
> cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime)
> cgroup on /sys/fs/cgroup/systemd type cgroup (rw,relatime,name=systemd)

Although my initial workaround included `mkdir /sys/fs/cgroup/systemd' and mounting cgroup at every boot, that is not required for my application, using the new docker:

[rolf@z170i ~]$ ls /sys/fs/cgroup/systemd
ls: cannot access '/sys/fs/cgroup/systemd': No such file or directory
[rolf@z170i ~]$ mount | grep cgroup
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
[rolf@z170i ~]$

Thanks.
Thomas Backlund 2021-06-10 20:07:44 CEST

Version: Cauldron => 8

Comment 33 Len Lawrence 2021-06-12 16:19:27 CEST
Thanks Rolf.  Looks good here too.

Whiteboard: (none) => MGA8-64-OK

Comment 34 Thomas Andrews 2021-06-12 18:31:25 CEST
Validating. Lots of comments, but I don't see one to point to for an advisory.

CC: (none) => andrewsfarm, sysadmin-bugs
Keywords: Triaged => validated_update

Comment 35 Aurelien Oudelet 2021-06-12 21:51:27 CEST
(In reply to Bruno Cornec from comment #27)
> (In reply to Thomas Backlund from comment #25)
> > @Bruno, do you intend to assign this one to QA or not?
> 
> Yes, I was waiting for some confirmation or refutation of the fix. Done now.

Can you write an advisory for this and give us the updated SRPM, please?

CC: (none) => ouaurelien

Comment 36 Aurelien Oudelet 2021-06-13 22:05:31 CEST
Advisory:
========================

Updated docker packages fix an issue with cgroup mountpoint

An issue with cgroups prevents the version of docker shipped with Mageia 8 from
running properly. An upgrade to docker-20.10.5 solves this issue.

References:
 - https://bugs.mageia.org/show_bug.cgi?id=27251
========================

Updated packages in core/updates_testing:
========================
docker-20.10.5-1.mga8
docker-devel-20.10.5-1.mga8
docker-fish-completion-20.10.5-1.mga8
docker-logrotate-20.10.5-1.mga8
docker-nano-20.10.5-1.mga8
docker-zsh-completion-20.10.5-1.mga8

from SRPM:
docker-20.10.5-1.mga8.src.rpm

Source RPM: docker-19.03.11-2.mga8.src.rpm systemd-246.4-1.mga8.src.rpm => docker-19.03.11-2.mga8.src.rpm
Target Milestone: Mageia 8 => ---
Keywords: (none) => advisory

Comment 37 Mageia Robot 2021-06-13 23:34:16 CEST
An update for this issue has been pushed to the Mageia Updates repository.

https://advisories.mageia.org/MGAA-2021-0130.html

Status: ASSIGNED => RESOLVED
Resolution: (none) => FIXED

