Bug 26298 - openmpi rebuild for gcc 8.4.0
Summary: openmpi rebuild for gcc 8.4.0
Status: RESOLVED FIXED
Alias: None
Product: Mageia
Classification: Unclassified
Component: RPM Packages
Version: 7
Hardware: All Linux
Priority: Normal
Severity: normal
Target Milestone: ---
Assignee: QA Team
QA Contact:
URL:
Whiteboard: MGA7-64-OK
Keywords: advisory, validated_update
Depends on: 26294
Blocks:
Reported: 2020-03-05 22:10 CET by Chris Denice
Modified: 2020-03-08 23:38 CET
CC List: 4 users

See Also:
Source RPM:
CVE:
Status comment:


Attachments

Description Chris Denice 2020-03-05 22:10:45 CET
Suggested advisory:
========================

Updated the openmpi packages to version 4.0.1-1.2 to match the gcc upgrade to version 8.4.0.


References:
========================
https://bugs.mageia.org/show_bug.cgi?id=26294

Updated packages in core/updates_testing:
========================
lib(64)openmpi40-4.0.1-1.2.mga7
lib(64)openmpi-devel-4.0.1-1.2.mga7
lib(64)openmpi-static-devel-4.0.1-1.2.mga7
openmpi-4.0.1-1.2.mga7


Source RPMs: 
openmpi-4.0.1-1.2.mga7.src.rpm
Thomas Backlund 2020-03-06 21:17:24 CET

CC: (none) => tmb
Keywords: (none) => advisory

Comment 1 Len Lawrence 2020-03-08 17:57:58 CET
mga7, x86_64

Updated the packages and followed Herman's lead in using bug 2787 as a reference. Claire's link no longer works. Copied the example files from /usr/share/doc/lib64openmpi-devel/examples/.

$ mpirun --version
mpirun (Open MPI) 4.0.1
...
$ mpif90 --version
GNU Fortran (Mageia 8.4.0-1.mga7) 8.4.0
Copyright (C) 2018 Free Software Foundation, Inc.
...

$ make all
mpicc -g  hello_c.c  -o hello_c
mpicc -g  ring_c.c  -o ring_c
mpicc -g  connectivity_c.c  -o connectivity_c
mpicc -g  spc_example.c  -o spc_example
make[1]: Entering directory '/data/qa/openmpi/examples'
make[2]: Entering directory '/data/qa/openmpi/examples'
mpic++ -g  hello_cxx.cc  -o hello_cxx
mpic++ -g  ring_cxx.cc  -o ring_cxx
make[2]: Leaving directory '/data/qa/openmpi/examples'
make[2]: Entering directory '/data/qa/openmpi/examples'
mpifort -g  hello_mpifh.f  -o hello_mpifh
mpifort -g  ring_mpifh.f  -o ring_mpifh
make[2]: Leaving directory '/data/qa/openmpi/examples'
make[2]: Entering directory '/data/qa/openmpi/examples'
mpifort -g  hello_usempi.f90  -o hello_usempi
mpifort -g  ring_usempi.f90  -o ring_usempi
make[2]: Leaving directory '/data/qa/openmpi/examples'
make[2]: Entering directory '/data/qa/openmpi/examples'
mpifort -g  hello_usempif08.f90  -o hello_usempif08
mpifort -g  ring_usempif08.f90  -o ring_usempif08
make[2]: Leaving directory '/data/qa/openmpi/examples'
make[1]: Leaving directory '/data/qa/openmpi/examples'
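
For orientation, the hello_c example compiled above amounts to roughly the following minimal MPI program (a sketch assuming the standard MPI C API; the packaged source may differ in wording):

/* hello_c-style example: each rank reports its rank, the communicator
 * size, and the Open MPI library version string. */
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank, size, len;
    char version[MPI_MAX_LIBRARY_VERSION_STRING];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_library_version(version, &len);

    printf("Hello, world, I am %d of %d (%s)\n", rank, size, version);

    MPI_Finalize();
    return 0;
}

It is built with "mpicc -g hello_c.c -o hello_c" exactly as in the make output above, and its output shows up below alongside the Fortran and C++ variants.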

$ ./ring_c
Process 0 sending 10 to 0, tag 201 (1 processes in ring)
Process 0 sent to 0
Process 0 decremented value: 9
Process 0 decremented value: 8
Process 0 decremented value: 7
Process 0 decremented value: 6
Process 0 decremented value: 5
Process 0 decremented value: 4
Process 0 decremented value: 3
Process 0 decremented value: 2
Process 0 decremented value: 1
Process 0 decremented value: 0
Process 0 exiting
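
The ring_c output above comes from a token-passing loop along these lines (a sketch assuming standard MPI point-to-point calls; the shipped example may differ in detail). Rank 0 injects the value 10, every rank forwards it to its neighbour, and rank 0 decrements it on each lap until it reaches zero; with a single process the send goes straight back to rank 0, which is why only "Process 0" lines appear:

/* ring_c-style token pass (sketch) */
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank, size, next, prev, message, tag = 201;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    next = (rank + 1) % size;
    prev = (rank + size - 1) % size;

    if (rank == 0) {
        message = 10;
        printf("Process 0 sending %d to %d, tag %d (%d processes in ring)\n",
               message, next, tag, size);
        MPI_Send(&message, 1, MPI_INT, next, tag, MPI_COMM_WORLD);
        printf("Process 0 sent to %d\n", next);
    }

    while (1) {
        MPI_Recv(&message, 1, MPI_INT, prev, tag, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        if (rank == 0) {
            --message;
            printf("Process 0 decremented value: %d\n", message);
        }
        MPI_Send(&message, 1, MPI_INT, next, tag, MPI_COMM_WORLD);
        if (message == 0)
            break;
    }

    /* rank 0 drains the final zero still circling the ring */
    if (rank == 0)
        MPI_Recv(&message, 1, MPI_INT, prev, tag, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

    printf("Process %d exiting\n", rank);
    MPI_Finalize();
    return 0;
}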
$ ./hello_mpifh
Hello, world, I am  0 of  1: Open MPI v4.0.1, package: Open MPI iurt@rabbit.mageia.org Distribution, ident: 4.0.1, repo rev: v4.0.1, Mar 26, 2019               
$ ./hello_cxx
Hello, world!  I am 0 of 1(Open MPI v4.0.1, package: Open MPI iurt@rabbit.mageia.org Distribution, ident: 4.0.1, repo rev: v4.0.1, Mar 26, 2019, 117)
$ ./connectivity_c
Connectivity test on 1 processes PASSED.
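
The connectivity test exercises point-to-point traffic between every pair of ranks; with one process there are no pairs, so it trivially passes. A rough sketch of the logic (assuming standard MPI calls; the packaged source may differ):

/* connectivity_c-style pairwise check (sketch) */
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank, size, i, j, token = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* every pair (i, j) exchanges one message in both directions */
    for (i = 0; i < size; i++) {
        for (j = i + 1; j < size; j++) {
            if (rank == i) {
                MPI_Send(&token, 1, MPI_INT, j, 0, MPI_COMM_WORLD);
                MPI_Recv(&token, 1, MPI_INT, j, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == j) {
                MPI_Recv(&token, 1, MPI_INT, i, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(&token, 1, MPI_INT, i, 0, MPI_COMM_WORLD);
            }
        }
    }

    MPI_Barrier(MPI_COMM_WORLD);
    if (rank == 0)
        printf("Connectivity test on %d processes PASSED.\n", size);

    MPI_Finalize();
    return 0;
}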

$ ./spc_example
Usage: mpirun -np 2 --mca mpi_spc_attach all --mca mpi_spc_dump_enabled true ./spc_example [num_messages] [message_size]
$ mpirun -np 2 --mca mpi_spc_attach all --mca mpi_spc_dump_enabled true ./spc_example 5 32
ERROR: Couldn't find the appropriate SPC counter in the MPI_T pvars.
ERROR: Couldn't find the appropriate SPC counter in the MPI_T pvars.
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD
with errorcode -1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
[difda:16019] 1 more process has sent help message help-mpi-api.txt / mpi-abort
[difda:16019] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
$ mpirun -np 2 --mca mpi_spc_attach all --mca mpi_spc_dump_enabled true ./spc_example
Usage: mpirun -np 2 --mca mpi_spc_attach all --mca mpi_spc_dump_enabled true ./spc_example [num_messages] [message_size]
--------------------------------------------------------------------------
Primary job  terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
Usage: mpirun -np 2 --mca mpi_spc_attach all --mca mpi_spc_dump_enabled true ./spc_example [num_messages] [message_size]
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:

  Process name: [[37684,1],0]
  Exit code:    255
--------------------------------------------------------------------------
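
For what it is worth, the SPC errors above usually just mean that no software performance counters were exposed as MPI_T performance variables in this run (the feature is optional in Open MPI 4.0.x and its availability depends on how the library was configured), so spc_example aborts when it cannot find a counter to attach to. As a hypothetical check, not part of the packaged examples, one can enumerate the pvars the installed library actually exposes with a small program using the standard MPI-3 tool interface, such as this sketch:

/* list MPI_T performance variables whose name contains "spc" (sketch) */
#include <stdio.h>
#include <string.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int provided, num, i;

    MPI_Init(&argc, &argv);
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);

    MPI_T_pvar_get_num(&num);
    for (i = 0; i < num; i++) {
        char name[256], desc[256];
        int name_len = sizeof(name), desc_len = sizeof(desc);
        int verbosity, var_class, bind, readonly, continuous, atomic;
        MPI_Datatype datatype;
        MPI_T_enum enumtype;

        MPI_T_pvar_get_info(i, name, &name_len, &verbosity, &var_class,
                            &datatype, &enumtype, desc, &desc_len,
                            &bind, &readonly, &continuous, &atomic);
        if (strstr(name, "spc") != NULL)
            printf("pvar %d: %s\n", i, name);
    }

    MPI_T_finalize();
    MPI_Finalize();
    return 0;
}

If nothing is printed, the counters simply are not present in this build/run, which matches the failure above rather than pointing at the rebuilt libraries themselves.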

Whatever!
Anyway, everything seems to work OK.

CC: (none) => tarazed25
Whiteboard: (none) => MGA7-64-OK

Comment 2 Thomas Andrews 2020-03-08 21:08:01 CET
Validating.

CC: (none) => andrewsfarm, sysadmin-bugs
Keywords: (none) => validated_update

Comment 3 Mageia Robot 2020-03-08 23:38:43 CET
An update for this issue has been pushed to the Mageia Updates repository.

https://advisories.mageia.org/MGAA-2020-0071.html

Resolution: (none) => FIXED
Status: NEW => RESOLVED

