Hello AMBER users:
I have started using a workstation with 3 GPUs, and I launched the run with the following command:
nohup mpirun -np 3 pmemd.cuda.MPI -O -i nvt.in -o nvt.out -r...
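In case it is relevant, I am not sure whether each of the three MPI ranks is actually using its own GPU. My understanding is that the visible devices can be restricted before launching, roughly like the sketch below; the device IDs 0,1,2 are only my assumption about how the cards are numbered on this machine (nvidia-smi would show the real numbering), and the trailing "..." stands for the rest of the flags I used above:

  export CUDA_VISIBLE_DEVICES=0,1,2
  nohup mpirun -np 3 pmemd.cuda.MPI -O -i nvt.in -o nvt.out ... &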
It did work, but it ran even slower than on a single GPU, and the output contains the following messages:
[[9951,1],2]: A high-performance Open MPI point-to-point messaging module
was unable to find any relevant network interfaces:
Module: OpenFabrics (openib)
Host: hj191
Another transport will be used instead, although this may result in
lower performance.
NOTE: You can disable this warning by setting the MCA parameter
btl_base_warn_component_unused to 0.
--------------------------------------------------------------------------
[hj191:04235] 2 more processes have sent help message help-mpi-btl-base.txt
/ btl:no-nics
[hj191:04235] Set MCA parameter "orte_base_help_aggregate" to 0 to see all
help / error messages
Note: The following floating-point exceptions are signalling:
IEEE_UNDERFLOW_FLAG IEEE_DENORMAL
Note: The following floating-point exceptions are signalling:
IEEE_UNDERFLOW_FLAG IEEE_DENORMAL
Note: The following floating-point exceptions are signalling:
IEEE_UNDERFLOW_FLAG IEEE_DENORMAL
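From the message itself, I gather the openib warning can be silenced, for example with something like:

  mpirun --mca btl_base_warn_component_unused 0 -np 3 pmemd.cuda.MPI ...

but I assume that only hides the message and does not explain the slow performance.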
So what is causing the slowdown, and what should I do to get proper performance?
Any suggestions would be appreciated!
Thank you!
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Fri Jun 22 2018 - 05:00:02 PDT