Re: [AMBER] problem with using amber

From: David A Case <dacase1.gmail.com>
Date: Thu, 31 Mar 2022 08:06:13 -0400

On Thu, Mar 31, 2022, Feng Su wrote:
>
>I found a problem when using Amber with MPI on our Slurm GPU cluster.
>We deployed the Docker service on our compute nodes.
>The new interface docker0 was added before the default interface ens3, and it caused the failure.
>
>-----------------------------------------------------------------
>docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
> inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
> inet6 fe80::42:75ff:fecd:fbaa prefixlen 64 scopeid 0x20<link>
>...
>ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
> inet 192.168.100.16 netmask 255.255.255.0 broadcast 192.168.100.255
> inet6 fe80::f652:14ff:fe89:24d0 prefixlen 64 scopeid 0x20<link>
>...
>-----------------------------------------------------------------
>
>When we disable the interface docker0, everything goes back to normal.
>It seems Amber uses “172.17.0.1” as the communication IP instead of “192.168.100.16”.

I'm pretty sure that this is related to your MPI stack, and not to any code in
Amber. Amber relies on MPI to handle communications between nodes. Can you
run other MPI jobs (e.g. just some of the MPI examples) in the docker mode?
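A quick sanity check outside Amber is an MPI run that just prints each rank's hostname; with verbose output it will show which interface the MPI stack binds to. This is a sketch assuming Open MPI under Slurm (the debug options differ for MPICH and other stacks):

```shell
# Minimal MPI job independent of Amber: two ranks on two nodes,
# launched through the Slurm allocation. Assumes Open MPI with PMIx.
srun -N 2 -n 2 --mpi=pmix hostname

# Or directly with mpirun; the TCP BTL verbose output reports which
# interface each rank selects (look for 172.17.x.x vs 192.168.100.x):
mpirun -np 2 --mca btl_base_verbose 30 hostname
```

If this small job also communicates over 172.17.0.1, the problem is in the MPI configuration, not in Amber.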

Others on the list may have ideas, but this problem seems pretty specific to
your setup, and (probably) not related to any particular code in Amber itself.
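For what it's worth, a common workaround when the MPI stack grabs docker0 is to exclude that interface (or include only ens3) explicitly. This is a sketch assuming Open MPI; the parameter names are different for other MPI implementations, and the interface names come from the report above:

```shell
# Tell Open MPI's TCP transport and out-of-band channel to skip docker0.
# Note: once btl_tcp_if_exclude is set explicitly, lo must be listed too,
# since the default exclusion of the loopback interface is overridden.
mpirun --mca btl_tcp_if_exclude docker0,lo \
       --mca oob_tcp_if_exclude docker0,lo \
       -np 16 pmemd.MPI -O -i mdin ...

# Equivalent via environment variables, convenient in a Slurm batch script:
export OMPI_MCA_btl_tcp_if_exclude=docker0,lo
export OMPI_MCA_oob_tcp_if_exclude=docker0,lo
```

That way docker0 can stay up for the container workloads while MPI traffic goes over ens3.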

....good luck....dac


_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Thu Mar 31 2022 - 05:30:03 PDT