Re: [AMBER] Regarding the MPI error in parallel running of 3D-RISM

From: PRITI ROY <priitii.roy.gmail.com>
Date: Mon, 23 Jul 2018 16:30:40 +0530

Hi all,
Apologies for the long delay in getting back to you.

I tried my system (5550 atoms, TIP3P water) with 48 to 144 cores in many
combinations, and each run produced no output even after a long run time
(~3 days). I don't think the problem is due to memory; the calculation may
be stuck in an infinite loop (though I might be wrong).
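In case it helps, this is roughly how I have been launching the runs across the different core counts. The file names, closure, and tolerance below are placeholders for illustration, not my exact inputs:

```shell
#!/bin/sh
# Illustrative only: print the launch command for each core count tried
# (48 to 144). solute.prmtop, solute.rst7, and water.xvv are placeholder
# file names; the closure and tolerance are example values.
for NP in 48 72 96 120 144; do
  echo "mpirun -np $NP rism3d.snglpnt.MPI --prmtop solute.prmtop --rst solute.rst7 --xvv water.xvv --closure kh --tolerance 1e-6"
done
```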

I am sharing the hardware information of our HPC, which is as follows:

1. Master node with storage node - Dell PowerEdge R730xd server x 1
2. CPU-only node - Dell PowerEdge R430 server x 6
3. GPU-only node - Dell PowerEdge R730 server x 3
4. 18-port InfiniBand switch - Mellanox SX6015 56 Gb/s 18-port InfiniBand
switch with rail kit x 1, and Mellanox passive copper cables (VPI, up to
56 Gb/s, QSFP, 2 m) x 10
5. 24-port Gigabit Ethernet switch - D-Link x 1
6. 17-inch KVM display - ATEM/OXCA make x 1
7. 16-port KVM switch x 1

Looking forward to your suggestions.

Thanks,
Priti
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Mon Jul 23 2018 - 04:30:01 PDT