Try checking the nohup output file in your folder to see what it says.
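For example, assuming nohup wrote to its default nohup.out in the directory
you launched the job from, something like this will show it:

   cat nohup.out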
Hai
> On Aug 12, 2016, at 11:15 AM, Albert <mailmd2011.gmail.com> wrote:
>
> Hello David:
>
> Thanks a lot for the information.
>
> I've run all the test jobs, including pmemd, pmemd.MPI, pmemd.cuda, and
> pmemd.cuda.MPI.
>
> All of them passed successfully...
>
> I didn't have this "suspended" problem before; it started only after I
> compiled Amber 16 with the Intel compiler recently. Similar things also
> happen with my Gromacs builds...
>
> I am quite confused about what is happening.
>
> regards
>
>
>
>> On 08/12/2016 05:11 PM, David A Case wrote:
>>> On Fri, Aug 12, 2016, Albert wrote:
>>> I wrote all the steps of an Amber MD simulation in a tcsh script, then
>>> submitted the job on the local GPU workstation from the command line:
>>>
>>> nohup ./job.tcsh &
>>>
>>> However, I obtained the following messages immediately:
>>>
>>> nohup ./job.tcsh &
>>> [1] 13433
>> This is expected, and it is the message you would get from any program
>> that is put into the background via the "&" operator.
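>>
>> If the job instead shows up as suspended (a background job gets stopped
>> if it tries to read from the terminal, for example), you can check its
>> state and, if need be, bring it back to the foreground with standard
>> job control:
>>
>>    jobs -l
>>    fg %1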
>>>
>>> None of the steps run after I submit the job.
>> We essentially have no information here. Be sure that if you do the same
>> thing for one of the test cases (e.g. in $AMBERHOME/test/cuda/dhfr/Run.dhfr)
>> that you get the expected output. This will help distinguish between problems
>> with your installation and problems with your specific inputs.
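>>
>> For example, something along these lines (the exact invocation may
>> differ between Amber versions; some cuda test scripts take a
>> precision-model argument):
>>
>>    cd $AMBERHOME/test/cuda/dhfr
>>    ./Run.dhfr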
>>
>> Generally, to debug, just try a very short run first, with ntpr=1. You won't
>> need to put such jobs in the background.
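>>
>> As a sketch, a minimal mdin along these lines (the file names and most
>> settings here are placeholders, not a recommendation) is enough to see
>> whether anything runs at all:
>>
>>    test: very short MD run
>>     &cntrl
>>      imin=0, nstlim=10, dt=0.002,
>>      ntpr=1, ntwx=0, ntb=1, cut=8.0,
>>     /
>>
>>    pmemd.cuda -O -i mdin -o mdout -p prmtop -c inpcrd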
>>
>> ...dac
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Fri Aug 12 2016 - 08:30:05 PDT