If you work for Schrodinger, you can skip all my comments on the mailing
list, since you will find countless explanations, or excuses, for the
unexpected results.
(1) I wish I could reproduce the JACS paper results myself, but that is
not feasible for me. (a) FEP+ must run on a single machine, and it needs
a workstation with at least 4 powerful Tesla GPUs. I only have one such
machine, which gives me about 0.5 compounds/day. The JACS benchmark
covers hundreds of compounds, so it would take roughly a year to finish.
(b) Checking the quality of the publication is the duty of the JACS
editors, not mine.
(2) Of course Schrodinger is not stupid; they are SMART enough to do
many things, including cheating. I know Schrodinger treats many
scientists as cheap labor while labeling themselves scientists.
Unfortunately, this won't change the businessman's personality. Making
money is the only goal they care about, and to achieve it anything could
happen. Thus, many originally free, open-source packages are no longer
free after being incorporated into Schrodinger. However, that does not
necessarily mean the software becomes better after commercialization.
One typical example is Modeller in Discovery Studio: it runs much more
slowly than the open-source version, and we sometimes even find accuracy
problems.
(3) Unfortunately, my colleagues and I have worked with Schrodinger for
many years, and we have never had such good luck: the Glide scoring
function has never worked well for ranking compound affinities in our
hands.
(4) In the JACS paper and the recent FEP+ online Webex, Schrodinger
showed many different targets, and all of them looked perfect. That is
effectively a promise to users that this tool can guarantee good results
for many of today's popular systems. I am lucky in that the target I am
working on belongs to one of the "perfect" classes shown in the
benchmark. But I am not lucky enough, since FEP+ from Schrodinger does
not work at all for me.
Of course you can argue it that way: for the same target, you tell
people that sometimes it works and sometimes it doesn't. But that only
renders the tool useless, because before new compounds are synthesized,
the tool cannot predict whether it is going to work or not.
In my case, there are 5 classes of compounds for the same target and,
unfortunately, none of them works with Schrodinger FEP+. However, I
obtained very impressive results from the inexpensive MM/PBSA method,
which gives me R^2 = 0.5-0.8.
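For reference, here is a minimal sketch of how such an R^2 (the squared
Pearson correlation between predicted and experimental affinities) is
typically computed; the arrays below are placeholder values for
illustration only, not data from this thread:

    # Squared Pearson correlation between predicted and experimental affinities.
    # Values are placeholders, not real data.
    import numpy as np

    predicted = np.array([-45.2, -38.7, -51.3, -40.1, -47.8])  # e.g. MM/PBSA scores (kcal/mol)
    experimental = np.array([-9.1, -7.8, -10.2, -8.0, -9.6])   # e.g. experimental dG (kcal/mol)

    r = np.corrcoef(predicted, experimental)[0, 1]  # Pearson r
    print("R^2 = %.2f" % r**2)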
regards
Albert
On 06/21/2015 03:56 AM, Jason Swails wrote:
> On Sat, Jun 20, 2015 at 3:07 PM, Albert<mailmd2011.gmail.com> wrote:
>
>> Well, well, well. Actually, people from Schrodinger have been working
>> with me very closely these days, and they also would like to make it
>> work. Unfortunately, after trying for an additional 2 months, nothing
>> significant has happened regarding the accuracy for my system.
>>
> (1) No method works for every system. Have you tried reproducing the
> results in their paper? That's usually a good place to start. If you
> can't reproduce their results with their program, using the systems they
> used in the original paper, then either (a) they made a fortuitous mistake
> in their work, (b) they flat-out lied and falsified their data, or (c) you
> are doing something different than they did.
>
> (2) I find (b) unlikely, because nobody I know at Schrodinger is that stupid,
> and their methods are developed by scientists, not businessmen. Between
> (a) and (c), I find (c) more likely -- the people who write the
> software that implements their method are less likely to make a mistake
> when using it than anybody else.
>
>> I only believe what I see with my own eyes, no matter what kind of
>> paper has been published. I am always skeptical about the results in
>> any paper.
>>
>> As far as I have learned from my friends in industry, 60% of the
>> scientific work published nowadays is not repeatable! Considering that
>> Schrodinger would like to make big money from other people, I won't be
>> surprised if they even risked cheating in a JACS paper. For instance, all
>> the compound activity data were already published before their
>> calculations, and they could certainly specify them in their initial
>> input files. All the so-called FEP Mapper would have to do is somehow use
>> some function to MAKE the final results correlate with the original input
>> values. This is definitely NOT prediction, but post-prediction.
>>
> There are many models that are based on these kinds of bioinformatics
> principles. Take SHIFTX2, for instance, which is the best empirical
> chemical shift predictor out there. They get such good agreement with
> experiment because they include experimental data directly in their model.
> It's still a useful model for many applications (but not as useful for
> others). This isn't "cheating" -- there are costs associated with this
> kind of approach with respect to the quality of the model (and obviously
> benefits as well).
>
>> What's more, a few days ago Schrodinger organized an online Webex
>> concerning FEP+, and they also gave a benchmark on Glide docking
>> results, as well as MM/GBSA in Schrodinger, correlated with experimental
>> data. They also showed some promising pictures. According to my own
>> experience over several years with various projects, the Glide docking
>> score and MM/GBSA never, ever have any acceptable correlation with
>> experimental data at all. Several of my friends from both academia and
>> the pharmaceutical industry share the same experience.
>>
> (3) And I have colleagues who have the opposite experience. In general, GLIDE
> outperforms other docking suites based on the data I've seen, more
> often than not.
>
>
>
>> You see how bad it is when people want to make money. They can
>> promise you anything you want. But what's the reality?
>>
>> More interesting, in the Schrodinger FEP+ Webex a few days ago, one
>> slide showed that the correlation for HSP90 is only 0.2, but another
>> slide in the same presentation showed R^2 as high as 0.70!!! How
>> could this be possible?
>>
> (4) Different systems. It's rather naive to think that a method must be
> either always good or always bad (when in fact we know that most models are
> typically a bit of both, given the right circumstances). The fact that they
> would show an R^2 of 0.2 for some applications suggests to me that
> they're actually *showing* some of their poorer results. Quite at odds with
> what you're accusing them of.
>
> --Jason
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Sun Jun 21 2015 - 00:30:02 PDT