On Sat, Jun 20, 2015 at 3:07 PM, Albert <mailmd2011.gmail.com> wrote:
> Well, well, well. Actually, people from Schrodinger are working with me
> very closely these days, and they also would like to make it work.
> Unfortunately, after trying for an additional two months, nothing
> significant has happened with regard to the accuracy for my system.
>
No method works for every system. Have you tried reproducing the results
in their paper? That's usually a good place to start. If you can't
reproduce their results with their program on the systems they used in the
original paper, then either (a) they made a fortuitous mistake in their
work, (b) they flat-out lied and falsified their data, or (c) you are doing
something different than they did.
I find (b) unlikely, because nobody I know at Schrodinger is that stupid,
and their methods are developed by scientists, not businessmen. Between
(a) and (c), I find (c) to be more likely -- the people who write the
software that implements the method are less likely than anybody else to
make a mistake when using it.
> I only believe what I see with my own eyes, no matter what kind of paper
> was published. I am always skeptical of the results in any paper.
>
> As far as I have learned from my friends in industry, 60% of published
> scientific work nowadays is not reproducible! Considering that Schrodinger
> would like to make big money from other people, I won't be surprised if
> they even risk cheating in a JACS paper. For instance, all the compounds'
> activity data were already published before their calculation, and they
> can definitely specify them in their initial input file. All the so-called
> FEP Mapper has to do is somehow use some function to MAKE the final
> results correlate with the original input values. This is definitely NOT
> prediction, but post-prediction.
>
There are many models that are based on these kinds of bioinformatics
principles. Take SHIFTX2, for instance, which is the best empirical
chemical shift predictor out there. They get such good agreement with
experiment because they include experimental data directly in their model.
It's still a useful model for many applications (but not as useful for
others). This isn't "cheating" -- there are costs associated with this
kind of approach with respect to the quality of the model (and obviously
benefits as well).
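Just to illustrate the trade-off (and to be clear, this is a made-up toy
and does not reflect how SHIFTX2 or any real predictor is actually
implemented), a hybrid scheme in Python might look something like:

    # A toy hybrid predictor -- purely hypothetical, not SHIFTX2's code.
    # It blends a physics-based estimate with an experimental value when
    # one exists in a reference database.
    def predict_shift(residue, physical_model, experimental_db, weight=0.8):
        physical = physical_model(residue)
        if residue in experimental_db:
            # Using experimental data directly buys accuracy here...
            return weight * experimental_db[residue] + (1 - weight) * physical
        # ...but offers no help for cases the database has never seen.
        return physical

The accuracy gain comes exactly where the model has data to lean on; for
anything outside that coverage you are back to the underlying physical
model.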
> What's more, a few days ago Schrodinger organized an online Webex
> concerning FEP+, and they also gave a benchmark of the Glide docking
> results, as well as MMGBSA in Schrodinger, correlated with experimental
> data. They also showed some promising pictures. According to my own
> several years of experience with various projects, the Glide docking
> score and MMGBSA never, ever have any acceptable correlation with
> experimental data at all. Several of my friends from both academia and
> the pharmaceutical industry share the same experience.
>
And I have colleagues who have had the opposite experience. In general,
based on the data I've seen, Glide outperforms other docking suites more
often than not.
> You see how bad it is when people want to make money. They can promise
> you anything you want. But what's the reality?
>
> More interestingly, in the Schrodinger FEP+ Webex a few days ago, one
> slide showed that the correlation for HSP90 is only 0.2, but another
> slide in the same presentation showed an R^2 as high as 0.70!!! How
> could this be possible?
>
Different systems. It's rather naive to think that a method must be either
always good or always bad (when in fact we know that most models are
typically a bit of both, given the right circumstances). The fact that
they would show an R^2 of 0.2 for some applications suggests to me that
they're actually *showing* some of the poorer results -- quite at odds
with what you're accusing them of.
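To put numbers on that, here's a quick (purely illustrative) Python sketch
-- the free energies below are invented, not FEP+ output -- showing how the
same R^2 metric can swing between roughly 0.9 and 0.2 depending on how well
predictions happen to track experiment for a particular target:

    import numpy as np

    # Invented predicted vs. experimental binding free energies (kcal/mol).
    exp    = np.array([-7.1, -8.3, -6.5, -9.0, -7.8])
    pred_A = np.array([-7.4, -8.0, -6.9, -8.8, -7.5])  # tracks experiment well
    pred_B = np.array([-6.9, -7.5, -8.8, -7.4, -8.0])  # same values, scrambled

    def r_squared(x, y):
        """Square of the Pearson correlation coefficient."""
        return np.corrcoef(x, y)[0, 1] ** 2

    print("system A R^2:", round(r_squared(exp, pred_A), 2))  # ~0.93
    print("system B R^2:", round(r_squared(exp, pred_B), 2))  # ~0.22

Nothing about the method changes between the two cases; only how well the
predictions line up with experiment for that particular data set.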
--Jason
--
Jason M. Swails
BioMaPS,
Rutgers University
Postdoctoral Researcher
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber