Monday, April 22, 2013

MoAR: a few quick ones, as he's been away.

after a week spent hiking and then a weekend attending a wedding (or some approximation thereof), it seems like this week's roundup will be a little sparse.

so: more to come, next week. until then ..


haves and have-nots?

i'd like to agree with the following excerpt .. especially as it would justify why i've been unsuccessful with my NSF grant applications. the truth is that i don't know how it all works, exactly.

what does seem likely is that the pool of NSF grant winners is self-selecting, which means that research fashions can be a real issue ..
"Who decides which problems are sexy (and therefore publishable)? I'll tell you: it's the 30-some-odd people who serve on the program committees of the top conferences in your area year after year. It is very rare for a faculty member to buck the trend of which topics are "hot" in their area, since they would run a significant risk of not being able to publish in the top venues. This can be absolutely disastrous for junior faculty who need a strong publication record to get tenure. I know of several faculty who were denied tenure specifically because they chose to work on problems outside of the mainstream, and were not able to publish enough top papers as a result. So, sure, they could work on "anything they wanted," but that ended up getting them fired."

~ from "The other side of "academic freedom"" @volatile&decentralised


algorithm, m.d.

what seems unfair about this article is how little credit it gives to the amount of effort it took physicians to gather the data that the various medical models are built on.

perhaps it's a good point to make that, with enough data, the process of diagnosing patients can be automated for better accuracy .. but the model had to come from somewhere, right?
"Dr Oberije and her colleagues in The Netherlands used mathematical prediction models that had already been tested and published. The models use information from previous patients to create a statistical formula that can be used to predict the probability of outcome and responses to treatment using radiotherapy with or without chemotherapy for future patients.

The researchers plotted the results on a special graph [1] on which the area below the plotted line is used for measuring the accuracy of predictions; 1 represents a perfect prediction, while 0.5 represents predictions that were right in 50% of cases, i.e. the same as chance. They found that the model predictions at the first time point were 0.71 for two-year survival, 0.76 for dyspnea and 0.72 for dysphagia. In contrast, the doctors' predictions were 0.56, 0.59 and 0.52 respectively."

~ from "Mathematical Models Out-Perform Doctors in Predicting Cancer Patients' Responses to Treatment" @scidaily
