Saturday, November 03, 2012

mildly relevant: crowd-sourced research, and could it work for maths?

when applying for grants, most of the time i expect a panel consisting of more senior researchers, possibly even a few of my peers.

this possibility, on the other hand, changes the crowd quite a bit!

Crowdsourcing curiosity-driven biomedical research

Fact: the average basic-research life scientist deals with an 80% grant rejection rate, and gets his or her first big government grant at age 42. Basic biomedical research uses advanced 21st century technology, but is still fueled by a clumsy, archaic government-grant funding model that even predates the Internet.

It’s time scientists experimented with the way we all experiment.

Today, there’s a glut of highly trained but underemployed scientists. Let’s harness their idealistic passion before they turn grey, using social networks and data sharing to create an open, interactive, dynamic model of basic life sciences research. That new foundation can serve as a platform on which others will build and improve. This is particularly vital for mental health research, so often stymied by misunderstandings and blind spots, both public and scholarly.

// more @ rockethub.

to state the obvious, the difficulty is to find a way to show a project's significance to those who have the funds.

as for who might have the funds: they likely consist of normal, upper-middle-class people and, due to self-selection, more likely a tech-friendly professional crowd; who else, after all, would pay attention to this kind of proposal outside of their daily life?

even with a tech-friendly crowd, though, this wouldn't be easy. i doubt it would work at all for pure mathematics, and most of you (mathematicians) probably agree already. the point, however, is to figure out why, and a few reasons come to mind.
  1. they might be more impatient, despite knowing more maths than the average person [0]. engineers and lab-based scientists may use maths more, yet may actually have a larger bias against theory. i imagine their tools to be the computational, immediately useful sort .. and "immediately" is the key word here.

    since they are aware that they know some maths .. but not the maths that you're doing .. they will expect to understand the pitch .. and when they don't, the usual human quality of impatience sets in.

    suppose (you think that) you know how something works.
    wouldn't you be more impatient if it doesn't work "how it should"?

    let's put this into the context of data compatibility: the more computational maths someone knows (without delving into greater theoretical generality), the more attached they are likely to be to the idea of a fixed Euclidean space and to the premise that "every function is differentiable." [1] every lesson they've learned in that special case is a bias that you have to overcome in your explanation. (in other words, you're solving a harder compatibility problem than usual.)

    put into the related context of education .. if the student has an open mind, then (s)he may have an easier time with university-level calculus if (s)he hasn't taken a low-level calculus course in high school ..!

  2. they probably don't think mathematically. this is not a failing, of course, since they are scientists and are trained to think experimentally instead. therein lies a big difference, however: the mental framework of the experimentalist is inherently different from, and possibly smaller than, that of the theorist. [2]

    practical science is inherently data-driven; if you cannot show the phenomenon using an experiment, then there's no "proof." similarly, if, trial after trial, your pathological theoretical non-example never shows up in any experiment, then the experimentalist will adjust the hypotheses accordingly and stop worrying about it. in sum, their universe is their available data, which in turn determines the hypotheses.

    for the theorist, it's exactly the opposite: the hypotheses determine the universe and therefore all possible data, regardless of whether the data can be captured via experiment or not. (i think of these inaccessible objects as the "dark matter" of our mental consciousness.)

    the odd thing is: both the theorist and the experimentalist can say to each other, "you've got it backwards!" (-:
so between the possible donors (2) not getting your point and (1) not being accustomed to patience with such points, this crowd-sourcing thing will never take off for theorists ..
..
.. well, unless ..
..
.. unless the proposal encompasses a group of theorists with a wide range of related interests, some of which are easily explained in the sense that ..
  • it's easy to state one of the main problems,
  • it's clear why that problem is hard,
  • it's easy to see why that problem and its variants are meaningful,
  • it's believable that you and your group, of all people, can actually solve these problems.
put another way: if you're making a sales pitch, then you probably need a good spokesperson to deliver it.




[0] to clarify, i'm not asserting that this is definitely the case, but only that it may hold for a large subpopulation of techies .. a proportion sufficiently large that it would be problematic for crowd-sourcing. in particular, if they are the ones whose comments are rated highest on the proposal's webpage, then more open-minded investors might be swayed by their rhetoric (vs. your logic).

[1] this is particularly troublesome if the scientist in question regularly solves linear differential equations, partial or ordinary. tell them that they can't just take fourier transforms or use the right series expansions, and they'll look at you strangely. 7-:
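
to make the pathology concrete (my own illustrative example, not part of the original post or pitch): the classical weierstrass function is continuous everywhere yet differentiable nowhere, so the "every function is differentiable" premise fails about as badly as it can:

  % weierstrass's construction: a lacunary cosine series whose partial
  % sums are all smooth, but whose limit has no derivative at any point.
  \[
    W(x) \;=\; \sum_{n=0}^{\infty} a^n \cos\!\left(b^n \pi x\right),
    \qquad 0 < a < 1, \quad b \text{ an odd integer}, \quad ab > 1 + \tfrac{3\pi}{2}.
  \]

(those are weierstrass's original hypotheses; hardy later relaxed the last condition to ab ≥ 1.) no finite table of measured values could ever distinguish W from a smooth function, which is exactly why the experimentalist never runs into it.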

[2] my guess is that's why scientists think that we mathematicians have a habit of stating the obvious. our theorems are simply of a different nature from their scientific laws. their laws can be revised in light of new data, which corresponds to a change in a logical system (which in turn changes the propositions that are valid in that system). for us, our "data" was already fixed; it may have already accounted for the new experimental data, and some of it may consist of pathological examples that we were already thinking about. (that's what i mean about the experimentalist's framework being smaller; a venn diagram could work nicely here .. see the sketch below.)

we therefore do what we can. sometimes that does involve adding hypotheses to narrow down the system, just so we can prove something. in those cases, of course, it could be that the theorist's framework narrows down to something smaller than that of the experimentalist.
7-:
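
and here's a rough pass at that venn diagram .. a minimal TikZ sketch, where the labels are just my shorthand for the two frameworks:

  % a standalone sketch of the venn diagram suggested in [2]:
  % the experimentalist's framework (available data) sits inside
  % the theorist's framework (everything the hypotheses allow).
  \documentclass[tikz,border=5pt]{standalone}
  \begin{document}
  \begin{tikzpicture}
    % outer set: the theorist's universe, determined by the hypotheses
    \draw (0,0) ellipse (4.2 and 2.5);
    \node[align=center] at (0,1.7) {theorist:\\everything the hypotheses allow};
    % inner set: the experimentalist's universe, i.e. the available data
    \draw (-0.9,-0.6) ellipse (2.1 and 1.2);
    \node[align=center] at (-0.9,-0.6) {experimentalist:\\available data};
    % the difference of the two sets: objects that no experiment captures
    \node[align=center] at (2.2,-1.3) {pathological\\examples};
  \end{tikzpicture}
  \end{document}

(per the last paragraph above, once we start adding hypotheses just to prove something, the outer ellipse can shrink down inside the inner one.)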
