Saturday, August 04, 2012

mildly relevant: yeah, more (but interesting!) news.

as you can imagine, i waste a lot of time on the internet, reading news and pseudo-news. these are the slightly-mathematically-relevant links that i found recently.



so i've been a fan of jonah lehrer for a while, especially his book proust was a neuroscientist.  it offered an interesting, humanistic take on an otherwise confusing (to me) discipline.

it happened, however, that he was caught self-plagiarising [0] and fabricating quotations in his new (and now recalled) book, imagine: how creativity works.

anyway.. from "the deception ratchet" @ oscillatory thoughts (but as initially found off hacker news):
The part that is most relevant to the discussion at hand, however, is on the ethical "slippery slope" that I'm calling "deception ratcheting":
the blogger further cites the original article by tenbrunsel and messick, which can be found here.  the relevant excerpt is below.
The second component of the slippery slope problem is what we call the “induction” mechanism. Induction in mathematics is as follows. If a statement is true for N = 1, and if the statement for N + 1 is true assuming the truth of N, then the statement is true for all N. The way this works in organizations is similar. If what we were doing in the past is OK and our current practice is almost identical, then it too must be OK. This mechanism uses the past practices of an organization as a benchmark for evaluating new practices. If the past practices were ethical and acceptable, then practices that are similar and not too different are also acceptable. If each step away from ethical and acceptable practices is sufficiently small, small enough not to appear qualitatively different, then a series of these small steps can lead to a journey of unethical and illegal activities.
*winces*

to be fair, this can be how errors propagate in the academic literature, whether in mathematics, physics, or economics.  just because an article has gone through peer review doesn't mean that it's been thoroughly checked, and one slip in one paper could mean that every future paper citing it inherits the same flaw ..

analogies aside, these authors just had to use mathematical induction to explain this, didn't they?

it strikes me as an unfair comparison, as if they are perfectly happy to slander mathematics.  perhaps these authors, being business faculty, are unused to the rigor that we mathematicians enjoy.

in mathematical induction, "almost identical" just doesn't cut it.  either the inductive step holds exactly, in which case the statement is true, or it doesn't, in which case the proof fails (at least by that method).  that's the whole point of mathematical proof: if you accept the axioms and the rules of logic, then the conclusion follows systematically from them.
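
for contrast, here is the actual schema, written out in LaTeX (a standard statement, nothing fancy) .. note that the inductive step demands an exact implication, with no room for "not too different":

    % the induction schema, for a predicate P on the natural numbers:
    % the base case and the (exact) inductive step together give P(n) for all n.
    \[
      \Bigl( P(1) \;\wedge\; \forall n \,\bigl( P(n) \Rightarrow P(n+1) \bigr) \Bigr)
      \;\Longrightarrow\; \forall n \, P(n)
    \]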



from "science funding: duel to the death" @nature (but as initially found on /.):
The souring relationship between the EPSRC [Engineering and Physical Sciences Research Council] and parts of its constituency reached a conspicuously public nadir in May, when disaffected researchers launched the 'Science for the Future' campaign with the hearse stunt, which ended by delivering the coffin, signifying the death of British science, and a petition demanding the “immediate reform of the EPSRC's policies” to the prime minister in Downing Street. In a letter to The Daily Telegraph newspaper in support of the protestors, nine Nobel laureates in the United Kingdom and United States accused the EPSRC of “manipulating the process of peer review” and “establishing favouritism schemes”..
well ..!

i guess i may have been too optimistic about the u.k. in one of my earlier posts; i had thought that the philosophy of making scientific work publicly accessible would also translate into ensuring appropriate funding for the science itself.

maybe the u.k. suffers from the same problem as the u.s.: that really .. it all boils down to money.



from "goodbye, IQ tests: brain imaging can reveal intelligence levels" @medicaldaily  (but again, as found on /.):
The research from Washington University targets the left prefrontal cortex, and the strength of neural connections that it has to the rest of the brain. They think that these differences account for 10 percent of differences in intelligence among people. The study is the first to connect those differences to intelligence in people.

Researchers took functional magnetic resonance imaging scans, or fMRIs, of participants while they rested passively. Their performance of tasks that tested their fluid intelligence (the ability to reason quickly and use abstract thinking) and cognitive control were conducted outside of the scanner, and researchers estimated on connectivity levels. The results of the tests were consistent with increased activity in the prefrontal cortex and higher levels of neural connectivity.
it's compelling and believable: more neural connectivity suggests that one can reach (correct?) conclusions more efficiently.
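
for concreteness, here is a rough sketch in python of what "estimate connectivity, then correlate it with test scores" might look like .. with fabricated data and a made-up seed-region measure, emphatically not the study's actual pipeline:

    import numpy as np

    rng = np.random.default_rng(0)
    n_subjects, n_regions, n_timepoints = 30, 10, 200

    # fabricated resting-state signals: one time series per brain region, per subject
    scans = rng.standard_normal((n_subjects, n_regions, n_timepoints))

    def connectivity_strength(timeseries, seed=0):
        """mean correlation between a seed region and every other region."""
        corr = np.corrcoef(timeseries)          # (n_regions x n_regions) matrix
        return np.delete(corr[seed], seed).mean()

    strengths = np.array([connectivity_strength(s) for s in scans])

    # fabricated fluid-intelligence scores for the same subjects
    scores = rng.normal(100.0, 15.0, n_subjects)

    # the headline claim amounts to a correlation like this one; note that
    # r^2 ~ 0.10 would be the "10 percent of differences" figure, and that
    # it establishes an association, not a mechanism.
    r = np.corrcoef(strengths, scores)[0, 1]
    print(f"connectivity vs. score: r = {r:.2f}, r^2 = {r * r:.2f}")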

i wonder about the fine print, though: does this measure the potential for a high intelligence, or actual intelligence (in the sense of real-time functionality)?

also, the journalist who wrote the article seems to assume that the number of such connections is fixed over time. on the other hand, brain plasticity is a very real phenomenon; if the brain can change itself, then what's to say that a person couldn't earn a greater intelligence a few years later, through reinforcement techniques that build such connections?

there is something too simple about this kind of conclusion ..

.. also, from a logical viewpoint: saying that "the results are consistent" doesn't actually prove anything.  it's the fallacy of affirming the consequent, all over again, and i blame this on the general opinion that dissent is necessarily a bad thing.

then again, it's probably an issue of wording: if they had written that "accounting for a control experiment, the obtained data are inconsistent with the null hypothesis, ergo [insert conclusion here]," then i would be perfectly satisfied.
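
schematically, with P = "stronger connectivity causes higher intelligence" and Q = "we observe correlated scans and scores":

    % affirming the consequent (invalid) vs. modus tollens (valid):
    \[
      \frac{P \Rightarrow Q \qquad Q}{P} \quad \text{(invalid)}
      \qquad\qquad
      \frac{P \Rightarrow Q \qquad \neg Q}{\neg P} \quad \text{(valid)}
    \]

observing Q is consistent with P, but it doesn't deliver P.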

i'm a mathematician;
details are my life, you know! (-;



from "human cycles: history as science" @nature (but found through hacker news):
To Peter Turchin, who studies population dynamics at the University of Connecticut in Storrs, the appearance of three peaks of political instability at roughly 50-year intervals is not a coincidence. For the past 15 years, Turchin has been taking the mathematical techniques that once allowed him to track predator–prey cycles in forest ecosystems, and applying them to human history. He has analysed historical records on economic activity, demographic trends and outbursts of violence in the United States, and has come to the conclusion that a new wave of internal strife is already on its way. The peak should occur in about 2020, he says, and will probably be at least as high as the one in around 1970. “I hope it won't be as bad as 1870,” he adds.
two thoughts come immediately to mind:
  1. like the so-called (technological) singularity, this sounds like a case of a researcher putting too much faith in regression on historical data.  just because you can build a model that fits the past doesn't mean that it agrees with reality (see the sketch after this list).

    if this "cliodynamics" can be taken seriously as science, then i trust that they will take a seat at the poker table of science, pay up the big blind, and see if their skills can go for the win.  in other words, why not refine your model, measure carefully the hypotheses, and apply it to predict a real-time phenomenon where there is not yet data to check. in some sense, that's the point of science: you have to be willing to gamble that you are wrong [1].

    otherwise it's no better than string theory: an elegant, untestable mathematical theory, but not quite science.

  2. wow: they are really trying to realise (asimovian) psychohistory!  i never thought i'd see the day when someone would take this seriously.
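
as for the sketch promised in (1): here is a quick illustration in python, with fabricated "historical" data, of a model that fits the past beautifully and still extrapolates badly .. this is not turchin's model, just the generic worry about regression:

    import numpy as np

    rng = np.random.default_rng(1)

    # fabricated record: a 50-year cycle of unrest plus noise, observed 1800-1980
    years = np.arange(1800.0, 1981.0)
    unrest = np.sin(2 * np.pi * (years - 1820) / 50) + 0.3 * rng.standard_normal(years.size)

    # rescale the time axis so the polynomial fit stays numerically stable
    t = (years - years.mean()) / years.std()

    # a high-degree polynomial fits the past beautifully ..
    coeffs = np.polyfit(t, unrest, deg=12)
    in_sample = np.abs(np.polyval(coeffs, t) - unrest).mean()

    # .. and extrapolates to 2020 absurdly, because fitting is not predicting
    t_2020 = (2020.0 - years.mean()) / years.std()
    print(f"mean in-sample error:       {in_sample:.2f}")
    print(f"'predicted' unrest in 2020: {np.polyval(coeffs, t_2020):.1f}")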


[0] to be honest, this sounds like an oxymoron: 'self-plagiarism'? how can you cheat yourself, or steal your own work .. without, say, being jamie madrox? according to the wiki, though, this sounds more like an act of fraud or breach of contract than what would commonly be called plagiarism.

[1] in that sense, pure mathematics isn't quite a science. sure we have our hypotheses, but our work is ultimately tautological. we don't predict anything so much as realise that it all works out the same. it's just that some tautologies (read: theorems) are rather deep ..
