Sunday 20 September 2009

Blog has moved

Hi everyone!

For your reference, this blog has moved to this address.

Happy perusing!

Links I've enjoyed this last week...

...but didn't have the time to post. For a number of reasons, including:
  • madness at work (in a good way), which is only set to increase thanks to the upcoming launch (yay!!) of Sciblogs. We've had some good coverage for it over the last few weeks, starting with Ken Perrott of Open Parachute breaking the news, and since then Media7 and Mediawatch have given us some good airplay too :)
  • Today, I spent 4 hours rationalising my Gmail contacts because, well, I had synced my iPhone to them, which promptly uploaded 1,000+ contacts onto my phone, many of them ancient (i.e. any email address I've used in the last 6+ years), duplicates, and so on. So the only way I'd ever be able to use my phone's contacts feature was to sort them out.



I am, happily, now done.

So yes, the links (bear in mind that many of them end up making their way into our weekly newsletter as well, and so don't appear here)...

Sneak test shows positive-paper bias
Published in Nature, this confirms what we already knew. Still interesting, though, and good to see it empirically confirmed. And I can understand where the bias comes from, although it's hardly something I'm in favour of (particularly as the originator of some genuinely negative results myself).

The Secrets Inside Your Dog's Head

An awesome article by Carl Zimmer about dog behaviour and evolution and research and things. As the owner of a dog myself, I found it particularly interesting, but I imagine that anyone who's encountered our four-legged friends might too.

The Briefly Series

Ep. 1: The Big Bang. An awesome science communication idea: take major concepts in science and explain them not only clearly and briefly, but really well visually too. Kudos to all the people who have given freely (literally) of their time to do this.

Futurity
A completely different type of outreach, Futurity takes the research releases of a number (some 35) of top American universities and publishes them on one awesome website. It's great reading, but I have two small concerns: it is, in essence, a collection of press releases (not a bad thing, just something to be aware of when reading the articles), and it does not necessarily give an accurate idea of when the research was published. I've noticed a couple of things on the site that seem really new/recent, but which I know came out some weeks ago. Nonetheless, good site.

Interphone and the US
Or not, as it turns out. I don't know how I missed this, but it would seem that the US has not taken part in the Interphone survey. * significant pause * I know. Which means they're now considering, I dunno, replicating the work? Or something? It's a big pity, firstly because the data from the US would no doubt have been really good (a very large market of early adopters of the tech), and it also seems a waste of money to even consider redoing the work in an American context. Of course, one also has to ask: if Interphone's results are good enough for everyone else, why wouldn't they be good enough for the States?

Tuesday 15 September 2009

Schrödinger's tobacco mosaic virus

I'm sure everyone is familiar with Schrödinger's Cat, the infamous quantum-mechanical thought experiment (apparently, it was first posited as a half-joking attempt to put in real terms some of quantum mechanics' more...interesting...theories).


Well, now scientists are attempting to do something...similar. With viruses. Only certain types of virus, including the one mentioned in the title of this post, are suited to the purpose, but the researchers are hopeful it can be managed.

Of course, the interesting thing here is the definition of a virus as a living thing. When I was at university (admittedly, some years ago), I remember being told that the jury was out on that. It all depends, apparently, on how many features you feel something needs to display if it is to be considered 'living'.

Viruses are considered to be hovering on the edge: while they display many of the characteristics associated with life - making more of themselves, having genes, evolving - they also lack some other rather fundamental functions, such as metabolism. Or growth. Etc.

On the other hand, whether or not the experiment turns out to be useful, particularly for probing "the role of consciousness in quantum mechanics", it will no doubt provide the material for a number of humorous t-shirts and webcomics.

Thursday 10 September 2009

Nature's data-sharing issue

Nature has published its special data-sharing issue, which can be found here. An extremely timely issue, I might add, given how topical the subject is (see here and here).


There's a lot of interesting stuff in here, including articles about what seems to be a growing (or at least increasingly loud) movement to open up the accessibility of data, particularly in the life sciences. There's even been talk of scientists sharing prepublication data to try to speed things along a little...

Of course, there's also analysis of what has happened to the ideal of data sharing, and the problems range from difficulties with the technology itself, particularly in areas where good, strong databases (and sharing habits) are not common, to problems with formats, preconceptions and, frankly, habits.

Tuesday 8 September 2009

Peer Review Survey 2009 - the system's worth keeping, but could use some work

Sense about Science, a UK not-for-profit organisation dedicated to improving the understanding of science issues, has released the preliminary findings of its Peer Review Survey (full details out in November). The results were released early this morning, NZ time, and are, all told, quite interesting.
Rather than simply reprinting the results (which can be downloaded by clicking on the hyperlink in this page), I thought I'd also have a look at the results themselves, and see what comment, if any, I could furnish.

Right, then.

(Awesome image found here)

The questionnaire, which ran for 2 weeks, was sent out to some 40,000 researchers, all of whom were published authors. Of those 40,000, 4,037 (10%) completed the survey, and of that 10%, 89% classed themselves as 'reviewers'.
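Just to sanity-check those headline figures, here's the quick arithmetic (the counts are the ones quoted above):

```python
# Quick sanity check on the survey's headline figures.
invited = 40_000        # questionnaires sent out
completed = 4_037       # surveys completed
reviewer_share = 0.89   # fraction of respondents self-identifying as 'reviewers'

response_rate = completed / invited
reviewers = round(completed * reviewer_share)

print(f"Response rate: {response_rate:.1%}")        # -> 10.1%
print(f"Self-described reviewers: {reviewers}")     # -> 3593
```

So roughly 3,600 of the respondents are active reviewers - worth keeping in mind when reading the reviewer-specific numbers below.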

Of course, my first question at this point is whether this is representative of researchers as a whole... Do 89% of all (published) researchers review? Or was there some intrinsic bias towards the subject, meaning that people who review were more likely to complete the questionnaire (and hence skew the findings)?

Moving on.

---

Satisfaction

In general, respondents (not just reviewers) were fairly happy with peer review as a system - 61% were satisfied (although only 8% were very satisfied) with the process, and only 9% were dissatisfied. I would dearly have loved to organise a discussion or two around that latter 9%, to figure out why (which is why qualitative research is so important).

Reviewers generally felt that, although they enjoyed reviewing and would continue to do it, there was a lack of guidance on how to review papers, and that some formal training would improve the quality of reviews. There was also a strong feeling that technology had made reviewing easier than it was 5 years ago.

Improvement

Of those who agreed that the review process had improved their last paper (83% of respondents), 91% felt that the biggest area of improvement had been the discussion, while only half felt that their paper's statistics had benefited. Of course, as the survey rightly points out, this may be because the sample includes those whose papers contained no such fiddlies...

Why review?

Happily, most reviewers (as opposed to respondents, a group which includes both reviewers and the remaining authors) reviewed because they enjoyed playing their part in the scientific community (90%) and because they enjoyed being able to improve a paper (85%). Very few (16%) did it in the hope of increasing the future chances of their own papers being accepted.
[On that point - is that last thought/stratagem valid? Does reviewing improve one's chances of paper acceptance in any way?]

Amongst reviewers, just over half (51%) felt that payment in kind by the journal would make them more likely to review for it, and 41% wanted actual money (then again, 43% didn't care). 40% also thought that acknowledgment by the journal would be nice, which is odd considering that the majority of reviewers favour the double-blind system (see below). Stranger still, 58% said that having their report published with the paper would disincentivise them, and 45% felt the same way about their names being published with the paper as reviewers. [Note: in the latter case, for example, that 45% does not mean that 55% would be incentivised - a large proportion of the remaining 55% was in fact ambivalent about the matter.]

Amongst those who wanted payment, the vast majority wanted it from the funding body (65%) or from the publisher (94%). In a lovely case of either solidarity or self-interest, very few (16%) thought that the author should pay the fee. Looking at that 94% of would-be-paid reviewers who would want the journals to pay them - where would this leave the open access movement? One of the major, and possibly most valid, criticisms currently being levelled against paid-subscription journals is that their reviewing is done free of charge. Paying reviewers might even lead to a rise in subscription prices, meaning even more researchers would be unable to access paid content. Hmmmm.

To review, or not to review

The most-cited reasons for not reviewing were that the paper was outside their area of expertise (58%) or that they were too busy with their own work (30%). Of course, the first point does suggest that journals could, and should, do a better job of identifying which papers should go to whom...Certainly, this would be a brilliant place for a peer-review agency to start!

The mean number of review invitations declined over the last year was 2, but a little over a third of respondents would be happy to review 3-5 papers a year, and a further third 6-10! This suggests a great deal of underutilisation of resources, especially considering that the primary reason for declining an invitation to review was that reviewer and subject weren't properly matched.

Purpose

For this, I'd suggest the best thing to do is have a look at the graph below (which is from the report, again to be found here).

(c) Sense about Science, Peer Review Survey 2009: Preliminary Findings, September 2009, UK. Click on image to enlarge and make more legible.

What is interesting is that peer review is underperforming significantly on some of the functions felt to be most important, namely identifying the best manuscripts for the journal, and originality (assuming that by this they mean something along the lines of 'completely novel and groundbreaking', there's a very interesting discussion to be had there, considering that PLoS specifically doesn't look to that, something I talk about in this post).

It's also performing poorly at the detection of plagiarism and fraud, although one wonders whether that is a realistic expectation (and, certainly, something that technology could perhaps be better harnessed to do).

Types of review

Most reviewers (76%) felt that the double-blind system currently in use is the most effective; usage stats were thought effective by the fewest reviewers (15%).

Length of process

Gosh. This was interesting. Despite the fact that some 75% of reviewers had spent no more than 10 hours on their last review, and 86% had returned it within a month of accepting the invitation, 44% got a first decision on the last paper they submitted within 1-2 months, and a further 35% waited between 2 and 6+ months.

Of course, the waiting time for final acceptance scales up accordingly, as revision stages took 71% of respondents between 2 weeks and 2 months. So final acceptance took 3-6 months for a third of respondents and, for a further third, could take anything from that to 6+ months.
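Stacking those stages end to end shows how the totals get there. The band upper-ends below are my own reading of the figures above, and the re-review duration is purely an assumption of mine:

```python
# Rough submission-to-acceptance timeline, in months.
# Band upper-ends are my reading of the survey; the re-review guess is mine.
first_decision = 2.0   # 44% waited 1-2 months for a first decision
revision = 2.0         # 71% spent between 2 weeks and 2 months revising
re_review = 2.0        # assumed: reviewers take a second look after revision

total = first_decision + revision + re_review
print(f"Plausible time within the common bands: ~{total:.0f} months")  # -> ~6 months
```

Which lands right at the top of the 3-6 month band a third of respondents reported - without anyone in the chain being obviously slow.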

Now, I can understand that the process is lengthy, but it does seem like there's a fair amount of slack built into the system, and it must have some impact on the pace of science, particularly in fast-moving fields. I don't have an answer, but Nature is just about to release some interesting papers discussing whether scientists should release their data before publication, in order to try to prevent blockages... (UPDATE: Nature's special issue can be found here).

It's worth mentioning, despite whatever I may say, that slightly more respondents thought peer review lengths were acceptable (or better) than not. Comment, anyone?

Collaboration

The vast majority of reviewers (89%) had done their review by themselves, without involving junior members of their research group, etc. While I can understand this to some extent, I wonder whether it doesn't feed the feeling that some sort of training would be useful. Couldn't a lot of that training be provided by involving younger scientists in the process?

Final details

Just over half (55%) of respondents have published more than 21 articles in their career, with 11% having published over 100. Not much comment there from me, other than O_o!!

89% of respondents had reviewed at least one article in the last year (already commented on this, above).

For those interested, most respondents were male, over 36 years old, and worked at universities or colleges.

While half worked in Europe or the US, 26% worked in Asia (to be honest, I was pleasantly surprised that the skew towards the western world wasn't larger), and the most well-represented fields were the biological sciences and medicine/health.

---

So yes! There you have it - a writeup that's a little longer than the exec summary, but a little shorter than the prelim findings themselves.

It does raise a number of interesting questions, particularly around the role of journals in the process, but it also confirms what I think we've all heard - that while science publishing is going through some interesting times, very few would dispute that peer review itself is anything other than very important to the process. Although it might need to do a little bit of changing if it's to remain as important as it is currently...

Monday 7 September 2009

Reflexology quickie

I don't have access to the full article, sadly (aaarg), so this tiny tidbit from the Medical Journal of Australia will have to do.


In essence, it says that the evidence for reflexology working is, well, not there. Yes, the foot rub is quite cool (apart from the excruciating pain and the need to spend some time thereafter limping), but really, one could just get a normal foot rub instead.

In fact, thinking about it: perhaps normal foot rubs are actually better, as they don't force one to tense every muscle of the body in agony, something which I battle to see as being conducive to one's health and wellbeing.

Name-dropping makes you obnoxious

"But", I hear you murmur, "we knew this already!"


And yes, intuitively, I guess many of us do. Despite this, many people still do it, on a regular basis. Some might even say that networking somewhat encourages it. Well, networking as seen in a very narrow, and unlikeable, light (happy to write more on that subject, should anyone wish).

So yes, some German researchers have finally looked into the phenomenon, and found the following:

If you meet someone, and find out that they share a birthday with someone likeable, or that they know someone interesting, then you may perhaps feel sunshinier towards them, but only if they didn't tell you directly. And context is also pivotal.

If, on the other hand, you drop into conversation that you are good mates (or whatever) with someone famous, particularly if you volunteer the information fairly directly, then people are less likely to like you. So not a good strategy. Boasting does not make you friends. Hear me on this.

Actually, thinking about it, perhaps that's some of the appeal of things like LinkedIn (apart from the obvious) - people can see how very cool and amazing your connections are without you ever having to vouchsafe the fact. Now, who has some amazing contacts for me to link to?

The article about the research can be found on Scientific American's website, here.