Thursday, September 29, 2016

Cell Press, scientific fraud, replication, and retractions

As per Retraction Watch, Cell Press will not be retracting two papers that were flagged as problematic after one author claimed to have manipulated his data to fit the desired conclusion. That author, Dr. Yao-Yun Liang, is conveniently unavailable. The first author (Xia Lin) and the corresponding author (Dr. Xin-Hua Feng) already have a retraction under their belt for "inappropriate data manipulation" found in an earlier paper, but Baylor College of Medicine (where the work was carried out) conducted an investigation, and found no evidence supporting claims of fraud in this case. So far, so good.

Now for the weird part: Cell Press had Dr. Feng get some of his friends to attempt to validate the results, which they did for the Cell paper in question, and now claims that this means no fraud was committed (as does Baylor College of Medicine, which uses this result to bolster the findings of its investigation). There was an interesting discussion of this over at DrugMonkey's blog a couple of weeks ago, just after the first editorial note was issued by Cell (September 8) regarding this paper. Even weirder, the validation results for the Molecular Cell paper in question were apparently inconclusive, but Cell Press won't be retracting that one either!

This is beyond bizarre. First of all, whether the results replicate has no bearing on whether fraud was committed. We all like to think we have good scientific intuition, and sometimes that is actually true. It doesn't mean we get to publish papers with data manipulated to support our good intuition. If there was fraud, the paper should be retracted, even if the conclusions end up being sound.

Second of all, if Cell Press is going to use the "if the data replicates, it isn't fraudulent" argument, they should at least be consistent! From my understanding, this is what happened:

     1) Dr. Liang says he manipulated his data in these two papers.
     2) Dr. Feng denies the allegation, and says Dr. Liang is trying to hurt his career with these lies.
     3) Baylor College of Medicine investigates, and finds that it is a "he said-he said" problem, and says there is no evidence of fraud.
     4) Cell Press decides that Dr. Feng should get some people to replicate the result to "prove" they were not fraudulent (WHAT!).
     5) It all works out in the end for Cell, so they say that since the results replicate, it doesn't matter.
     6) It all doesn't work out in the end for Molecular Cell, but they say it doesn't matter anyway.

Huh? Something is so off with this scenario. Setting aside the fact that manipulated data does not have to be inconsistent with actual experiments (it just has to be falsified), if replication is supposed to settle the issue, then why is Cell Press ignoring the "inconclusive results" for the data in question in the Molecular Cell paper?

To my mind, Cell Press had four options:
  • They could say they will use the results of the Baylor College of Medicine investigation, and not retract. 
  • They could say the Baylor College of Medicine investigation was not thorough enough, do their own investigation, and then make a decision.
  • They could believe Dr. Liang and retract.
  • They could admit they don't know whom to believe, retain the "Expression of Concern" attached to the papers, and include all the information about the confession, the denial, and the Baylor College of Medicine investigation.
Any of those things would have been reasonable (if possibly controversial) responses to the allegations of fraud. Instead we got false logic about how "if it replicates, it must have been real" which was ignored when that became inconvenient for Cell Press. And people wonder why Retraction Watch is so busy?




Sunday, September 25, 2016

If in doubt, just apply

I don't know what people are telling their students/postdocs these days, but our department has a TT search ongoing right now, and I am getting a surprising number of inquiries about whether someone should apply for the position or not, especially since I am not a contact person for the position, nor am I the search chair. And these are all from people who seemingly have a fairly decent overlap with the listed areas of interest in the ad.

I suppose these may be veiled requests for more details on what we are looking for, but still, if in doubt just apply. The worst that will happen is that you will not get the job, which is the default without submitting an application. This is good advice for anything, really. If you overlap with the selection criteria, just apply and let them reject you if you aren't what they are looking for.

Thursday, September 22, 2016

A self-indulgent look at this blog

I was flipping through the stats Blogger automatically keeps on the blog, and it made me feel really proud of the writing I do here. This blog has probably been read more than anything else I have ever written, which is an odd feeling, considering that few people know that I do it. My most-read posts have almost 28,000 views between them.

Two of the three most popular posts are about non-academic jobs (my first post on this, and my link aggregator page). The third is a Mendeley review from 2011, which is still fairly popular (and actually still reasonably representative of my thoughts on Mendeley, which I still use 5 years later). Rounding out the top 5 are posts on how I got my National Lab job (hint: don't expect what I did to work for you--it was luck!) and how search committees sift through applications (still true). After my top 5, views per post drop pretty steeply (a 50% drop between #5 and #6, for example). Interestingly, my posts on job searching in various forms are more popular than anything else. After that, it is actually academic misconduct posts which get a lot of views, followed by more detailed interviewing advice in the top 10. From that, I would guess that most of my audience consists of students and/or postdocs.

The most likely path to my blog is through Google, which sends 5X the traffic of the next most common entry point, which is Xykademiqz's old blog (Thanks!).

Most of my audience looks like it comes from the US, with Switzerland a surprising second (at least surprising to me, since I write in English about issues primarily of interest to North American readers). Only 58% of my audience is using Windows, with 26% using Macs. I wonder how that has changed over time. My readers primarily use Firefox and Chrome, which is not too surprising.

The blog averaged about 2300 page views per month while I was on hiatus, which is just astounding. I pretty much started this blog (and keep it going) for my own entertainment. I tend to do advice posts and commentary, since I would have liked to see that sort of thing when I was struggling with the issues I discuss. It is actually really cool and quite surprising to me that so many other people have found it interesting/helpful.

Tuesday, September 20, 2016

Things I wish I knew before I started mentoring students

As newly minted scientists, we are not trained in how to be effective mentors. Nevertheless, in almost every research environment, mentoring becomes a significant portion of the job. This is especially true of TT positions in higher ed. Alas, my only training in being an effective mentor came on the job. I was thinking about this recently, since it is student recruiting season at ProdigalU. So here are some things I wish I knew when I started:

  1. So many of my early decisions would be reactions to things I did not like about the way I was mentored. Sometimes consciously, sometimes not.
  2. Even though #1 is true, it was hard not to replicate aspects of my advisors' mentoring styles, since that is all I knew when I started.
  3. Although I knew (and was reminded by many, many people) that Prodigal as a student is not a good model to use in deciding how to mentor, it is really, really hard to act on this knowledge.
  4. My first students (brave as they were to pick an advisor with no track record and no one to ask about) would have a HUGE impact on my mentoring style. (So recruit wisely).
  5. Even though few people ever really have an idea of how to set up a group culture (and I was certainly in that boat), one will form anyway (even without my input) and it will stick around a long time. I really lucked out that my first set of students were serious, hardworking, and easygoing. They set the tone for the next rounds of students and on to today. I should have paid more attention to this, though I am happy with how things turned out in the end.
  6. My students are my best recruitment tool (luckily for me, since I think I have great students!)
  7. Just as my students will be linked to me forever, I will also be linked to them (see #5--hooray for great students). 
  8. Mentoring is much harder than it looks. It takes quite a bit of experience to figure out the best way to mentor a particular person, and no one strategy works for everyone. I am still learning. Some personality types will never click, and that is no one's fault. If that happens, it is even harder to be a good mentor.
  9. People need what they need, and sometimes that isn't me as a mentor, no matter what their science abilities are. If the mentor-mentee relationship is not working, it is best for all concerned to resolve the situation quickly. It does NO ONE any favors to pretend things are OK when they aren't, or that someone will get a PhD when they won't (at least not with me). I let things go on for way too long when this happened in my group.
  10. Recruiting is just as much making sure I can work with the student as it is attracting students to my group. 
  11. I ended up doing nearly as much mentoring about things outside the lab as inside (and not just career stuff either). I hadn't expected this.
  12. That I would be so excited to hear from group alumni (undergrads too). I wish I had stayed in better contact with my own mentors, now that I see how nice it is to hear about how things are going with my former students. I also wish I had told more of my students this when they were in my lab.

Friday, September 16, 2016

What can we teach our students?

There is an interesting discussion over at Drugmonkey's blog about whether it is possible to teach things like resilience and ambition to students. In my opinion, the answer is no. Similarly, I don't think it is possible to teach our students a strong "work ethic". By work ethic, I mean the desire to work/study hard to get results and to take pride in one's work, regardless of the public reward. By the time students get to us (typically 18 at the youngest), I think such traits are set. They may even be set to some degree at birth (that is, it comes more naturally to some people than others, and is harder to teach some kids than others).

The one thing these traits have in common is the DESIRE to use strategies that can be taught. Coping strategies can definitely be taught, but the desire to use them (i.e. resilience) can not. Career development strategies can be taught, but the desire to use them (i.e. ambition) can not. Work/study skills can be taught, but the desire to work hard/study hard and to take pride in one's work (i.e. work ethic, for lack of a better term) can not.

This is something I always suspected, but my opinion has been reinforced by the experience of raising my own children. It is really hard to convince some children (even if they are very young, even if you are providing assistance, even if you are modeling the next step) that they should want to cope with adversity instead of giving up. Other kids just jump right up and keep going without any additional input. For very young kids, I think it is possible to teach such traits (resilience, ambition, work ethic), but it certainly helps that with kids one can enforce something like resilience until it becomes more natural.

Becca has a great comment listing out things an advisor can do to help a group member who is facing adversity, but doing any or all of those things will not make a person WANT to continue on. Since moving to ProdigalU, I have seen students offered every possible assistance and quit anyway, and I have also seen students suffer with the double whammy of negative life experiences (research or otherwise) plus poor mentoring and still finish their degree and go on to great opportunities. Part of what I attempt to screen for when interviewing potential group members is persistence/resilience because it is so important in research (much more important than GPA!), and I don't think it can be taught.

I try to set up a supportive environment. I don't ever dress down students in public. When good things happen, I am happy with my students; when bad things happen, I try to help them get through it. I remind students that a rejection is about the opinions of a few people in the world, that many great ideas were first rejected (not that all rejected ideas are great, though :-), that business is business and personal is personal (and rejection is clearly business!), and that failing at something is not the end of the world. I share coping strategies, and I am clear about the ups and downs of life at ProdigalU (and of life at all the places I was before).

This goes for both my research group and my classes (to varying degrees, of course). My more resilient students probably don't need this (but hopefully it helps them feel supported). My less resilient students hopefully learn more about how to keep going. My non-resilient students quit (which may be the right thing for them to do--if they decide my field is not for them due to lack of interest, it is definitely the right thing to do). I am sure I fail some of my students, since I am not a perfect human being, nor am I the best mentor for every personality type. I know that I have helped students who wanted to keep going, but didn't see a way forward. I don't think I have ever convinced a student who didn't want to keep going to use the mechanisms in place to help them.

UPDATE: I haven't been at ProdigalU that long, and have a small group. All of my grad students thus far have left with a degree. When I talk about students quitting, it is students in my classes, or students I am a committee member for. 

Tuesday, September 13, 2016

Collaborations and team grants

Pretty much at every stage of my career there have been team grants of some kind, offered by institutions, states, federal agencies, industrial funders, what have you. The idea behind these sounds good: "Collaborations encourage creativity and interdisciplinary science, which is good. If we mandate that people must be co-PIs to get money, we can encourage collaborations." In practice, many team grants turn out to be ways for two or more PIs to fund their individual research under a theme that vaguely connects their groups, without really being a collaboration. Being honest, I've done this too. Money is money, and sometimes, even if we plan to work together when the funding comes through, there is already momentum on the solo projects, while the joint project has to start from scratch.

In my experience, successful collaborations rarely start out through formal mechanisms. Usually, I am talking to someone informally about my work (or theirs), and while chatting, we have an idea for an experiment/calculation/analysis that one of us can do on the other person's problem. Sometimes, my collaborations have started when my students have done something similar. In any case, all of my successful collaborations have started from one experiment and grown from there. In the best cases, the single experiment grows into a new approach on a problem that neither of us can tackle alone (even better if we can then get joint money for it!). Always, though, the working together starts BEFORE the joint proposal, not as a happy outcome of writing a joint proposal. 

I won't say that team grants are a terrible idea, since once a collaboration begins, they are an excellent way to fund research that might be hard to get funded in other ways. Nor do I think that multi-PI grants that end up as separate projects along a theme are necessarily bad. I just don't think that anyone is really served by pretending that team grants produce collaborative science, rather than being an opportunity to support something that is already there.

Thursday, September 8, 2016

ResearchGate, preprints, and open access

I am not much of a social media person, so I don't usually sign up for new social sites, even when work related. Recently, I set up a ResearchGate site for myself, mostly as an experiment. A colleague of mine swears by it, and claims that his citations have really increased since he started using the site. It is an easy experiment to try, so I just set up my page, and let it go. So far, nothing I have seen convinces me that more people are reading my publications, but it is early days yet. Also, I am not illegally uploading my publications, so there is that. If I am still blogging here, I'll revisit my ResearchGate experience in a year or so.

My field is not really one that uses preprint servers much, though I have no real objection to doing so. I much prefer to read the nicely formatted journal versions (when available), rather than the preprints on the arXiv myself. I think preprints are a good idea, but peer-reviewed journals serve a beneficial purpose, both in acting as gatekeepers for junk (and I think anyone who has done a review knows that) and in improving manuscripts. All of my own publications were improved during the peer-review process.

I have a few open access publications, and they are not cited at a higher rate than my other publications. All the data I have access to seems to confirm my own experience that it is not hard to get access to a paper, even if ProdigalU's library doesn't have it. Legally, even. Interlibrary loan works, but it is slow. Faster is to just email the corresponding author. In my experience, they are happy to send a pdf (which is usually fine under the licensing terms--I have never even come close to sending out as many as I am entitled to as an author). Not instant gratification, but still pretty easy access to the literature. And that is before the quasi-legal or downright copyright-violating methods. The actual fastest method is to just google the title, and often a non-paywalled link will show up. I just don't see access to publications as a major issue for most people with an Internet connection.

I get that open access is an issue near and dear to some people's hearts. I get that extortionate journals are a problem, and that libraries are being squeezed by publishers to take on (and pay for) crap journals they don't want in order to get the ones they do through packaging. The thing is, open access doesn't solve this problem, and it adds new ones. Quite frankly, I can't pay page fees. I just don't have the money. Even if I did, there are not many non-predatory open access journals in my field, so I would just be paying more money to those same journals already extorting my library. Furthermore, at this stage in my career, the work that can go into high impact journals needs to go into high impact journals (mostly for the benefit to my CV), which means no open access.

Thursday, September 1, 2016

More research shenanigans, this time at Duke

Duke University is being sued under the False Claims Act, a US law that lets whistleblowers receive a percentage of recovered funds when they can prove that someone or some institution has defrauded the Federal Government. The suit arises from the activities of Erin Potts-Kant, a biologist who has pled guilty to embezzling $25,000 in research funds and also has had to retract or correct over a dozen papers due to "unreliable data".

I find it really interesting that almost all of the quotes Science uses refer to the dangers such lawsuits pose to research institutions:

The Duke case “should scare all [academic] institutions around the country,” says attorney Joel Androphy of Berg & Androphy in Houston, Texas, who specializes in false claims litigation. It appears to be one of the largest FCA suits ever to focus on research misconduct in academia, he says, and, if successful, could “open the floodgates” to other whistleblowing cases.

Really? Research fraud and misuse of grant money is so widespread that all academic institutions should worry? I think that is much more frightening than the prospect of whistleblowers collecting damages from research institutions. I actually think it would be a good thing if institutions were held responsible for looking the other way as long as the research money is flowing. Researchers who harass their students, fake data, or defraud the government should not be protected. Universities should WANT to get rid of these bad actors. Maybe if they have to pay through the nose, they will.