The NIH Simplified Review Framework 

January 2025

Investigators: are you ready for the changes to NIH peer review, in effect for grants with due dates on or after January 25, 2025? 

What's called the 'Simplified Review Framework' will now cover nearly every mechanism, including most research, fellowship and training grants.  Small business and multi-project grants are not yet covered.  So academic investigators, this is for you.

Although there will be very little difference in what you submit, there will be significant changes behind the scenes. Reviewers will be trained to see your work differently, and this change of lens may require a new competitive strategy.  

Why NIH grant review has changed

In case you missed it, the review framework has been under the NIH microscope for the last 3 years. According to the agency, peer review has changed to address the complexity of the process and the potential for reputational bias to affect review outcomes. Let's look at both:

 Complexity

Investigators have long complained about the raft of review criteria plus the ever-increasing burden of compliance documentation. With panels spending less than 10 minutes of verbal discussion per grant, it's difficult to justify time spent on things that distract from the core science. Some have even argued that more and better research would be done if we abolished peer review and assigned funding at random [1]. But that's another newsletter.


 Reputational Bias

Whether positive or negative, reputational bias can have a massive impact on review outcomes. And up close, it looks ugly. I can't tell you how many times, when criticizing an objective flaw in a grant proposal, I was told by another panel member: 


"He's a workhorse, he'll take care of it."

"She published 20 papers last year, she knows what she's doing."  


And so on.  In effect, all criticism is baseless because Dr. X can do no wrong.  Can't you see the quality-halo that negates all their errors?  

Even though we were trained not to have these conversations and scientific review officers (SROs) were trained to nip them in the bud, what was supposed to be scientific review often turned into a reputational pissing contest fueled by 'social proof' from one or more panel members. Sad, but true. 


The NIH is right to take up both of these causes.  Let's see what has changed and if the changes address the stated goals.

What has changed

  Changes to what you submit:

The instructions are an IQ test.    

Thou shalt read and follow all instructions.


  Changes to reviewer training:

Changes to who reviews what:

In a major break for reviewers, some of the compliance documents that used to be on the table in panel discussion will now be handled by NIH staff, and only if the grant gets a percentile score in the ballpark for funding. But you still have to write these documents, so that NIH does not have to chase down your details prior to funding. Polish them well.

Changes to impact scoring

The 5 familiar scoring criteria are bundled into 3 new 'factors'. Factors 1 and 2 are scored; Factor 3 is not. This reduces reviewers' scoring tasks from 6 items to 3: Factor 1, Factor 2, and the overall Impact Score:  

[Image: the five review criteria regrouped into three factors. Source: Fractional Investigator Services LLC]
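For readers who can't see the figure, here is a minimal sketch of how the regrouping shakes out, written as a small Python snippet purely for illustration. The factor names and criterion groupings below are my paraphrase of NIH's published description, not official text, so verify them against the current review guidance.

```python
# Illustrative sketch of the Simplified Review regrouping (paraphrased from
# NIH's published description; verify against current guidance).

OLD_CRITERIA = ["Significance", "Investigator(s)", "Innovation", "Approach", "Environment"]

FACTORS = {
    "Factor 1: Importance of the Research": {
        "criteria": ["Significance", "Innovation"],
        "scored": True,    # 1-9 numerical score
    },
    "Factor 2: Rigor and Feasibility": {
        "criteria": ["Approach"],
        "scored": True,    # 1-9 numerical score
    },
    "Factor 3: Expertise and Resources": {
        "criteria": ["Investigator(s)", "Environment"],
        "scored": False,   # rated acceptable / not acceptable, no number
    },
}

# Old workload: 5 criterion scores + 1 overall impact score = 6 numbers.
# New workload: 2 factor scores + 1 overall impact score = 3 numbers.
old_tasks = len(OLD_CRITERIA) + 1
new_tasks = sum(f["scored"] for f in FACTORS.values()) + 1
print(old_tasks, new_tasks)  # -> 6 3
```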

Key takeaways:  

For Factor 3, reviewers will not provide a numerical score but will indicate, using a drop-down menu, whether they find the expertise and resources to be acceptable or not. They can also comment on Factor 3 in their summary statement, addressing strengths and weaknesses. It's still fairly vague how this will play out:

 "Factor 3 and Additional Criteria are not scored.  However, reviewers will consider them when assessing the overall scientific and technical merit of the application. Reviewers will be trained on how to assess these criteria prior to review. "  (Source: NIH FAQs: Simplifying Review)

Changes to percentiling and funding decisions

There are no proposed changes to percentile calculations. To review: a grant's percentile is calculated by taking the overall impact score and ranking it against all the scores generated by that study section over the last 3 review cycles. This adjusts for variation between study sections in how broadly or narrowly they spread the impact scores (1-9) they assign. 
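To make that concrete, here is a minimal sketch of the rank-to-percentile idea using made-up score lists. The actual NIH calculation differs in its details (tie handling, exactly which applications form the base, rounding), so treat this as an illustration of the principle, not the official formula.

```python
# Illustrative percentile calculation: rank a grant's overall impact score
# against all scores from the same study section over the last 3 cycles.
# Made-up numbers; not the official NIH formula.

def percentile(impact_score: float, base_scores: list[float]) -> float:
    """Lower impact scores are better; rank 1 is the best-scored grant."""
    rank = sum(s < impact_score for s in base_scores) + 1
    return 100 * (rank - 0.5) / len(base_scores)  # common rank-to-percentile conversion

# A study section that spreads its scores widely vs. one that bunches them:
broad_section  = [1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0]
narrow_section = [2.6, 2.7, 2.8, 2.9, 3.0, 3.0, 3.1, 3.2, 3.3, 3.4]

# The same raw score lands at different percentiles in each section, which is
# exactly the between-section variation that percentiling adjusts for.
print(percentile(3.0, broad_section))   # 35.0
print(percentile(3.0, narrow_section))  # 45.0
```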

It's for Congress to determine the level of funding for the Department of Health and Human Services (HHS), of which NIH is a part. Individual institutes decide whether or not they set 'paylines'. Some numerically non-competitive grants can still be funded through the largesse of program officers, and some competitive ones can still miss out.  

Is complexity reduced?

Perhaps. The transfer of compliance review to NIH staff will allow more focus on the science. But the existing scientific criteria have been put into new buckets with new names, creating another layer of vocabulary. Instead of 5 scores there will be 2, but it's important to realize that these sub-scores are only guidelines, not formulas, for the overall score.

So as a reviewer I am still going to read the entire proposal and evaluate its global strengths and weaknesses. Reading is the heavy lifting. Coming up with a number is trivial, so little effort is saved if I generate 2 scores instead of 5.

The five separate scores were, however, reported back to the investigator, usually at the top of the summary statement. And if 4 scores were strong and one was weak, that told you exactly what you needed to work on.  

In the Simplified Review, investigators will get fewer scores, so it will be up to PIs to read the commentary to find the areas that are weakest. But if you are like me, you will obsessively read all the commentary anyway.


Is reputational bias minimized?

Not likely.  

Given the community complaints about this bias and my own experience watching it contaminate reviews, I find the official language on this point rather bland:


"NIH will manage peer review to ensure that reviewers follow guidance for Factor 3 and focus on expertise and resources as it relates to the proposed science, not general accomplishments."


Sadly, this seems no different from business as usual.

I expect that reputational bias will still affect peer review decisions, but in a highly sporadic manner depending on the fortitude of the panelists and the diligence of the SROs. And that's because scientists are humans, and humans are social primates that respond to signals about dominance hierarchies; no administrative ordinances will change that fact.


Source: NIH FAQs: Simplifying Review

Are there unintended consequences?

Unclear. We're in a phase where no reviewer has deep practice with this type of scoring system. If grant review were a math problem, we would assign it to a quantum computer. But it's not. It's a social convention, like a dance, so groups of scientists need to practice their moves.  

Strategic Summary

Government rarely changes except through stakeholder outcry, and even then very slowly. Pressure builds, as along a geological fault line. Eventually there's a tremor, but it's not yet possible to measure this one on the Richter scale. Although numerical complexity appears slightly reduced, the possibility of reputational bias affecting overall impact scores will remain unless it is rigorously opposed by the 'resident immune cells' of panelists and SROs.



If you liked this blog post, here's another: How to Make kanban 看板 for the Research Lab