Researchers are spending more time than ever on applying for
research funding. A survey conducted in the Netherlands indicates that this now fills 20% of their time. Scholars write more and more project proposals, while
the success rate has dropped below 15%. In the process, good proposals are also known to get rejected.
The study also showed that 10% of scholars, approximately 20 big names, receive more than 60% of the available free
funding (Van Calmthout, 20-21.2.2015, de Volkskrant). They often work in the medical
sciences, biosciences, or physics, in large consortia of specialists. Other
scholars are considered lucky with a few million in a lifetime, while many struggle
to get any funding at all, especially in languages and the social sciences. This concentration
was intended, but do the happy few really yield results 100 times as good per euro invested as
the others?
In Finland, too, a sharp
increase in the time invested in funding applications has been noted. Deans push scholars to write more
and more applications for national funding, while this collective action drives
the chances even lower, since the budget as a whole does not grow. American research shows that writing one proposal costs, on average, a full month of work. EU
funding through Horizon 2020 was promoted so heavily that competition
has increased significantly compared to 5 years ago. As the much larger number of applications
also requires many new reviewers, some universities push less experienced
researchers to become reviewers in order to ‘learn the trade’. This, in addition to the tendency to choose
familiar names or to favour one’s own stream or approach, explains why good applications
can also be rejected.
So far I have been lucky: in the last 8 years I have had, next to national funding, 3 EU-funded projects, twice
as consortium leader. But we also lost a good application because the reviewers
mistakenly thought that we had ‘misunderstood the call’. The call text focused
on cultural factors, while the review said economic factors should have been
taken into account. Moreover, the reviewers completely missed the fact that we
used the most advanced approach, which was indeed broader than cultural factors,
as we research diversity rather than only cultural diversity. Needless to
say, in the Security area all kinds of reviewers are involved, including those with a technical
background, while this particular call required a
social sciences approach. Security calls are currently
dominated by technology anyway, often producing products that may seem commercially
promising but do not resonate with the needs of civil society. Reviewers
check whether human factors are included, but do not notice that this hardly gets any
budget within the project.
I also lost an ERC application
because one of the reviewers said: “I am not an expert in this discipline, but I
feel that this field is not useful”. The project officer coordinating the
process should have taken notice of this remark and discarded the review, and
the reviewer should in the first place have passed the review on to someone
qualified in the field. ERC calls work with more specialised reviewer teams,
but one might still encounter a reviewer who thinks “organizational communication is about manipulating”,
whereas the project was actually about societal problem solving, as another, enthusiastic reviewer recognised; yet the average of the scores is taken.
The system depends on the
quality of peer review, while the growing number of applications means we currently need too many new reviewers. More importantly, the costs of this system
in researcher time are becoming very high. The system has been
successful in achieving concentration in financially intensive research areas.
However, it is now time to ensure that innovation in science is also stimulated. The current system reproduces
the big names we already have, places too much emphasis on commercial impact, and prevents
new disciplines and approaches from breaking through.