09 August 2011

How can Researchers Maximise their Impact?

Why am I recommending a 298-page “Handbook for Social Scientists on Maximising the Impacts of Your Research”? In fact I was going to do a blog about how silly it is to have such a long guide to building impact, but the Handbook, from the LSE Public Policy Group (consultation draft 3), is really good. Lots of evidence and data analysis is provided, stratified by discipline and by researchers’ level of seniority, all written up in clear (if not pithy) prose.

The Handbook argues that research impact comprises academic impacts (within the research world) and external impacts (those outside higher education). It defines impact as an “occasion of influence”, not as a change in outputs or activities as a result of that influence, nor as a social outcome (would this definition be challenged by the UK Research Councils?). The Handbook states that “verified causal links from one author or piece of work to output changes or to social outcomes cannot realistically be made or measured in the current state of knowledge”.

Personally, I think we can push it further than this, given a concerted research agenda (see the end of this blog), but then I have always been bullish about this issue.

The first part of the Handbook focuses on citations (the average article in the social sciences and humanities is cited less than once a year! p.24): what shapes them (in the first few years book citations per year are lower than those for articles, but they are also more enduring); how to track them (I had not heard of Harzing’s Publish or Perish software, but it is supposed to be very good for social scientists); how to increase them (have a distinctive name, choose memorable titles that also summarise your argument, and co-author, preferably with people outside your organisation and country); and how to assess citation performance (e.g. the h-index, where you have an h score of 10 if you have 10 articles with at least 10 citations each--the average for social science researchers is 3-4!).
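The h-index definition above is mechanical enough to sketch in a few lines of code. This is my own illustrative function, not something from the Handbook:

```python
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts.

    An author has an h-index of h if h of their papers have at
    least h citations each (and the rest have no more than h).
    """
    h = 0
    # Rank papers from most- to least-cited; the h-index is the
    # largest rank at which the citation count still meets the rank.
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# e.g. h_index([3, 0, 6, 1, 5]) → 3
# (three papers have at least 3 citations each)
```

So a researcher with ten papers cited ten times each scores exactly 10, matching the example in the text, while the 3-4 average quoted for social scientists means only three or four of a typical researcher’s papers clear that bar.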

The second part focuses on maximising external impact. The Handbook uses a simple model of research discovery, integration, application and renewal as a necessary condition for generating impact. It then discusses how these four activities are spread across the five main demands on academics’ time (research, academic citizenship, academic management, teaching, and dissemination and impacts work). Bridging scholarship, across disciplines and organisations, is vital for this. Also vital are “impacts interface” organisations such as think-tanks, the media, professional associations and consultancies. The “impact gaps” lamented by some users can be traced to mismatches of incentives, culture, and demand and supply, to weak communication, and to poor social capital. Ways of resolving these are highly context specific, with exchanges likely to be important in many contexts. The Handbook’s framework for identifying academics who achieve external impacts includes six behaviours: they (a) are academically credible (that is a relief), (b) have good networking skills, (c) have good personal communication skills, (d) have an external profile, (e) are experienced/a safe pair of hands, and (f) have a track record of influence.

At the organisational level the Handbook argues that: it is the tacit knowledge of research teams and institutes that has the highest impact, not the explicit knowledge they pump out; commissioned work can shorten time lags for impact; it is vital to collect impact data systematically in customer relationship systems; stretching for impact does not necessarily lead to a loss of academic independence, but it could, so make sure this risk is prominent on risk registers; “information wants to be free”--maintain an online repository; and researchers should write blogs (cut out the middleman), but rely less on single-author blogs, because readers want something new every day and single-author blogs cannot deliver this (unless you are Duncan Green).

I found the first part of the Handbook more interesting than the second part (which seemed more obvious to me, perhaps because at IDS we worry about these things and don’t have to be as scientific about citation indices).

What really struck me, however, was how thin the evidence base is on how research influence is defined, assessed and shaped. There is a really interesting research agenda out there on what affects the uptake of research (we are doing a project on how the framing of a research report affects what policymakers retain).

Finally, the Handbook, even as a reference tool, needs to apply the principles it espouses if it is to have an impact of its own. It needs to be shorter, with more real examples and quotes from users, aggregators and producers of research. It needs to draw more on that tacit know-how and less on explicit (but abstract) knowledge.
