From research to influence

In this module, we examine how influence actually happens. Policymaking is political, contested and rarely linear. Evidence is only one input among many.

We explore the concept of research uptake, different forms of influence, and why measuring impact is often difficult. Influence may take indirect, delayed or unexpected forms — and recognising this complexity is key to setting realistic expectations and strategies.

The significance of MEL for policy influence

Introduction 

Literature on how to monitor, evaluate and learn about policy influence is abundant. However, because influencing policy is such a complex, long-term and unpredictable process, some researchers and practitioners wonder whether it’s worth investing energy and resources into a systematic assessment of policy influencing efforts. In addition, some monitoring, evaluation and learning (MEL) activities can arouse apprehension, especially if they are perceived as an accountability exercise or a control mechanism.  

Our view is that incorporating MEL into the daily life of any organisation is well worth it. A smart and proportionate use of MEL tools, especially as part of a well-thought-out MEL plan, can help organisations to:  

  • Reflect on and enhance the influence of their research on public policy. 
  • Satisfy their (and their donors’) interest in evidencing the uptake of research in policy. 
  • Build their reputation and visibility, and attract more support for their work. 
  • Generate valuable knowledge for all members of the organisation. 
  • Reorganise existing processes for data collection so that they can be useful for real MEL purposes, and discard processes and data that are not useful. 

Why develop a MEL system? 

Being clear about why a think tank wants to do an evaluation (and MEL more widely) is a key first step.  

Consider the following questions about a think tank: 

  1. Does the think tank need to inform its donors and key stakeholders on the impact it is having? 
  2. Does the think tank want to strengthen and improve the way in which it implements projects? 
  3. Does the think tank want to make better decisions on the organisation’s strategic direction and/or its programmes? 
  4. Does the think tank want its staff to have more and better knowledge to improve the way it goes about influencing policy? 
  5. Does the think tank want to empower its members through greater consensus and commitment to the objectives? 

Five reasons to undertake MEL

In its toolkit on monitoring and evaluating policy influence, CIPPEC (2013) proposes five reasons why an organisation might undertake MEL. Note how these reveal the ways in which MEL activities could help organisations achieve the kinds of objectives we have just considered:

  • Accountability: To provide donors and key decision-makers (e.g. the board of directors) with a measure of the progress made in comparison with the planned results and impact. It can additionally be used as a cost–benefit tool to inform funding decisions.
  • Support for operational management: To produce feedback that can be used to improve the implementation of an organisation’s strategic plan. When putting a strategic plan into practice, a monitoring and evaluation system helps detect the elements that are unhelpful, obstruct work or simply need to be reviewed and adjusted to improve the organisation’s operational management.
  • Support for strategic management: To provide information on potential future opportunities and on strategies that may need adjusting in light of new information. A MEL system can shed light on aspects of the strategic plan that need to improve (e.g. aspects not included so far that might be worth incorporating ‘now’). This offers a clearer view of where, strategically speaking, to focus attention.
  • Knowledge creation: To expand an organisation’s knowledge of which strategies tend to work under different conditions, allowing it to develop more effective strategies for the future.
  • Empowerment: To boost the strategic planning skills of participants, including members of staff engaged in the programme and other interested parties (including beneficiaries). The MEL process increases acceptance of, and commitment to, shared objectives, and creates an environment in which future activities have greater chances of making a positive impact.

These objectives – and the reasons for undertaking MEL activities – can be more or less applicable to organisations depending on their individual characteristics, experience, leadership interests and values, and so on. The important thing is for an organisation to be clear about its own reasons for the MEL effort, since the strategies and methodology chosen will vary according to the type of knowledge that is needed and how it will be used. 

Note that there can be arguments against too much MEL. Here are five reasons why a think tank might choose not to undertake monitoring and evaluation, or to avoid doing too much of it, especially when it becomes a compliance-heavy exercise rather than a learning tool:

  1. It can become a major time-sink with low returns: MEL often has a “bad reputation” because it’s experienced as difficult, boring and slow, with staff pulling information together for ages – sometimes only because a donor asks or a board meeting is coming.

  2. The metrics can be misleading (giving false confidence or false alarms): OTT notes that “technological data can be misleading” – even basic measures like views, counts and unique users can be distorted by how platforms measure activity. If you over-rely on these numbers, you can end up “proving” things that aren’t true.

  3. You may measure the wrong things and conclude you have “no impact”: OTT’s communications MEL guidance warns that what you measure must match how you seek influence. For instance, if you’re a convenor, heavy media monitoring might miss your contribution (journalists quote speakers, not the organisation that created the space), making it look like you achieved nothing.

  4. It can turn into funder-driven accountability theatre, crowding out learning: OTT explicitly cautions that using monitoring data only to satisfy donor requirements or annual reports is a “missed opportunity”, because the main benefit should be learning and adaptation. Relatedly, OTT observes that some think tanks may end up worrying more about accountability to foreign funders than to domestic audiences, and that “bottom-up learning” can start to feel like evaluation done only for accountability.

  5. It can reinforce the “myth of influence” and encourage over-claiming (or even distort research choices): OTT argues that think tanks often tell neat stories of policy influence, but when scrutinised these claims “fall short”, because many other forces shape decisions (agendas, interests, public opinion, luck, etc.). Over-investing in MEL designed to “prove influence” can incentivise storytelling over truth. OTT also warns (in the evidence–policy debate) that pressure for “uptake” can push researchers towards “policy-based evidence making” – a risk when measurement becomes a driver of what gets researched and said.

Different purposes and reasons for MEL will require different approaches to the activities. For example, a MEL framework focused on learning what works and what needs to change will pay attention to ‘why’ the initiative was developed and ‘for whom’, with some concentration on ‘what conditions’ are needed to influence policy. For this purpose, drawing on methods like developmental or realist evaluation can be useful. But when conducting MEL for donor reporting, for example, the focus will be on ‘whether’ something worked. It will be geared towards demonstrating success and evidencing change.

Assessing what matters: Is it possible to monitor and evaluate policy influence? 

Assessing the impact of research on policy is complex – and it’s important to acknowledge the inherent limitations and challenges of trying to do so. Research and evidence are just one of many inputs, and their impact depends on how they compete with other ideas and other influencing efforts. That’s why linear or rational models – where research leads directly to solutions that policymakers adopt – are too simplistic. A more intricate model that considers the interactions among various stakeholders and external factors reveals the complex realities of multiple decision-making arenas.   

What aspects of policy influence efforts can be effectively assessed? To address this key question, it’s crucial to first broaden the definition of ‘success’ beyond the strict achievement of policy change. Mendizabal (2013c) offers useful discussion points for defining the broader scope of policy influence: 

  • Research uptake is not always ‘up’. Not all ideas flow ‘upwards’ to policymakers. For most researchers, the most immediate audience is other researchers. Ideas take time to develop, and researchers need to share them with their peers first. As they do so, preliminary ideas, findings, research methods, tools, and so on flow in both (or multiple) directions. ‘Uptake’, therefore, may very well be ‘sidetake’ – researchers sharing with other researchers. By the same token, it could also be ‘downtake’. Much research is directed not at high-level political decision-makers but at the public (for instance, public health information) or practitioners (such as management advice and manuals). 
  • Uptake (or sidetake or downtake) is unlikely to be about research findings alone. If the findings were all one cared about, research outputs would not be more than a few paragraphs long. Getting there is as important as the findings. Methods, tools, the data sets collected, the analyses undertaken, and so on, matter as well and may be subject to uptake. The research process is important too, because it helps maintain the quality of the conversation between the different participants of the policy process. In essence, policymakers need to understand where ideas come from. 
  • Replication is uptake too (and so is inspiration). Consider the inter-generational transfer of skills. Much of the research conducted in universities and think tanks can be used to train new generations of researchers or advance a discipline or idea – and this counts as uptake. Writing a macroeconomics textbook or a new introduction to sociology, for example, is as important as producing a policy brief. The students who benefit from these research outputs are likely to have an impact on politics and policy in the future – something that is nearly impossible to measure in the here and now.
  • It is not just about making policy recommendations. The purpose of research is not only to recommend action. Researchers, including those in think tanks, often influence by helping decision-makers understand issues rather than pushing for specific actions. To gauge research uptake, it’s essential to consider all the functions of think tanks: agenda-setting, issue explanation, popularising ideas, educating the elite, debate facilitation, critical thinking development, public institution assessment, and so on.  
  • Dismissal is uptake too. Uptake is often equated with doing what the paper recommends. But research does not tell anyone what to do; it informs stakeholders about situations, alternatives, and future effects rather than dictating actions. It’s up to policymakers to make the decisions, and researchers (and donors) shouldn’t anticipate that research alone will drive change. 
  • Uptake is ‘good’ only when the process is traceable. Good uptake happens when good ideas, practices, and people are incorporated into a replicable and observable decision-making process. The goal should be good decision-making capacities, not just good decisions. The latter, without the former, could be nothing more than luck. And in that context, bad decisions are just as likely as – if not more likely than – good ones. Bad decisions can be lived with – but poor decision-making processes are unacceptable. And worse still is keeping these decision-making processes out of sight. 

There’s extensive literature on evidence in policy-making, and various frameworks assess evidence-informed policymaking (EIPM) by examining both supply and demand factors, including policymakers’ ability to use evidence, and evidence quality, relevance, and timeliness. Think tanks typically operate on the supply or intermediary side, collecting or translating evidence for policymakers. A MEL framework for think tanks should therefore emphasise these EIPM aspects, but also consider other factors influencing the outcomes of their engagement with policymakers. 

Understanding think tank communications

This section introduces the broad (and important) topic of communications for think tanks. In the past, think tanks were used to being found by audiences who went looking for them. But the emergence of the digital space has changed this. To paraphrase Connery (2015), today audiences expect information to find them. This means think tanks now have to diversify how they reach their audiences.

We begin this chapter with a discussion on how to understand communications in a modern think tank. We consider different approaches to communications and present a tool for monitoring and learning from communications.

We then look at communications outputs and channels for think tanks, and discuss new approaches to publishing research in a digital world.

The next section focuses on writing to inspire policy change, sharing tips for good writing in a digital age and how to craft effective messages.

The final section dives into data visualisation, looking at ways to engage audiences with research data, the different types of data visualisation and what it takes to do it well.

The importance of communications

Communications is too often treated as a relay race: once the research is done, it is handed over to a communications person or team to put it into a template and send it out through the same old channels.

But think tank communications is much more than this. And it starts at the research planning stage. It is strategic, helping to define audiences and policy goals from day one. It’s an art of unearthing the research narrative, of shaping messages, and of choosing the right formats, channels and tools to reach and engage your audiences, and ultimately achieve your goals.

After all, even the most high-quality, robust and credible research won’t have an impact if it doesn’t reach the right people, at the right time, and in a way that they can understand and connect with. As Jeff Knezovich (2012) argues, ‘a policy brief is a piece of paper, it doesn’t DO anything on its own’.

Richard Darlington’s (2022 [2017]) article ‘Defying gravity: Why the “submarine strategy” drags you down’ describes how traditional research teams have tended to ‘submerge’ to the bottom of the ocean to conduct their research and analysis, thinking deeply, alone. When finished, they pop up to the surface – often with a 100-page report and some policy recommendations. Most think tanks today recognise that a submarine approach won’t work. But the approach is not always deliberate: according to Darlington, it is what happens by default when there isn’t a communications strategy.

Modern think tanks must embed communications into their teams and their work from the beginning if they want to have an impact with their research.

Communications as an orchestra

Enrique Mendizabal (2015) has described think tank communications as an orchestra. Rather than thinking about communications through a project-based approach, think tanks should treat their communications as an organisation-wide effort to maximise their chances of informing policy and practice.

Mendizabal believes that a think tank must develop three things:

  1. A portfolio of communications channels.
  2. A communications team with clear ownership over those channels.
  3. Tactics or rules to use these channels and resources strategically.

In Mendizabal’s orchestra model, the head of communications is the conductor: coordinating the different channels, ready to bring the right instruments into play as windows of opportunity arise.

Rather than communications staff being project-based, they should be specialists – developing and honing their skills in events, digital media, publications, and so on (much like the different instrument sections of an orchestra).

Monitoring and learning from your communications

Communications monitoring, evaluation and learning (MEL) often starts and stops with reporting on download statistics or retweets. But these numbers give us only a fraction of the picture. They don’t tell us anything about how someone uses your work – or what you could do differently next time to improve your communications and impact.

The communications monitoring, evaluation and learning toolkit by Caroline Cassidy and Louise Ball (2018) suggests that think tanks look at two areas:

Strategy and management 

You can’t monitor, evaluate and learn from communications if you don’t know what you were trying to do in the first place. So, you need a good plan. To monitor and learn from your communications, you should ask: Did we have a plan for this piece of work, and did we follow it? What can we learn for next time?

Answer these questions in a quick after-action review or meeting, making sure you note down any lessons for next time.

Outputs

There are three dimensions to consider:

  • Reach. How many people did you reach? (Most evaluations focus on this because it’s the easiest to track using analytics.) But also, did you reach the right people?
  • Quality and usefulness. Was it factually accurate, well written and grammatically correct, with clear messages? How did users receive and perceive it?
  • Uptake and use. This is the hardest to measure, but you can begin by recording anecdotal evidence and feedback, gradually building a picture of how and by whom your work is being used.

The toolkit breaks down key questions and indicators to measure each of these elements.

Having a simple MEL system for your communications function is a great way to start building an evidence base for what works, and to make the case for additional communications resources (Ball, 2018).

References and further reading 

Contact

If you would like to find out more about the OTT School's learning opportunities, please email us: school@onthinktanks.org