Lessons in Evaluating Communication Campaigns: Five Case Studies

Summary



by Julia Coffman (julia_coffman@msn.com)

Harvard Family Research Project (HFRP)

June 2003



This paper, the fourth in a series of projects commissioned by the Communications Consortium Media Center, examines the evaluation of communication campaigns. The author explains that previous research on the field of communications evaluation suggests that:

  • Evaluation practice needs to keep pace with campaign advancements: "the actual practice of, and knowledge base about, nonprofit communications evaluation simply has not increased at a concomitant or satisfactory rate..."
  • More research is needed about how to evaluate campaigns that aim to change policy.
  • Evaluators should focus less on counting their own accomplishments (the number of press releases, for instance) and "push ourselves to focus more on the outcomes of those efforts for the audiences we are trying to reach".
  • Evaluators should keep design options open: "Sometimes the need is not to determine whether the communications effort definitively caused later effects, but to learn how to do the work better. This means opening the range of evaluation possibilities up to more than just the traditional research paradigm in which the evaluation remains separate from the campaign and does not contribute to learning or continuous improvement."

The author then defines the two main campaign types - individual behaviour change and policy change campaigns. In this context, she discusses general theories of change, which describe or represent how a campaign has been designed to achieve its stated outcomes. These theories, she suggests, "can be a critical driving force in decision making about an evaluation's design". The paper then provides case studies of completed evaluations of the following United States-based campaigns: Safe Gun Storage Campaign, Stop It Now! Vermont, Voluntary Ozone Action Program, Community Trials Project, and The Devolution Initiative. Each case study features the theory of change undergirding the campaign, the focus and methods of the evaluation, the overall design and analysis used to elicit findings, a synopsis of key findings, and lessons learned from the evaluation experience.


Excerpts from Part V. Cross-Case Study Lessons

"...There are countless challenges that make evaluation difficult and progress incremental....Characteristics of the evaluations featured that should be more prevalent in this field include:

  • Evaluation based on a sound (and where possible research-based) conceptual model of how the campaign will achieve social change (e.g. theory of change).
  • Front-end research that informs where the campaign and the evaluation should focus (too often it is only the former or neither).
  • Recognition and acceptance that different evaluation needs and capabilities require different evaluation designs (and that causation is not always the most important question). But at the same time, recognition that leverage to convince sponsors to invest in campaigns will be enhanced by causal analysis of campaign impacts.
  • Recognition of the interpretive boundaries associated with specific evaluation designs (and then staying within those boundaries).
  • A focus on evaluation for learning and continuous improvement (often related to a collaborative relationship between the evaluation and campaign staff).
  • A commitment by campaign staff to be resourceful when necessary and find ways to track data on their own when possible.
  • A push for methodological innovation and rigor when possible, and not always relying on the same approach (i.e., random sample or repeated sample pre- and post-campaign surveys).
  • A recognition that evaluation can respond to the hard-to-answer questions (e.g., whether attention to social context is a necessary ingredient of behavior change, or whether media advocacy can contribute to policy change)."

The full paper is available as an HTML version (part by part) and as a PDF.