As funders, communities, and evaluators become more knowledgeable about the root causes of racial and ethnic disparities in health, education, income, and other conditions of well-being, we begin to realize that community and systems change interventions are necessary to address these root causes. As a consequence, we evaluators find ourselves re-examining our roles, training, and competencies. Evaluations of these types of interventions do not only generate knowledge or inform investments; at their best, they also help strengthen communities and promote equity. In our evaluation of place-based work at Community Science, we see evaluators playing the following roles in addition to carrying out their technical responsibilities.
Evaluators as change agents. Information generated through evaluation is used to inform policies, strategies, and programs aimed at ending racial and ethnic disparities. When we actively promote an improved or alternative course of action based on the results, we have the potential to advance social justice and equity, and in doing so we assume the role of change agent. This means that evaluators have to be vigilant about how the inquiry process is designed and implemented, especially in order to situate the intervention within a broader context and examine it through a systems lens, so as to lift up structurally racist policies and practices that may have contributed to the inequity. This role, however, can be challenging in the following ways.
Evaluators as negotiators. Funders tend to invest in communities without sufficient attention or resources to address the turf issues or value differences that frequently exist among organizations in a community or place. Competition for resources and power, as well as where an organization is situated in a system, can affect how it engages with the evaluation and the evaluator. Evaluators may find themselves caught in a web of complicated relationships: among organizations that compete for grants; between established and emerging organizations, with the former holding more power than the latter; and among organizations with different values or that serve different constituencies. The evaluator faces the challenge of answering some formidable questions: Who represents or speaks for the community? To whom is the evaluator accountable? How can the various participants reach consensus? (Leviton, 2003). It can be difficult, if not impossible, for evaluators to remain separate from the dynamics that can affect the evaluation's implementation, outcomes, and impact. Consequently, evaluators find themselves having to play the roles of facilitator, broker, negotiator, and conflict manager on top of their role as evaluator. This role is essential, yet challenging, because:
Evaluators as capacity builders. In order to solve complex social problems, organizations have to implement strategies directed at community and systems change. For organizations that are used to delivering services, this is a shift in thinking, while organizations already engaged in systems thinking often struggle to identify and collect data on the right outcomes. As a result, evaluators have to work with organizations to build their capacity to link their strategies to the desired outcomes (i.e., through logic modeling, a process that program implementers usually dislike); to design and implement a monitoring and learning system that can bridge program, evaluation, and social change; and to engage in a community of practice in order to promote peer exchange about evaluation within what is usually perceived as a competitive environment for resources. The field of evaluation capacity building has grown tremendously in the past several years (Wandersman, 2014), yet we continue to struggle with the following in building the capacity of public, private, and nonprofit organizations to use research, evaluation, and data:
Evaluators as critical friends to communities and one another. The field of evaluation has been exploding in the last few decades as advances in information and technology contribute to new evaluation models, proprietary tools, and marketing and branding strategies. Evaluators compete with each other for contracts and for the ability to influence funders, but they may end up paying more attention to promoting their models and tools than to helping stakeholders embrace and use evaluation. As a result, evaluation consumers find themselves weighing choices about which method is best for their intervention, how much it will cost, and how it will benefit them in the long run, much as they would when buying a car, and usually they are choosing in order to comply with their funders' requirements. These consumers are not equipped with the knowledge to make informed choices and, consequently, may end up making the wrong decision. Fast-forward a couple of months, and another evaluator enters as the cleaner, with the challenging task of fixing what went wrong. This situation happens more frequently than people would like to think. Chavis (2003) discussed how, as evaluators, we can be our own worst enemies in perpetuating the above situation, because we tend to:
So, how do we avoid the cauldron?
We cannot, because, as Saul Alinsky, a major thought leader in community organizing, asserted, "Change means movement. Movement means friction. Only in the frictionless vacuum of a nonexistent abstract world can movement or change occur without that abrasive friction of conflict."
Evaluators need to speak truth to power. In fact, we need to understand and address power and our role in promoting change or preserving the status quo. As change agents, capacity builders, facilitators and negotiators, and critical friends, roles that are not mutually exclusive, and as long as we work in communities and attempt to evaluate interventions designed to ameliorate social problems, we will always find ourselves in a cauldron where conflicts are inevitable. Rather than avoid or dismiss these conflicts, we need to learn how to deal with them.
We at Community Science have learned a lot about our role in evaluation, sometimes the hard way. Nevertheless, we remain optimistic that by continuously raising the hard questions, at the risk of sounding like a broken record, and challenging ourselves and our colleagues, we can develop a fireproof suit that will reduce the heat.
The American Evaluation Association’s Guiding Principles exist to help us remember that our job as evaluators is to make the world better, and a just and equitable world is a better world. As Alinsky’s words remind us, we do not exist or operate in a vacuum, and we are not value-free. A commitment to addressing the root causes of inequity and injustice does not make us any less truthful or precise in our analysis.
So, what can we do? Perhaps we can start by:
Chavis, D. (2003). Looking the enemy in the eye: Gazing into the mirror of evaluation practice. Harvard Family Research Project, IX(4).
Chavis, D., & Lee, K. (2015). What is community anyway? In P. Tamber, B. Kelly, L. Carroll, & J. Morgan (Eds.), Communities Creating Health. London: Pritpal S. Tamber Ltd. in association with Stanford Social Innovation Review.
Leviton, L. (2003). Commentary: Engaging the community in evaluation: Bumpy, time consuming, and important. American Journal of Evaluation, 24(1), 85–90.
McClintock, C. (2003). The evaluator as scholar/practitioner/change agent. American Journal of Evaluation, 24(1), 91–96.
Wandersman, A. (2014). Moving forward with the science and practice of evaluation capacity building: The why, how, what, and outcomes of ECB. American Journal of Evaluation, 35(1), 87–89.