“Information is power,” said Deputy Mayor Kanika Tomalin of St. Petersburg, Florida, in her welcoming speech at the annual Community Indicators Consortium conference in November 2017. She was referring to that year’s conference theme—Information is Power: Data to Support Community Change—under which hundreds of data-driven activists gathered to discuss the importance of community data for addressing inequity. The validity of data and the reliability of information sources have never been as crucial as they are now, especially amid accusations of “false news” and the idea of “alternative facts.” Real-time transmission of information has made it easier for elected officials, leaders, celebrities, journalists, newscasters, and the general public to share their views, sometimes without regard to facts or to the impact of those views. This includes information about policies, programs, and communities. Consequently, it is hard for the public to know what is accurate and what is hype, because headlines on news networks, Twitter, YouTube, and other media outlets tend to set the narrative, sometimes before the facts are confirmed.
Thus, it is crucial that the people working tirelessly to create equitable and just communities can gather and interpret their own data, decide for themselves what is and is not working, and determine how to improve their efforts. This is what evaluation capacity building (ECB) is about—facilitating the capacity of organizations to test ideas, identify what works and what doesn’t (i.e., to evaluate), and use that knowledge to improve strategies, advocate for change, and promote equity and social justice.
On the surface, ECB doesn’t sound that complicated. The current literature on the subject points to understanding organizations’ human, financial, and technical capacity in order to build the knowledge, skills, and systems they need to design and implement evaluation. However, as much as the process of ECB can build understanding, it can also stir up disagreement and conflict and either perpetuate or challenge power differences. The most beneficial ECB promotes equitable change: skilled people and organizations use evaluation information to break down the systemic barriers that keep some communities from fair and equal access to opportunities and resources, and to improve the effectiveness of social strategies for all. The change process naturally creates conflict because it means shifting power. As such, the relational and political aspects of ECB need more attention, both in the development of ECB frameworks and practices and in the training of evaluation capacity builders.
As evaluators, we often find ourselves building evaluation capacity, whether it is the capacity of our clients or, if our clients are foundations or federal agencies, the capacity of their grantees. We naturally focus on their technical ability to determine desired outcomes and metrics and to collect, analyze, and interpret data. But our emphasis is wrongly placed when it rests solely on the technical aspects of evaluation—methodology, data, statistical modeling, and so on—when in reality there should be equal emphasis on strategy, power, relationships, and community.
This means that evaluators, in addition to being technicians, also have to be strategists, facilitators, bridge builders, therapists, educators, and, most important, change agents. Many evaluators might ask whether these roles are appropriate, especially since we are not currently trained to assume them. That is perhaps not the right question, because these roles are inevitable for a profession that aims to improve programs and policies and effect equitable change. The more important question is: How can we build our own capacity to assume these vital roles?
Here are a few potential solutions to help us overcome this challenge:
Evaluators cannot do it alone, however. The change in evaluator roles also needs to be “market driven.” Nonprofit organizations have to become savvier consumers of evaluation and engage evaluators who can deliver a more holistic evaluation. Likewise, public and private funders might consider an all-inclusive approach to strategy development, evaluation, and learning, rather than issuing separate contracts for each of these functions and leaving it to the consultants to work it out.
In summary: Are we, as evaluators, really ready for the task of evaluation capacity building? Are the institutions in the evaluation and learning ecosystem ready to do business in a different way? How can the evaluation profession partner with professionals in the fields of organizational change and human interaction to raise our standards for using evaluation to inform and support social change? And how do we make sure, in the process of building evaluation capacity, that we stay true to our responsibilities for the general and public welfare (the American Evaluation Association’s Guiding Principle E) and do no harm?