Evaluation capacity building for improved organizational and system learning has been part of Community Science’s practice from our beginning. We see evaluation capacity building as having the abilities and motivation to learn how to do better by systematically using data and other sources of reliable information to make decisions. Public and private organizations and agencies have an obligation to continuously strive to equitably provide the best possible services and supports to individuals, families, and communities. Doing so requires the capacity to evaluate, and then improve, the work of these organizations and collaborations. These days, just about every organization that receives funding from any public or private entity is expected to do some data collection and reporting. This global movement to use data is being further fueled by advances in information technology; there is a virtual Home Depot of evaluation tools available online.
Evaluation capacity building can incorporate a Continuous Quality Improvement (CQI) process by providing the data to monitor the quality of strategy and program implementation. In a CQI process, performance or service quality standards and corresponding measures are set collaboratively. Frequent monitoring and data-driven discussions identify issues for further evaluative investigation and resolution. Community Science is facilitating a CQI process for a technical assistance (TA) center serving government grantees. Quality measures include the number of cases open more than 10 and more than 30 days, the balance of proactive (TA center-initiated) versus reactive (grantee-initiated) service activities, and a brief grantee satisfaction survey for all service transactions. Our staff coach the TA center’s staff on the investigative and problem-solving steps of CQI.
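To make the monitoring step concrete, the quality measures named above can be computed from simple case records. The sketch below is illustrative only: the field names and sample data are hypothetical, not the TA center's actual schema or figures.

```python
from datetime import date

# Hypothetical case records; field names and values are illustrative,
# not the TA center's actual data.
cases = [
    {"opened": date(2024, 1, 2), "closed": date(2024, 1, 20),
     "initiated_by": "ta_center", "satisfaction": 5},
    {"opened": date(2024, 1, 5), "closed": None,
     "initiated_by": "grantee", "satisfaction": None},
    {"opened": date(2024, 2, 1), "closed": date(2024, 2, 8),
     "initiated_by": "grantee", "satisfaction": 4},
]

def cqi_summary(cases, as_of):
    """Compute the quality measures described above from a list of cases."""
    def days_open(case):
        end = case["closed"] or as_of
        return (end - case["opened"]).days

    # Cases still open past the 10- and 30-day thresholds
    open_10 = sum(1 for c in cases if c["closed"] is None and days_open(c) > 10)
    open_30 = sum(1 for c in cases if c["closed"] is None and days_open(c) > 30)
    # Proactive (TA center-initiated) vs. reactive (grantee-initiated) activity
    proactive = sum(1 for c in cases if c["initiated_by"] == "ta_center")
    reactive = sum(1 for c in cases if c["initiated_by"] == "grantee")
    # Average satisfaction across transactions that returned a survey
    scores = [c["satisfaction"] for c in cases if c["satisfaction"] is not None]
    avg_satisfaction = sum(scores) / len(scores) if scores else None
    return {
        "open_over_10_days": open_10,
        "open_over_30_days": open_30,
        "proactive": proactive,
        "reactive": reactive,
        "avg_satisfaction": avg_satisfaction,
    }

print(cqi_summary(cases, as_of=date(2024, 3, 1)))
```

Reviewing a summary like this on a regular cycle is what turns the measures into the data-driven discussions the CQI process calls for.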
We see the purpose of evaluation capacity building as not just developing the ability to collect, analyze, and interpret data, but also using that information to improve what we do, change systems, and inform others about what is working under which conditions. To achieve those purposes, Community Science has developed a sustainable approach to evaluation capacity building called Innovating for Social Impact.
Community Science’s approach to evaluation capacity building is aimed at improving organizational effectiveness for philanthropic, nonprofit, and government organizations. Using a five-part model called Innovating for Social Impact (see Figure 1), Community Science works with organizations and collaborations for which change management is an ongoing process and where finding solutions for complex social problems is an urgent mission.
Innovating for Social Impact improves upon traditional knowledge management and learning organization approaches by emphasizing the goals of innovating (doing things better based on what was learned) and impact. It also borrows from systems theory and models to emphasize the interconnectedness of different organizational functions and a feedback loop. It was developed to help organizations move beyond their typical “siloed” practices, and to think and act more systematically and holistically about innovation.
The model guides organizations to assess and take action in the following five areas:
Innovating for Social Impact can be integrated into any evaluation or technical assistance project, or implemented separately through workshops, coaching, or other forms of consultation, all services that Community Science is ready and available to offer.
Beyond making organizations and systems more effective, evaluation capacity building has an additional role in changing systems: not only learning what works, but also pointing out issues, inequities, and opportunities that need to be addressed. The ability to present issues based on data can have a powerful effect on mobilizing communities, advocating, and creating systems change.

For example, Community Science worked with community organizations in a small midwestern city to identify issues affecting the health and economic well-being of low-income residents, in order to mobilize residents and city leaders to act on those issues. We helped neighborhood leaders find resources that could collect and analyze the data for them, and assisted them in learning how to interpret, present, and defend the information. Our staff worked with leaders to develop a questionnaire they could use to go door to door and identify the issues on residents’ minds. Dilapidated buildings and vacant lots topped the list. We then worked with them to look deeper into the problem and identify issues that could be used to organize the community and compel city leaders into action. The community leaders examined code enforcement data, and the analysis showed disparities between poor and middle-income neighborhoods in the enforcement of building code violations: building and vacant lot violations were more likely to be pursued in “better” neighborhoods than in poorer ones. This led resident organizations to take actions that resulted in changes to the city’s vacant lot and dilapidated building policies.
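The disparity analysis described above amounts to comparing, by neighborhood, the share of reported violations that were actually pursued. A minimal sketch of that comparison follows; the records, neighborhood labels, and rates are hypothetical, not the actual city data.

```python
from collections import defaultdict

# Hypothetical code-enforcement records; labels and counts are illustrative,
# not the actual city data described above.
violations = [
    {"neighborhood_income": "middle", "pursued": True},
    {"neighborhood_income": "middle", "pursued": True},
    {"neighborhood_income": "middle", "pursued": False},
    {"neighborhood_income": "low", "pursued": True},
    {"neighborhood_income": "low", "pursued": False},
    {"neighborhood_income": "low", "pursued": False},
    {"neighborhood_income": "low", "pursued": False},
]

def pursuit_rates(records):
    """Share of reported violations actually pursued, by neighborhood income level."""
    totals, pursued = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["neighborhood_income"]] += 1
        if r["pursued"]:
            pursued[r["neighborhood_income"]] += 1
    return {level: pursued[level] / totals[level] for level in totals}

rates = pursuit_rates(violations)
print(rates)
```

A gap between the rates, like the one this toy data produces, is exactly the kind of finding residents can present and defend when pressing city leaders to act.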
The promotion of evaluation capacity building is a welcome advance in the world of social strategies and programs. But funders and evaluators cannot stop there. We must develop the capacity not only to evaluate and learn, but also to improve and to advocate.
If you are interested in learning more about our work in this area, contact David Chavis— email@example.com.