How you doin’? Nonprofits benefit from formal evaluations

By Lynn Sygiel, editor, Charitable Advisors

Jodi Snell grew up in a small town. Under 20,000 people live in Jacksonville, Illinois, but Snell remembers her parents were always busy helping to make their tiny community a better place.

Like organizing a softball tournament to raise money for a young cancer patient and her family. Snell recalls personally delivering a Game Boy to the girl in the hospital, and the joy, she remembers, was clearly two-fold: the girl’s and her own.

Amanda Lopez had similar experiences in her hometown of Wabash, Indiana. Her mom and dad over the years were foster parents to more than 100 children.

In each case the message is the same. For those who do it, community work can be rewarding. Most volunteers say it is time well spent and personally fulfilling, even if they can’t be sure they made a significant difference.

In the nonprofit world, where organizations depend on donations and grants, it’s a different story. Nonprofits must prove their worth to keep the operating cash flowing.

And how exactly do they do that? With a little help from folks like Snell and Lopez, whose vocabularies these days are full of such somewhat dry terms as program evaluation, data collection, logic model, outputs and outcomes.

Lopez is the president and founder of Wabash-based Transform Consulting Company. She learned the importance of evaluating programs from her days at Purdue University. She was a member of a service-learning project team whose goal was to interest third graders in engineering and science. But without a tool to measure success, it was hard to know if the kids were really coming on board.

As the only non-engineering student in the group, Lopez had a double role: to ensure that activities were developmentally appropriate and to execute short- and long-term evaluations to provide data.

“That really opened my eyes up to evaluation and the opportunity there,” Lopez said.

“When I went to grad school, I focused on systems and evaluations. (How can we) collect the right data that tracks and reports the impact that (nonprofits are) having or gives them the data that they need to improve and strengthen,” she said. After a stint in government and coastal agencies, she returned to Indiana in 2008 and formed her consulting company to help nonprofits do just that.

Snell moved to Indiana after college with plans to be a teacher. But at the time, Indiana was laying off teachers, so she stepped into a job in the nonprofit sector. She quickly realized it was her dream job. In those early years, she admits that while she did evaluation work, it wasn’t formalized. In fact, she describes it as “scrappy.” But from the start, she understood the importance of assessment.

Now, one of her responsibilities at Indianapolis-based Hedges & Associates is to lead the evaluation team’s work. Since its start in 2002, the company has offered services to build nonprofits’ capacity, later adding help with evaluations.

In 2013, a widely circulated essay by Microsoft mogul Bill Gates extolled the role that measurement plays in improving the human condition and the delivery of vital services worldwide. But he also offered a rueful observation.

“This may seem basic,” he wrote, “but it is amazing how often it (measurement) is not done and how hard it is to get right.”

Perhaps in response to the essay, Snell said local nonprofits began requesting technical support, heavily focused on quantitative measurements. To that end, the company hired a technical evaluation expert to help develop stronger metrics and intentional strategy.

But in many cases, this was a bit disconnected from reality. Snell’s team learned that what should be done might not be what nonprofits had the ability or capacity to do.

“Nonprofits were all of a sudden expected to track certain metrics and do certain things with very few resources provided to do so. Evaluation work is not cheap. It is labor-intensive, and it takes a lot of time even when you think of just cleaning up data,” said Snell.

So, her team began asking to see a nonprofit’s data before it developed a proposal or entered into a contract.

“Before we develop a proposal, can we see what you’re working with? We will take a look at it, and if it’s not consistently collected or there’s not enough data to make a valid finding, we’ll say, ‘Don’t waste your time.’” Instead, in those cases, she said, they suggested qualitative data collection to determine where to improve, and oftentimes the development of a logic model and other foundational evaluation pieces, helping to put measurement tools in place.

“Many nonprofits didn’t have data or the right data to do thorough and meaningful evaluations. What they needed was support to determine what to collect as a precursor to evaluation,” Snell said. “Then a year or two years from now, we have something meaningful to evaluate.”

Snell said initially local foundations drove evaluations, but now more individuals and corporate donors have joined the ranks. More importantly, some nonprofits have tackled evaluation, not because of outside influence, but because the organization is committed to its outcomes.

Today, the use of data governs almost every aspect of our lives. This is particularly true for philanthropy, which relies on it to inform decision-making, define problems and measure impact. Lopez and Snell have seen this shift firsthand. Nonprofits understand that they must show qualitative and quantitative data, but the challenge for many revolves around the “hows” – how to accomplish it, how to pay for it and how to help staff understand the connection between collecting data and their day-to-day functions.

And nationally, that’s been the case. In the 2018 book Engine of Impact: Essentials of Strategic Leadership in the Nonprofit Sector, the authors report that 50 percent of the 3,000 nonprofit stakeholders they surveyed struggle with impact evaluation. Respondents cited inadequate or unreliable measurement of impact and performance as a challenge, and of the group, 42 percent said that more than half of their major donors require impact evaluations, but only a fraction are willing to pay for them.

And that’s not all they worry about. Lopez said nonprofit staffs often have a palpable fear that missing targets will have an adverse impact on funding.

“We really try to build a culture of ‘We do evaluation for the purpose of learning and growth and improvement.’ And it’s OK, if that means we’re not hitting those targets. Let’s figure out why and what to do differently. If we’re not studying and implementing an evaluation plan, we’re not going to learn,” she said.

Snell said she has found that local funders are looking for the nonprofit that discovers what’s not working and changes it. While the funders want outcomes, they’re practical and know that it takes time to set up tracking procedures and to measure for a while before results can be attributed.

“I think you definitely have to be looking at which of your programs are producing outcomes, but I think the other side of that is we’re working with humans,” said Snell. “Most of our work is in the social services sector. Some of the evaluation pieces may not always feel ethical. When you’re thinking about a test group, would you deprive a certain group of the population of a certain service to see if it works? I think there will always be that challenge of how valid you can get the data.”

Bottom line, Lopez believes that Central Indiana funders are more partner-oriented.

“For them it is not a high-stakes test – meet the metrics or funding isn’t continued – but rather, ‘Let’s have an honest conversation around where you are or aren’t meeting those metrics and what kind of capacity support is needed,’” Lopez said. “Accountability is a strong word, but in a way, local funders are really pushing the grantees they’re partnering with to get clear about their outcomes and have strong metrics in place with quantitative data to demonstrate their impact.”

“And that’s where typically we’ll be asked to come in to help support these nonprofits,” said Lopez. “Organizations are collecting data; that’s really not the issue. When it comes to evaluation, it’s helping them to figure out: Are they collecting the right data? Is the data clean and accurate enough to reflect what they’re wanting to collect? And how are they using it to make meaning and inform their work? That’s what we really come in to help them with.”

Oftentimes, federal and state funding requires an external evaluator. But Lopez has seen the tide change at the federal level, moving from compliance to quality improvement and looking beyond a checklist of accomplishments. These funders are asking nonprofits to show how their work is moving the needle.

“In multiyear grants, they want to see how you’re choosing your data and show that you’re using that data from the first year to inform any changes for the next year of programming, professional development and other refinements,” Lopez said.

What most nonprofits struggle with is carving out the time to collect the data.

“So, we really try to help them understand the critical value and importance of building that into their schedule, just like they would build in the next level of programming or services that they would offer. There are really tremendous and helpful data tools out there,” said Lopez, who uses a participatory evaluation framework.

“We really want to build their capacity and that sustainability beyond our engagement because we know that most of them cannot afford to hire us forever to do evaluation work. We really want it to become a part of their culture, not just something they outsource to the consultant when a grant report is due. That’s why we spend time building that capacity and knowledge – data literacy and evaluation literacy – within the organization,” she said.

Snell said it needs to be part of staff’s job responsibilities, not an afterthought.

“Nonprofit professionals typically didn’t start in their career wanting to be evaluators. Right? They started because they are caring and passionate about the program,” said Snell. “However, we owe it to the individuals we’re signing up to serve to know if what we’re doing really matters. This is a step to get there, and it’s not as scary as what people think.”

The opportunity and responsibility are there to utilize the results for planning and decision-making.

“That’s when you see the transformation really occur. And when we see it, it gets us excited,” Lopez said, citing a local Head Start organization that her company trained. After reviewing its data, the agency’s staff reached out because they wanted to go deeper and look at how dads are engaged.

“They felt like that’s an area of concern and stopped to really use their data and to dig into ‘What is happening with dads and where are the gaps and opportunities?’ before they just went to program changes. We’re like, ‘Yay, this is so exciting.’ We didn’t have to remind them. They got it,” said Lopez.

Snell cited similar experiences.

“We’ve seen some really great success stories from organizations that have utilized research to inform and change their programming decisions,” she said. “We had one client who was able to secure funding for a whole new curriculum to be developed based on what we learned about the outcomes they weren’t able to get to with the current curriculum.”

Another local organization, she said, ran the full evaluation gamut and learned that its collection measurements weren’t telling the entire story.

“We were able to reset how they were evaluating, and now their story will be even stronger. I sat in hours and hours of interviews with their participants, and (through) the collection process (learned), we just weren’t getting to that same data,” said Snell, who’s hopeful that the qualitative and quantitative information will tell the same story in the next year.

Lopez believes that if a nonprofit is struggling with fund development, enrollment or retention, evaluation can help solve those problems.

“A lot of the issues that we hear a nonprofit is struggling with, usually evaluation can help solve. A lot of times, individual donors are becoming more sophisticated and want to see the impact that their dollars will have. Your evaluation can help tell that story of how (a donor’s) funding goes to support the cause and furthering its mission. It goes back to using your data.”

Not every organization is ready to jump into impact evaluation; there is a continuum, and some nonprofits begin with simple number counts. For nonprofits becoming more sophisticated, here is some advice from Snell and Lopez:

• Meet your staff where they are. Hedges offers a workshop called “Love your Logic Model” and Transform offers “Evaluation 101.” Both companies believe in starting staffs with the basics. With turnover rates in the sector high, implementing standard operating procedures and internal systems is key to continuing the effort.

• Involve programming staff early in the impact strategy, helping them see the entire picture.

• Start by including metrics in job descriptions and take time to explain to potential candidates how data collection is part of the culture.

• Create a workflow chart with a clear understanding of how evaluation fits into day-to-day work plans, to ensure the effort is not an add-on but a daily expectation.

• Continuously refine how data is collected.

• Reinforce that if data indicates a program isn’t working, the focus needs to be on readjustment, not blame.

• Design pilot and innovative programs with research in mind. There are multiple evaluation methods and numerous processes nonprofits can use to match desired outcomes.

• If resources are tight, interviewing participants should top the list to inform your program with the voices of those you are serving.

• Share what you are learning with two audiences – internal and external. Internal sharing can be a powerful affirmation, or it can enlighten staff, board and volunteers about targets not hit.

• Research to locate the best tool. Particularly on the more “social side,” such as self-efficacy, there are tried-and-true evaluation tools that have been validated to test those qualities specifically.


Evaluation Resources

If you’re interested in keeping up to date on evaluation, Amanda Lopez and Jodi Snell recommend two membership organizations that offer top-notch resources, webinars and conferences and that share evaluation ethics and trends.

These are:
• the Indiana Evaluation Association, which meets quarterly and hosts a conference every other year, and
• the American Evaluation Association, at the national level.

If you are looking for further reading, Snell recommends the 2011 publication “Leap of Reason” by Mario Morino. The author focuses on integrating evaluation into regular work.

“It’s the expectation that everyone is driven by those outcomes. What I really like is that nonprofit evaluation is not being driven by an outside force, but it’s the responsibility to the community you signed up to serve to make sure that what you’re doing works. If you’re not making sure it works, then what are you doing? Why would we keep doing what we’re doing?” she said.

Lopez has several tool recommendations.

Data-informed decision-making toolkit: Transform Consulting worked with the Indiana Early Learning Advisory Committee (ELAC) data workgroup to create this material, but the resource could be utilized by any organization. Some highlights: The data visual is a good overview tool and follows the 4-step evaluation process. It also includes a list of publicly available data by category and data visualization tips and strategies.

Data Playbook: A helpful resource for organizations to guide the evaluation process and plan. Lopez’s team used it to help develop the Indiana Early Learning Advisory Committee’s data toolkit.

NHSA Data Playbook: The National Head Start Association (NHSA) launched its own Data Playbook resource for how to use data to inform programmatic changes (the CQI process). Even if an organization is not in the early childhood education industry, this site provides an example of how organizations are using data to drive change, and how broadly applicable the approach is.