Evaluating Public Benefit

How Changing Evaluation Practice Might Help to Solve an Identity Problem

As an evaluator working in the nonprofit sector, I’m often struck by the things our society sees as inherently, obviously worthwhile, and the things it feels it needs to evaluate. For example, everyone seems to take it as given that public parks are not simply worthwhile, but are in fact a fundamental and necessary element of a good community. Although an evaluation of the value of parks might be really interesting, I’ve never seen anyone demand evidence of the impact of one before investing money. Parks have a clear, strong, commonly understood identity.

ONN’s new paper, Introducing the Public Benefit Nonprofit Sector, makes it clear that society as a whole does not tend to look at public benefit organizations as inherently valuable, even though a compelling case can be made that our communities need these organizations at least as much as they need parks. I think this is a really important point. Because the term “non-profit” lumps together public benefit organizations with the very different “mutual benefit” groups, the public benefit non-profit sector is left with a serious identity problem.

This paper really got me thinking. In my work, I see this identity problem reflected in the way government approaches evaluation of the nonprofit sector. Its evaluation requirements often focus on how effectively non-profit organizations have delivered particular services or events – an approach that implies a transactional, purchase-of-service relationship between the sector and its funders. We see this same transactional attitude reflected throughout society. For example, we see it in calls for the sector to be judged by the degree to which it can minimize spending on overhead.

What would evaluation look like in a world where society understood and valued the public benefit sector? Where it was seen as the essential glue that holds communities together in difficult times, and a crucial buffer against disruptive trends in a changing world? For one thing, impact would be framed in a much broader and more complex way. We would be exploring how public benefit organizations had enriched civic dialogue, helped volunteers feel a stronger sense of belonging, or enabled our communities to respond to emergencies. We might also, as ONN’s paper recommends, be calling on government to gather data on the economic and social benefit of the sector at a societal level, so that we would be better equipped to gauge the impact of particular organizations at a community level.

In such a world, chances are evaluation would also be a much more inclusive process. If other sectors of society saw the public benefit sector as an essential partner, they would have a keen interest in learning from it. Businesses, universities, advocates, service users, donors and politicians would be working together, digging into thorny and complex public benefit evaluation questions with gusto.

Perhaps – and I recognize this is a subversive idea – evaluation results would address the political elements of our social issues more overtly. They might highlight shortcomings of government policy, point out the effects of power imbalances and systemic discrimination, or call for changes in law.

The biggest difference in evaluation practice, in a world that understood and valued the public benefit sector, might be in the basic purpose to which evaluation methods were put. Instead of being used to hold a subcontractor accountable, evaluation would be seen as a process to advance our shared understanding of how best to build strong communities. It would focus on learning and action.

Fortunately, we don’t need to create this very different kind of evaluation from whole cloth. It already exists. Participatory, critical evaluation, focused on systemic issues and grappling with complex, nuanced questions, is practiced regularly across Ontario. ONN has highlighted examples of it through its work on the Sector Driven Evaluation Strategy.

This gives me hope. Although there is much work to do to change society’s attitudes toward the public benefit sector, we already know how to do this more meaningful kind of evaluation. If we diverted resources from evaluations focused on accountability and did more of this kind of evaluation work, perhaps we would be doing more than simply documenting public benefit. We might in fact be helping people to understand why it is important.

More useful evaluation for nonprofits? Yes, we can!

In 2015, ONN took on a project to look at the systemic issues of evaluation in Ontario’s nonprofit sector and to design potential solutions to help us get to more useful evaluation. This framing led to the development of our Sector Driven Evaluation Strategy work, which now includes a resource treasure chest designed to help nonprofits identify their pain points and move forward with their stakeholders to create an evaluation process that works better for their organizations.
To tell the story of our work, we’ve created a summary of our project with highlights.

As the two-year project draws to a close, we want to share some of our own evaluation findings. We recently surveyed the network to help us get a sense of the ways in which our work has been viewed, shared, and used. We hope this information is useful to you when considering ways to use our materials to support your work.

How have our resources been used?

We know there are a lot of great resources out there, but they can often get lost among the many others that exist.
Our survey results showed that the top three ways in which our materials have been used or might be used in the future were:

  1. To improve my own evaluation knowledge or practices (78%)
  2. To engage with funders, grantees, or partners on my evaluation needs (48.6%), and
  3. To get more people in my organization interested in evaluation (40.4%)

What we heard

While those numbers tell part of the story, we also wanted to share some specific examples of use, in a “What we heard” format.
Developing an evaluation strategy

  • Our organization is working towards being more effective with our evaluation processes. The materials have been very helpful in assisting our strategy.

Improving relations with stakeholders

  • Used resources as part of internal training materials for staff, shared resources with staff who work with agency partners on evaluation (we are a funder).

Helping to write grants

  • To assist in preparing applications for grants or government programs
  • 6 Simple Tips for Communicating About Impact. This document is invaluable because it provides practical advice for a variety of purposes: addressing community, writing proposals, etc.

Developing or strengthening relationships with others on evaluation

  • Keeps me in touch with others thinking about evaluation
  • To share with network members to start a discussion about the struggles with evaluation and find potential solutions/next steps
  • Deeper evaluation of effective practice will support funding and resource requests and help direct program improvements and new offerings. ONN resources will help start the discussion with staff (it’s not just coming from me!).

Training or Skill development

  • “Five important discussion questions” has helped me with different language to use in preparing evaluation workshops for the non-profit sector.
  • Reading the evaluation blogs and the evaluation resources has helped me understand the various evaluation approaches and how I might apply them within my organization.
  • Materials have been used to guide discussions on evaluation and to generate knowledge sharing among staff who are members of an evaluation project team for a non-profit sector agency.
  • The discussion questions will be used at monthly meetings to keep the team thinking about evaluation.

Understanding the issues

  • Has helped me understand the complexity of evaluation within a nonprofit organization (e.g., mission-related evaluation vs. operational and financial) and the various levels of inquiry (outputs through to impact).
  • The infographics help to frame evaluation in a way that is accessible to those of us who don’t have a social science background but want to understand evaluation in order to learn how and when it is most useful.

Responding to government requests

  • We have used summary text about ONN’s position on evaluation in responses to government requests for feedback on programs and issues.
  • We see value in being able to point to a trusted organization’s stance on evaluation – so more principles, or content that is worded in a way that we can point to and say “We endorse/recommend the stance the Ontario non-profit sector has taken on xyz….”

What has ONN learned?

When we started out, one of our key goals was to start conversations about evaluation. Our findings suggest that we have certainly achieved that goal. Another important learning came as a result of our literature review and highlighted the power in naming the issues, even if the solutions are not yet readily apparent. Being able to identify and name the reasons why evaluation wasn’t working resonated.
Finally, we learned there is a hunger for ways to use evaluation that are relevant, engaging, and actionable. Our six factors that lead to useful evaluation were another conversation starter and a useful frame for breaking down the underlying relationship factors that truly matter in an evaluation process.

What we need to do next

We understand there are many demands in the sector that affect people’s time and resources. Our own evaluation shows there is still work to be done to disseminate our materials more broadly across the sector and to spread the word that nonprofits can help drive evaluation strategies. Our resource treasure chest will continue to be accessible online, and we encourage you to share the work with your own networks and teams and to report back on whether and how it’s useful.

Our vision

We also want to re-share our 2020 vision for evaluation. Our work is but one small part of creating a system that makes it easier, more rewarding, and less stressful for nonprofits and their partners to do meaningful evaluation work. We hope the conversation will continue and, with any luck, we’ll have an evaluation system that:

  • leads to action more often for more purposes;
  • addresses needs and questions that are important to a range of stakeholders;
  • is planned, conducted, and shared in a more collaborative way; and
  • is used when and where it can help the most.

Final thoughts

Lastly, we’d like to thank everyone who made the time to chat with us, share their experiences, and try out resources. It’s been a great experience to meet so many different people across Ontario and to learn about the interesting evaluation work that is taking place. Many thanks as well to our advisory committee and to those who contributed feedback and helped shape the creation of our resources and other materials. We couldn’t have done this work without you.

Resources

View our evaluation project uses, purposes, and key questions

Evaluation: Expanding design learning

Late last fall, ONN’s evaluation project took me to the American Evaluation Association’s annual conference in Atlanta, Georgia, to present alongside Andrew Taylor, ONN’s Resident Evaluation Expert.
For me, it was an important professional development opportunity (one of the seven elements of decent work) and, thinking back, it was a bit of a surreal experience. Never did I think I would be in Atlanta at an evaluation conference of 3,500 people from around the world, talking evaluation all day for four days straight. On the first night, I found myself having a traditional southern American dinner with 20 or so fellow Canadians from across the country. In between bites of fried chicken and sips of beer, while talking about evaluation, it occurred to me how much my professional career had changed in only a couple of years. Back then, I had no idea the discipline of evaluation even existed!

Andrew and I were there to present our work related to our recently released guide, Learning Together: Five Important Discussion Questions to Make Evaluation Useful. It was also an opportunity for me to learn more about the diverse field of evaluation.

Evaluation learnings to share with the sector

The Americans don’t do anything small. There were over 900 breakout sessions to choose from, and they ran the gamut of the evaluation field. Three sessions in particular stuck with me, and I want to share them with the network.
First, one of the highlights was a session featuring well-known experts in the evaluation field. The talk was part of a larger discussion on the big-picture role of evaluation in addressing society’s challenges. In particular, Dr. Stafford Hood emphasized that evaluation can have real-world consequences and is therefore something that is critically important to do — and to do right. He referenced the Gregory Porter song “1960 What?” and made the point that its powerful lyrics could easily, but sadly, be remade into a song called “2016 What?”. He stressed that we shouldn’t be afraid to confront uncomfortable learnings. In Hood’s own words, “evaluation and research can serve as a conduit for social change or serve institutional racism.” In sum, Hood argued for evaluation to be a crucial piece in pursuing social justice.
His talk, as well as those of his co-presenters, was a strong reminder that evaluation isn’t only about collecting and reporting data to funders. Rather, it is about helping to get answers to important questions, make improvements or changes, and ultimately lead to action — whatever that action may be.
The session on visualizing qualitative data was another highlight. A number of presenters showed how qualitative data could be better presented, often using tools no fancier than PowerPoint or Excel. In particular, Jennifer Lyons introduced a draft of a very practical and easy-to-use tool to help emphasize the messages that really matter.
There were a number of sessions on data visualization at the conference and, as far as I could tell, they were some of the most popular. There is a hunger to be able to tell the story of data better than the traditional long-form report, and data visualization appears to be a growing skillset that will be in demand in the future.

Finally, the session on failure named a real fear in evaluation that we have heard about in our work at ONN. The session featured stories from three panelists about when and how they failed, the implications of that failure, and how they learned from it. Stephanie Evergreen in particular left an impression after humorously introducing the concept of FOFU, or Fear of F***ing Up. Evergreen encouraged us to “fail big, often, and in front of other people,” while other session speakers emphasized not walking away from failure or dwelling on it for too long. The key is to critically reflect with an eye to improvement and then push on.

What made the session work was the honest nature of the panelists’ admissions. The failures were real and sometimes serious, and the implications weren’t sugar-coated. However, from those failures came important insights. It was an important reminder that failure happens and that this is not something to be dismissed, but embraced.

Spurring on more evaluation – and professional – development

Atlanta was a great learning experience and something I likely never would have thought to do on my own. It was an important professional development opportunity, and I’m grateful to my colleague Andrew for taking the lead in putting together our submission to present at the conference, and for the chance to attend as ONN’s staff representative.

Back in Ontario, I’ve since registered to take a course on data visualization, and we’ve also made a commitment at ONN to do a better job of discussing our own failures as a team so that we can learn how to do our work better. The conference also reminded me just how diverse the field of evaluation really is, and that almost any question can be an evaluative one.
