Monday, July 10, 2006

Logic Model Training

We've just created an interactive training module on logic models. It distills everything you need to know to create a logic model into one 15-minute audio-visual program. You can find it here. Please take a look at it and let us know what you think.

Sunday, June 25, 2006

Foundationspeak... A book review

If you've had just about all the foundationspeak you can handle, point your browser to three PDFs available here. The first, When Words Fail: How the public interest becomes neither public nor interesting, is actually the last in the series and, we think, the best. We particularly enjoyed author Tony Proscio's comparison of the prose produced by right-leaning versus left-leaning foundations. The former was straight and to the point, the latter obfuscating. To make this point, Proscio extracts passages from a document created by a conservative think tank and does a jargon audit. In place of buzzwords like 'teacher work-force', 'career advancement structures', 'competencies' and 'human capital' he finds straightforward language such as 'hoops and hurdles', 'tests', and 'get rid of'. With this analysis, it's easy to see why the right is winning the war of words.

Proscio goes on to offer a theory for why left-leaning discourse is so obtuse. It centers on the idea that as the foundation community's ideals about altruism, sacrifice and the common good lose force in a culture dominated by the materialist ideology of the marketplace, left-leaning organizations retreat and come to develop a culture of isolation, complete with a secret and inbred language all their own.

We disagree. In our view it's quite the opposite. Foundationspeak is what it is because foundations seek to align their language with their primary reference group: academics, policy makers and other experts who require a (seemingly) value-neutral language in order to sound non-partisan, dispassionate and, ultimately, scientific. As Proscio himself points out, much foundationspeak parrots the latest language of business school-- 'metrics', 'value-proposition', etc. Straightforward talk about beliefs and values is nowhere to be found.

This theoretical disagreement aside, there is much in this essay to value. We particularly liked his deconstruction of some leading foundation buzzwords and his presentation of the pro-jargon position-- yes, there is such a thing, and it is more compelling than you may think.

Take a look. Whatever position in the jargon wars you take, it's nice to know the arguments on both sides.

By the way, you can find a complete dictionary of foundation jargon, based largely on this essay, by clicking here.

Monday, May 15, 2006

More Logic Models

We've blogged a bit about logic models and talked about how they can help non-profits formulate evaluation questions. But we see some problems with using them as well. The first is that non-profits often get bogged down in lengthy discussions about whether something is an output or a short-term outcome or a long-term outcome or whatever. We don't view this sort of thing as a good use of time. For us, as evaluators, logic models are heuristic devices. That is to say, they are tools for generating questions and hypotheses about whether a program is doing what it has said it will do, and if it is, whether it is producing results. Often, a list of these results-- short-term, medium-term and long-term-- is all one really needs. When we work with a client to help formulate evaluation questions, we usually start this way and don't spend a great deal of time working up a full-blown logic model.

The other issue we see with logic models relates to the fact that they usually neglect the evaluation work an organization already does. Take a look at the Kellogg Foundation's Logic Model Guide or the United Way's Outcome Measurement Resource Network. Both view logic models as the first step in the process of creating an evaluation strategy. But as a practical matter, few organizations in fact start evaluating within such a grand scheme. Instead they start in little ways, by taking attendance at training classes, collecting demographic and referral information from clients, tracking community satisfaction with their work, or discussing case notes at weekly staff meetings. An approach to evaluation that doesn't foreground what an organization already does is doomed to step on toes, re-invent the wheel, and duplicate effort. Yet there is no place for this in the logic modeling processes we've observed. In these approaches one simply specifies outcomes, develops indicators for these outcomes, and collects data on these indicators in order to learn if, and under what conditions, they are present.

We advocate greater practicality. Generate evaluation questions using a logic model, but figure out how you can leverage existing data collection systems to answer them. If your organization is like most, you won't have to start your evaluation effort from scratch if you follow this approach. Determine what you collect now, what you can easily collect in the future, and what you'll want to plan for down the road, and drive your plan based on what you can realistically accomplish.
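
For those who like to see this sort of thing laid out concretely, here's one way to run that inventory, sketched as a bit of Python. The questions, categories and data sources below are purely illustrative examples of ours, not prescriptions:

    # Sketch: map each evaluation question to an existing or planned data
    # source, flagging what's collected now, what's easy to add, and what
    # has to wait. All entries are illustrative.
    data_inventory = {
        "How many people attended the workshops?":
            ("collected now", "attendance sheets from training classes"),
        "Who are we reaching?":
            ("collected now", "client demographic and referral forms"),
        "Are participants satisfied?":
            ("easy to add", "short exit survey after each session"),
        "Did participants find jobs within six months?":
            ("plan for later", "follow-up calls; requires contact tracking"),
    }

    for question, (status, source) in data_inventory.items():
        print(f"{status:15} | {source:45} | {question}")

The point of the exercise isn't the code, of course; it's that every question gets matched to a data source before anyone designs a new instrument.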

Wednesday, April 19, 2006

Logic Models

We did an evaluation training two weeks ago in New York at the Support Center for Non-Profit Management. Like many introduction-to-program-evaluation trainings, this one included a section on logic models. Logic models are basically graphical representations of how a program uses resources --> to create activities --> which have tangible results --> that lead to desired outcomes. If you like flowcharts with lots of arrows, you'll like logic models; if you like Microsoft Visio, you'll love them. Here are some places to find examples: The Two Professors, UW Extension Service (extension services seem to love them), or our favorite, The Kellogg Foundation Logic Model Guide.

Logic models have lots of uses. Funders like to see them and many require them. They're great for getting a program's stakeholders to sit down and specify their goals and how the work being done every day will bring about those goals. Underlying any good logic model is a theory about how the world works-- for example, a theory that acquiring knowledge about how to write a resume will lead students to write better resumes and then to get better jobs. Logic models are also useful in designing an evaluation.

Why? Because logic models force those who design them to specify what a program's activities are and what outcomes-- short-term, medium-term and long-term-- will come about once those activities take place. Here's an example: offer a resume workshop and job-search networking coaching to 20 recently unemployed workers --> workers learn resume writing and networking skills --> workers send out improved resumes --> workers make 10 networking contacts in first two weeks --> workers obtain interviews --> workers get jobs.
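
If it helps to see that chain spelled out, here's a minimal sketch of it as a data structure, in Python. The stage labels and layout are our own illustration, not any standard evaluation format:

    # The resume-workshop logic model as a simple chain of
    # (stage, description) pairs. Labels are illustrative only.
    logic_model = [
        ("activity", "Offer resume workshop and job-search coaching to 20 recently unemployed workers"),
        ("short-term outcome", "Workers learn resume writing and networking skills"),
        ("short-term outcome", "Workers send out improved resumes"),
        ("medium-term outcome", "Workers make 10 networking contacts in first two weeks"),
        ("medium-term outcome", "Workers obtain interviews"),
        ("long-term outcome", "Workers get jobs"),
    ]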

It shouldn't be too hard to see how even a simple logic model like this one could help an evaluator develop an assessment plan. Here are some of the questions: How many resume writing workshops took place? How many people attended? Did they take away the required knowledge? How many changed their resumes? How effective were the new resumes (did they implement the knowledge correctly)? How many used the coaching service? Did they view the session positively? Did they understand what was said? Were they able to apply it? How many interviews did they get? How many of the interviews were appropriate? How many started new jobs within a given time frame? The list goes on.
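
Notice that most of those questions follow a mechanical pattern: for each link in the chain, ask whether it happened, for how many people, and how well. Continuing the sketch above (the question templates are our own rough illustration):

    # Walk the chain from the earlier sketch and emit stock evaluation
    # questions for each link. Templates are illustrative only.
    def evaluation_questions(chain):
        questions = []
        for stage, description in chain:
            questions.append("Did this " + stage + " occur? (" + description + ")")
            questions.append("For how many participants, and how well? (" + description + ")")
        return questions

    for q in evaluation_questions(logic_model):
        print(q)

A real assessment plan adds judgment the template can't supply-- which links matter most, and what counts as evidence for each-- but the mechanical pass makes a surprisingly good first draft.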

Logic models are useful in evaluation because they require programs to make very specific lists of exactly what they are going to do and what they expect to see after they do it. This is crucial information for an evaluation since it specifies exactly what needs to be measured. Logic models get everyone on the same page and tell evaluators, in concrete terms, what they need to look for. That's what makes them so valuable. So it shouldn't be surprising that nearly every "Intro to Evaluation" training we've ever observed has devoted a lot of time to logic models.

But we believe that their strengths have led evaluators to adopt logic modeling uncritically. Check back in a couple of days to see what we have to say on the subject.

Thursday, April 13, 2006

Gifts of the Muse

Gifts of the Muse: Reframing the Debate About the Benefits of the Arts

Gifts of the Muse argues that beyond the instrumental benefits of the arts-- on individual cognitive development, on attitudes or behaviors that improve school performance, on physical health, on community economic development-- people are drawn to the arts for their intrinsic benefits: the sense of meaningfulness, pleasure and satisfaction they provide. The four authors, all associated with the RAND Corporation, make the case that the research which attempts to document instrumental effects is often weak and inconclusive. They call for widening the discussion of the value of public support for the arts so that it includes intrinsic value considerations, and they offer a language and a rationale with which to begin that work.

While there is much in this volume that will be of interest to arts organizations seeking public (and private) support, any organization that offers a program providing intrinsic benefits should take a look. Here's an example.

We did an evaluation some time ago for an organization that teaches chess to children in the public schools. The program wanted to be able to document that chess helped kids concentrate, improved their spatial reasoning skills, and strengthened their ability to persist at solving problems. Despite a strong evaluation design and many thousands of dollars, all we could document were very modest effects. Does this mean chess should be eliminated? Of course not. Teaching chess to kids is an intrinsically valuable thing to do. It makes a small, if not entirely measurable, impact on many kids, as the study showed. And it will probably make a significant impact on at least a few (say, those who go on to play at the tournament level), but that's not the point. Exposure to chess gives kids a chance to try something new, something other than reading, math and, increasingly, test prep-- to exercise new intellectual muscles and connect with a pastime that is centuries old.

There are two issues to think about here. The first relates to finding ways to persuade funders that a program's worth isn't always reducible to its instrumental benefits. As we said before, Gifts of the Muse provides a language for doing that. The second relates to figuring out how to measure a program's intrinsic benefits. That's usually not easy, and sometimes impossible, to do. We'll work on developing some suggested strategies and post them as they percolate up.

Monday, February 27, 2006

Why this?

Why would anyone want to read about program evaluation? To own it is the best answer we can come up with. Program evaluation is an inherently political activity. It competes for and uses resources, it articulates a program's goals, it defines criteria for success, and it can be used to justify continuing a program or eliminating it. As a practical matter, few programs are eliminated based on the results of a program evaluation; mostly they are changed, so we don't believe that most of the politics resides there. In our view, the political nature of evaluation is most manifest when it articulates goals and defines what success will look like. Yes, everyone in a well-run organization knows what its goals are-- but often only in a very general and vague sense. Evaluation forces a program's stakeholders to sit down and specify the connections between what they want, what they do, and what they actually see. To be sure, we've evaluated programs that make a point of not having clearly articulated goals. But even then, the process of creating instrumentation, collecting data, and then searching through it for interpretations and meanings filters some things in and some things out.

So this blog is going to be about these kinds of choices.