We've blogged a bit about logic models and talked about how they can help non-profits formulate evaluation questions. But we see some problems with using them as well. The first is that non-profits often get bogged down in lengthy discussions about whether something is an output, a short-term outcome, or a long-term outcome. We don't view this sort of thing as a good use of time. For us, as evaluators, logic models are heuristic devices. That is to say, they are tools for generating questions and hypotheses about whether a program is doing what it has said it will do, and if it is, whether it is producing results. Often, a list of these results, short-term, medium-term, and long-term, is all one really needs. When we work with a client to help formulate evaluation questions, we usually start this way and don't spend a great deal of time working up a full-blown logic model.
The other issue we see with logic models is that they usually neglect the evaluation work an organization already does. Take a look at the Kellogg Foundation's Logic Model Guide or the United Way's Outcome Measurement Resource Network. Both view logic models as the first step in creating an evaluation strategy. But as a practical matter, few organizations actually start evaluating within such a grand scheme. Instead they start in little ways: taking attendance at training classes, collecting demographic and referral information from clients, tracking community satisfaction with their work, or discussing case notes at weekly staff meetings. An approach to evaluation that doesn't foreground what an organization already does is doomed to step on toes, reinvent the wheel, and duplicate effort. Yet there is no place for existing practice in the logic modeling processes we've observed. In these approaches one simply specifies outcomes, develops indicators for those outcomes, and collects data on the indicators to learn if, and under what conditions, they are present.
We advocate greater practicality. Generate evaluation questions using a logic model, but figure out how you can leverage existing data collection systems to answer them. If your organization is like most, you won't have to start your evaluation effort from scratch if you follow this approach. Determine what you collect now, what you can easily collect in the future, and what you'll want to plan for down the road, and build your plan around what you can realistically accomplish.
Monday, May 15, 2006
2 comments:
Eric,
I enjoyed reading this post, and I agree that few organizations commence evaluations from the top down, especially when most of the data is collected at the bottom and sent up.
We are working on web-based logic modeling tools to help address this gap (and building in functionality to collect data from existing systems). In support of your points, those who take a practical approach and make use of what they have (or know) are often the most successful.
Regards,
J.Garlough.
I'd love to hear back from you, as this is a subject of perennial interest. There are a few other online logic model building tools I'm familiar with, and I'd certainly be interested in following the development of your organization's tools. One of these days I'm going to get around to posting a list on this blog.