Evaluating Downstream – 5 Ways to Measure Results

I’m sitting on the floor in a warm room in Phnom Penh, Cambodia, and I’m listening to a conversation in Khmer, a language that I do not know. Around the room, also sitting on the floor, are farmers and village agricultural leaders, men and women of various ages, listening and learning. The discourse includes a curious mixture of Khmer dialogue and Western-style flipcharts and diagrams. The two trainers, also Cambodian, interact comfortably with the farmers, and the conversation is frequently punctuated with laughter. An oscillating floor fan helps chase the monsoon humidity from the room.

It is September 2012, and I am in Phnom Penh to evaluate a Train-the-Trainer program with participants from five countries. They learn to train in week one and then use that learning to train others in week two. This is week two, and the farmers are gaining skills in leadership.

A primary focus of my work is developing a downstream evaluation. Despite its liquid connotations, the term has no connection to the Mekong, which I walked past each morning, or to the monsoon-driven deluge that nearly carried away my taxi on the way to the airport. Instead, it refers to the central imperative in evaluating Train-the-Trainer programs: figuring out how to measure the ultimate results, the impact on those “downstream” from the training. Why is this important? Because evaluating the training the day after it ends may not mean much. All that can assess is immediate satisfaction and the acquisition of some skills or concepts. More elusive is whether those skills will be applied later, and whether they will bring about real change. In the case of the farmers, real change might include finding sustainable water sources, increasing rice yields, or weaning themselves off pesticides. Downstream, the stakes can be high.

Driving Results: Intention Follows Attention

Capturing downstream results is not only useful as a metric for funders and program managers. A focus on results can also help impel participants toward learning and success. As a US Marine colonel observed in a recent meeting, what you inspect, you can expect. When the evaluation lens focuses on results, attention shifts from immediate activities to those results. And while activities are important, outcomes remain the holy grail of any program.

A Downstream Solution

So, given the imperative to evaluate downstream, how do you begin? Surprisingly, you start at the end. Here are the steps in the plan:

1. Start with the downstream results. Have a conversation with learners, preferably those downstream of the program. What challenges are they working on? What do they hope to achieve?

2. Identify downstream metrics. Involve the trainers, and preferably the downstream learners, in defining the metrics and the process. What conditions will constitute evidence of results?

3. Develop accessible tools. A ready solution is a survey administered via smartphone, using an online platform such as SurveyGizmo.

4. Administer later, but not too much later. Timing is essential. Too early, and the results might not yet be known. Too late, and the trainers may have lost valuable data.

5. Compile the results, whether quantitative, qualitative, or both; analyze the data; and report your findings on connections between the upstream and the downstream. (A minimal sketch of this step follows the list.)
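To make step 5 concrete, here is a minimal sketch of compiling and analyzing downstream survey data in Python with pandas. It assumes the survey platform can export responses as CSV; the file names, the cohort column, and the metrics (rice yield, practice adoption) are hypothetical placeholders, not details of the Cambodia program itself.

```python
# A sketch of step 5, assuming baseline (pre-training) and follow-up
# (downstream) survey responses exported as CSV. All column and file
# names below are hypothetical placeholders.
import pandas as pd

baseline = pd.read_csv("baseline_responses.csv")
followup = pd.read_csv("followup_responses.csv")

# Quantitative: change in a downstream metric (e.g., rice yield in
# tonnes per hectare), grouped by the cohort each trainer taught.
change = (
    followup.groupby("cohort")["rice_yield_t_ha"].mean()
    - baseline.groupby("cohort")["rice_yield_t_ha"].mean()
).rename("mean_change_t_ha")
print(change.sort_values(ascending=False))

# Qualitative made countable: share of respondents reporting that they
# adopted a practice taught upstream in week one.
adopted = followup["adopted_new_practice"].eq("yes").mean()
print(f"Share reporting adoption of a trained practice: {adopted:.0%}")
```

A comparison like this only gauges the association between the upstream training and downstream change; establishing how and why the change happened still depends on the conversations described in step 1.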

Is the process simple? Not really. But under most circumstances it will be feasible, and it will gauge and drive results.

Tell us what you think in the comment section below, and be sure to add your own tips for evaluating the impact of training and development downstream in your own organization.

To learn more about CCL’s work in Cambodia, check out these posts from APMAS Knowledge Network and CCL’s Leadership Beyond Boundaries blog.

About Michael Sikes

Michael Sikes is a Senior Evaluation Faculty in the Evaluation Center, which supports CCL and the field through the development of new knowledge, methods and approaches to the evaluation of leadership development. In this role, Mike works with CCL staff, clients, and external evaluators to identify organizational and leader needs, articulate program outcomes, and evaluate initiatives for improvement and impact. Mike’s evaluation work includes CCL’s custom initiatives, open-enrollment programs, executive coaching, new product development, and external evaluations of leadership development programs. Mike works extensively with Leadership Beyond Boundaries and visited Cambodia in 2012 to evaluate a Train-the-Trainers program.

3 Responses to Evaluating Downstream – 5 Ways to Measure Results

  1. eyal policar says:

    I believe there is a whopping difference between teaching leadership skills and teaching change management. For a farmer to change a habit or adopt new technology, what is needed is foremost technical support and financial opportunities. NOT that this is easy; however, it falls under traits training.
    To teach leadership skills such as transformational thinking or flexible adaptation demands a much wider multidisciplinary spectrum.
    It is not coincidental that most agro training falls under the category of learning by doing.
    This allows for predefined goals, and successful implementation can be measured, be it in the downstream model or any other model.
    Measuring the embedded leadership capabilities of a certain programme demands different criteria.

  2. K Shuler says:

    You make important points here. The only way to truly know if any training program is effective is to measure what is being put into action, ideally with several samplings at various intervals. In the case of a train-the-trainers program, or any other where the expectation is that the baton will pass first to the trainers and then to those they’ve trained (and maybe even to third- and fourth-generation recipients), the only way to know if it’s effective is to measure in those following generations. My strong belief is that it is only through third- or later-generation programs and evaluations that we know what is truly effective in leadership training.

    I would suggest adding a 6th point: create a feedback loop that incorporates the results of the evaluations not only into the selection and fine-tuning of the metrics but also, at periodic intervals, into the shaping of the training itself. I would also recommend that one of the samplings capture 2nd- and 3rd-generation trainers’ perspectives on being ‘bridges’ in the process.

    • Mike Sikes says:

      Thanks to both respondents for their comments, which I think are relevant and salient. Certainly the process of bringing about change is complex. This is why the evaluator’s charge is not only to measure “whether” but also to ferret out the equally important “how,” “when,” “under what circumstances,” and “why.”
