Part of what makes data collection and evaluation so challenging is that it’s not always clear what the most useful data to capture actually is. Last time, I talked about how an organization might determine the most useful data for evaluating volunteer performance. It’s not as easy as it seems.
The most common approach to tracking volunteers is to simply count the total number. If I told you that my program recruits thousands of volunteers each year, you could reasonably conclude a few things: we do a good job recruiting volunteers, and our mission is compelling. But if you want to know something about volunteer performance or program efficiency, that number doesn’t offer much insight. Remember, volunteers require training, support, and oversight. Despite the free labor, an organization is still making an investment in time (read: money) to make sure they do their jobs well.
Instead, I might measure volunteer performance relative to the cost of managing them. For example, if my volunteers make widgets, I would calculate the ratio of the cost to train and support each volunteer to the number of widgets they’re able to make.
If that ratio seemed too high, I might consider reducing the number of volunteers I use. This way I can use my limited resources to provide each one with more support, in the hopes that they get better at their jobs and make more widgets. Over time, I’d discover an optimal ratio, and have a better sense of the strategic investments necessary to grow the organization’s impact.
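The cost-per-widget comparison above can be sketched as a quick calculation. All of the figures below are hypothetical placeholders chosen to illustrate the trade-off, not real program data:

```python
# Sketch of the cost-per-widget ratio described above.
# All dollar amounts and widget counts are hypothetical.

def cost_per_widget(training_cost, support_cost, widgets_made):
    """Total cost to train and support one volunteer, divided by their output."""
    return (training_cost + support_cost) / widgets_made

# Scenario A: many volunteers, each with thin support and lower output.
many = cost_per_widget(training_cost=50, support_cost=100, widgets_made=30)

# Scenario B: fewer volunteers, each with more support and higher output.
fewer = cost_per_widget(training_cost=50, support_cost=200, widgets_made=80)

print(f"Many volunteers:  ${many:.2f} per widget")   # $5.00 per widget
print(f"Fewer volunteers: ${fewer:.2f} per widget")  # $3.12 per widget
```

In this made-up example, spending more per volunteer actually lowers the cost per widget, which is exactly the kind of finding that would justify the "fewer, better-supported volunteers" strategy.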
Ultimately, the data to use depends on the question you’re trying to answer and what you’re trying to achieve. But we should also make sure we know what the data is actually able to say.