Evaluating your evaluations

Getting better at getting better

Approximate reading time: five minutes, 1032 words

Meeting and workshop evaluations are only useful if they help us get better at the work of meeting and learning.

I’ve long been disappointed by both the frequency and quality of evaluations, and by how poorly decision-makers often use the feedback gathered to inform their future event designs.

What follows are a few considerations to strengthen your evaluations so they gather more actionable insights for future events.

Revisit what you evaluate and why

Evaluations are assessments that usually address one or more of the following three categories:

1. Performance on metrics relevant for all (or most) meetings, workshops, or other gatherings

In my experience, most evaluation questions fall into this category and do not vary much from event to event. They typically assess the logistics for the event (venue, room set, food, AV, online platform, etc.); satisfaction with the speakers or facilitators; and the relevance of the content and conversations.

Evaluation questions in this category usually can be improved by making them more specific.  Too often they merely assess vague levels of satisfaction.

For example, event attendees are often asked to evaluate the food offerings: How would you rate the food at this event?  While an overall assessment provides a quick benchmark, learning that 41% of attendees rated the food as unsatisfactory doesn’t really tell you what needs to change.

Gathering more actionable intel about food (or any other event element) requires more specific assessment. For food you might ask for ratings on: variety, quantity/portions, offerings for different dietary needs, taste/quality, service options (banquet, buffet, boxed), etc.

And because food is essentially fuel to help maximize participant attention and engagement, I’d ask a question about how well we did on that metric as well.  If the food doesn’t fuel participant performance, I want to change it.

2. Performance on metrics tied to the unique objectives or desired outcomes for a specific meeting, workshop, or other gathering

Not assessing performance on the specific outcomes or objectives for an event is problematic, yet I rarely see this assessment done. Without this feedback we don’t really know how well we did on what we set out to accomplish, and that information is critical to gather.

If a meeting or workshop is meant to achieve A, B, and C, we simply must have evaluation questions related to each of those outcomes or objectives: Please rate how well this meeting achieved each of the following objectives …

You literally can cut and paste the event’s objectives or outcomes into this evaluation question.

3. Marketing research: needs, preferences, and other intel for future efforts

We don’t want to overstay our welcome when people are completing evaluations, but do consider asking a question or two to gather new intel.  The purpose of these questions is to collect info not reflected anywhere else in the evaluation.

A few possibilities include:

  • recommendations for future event locations or online platforms

  • speaker or facilitator suggestions

  • ideas for future meeting or workshop formats

  • a current need for which people would value more content

  • other sources people are turning to for their learning or content (competitive intel)


Be strategic about the evaluation timing

At least three possible assessment moments come to mind:

Real-time: how are we doing so far?

Useful for longer meetings, workshops, or conferences when real-time adjustments might be made to improve the experience.

Example: conducting a quick evaluation at the mid-point of a full-day event with two simple questions: (1) what is working well for you so far? (2) what changes might help us be even more successful in the second half?

Event ending (or shortly after): how did we do?

Evaluations seem to most commonly occur right as an event ends or shortly thereafter.

An evaluation right at the end has the advantage of leveraging people’s existing attention, but you probably can only ask a handful of questions.

An evaluation emailed immediately after the event allows for both more comprehensive questioning and more thoughtful responses, but it may get set aside or ignored, resulting in a lower response rate.

In many cases a hybrid can capture the advantages of both approaches while working around their respective weaknesses. Simply brainstorm all of the feedback you’d like to gather and then prioritize the questions.

Your end-of-event evaluation can contain the questions that matter most and for which you want the maximum response rate.

Your emailed evaluation shortly thereafter can include all of the additional questions for which feedback is sought.

The follow-up assessment

The most commonly missed assessment opportunity, particularly for learning experiences like workshops or conferences, is reaching out to people 30, 60, or 90 days after an event. Only by doing so can we learn what, if anything, from the event experience has led to actual mindset or behavioral change.

Asking a limited number of questions at one or more of these extended intervals allows you to gain insight into how participants have applied what they learned and how their assessment of the experience might have changed.

Four questions I often use at this evaluation interval are:

  1. What is the most significant result you've achieved from your _______* experience? (*insert webinar, workshop, conference, etc.)

  2. What is a post-event action item you still need to address, and how might we support you in doing so?

  3. What is a question, idea, or insight that has stayed top of mind, something you are still thinking about or are actively engaging with in your conversations and work?

  4. What is something that you really enjoyed or valued at the _____* but you’ve done little or nothing with since? (*insert webinar, workshop, conference, etc.)


Bottom Line?

Evaluations should drive action. Attending more carefully to the questions we ask, and to when we ask them, can better inform what programs or meetings we offer in the future, as well as how we design and facilitate them.


© Facilitate Better and Jeffrey Cufaude, 2021. All rights reserved.

To license this content for your site or electronic or print communications, email info@facilitatebetter.com or complete this form.