Simple-Best Evaluation for Collaborative Events

— Transforming Organizations, Revitalizing Communities and Developing Human Potential


Have you ever struggled to get solid, constructive feedback on your event? Collecting it seemed like too much effort, you weren’t sure how to use the data, and getting responses back was a pain because participants complained that your survey took too long. A simple, targeted evaluation tool that you customize can get you meaningful data to improve your events and engagements.

Event eval is difficult. Even the best practitioners base at least some design decisions on intuition. Disconnects between event sponsors, facilitators, and participants are common, and ambiguity about the purpose of the event also blurs the focus of your evaluation. The most effective way to address these concerns is to convene a design team [follow this link for a video & blog post on the subject] for your event.

Even when evaluation is synchronous with the event, there is a balancing act between distracting participants and getting the best eval participation: do you ask for real-time feedback during your event, or do you risk that participation will drop off post-event? Depending on your audience, responses to a post-event survey or questionnaire will drop off significantly, sometimes drastically.

But why do we evaluate? Your aim may be to adapt a specific process or activity, to drive general continuous improvement, to predict the effectiveness of future events, or to determine whether the purpose of your event has been achieved. Overall, event evaluation aims for more robust outcomes at the next event.

Robust means that your process yields impactful results even when only [xx]% of process requirements are met; the more robust the process, the lower that percentage of required ideal conditions. To get robust outcomes, start with robust, evidence-based collaborative event practice and tools, for example from the Change Library at nexus4change.com/library.

There are two competing demands on your evaluation: to [A] prove impact or to [B] improve for the future. To clarify what your evaluation needs to focus on, we use a theoretical basis of five levels of evaluation, developed from frameworks used in Training & Development and combining Kirkpatrick’s (1998) levels of evaluation with Phillips’ (1996) ROI model. The resulting five levels of evaluation are as follows (a short worked example of the Level 5 ROI calculation follows the pyramid):

[Figure: the five-step evaluation pyramid]

5.  ROI
4.  Outcomes/Results
3.  Behavior
2.  Learning
1.  Reaction/Satisfaction
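
Level 5 deserves a quick note: in Phillips’ model, ROI compares the monetary benefits attributed to the event with its fully loaded costs. Here is a minimal sketch of that calculation in Python; the dollar figures are hypothetical, purely for illustration:

```python
# Phillips-style ROI: net benefits over fully loaded costs, as a percentage.
# The dollar figures below are hypothetical, for illustration only.

benefits = 60_000   # monetary value attributed to the event's outcomes
costs = 40_000      # fully loaded event costs (venue, staff, materials, time)

roi_percent = (benefits - costs) / costs * 100
print(f"ROI: {roi_percent:.0f}%")   # -> ROI: 50%
```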

If you want to dig deeper, check out this article on Assessing Change by Steve Cady, Founder of NEXUS4change, et al., in ODJ Spring 2018.

To improve, your event eval will focus on levels 1-3:

1. Satisfaction
2. Learning
3. Behavior

To prove the impact of your efforts, your evaluation needs to focus on levels 3-5:

3. Behavior
4. Results/Outcomes
5. ROI

So, for ‘simple-best’ evaluation, it turns out that simple is best. As part of an evaluation framework focused on improving, start with Level 1 [Satisfaction] and Level 2 [Learning]. We suggest three basic elements for your evaluation tool:

To get meaningful, quantitative Level 1 data, ask these questions: 1. Are you satisfied? 2. Would you recommend [future events]? And, as the best single indicator of a participant’s satisfaction AND of your success with future events, ask whether they will come back: 3. Would you return to [our future events]? Optionally, asking participants to elaborate on their responses, on a voluntary basis, can give you good qualitative data.
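
To show how little analysis these three questions require, here is a minimal sketch, again in Python, that tallies hypothetical responses into simple percentages; the field names and the 1-5 rating scale are assumptions for illustration, not part of the tool itself:

```python
# Minimal sketch: tallying the three core Level 1 questions.
# Field names and the 1-5 scale are assumptions, not prescribed by the tool.

responses = [
    {"satisfied": 5, "recommend": 4, "return": 5},
    {"satisfied": 3, "recommend": 3, "return": 2},
    {"satisfied": 4, "recommend": 5, "return": 4},
]

def percent_favorable(responses, question, threshold=4):
    """Share of participants answering at or above `threshold` (e.g. 4+ on 1-5)."""
    favorable = sum(1 for r in responses if r[question] >= threshold)
    return 100 * favorable / len(responses)

for question in ("satisfied", "recommend", "return"):
    print(f"{question}: {percent_favorable(responses, question):.0f}% favorable")
```

Tracking the same three percentages from event to event gives you a simple trend line, which is where the “would you return?” question earns its keep.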

To get actionable, qualitative Level 1 data, ask participants: What should we START, STOP, and CONTINUE doing?

To get qualitative Level 2 data, ask: What was Most Helpful? And what was Least Helpful?

If you have developed and clearly stated a purpose for your event [you should!], a question that helps you understand whether that purpose was achieved yields meaningful Level 2 data.

At the Great Lakes Exchange and Design Session, our quarterly local two-part Friday + Saturday event, our evaluation also includes a few key questions on the food, the location, and our registration process, as well as an open-ended question that is great for getting more insight: Did you get what you needed?

For Level 1 data in virtual meetings, the polls and end-of-meeting surveys in Zoom and other video-conferencing platforms are also great tools.

Finally, a few tips to get the best possible participation rates: 

- Make eval completion an agenda item.
- Offer both online and paper+pencil completion!
- Keep it to one page.

So start simple, but include at least these three questions in a basic eval for your next event: 1. Are you satisfied? 2. Would you recommend [future events]? 3. Would you return?


Check out NEXUS4change’s webinar series of 30-minute, high-impact change tool talks. See our events page [www.NEXUS4change.com/events] for more on the power of Design Teams, the Change Formula, Collaborative Roadmaps, Appreciative Benchmarking, and more.