Measuring Making

One of the most common questions people ask me is “How do we measure the success of our maker program?” We cover this in our book, Invent To Learn: Making, Tinkering, and Engineering in the Classroom, but I think I can offer more detail here.

This is different from assessing student learning in specific subjects. I touched on assessing maker projects in this blog post and hope to say more about that soon. What I’m going to cover in this post is how to show that your program as a whole is a success.

First of all, you need to define “success” – and this is more difficult than it looks! Many maker education initiatives aim to go beyond test scores and grades into areas that are harder to quantify. You may be interested in increasing student empowerment, self-efficacy, interest in STEM, attitudes, or problem-solving skills. So how do you do that?

Measuring affective changes in students is possible. Lots of people assume these kinds of things can’t be measured or quantified, but they can. I believe it’s best to approach it both qualitatively and quantitatively.

Quantitative evaluation can be done with validated instruments and surveys you may be able to find and reuse. You may have to do a bit of research to narrow down exactly what you want to measure. For example, if you are looking for improvements in attitude, a quick Google search turned up this and this. (I’m not recommending these; you need to find ones that best match your goals.) There have been many recent surveys about youth attitudes toward technology, STEM, and school in general. I would also look for “self-efficacy” surveys, and for surveys your district or state may already be using that ask students about their attitudes toward school, interest in STEM, and so on.

Why bother doing this? If you use the same survey (or even just a few of its questions) that others use, you can compare your results with theirs. It’s powerful to be able to say, “National data says x% of students in grade 8 are interested in STEM careers, but in our school, that figure has risen from x% to y% in the year since we implemented our maker program.”

However, I think it is even more powerful to create your own data. Ask people (parents, students, teachers, administrators) what they think about any program you run, and use Likert scales to turn their answers into data. Do pre/post surveys. Don’t be afraid to ask questions like “How do you feel your capacity to solve problems has changed?” or “Have you seen an increase in your child’s interest in science?” Collect the data you need to tell the story you want to tell.
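If you want a sense of how simple this analysis can be, here is a minimal sketch of summarizing pre/post Likert responses. All the numbers, the survey item, and the “agree” threshold are made up for illustration; it only shows the kind of before/after comparison described above, not a validated method.

```python
from statistics import mean

# Hypothetical pre/post responses to one survey item on a 5-point
# Likert scale (1 = strongly disagree ... 5 = strongly agree).
# These numbers are invented purely to illustrate the calculation.
pre = [2, 3, 3, 2, 4, 3, 2, 3, 3, 2]
post = [4, 4, 3, 3, 5, 4, 3, 4, 4, 3]

def likert_summary(responses, agree_threshold=4):
    """Return the mean score and the share of respondents who agree
    (i.e., answered at or above the threshold)."""
    share_agree = sum(r >= agree_threshold for r in responses) / len(responses)
    return mean(responses), share_agree

pre_mean, pre_agree = likert_summary(pre)
post_mean, post_agree = likert_summary(post)

print(f"Mean score: {pre_mean:.1f} -> {post_mean:.1f}")
print(f"% agreeing: {pre_agree:.0%} -> {post_agree:.0%}")
```

With these made-up numbers, the mean rises from 2.7 to 3.7 and the share of students agreeing rises from 10% to 60% – exactly the kind of before/after statement that makes a program’s story concrete.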

Finally, make sure you are asking your participants and stakeholders to show and tell you what success looks like. Capture your stakeholders (all of them, especially students) on video as much as possible. Ask the same questions over time and you will build a compelling, powerful case. Take photos, videos, and screenshots not just of finished projects, but of the process. Combine quantitative data with documentation of projects, personal stories, anecdotes, and other evidence of success. Together these will build your case better than data alone or stories alone.

But you must start NOW! Don’t wait to collect data, run surveys, and shoot video. Decide NOW what your picture of success looks like and start collecting the evidence. This blog post covers a workshop process that will help you decide what to ask and how to create those kinds of data stories.

With data, video, photos, events, and anecdotes, you can paint a complete and compelling picture of the success of your maker education initiative.

One thought on “Measuring Making”

  1. I am opposed to “grading” maker ed, but what you are suggesting here is aligned with improving it, and assessing its impact and showing its worth. I am perfectly okay with that!
