Stereotype Threat – why it matters

Recently I attended the National Center for Women & Information Technology (NCWIT) Summit on Women and IT: Practices and Ideas to Revolutionize Computing. (I wrote about my session presentation here – Tinkering and STEM – good for girls, good for all.)

The summit kicked off with a wonderful keynote by Joshua Aronson, an Associate Professor of Psychology and Education at New York University (NYU). Aronson studies stereotypes, self-esteem, motivation, and attitudes. He presented remarkable research showing that when people are reminded of their race or gender in a testing situation where a negative stereotype applies, they do worse on the test.

This is called Stereotype Threat – which he defined as being at risk of confirming, as self-characteristic, a negative stereotype about one’s group. The threat causes anxiety and all kinds of measurable changes – from brain activity to heart rate – and it also greatly depresses test results.

Simply putting a box to mark gender at the front of a math test, for example, significantly changed test scores – for both men and women. Compared to a test where gender was not asked for at all, asking for gender at the beginning of the test raised boys’ scores and lowered girls’ scores. Asking at the end did the opposite: boys’ scores went down, girls’ scores went up.

Seriously, that was the only difference – there was no mention of the stereotype (boys are good at math, girls are not). The only change was the position of the request for gender identification.

The implication – that such a casual, seemingly inconsequential reminder of a possible stereotype had a HUGE impact – is that there is much we do not know about testing.

It implies that assessing human knowledge is not that well understood. It also implies that much of what we think we are testing may be a false reflection. It may have a lot more to do with the context of the individual and the environment than a true assessment of learning.

And it’s not just about knowledge either. He shared a study where white college students increased their jump height over several attempts when tested by a white test monitor – but when a black test monitor gave the same tests, the stereotype “white men can’t jump” became real. The racial/gender makeup of the classroom, the test giver, or even the environmental clues can change everything.

This wasn’t just one research study, either. Aronson showed slide after slide of research demonstrating that perceptions matter, and matter a lot. Some of this research is on his faculty bio page, lots more in the links below, and a good intro to his work is an ASCD Educational Leadership article – The Threat of Stereotype.

This article also has some great suggestions for reducing stereotype threat.

  • Talk about stereotype threat with students. This reduces anxiety that students may feel by acknowledging they are not alone in worrying about these things.
  • Teach students that intelligence and ability are not inborn and that they can work to do better. They are not limited by stereotypes that restrict what they can do. Talk openly about stereotypes and show that they aren’t true.
  • Build a cooperative classroom environment, not a competitive one. “…cooperative classroom structures in which students work interdependently typically produce immediate and dramatic gains in minority students’ grades, test scores, and engagement because such environments reduce competition, distrust, and stereotyping among students.” – The Threat of Stereotype

Further reading:

Sylvia

PS The best thing he said in the keynote was something like, “The number one predictor of academic success is a student’s answer to the question – does your teacher like you?” Would love to find a real quote!

Buzzword alert. What does formative assessment really mean?

Education Week: Expert Issues Warning on Formative-Assessment Uses.

Education Week has an excellent (and short!) article about how formative assessment is not a well-understood concept. I seem to be hearing the words “formative assessment” with greater frequency, perhaps moving into the “buzzword” category. But what does it really mean?

“Margaret Heritage, the assistant director for professional development at the National Center for Research on Evaluation, Standards and Student Testing, or CRESST, at the University of California, Los Angeles, appeared on a panel here last week to discuss a new paper intended as a reminder of what formative assessment should be.”

“Referring to a body of work that sought to define formative assessment during the past two decades, including the influential 1998 article, “Inside the Black Box,” by Paul Black and Dylan Wiliam, she said formative assessment is not a series of quizzes or a “more frequent, finer-grained” interim assessment, but a continuous process embedded in adults’ teaching and students’ learning.” (emphasis mine)

Lately, I’ve been hearing summative assessment as if it means “the test at the end” and formative assessment is testing leading up to that. This is clearly not the case. Take a trip around vendor booths at any educational conference and you will see that formative assessment is being sold as mini-quizzes that are supposed to give the teacher “feedback” about how the student is doing so “adjustments” can be made before the final test. This is a terrible corruption of the meaning of formative assessment and strips it of its power.

“Teachers use formative assessment to guide instruction when they clearly define what students should know, periodically gauge their understanding, and give them descriptive feedback—not simply a test score or a grade—to help them reach those goals, Ms. Heritage said. Students engage in the process by understanding how their work must evolve and developing self-assessment and peer-assessment strategies to help them get there, she said.”

Turning formative assessment into more little tests is a deceit aimed at selling more testing products and making them easier to invent, administer and catalog.

To do formative assessment, teachers have to talk to students and look at student work. They have to have a relationship with the student so that the feedback is meaningful and useful. With good professional development and a supportive school culture, teachers can learn to do formative assessment. It doesn’t take more time to do it right.

What takes time is testing that focuses on catching students at what they DON’T know for the purpose of collecting more data points. Those gaps in understanding could have been caught in the context of learning. Missing those teachable moments is a lost opportunity that can’t be regained.

“Ms. Heritage’s comments echo others’ concerns that the meaning of formative assessment has been hijacked as the standards movement has pressed states into large-scale testing systems. The result, Ms. Heritage said, is a “paradigm of measurement” instead of one of learning.”

“A teacher quoted at the end of Ms. Heritage’s paper captures the essence of the paradigm shift Ms. Heritage has in mind.

“I used to do a lot of explaining, but now I do a lot of questioning,” said the teacher. “I used to do a lot of talking, but now I do a lot of listening. I used to think about teaching the curriculum, but now I think about teaching the student.””

Doing real formative assessment is not impossible, and shouldn’t be dismissed as “too difficult” or “too expensive.”

What’s really expensive is to do cheap things that don’t work, waste time, and discourage student/teacher relationships.

Sylvia

New – Technology literacy whitepaper

Today we are happy to announce the release of a new whitepaper written by Jonathan D. Becker, J.D., Ph.D., Associate Professor of Educational Leadership at Virginia Commonwealth University, with Cherise A. Hodge, M.Ed., and Mary W. Sepelyak, M.Ed. Dr. Becker is an expert researcher in the achievement and equity effects of educational technology and curriculum development.

Assessing Technology Literacy: The Case for an Authentic, Project-Based Learning Approach (PDF)

This whitepaper takes a comprehensive look at the research, policies, and practices of technology literacy in K-12 settings in the United States. It builds a research-based case for the central importance of “doing” as part of technology literacy – meaning more than just being able to answer canned questions on a test. It also explores current approaches to developing meaningful assessment of student technology literacy at the national, state, and local levels.

Where “doing” is central to students gaining technological literacy, traditional assessments will not work; technological literacy must be assessed in ways that are more authentic.

Building on this definition, the whitepaper draws on project-based learning and constructivism – both of which hold “doing” as central to learning – to argue that project-based assessment is the only authentic way to assess technology literacy.

True project-based assessment is the only way to properly assess technological literacy.

Finally, it examines our TechYES Student Technology Literacy Certification program in this light.

A review of existing technology literacy models and assessments shows that the TechYES technology certification program, developed and implemented by the Generation YES Corporation using research-based practices, gives educators a way to engage students in authentic, project-based learning activities that reflect essential digital literacies. TechYES includes an excellent, authentic, project-based method for assessing student technology literacy and helps state and local education agencies satisfy the Title II, Part D expectations for technology literacy by the eighth grade.

The whitepaper is available from our Generation YES Free Resources page, or can be downloaded as a PDF from this link.

Sylvia

PS – Share this important research with your PLN!

NAEP 2014 Technology and Engineering Literacy Assessment

For the past year, I’ve been on the National Assessment of Educational Progress (NAEP) Technology Literacy Assessment planning committee. (See my post NAEP Technology Assessment 2012.) The first phase of writing the framework (which is where my committee contributed) is now complete. At the last meeting, we recommended to the NAEP Governing Board that the name be changed to better align the assessment with the common vocabulary and conventions used in K-12.

Simply put, calling the assessment “Technology Literacy” didn’t really capture the breadth of the planned assessment, which will cover technology as anything in the “designed world.” That term includes engineering principles, design and systems in a wide variety of contexts. It goes well beyond the much narrower K-12 use of the term “technology literacy.” In K-12 schools, districts, and state departments of education, “technology literacy” typically means the knowledge and ability to use computers and technology with fluency, efficacy, and safety in schools.

This post outlined some of the issues inherent in the previous name, “technology literacy” – THE Journal: NAEP Gets It One-Third Right.

But now, the name has been changed to the NAEP 2014 Technology and Engineering Literacy Assessment. I think this aligns better with both the scope of the assessment and the conventions of K-12 schools across the country.

One other change: the date has been pushed back to 2014, due to the time needed to develop computer-based items for this assessment. For the first time, this assessment will be 100% computer based.

You can take a look at the framework at www.naeptech2012.org.

Eventually this will move to a new domain, www.naeptech.org, but this is not up yet (as of 3/10/10).

As someone who is both an engineer and works in technology education in schools, I believe this is a good compromise. I think it will help people better understand the results of this assessment as we move forward. And in the long run, I hope it will spur the design of innovative and diverse learning opportunities for students that combine engineering, IT, programming, math, science, collaboration, communication and many, many different types of technology.

Sylvia

Tinkering and the grades question

Tinkering is still at the top of my mind these days, even though I haven’t had much time to blog about it (besides this). But often when things are on your mind, everything you see seems to relate. If you think about buying a yellow car, all of a sudden the world seems full of yellow cars.

So reading this Alfie Kohn News and Comments article about grades made me think about tinkering again. Because often when we talk about doing something different in schools, we hear, “but how will that fit into the current classroom?” And that means everything from 42 minute periods to test prep to grades.

But tinkering is one of those things that doesn’t fit in neatly. It takes time, doesn’t result in neat projects that work with canned rubrics, and might not have any impact on test scores. But should that matter? Can’t we help kids at least a little by making things more like tinkering and less contrived and pre-planned?

Then this hit me.

“As for the research studies: Collectively, they make it clear that students who are graded tend to differ from those who aren’t in three basic ways. They’re more likely to lose interest in the learning itself. They’re more likely to prefer the easiest possible task. And they’re more likely to think in a superficial fashion as well as to forget what they were taught.” – Alfie Kohn

Interest, challenge-seeking, and deep thinking are exactly what kids need in order to tinker. And grades squash them.

Maybe we are asking the wrong questions. Maybe implementing “some tinkering” where kids are eventually graded, no matter how authentically, is a contradiction. Maybe even counterproductive if it confuses kids. Is it even worth doing?

THE Journal: NAEP Gets It One-Third Right

Today THE Journal editor Geoff Fletcher published an editorial, NAEP Gets It One-Third Right, which opens, “WATCH OUT, tech directors. A train wreck is coming your way and you’re sure to receive some collateral damage.” (Read the rest…)

I’m not going to comment on this right now and here’s why. For the past year, I’ve been on the National Assessment of Educational Progress (NAEP) Technology Literacy Assessment planning committee. (See my post NAEP Technology Assessment 2012.) The first phase of writing the framework (which is where my committee contributed) is almost complete. Our final meeting will be next week. Now others will take the framework and turn it into an assessment.

At the first meeting, I asked about blogging along the way, without revealing personal things or anything still in draft form. I was told that this would be detrimental to the process. After some discussion, I agreed not to do it. Although I felt (and still feel) that openness is the best policy, I also felt that this “was not the hill to die on.”

Last month, a discussion draft of the framework was released for public comment. This Ed Week article contains a link to the draft.

Like I said, I’m not going to comment on the draft framework or the THE Journal editorial right now. I made a promise to keep my thoughts and comments within the committee and I intend to keep that promise. However, when I can, I’ll share my thoughts more publicly.

Your comments are welcome.

Sylvia

NAEP Technology Assessment 2012

I’ve just found out I’m going to be part of the National Assessment of Educational Progress (NAEP) Technology Assessment development (see the eSchool News story: On the way: Nation’s first tech-literacy exam – Tech literacy to be added to Nation’s Report Card beginning in 2012).

NAEP’s recently developed assessments for science and math have generally been well received, and I’m looking forward to being part of the effort to create something similar for technology literacy. Of course I’m curious to see how this will play out, since technology literacy is not a subject or a discipline like math or science.

I’m hoping that part of the solution will be to increase opportunities for students to study real engineering, design and programming in K-12. My background as an electrical engineer is no doubt part of that hope.

There are two committees working on these frameworks, a steering committee and a planning committee. I’m on the planning committee. The first meeting is next week in Washington, DC, and I’ll know a lot more after that. One question I will definitely ask is how transparent the process will be. The last NAEP assessment planning was done before blogs became as ubiquitous as they are now. The eSchool News story says there will be public input and hearings, and an extensive review process. Let’s hope this extends out as far as the net reaches.

Update – I’ve been asked to remove the names of the committee members for now…

Stay tuned! – Sylvia

Back to New York and NYSCATE

Well, it seems like I just got home from the east coast, and I’m off again!

This time I’m headed for the New York State education technology conference NYSCATE in Rochester, NY November 23-25, 2008. I’m looking forward to seeing old friends and meeting new ones, most likely at Dinosaur BBQ.

If you are going to NYSCATE, be sure to check out these sessions:

NYSSTL – Technology Leadership for the 21st Century
Sunday, 1:45PM Stacy Ward
Learn how the HFM and WSWHE BOCES have created the New York State Student Technology Leaders (NYSSTL) Club in 30 middle schools. Students help their teachers learn to use technology and help their classmates prove their tech literacy, creating a community of 21st century learning in our schools.

Where Teachers Learn, Where Teachers Teach
Monday, 10:45AM Sylvia Martinez
For many teachers, technology professional development happens outside the classroom and never crosses the doorstep into the classroom. This session will explore two models of professional development that cross that barrier: classroom embedded and student-led professional development.

Little Green Monsters: The XO and Its Implication For Education
Tuesday 10:30AM Brian C. Smith, Sylvia Martinez, Dr. Gary Stager
The XO low cost laptop was designed to revolutionize education in the developing world. The panel will discuss the lessons we can gain from this learning initiative and the implications for the future of education. We will also explore why such a simple idea has created such controversy.

By the way, I’m happy to have someone record, live blog, or ustream my sessions IF you can come and do it. It’s just too hard to do it AND present.

After that, it’s back to New York City for a family/friends Thanksgiving, and then some workshops in Brooklyn. More about that later!

Sylvia

Subscribe to the Generation YES Blog