American fraud: University Assessment

I used to teach at a major university. I don’t anymore. I was once tenured faculty; I’m not anymore, for health reasons. I miss teaching; I miss interacting with sharp kids. And I miss good colleagues who became good friends. And I find that old habits die hard. Nothing I do now is governed by the academic calendar, but I am still aware of that calendar, and I do think ‘today, I would be meeting new students; yesterday would have been filled with department meetings.’ And that’s the part of university life that I don’t miss at all. Meetings, and especially meetings about university assessment.

‘Assessment’ essentially describes the way professors determine whether or not students are learning anything in their classes. At least, that’s the idea; describing it so reasonably involves slathering huge quantities of lipstick on some fabulously ugly pigs. We’re supposed to decide what ‘learning outcomes’ we expect from our students. We then devise ‘assessment instruments’ (or ‘rubrics’) and measure outcomes. And then adjust our teaching in response to those findings. Numbers are good. If you can prove that ‘learning outcomes’ are being achieved, and can prove it statistically, that’s the holy grail. The powers that be want evidence-based educational improvement.

The Chronicle of Higher Education recently published a terrific article by Erik Gilbert. In case you don’t want to bother with the link, he says that his kids are heading off to college, and as they prepared to make that important decision, he realized that he didn’t care about the colleges’ assessment programs:

My lack of curiosity about assessment when making an important choice about my children’s education probably surprises no one, but it should. It’s unsurprising in that no one, higher-ed insider or not, ever seems to worry about this when choosing a college. No admissions officer ever touted his institution’s assessment results. No parent ever exclaimed, “Suzy just got into Prestigious College X. I hear they are just nailing their student learning outcomes!”

Gilbert goes on to describe his own experience with assessment:

Every year on my annual productivity report I write a mandatory and usually somewhat contrived narrative describing the ways in which I have changed my courses and teaching in response to the assessment data from the previous year. As an administrator, I sit on the Learning Outcomes Assessment Committee. . . .

So, what does it say that I looked at climbing walls, not assessments, when making a significant and expensive decision about my sons’ educations? It says that I, like virtually everyone else, don’t think that good assessment makes good universities and well-educated students or that bad assessment makes bad universities and poorly educated students. In fact, I am starting to wonder if assessment may actually do more harm than good.

Bingo.

The dirty little secret of higher education is that assessment, which involves a huge expenditure of time and resources, is essentially worthless, if not actually harmful. The key, in fact, is the phrase ‘mandatory and usually somewhat contrived narrative.’ Because he’s trying to be fair and balanced and reasonable, he understates: the narrative of assessment is essentially fraudulent.

I admit that my views on this subject are extreme. When assessment was presented to us in various department and college faculty meetings, I made an obnoxious pest of myself by asking, repeatedly, why we were wasting time with this ridiculous nonsense. That phrase, ‘ridiculous nonsense,’ led to a meeting with a Higher Administrator. Who admitted that, of course, assessment was worthless. But we had to do it, so stop rocking the boat. Which I did. To the dismay of many, many colleagues, who privately told me that my anti-assessment tirades were both entertaining and spot-on, so please don’t stop.

What never happened, though, not once, not ever, was any attempt to sell the program on its merits. If, for example, some trusted senior colleagues had spoken up, talked about how valuable assessment had been to them, how much their teaching had improved; if any respected figure in higher education had ever once offered a testimonial, I think we might have been better disposed towards it. Never happened. Instead, it was the worst kind of top-down management: you are doing this, period, so shut up.

Anyway. I taught playwriting. Here’s how you teach playwriting. You have students write plays. They read them aloud in class, and you lead a discussion, offering feedback. The students are asked to rewrite their plays. A couple of weeks later, you read the rewrite and offer more feedback. Repeat as necessary. Then, if possible, produce the play.

“But how can you know, how can you demonstrate, that the plays have actually improved, that the feedback really did help?” This from a senior assessment administrator. “What rubric can you devise to assess your method?” And this is what she suggested: contact all the other college playwriting teachers in the state. Send them my students’ plays; get them to send me their students’ plays in return. Come up with a form, breaking plays down into different categories: characters, structure, dialogue, stagecraft/theatricality. Assign points to each play in each category. Everyone reads everyone’s students’ plays, and assesses them according to these criteria. If a play’s first draft scores a 30, and the rewrite scores a 45, then the feedback you offered was helpful.

I came out of that meeting greatly discouraged. Essentially, I would need to read four times as many student plays as I was already reading, and so would my friends at other schools. We were talking about a prodigious amount of work, all to prove what? That plays improve when they’re rewritten?

Then it occurred to me that I had two alternatives. I could do all that: send bundles of plays off to colleagues at four other universities, do all that reading, and assign all those numbers. Or I could just take ten minutes some afternoon and make up a bunch of numbers.

Guess which one I did.

Everyone did. I had another colleague who taught a beginning theory class. The students had units in which they learned about various kinds of critical theory, and how to apply those theories to play texts: feminist theory, post-colonialism, deconstruction, new historicism, and so on. It was a terrific class, wonderfully taught by an energetic and imaginative colleague. The main assessment tool was a series of critical essays the students were expected to write at the end of each unit, applying the theory to the assigned play. Anyway, I was asked to be one of the assessors for that class, reading a ton of essays and assigning points in various categories: clear thesis, strong use of evidence, coherent argument, etc.

Here’s what we learned, and could prove (with numbers!). Some students were really into theory, and others much less so. Some students wrote really well; others didn’t, particularly. If a student didn’t like theory very much and didn’t write very well, she could still work hard and do well in the class; this professor gave great feedback on the written work, with room for students to redraft.

So that’s what we learned. Obvious stuff that we already knew. Also, we learned that the colleague teaching the class was really good at it. We knew that too. The next semester, I’ll admit, we kind of blew off all that reading. Making up numbers was way easier.

To be fair, the Chronicle also allowed someone to rebut Gilbert’s article, and Joan Hawthorne’s response is passionate and well written. It’s an administrator’s response, and it describes the heady early years of assessment, a time when dinosaurs roamed the earth, insisting that what professors professed was more important than what students learned. I don’t buy it. The straw man she sets ablaze is certainly flammable, but no, there never was a time when professors didn’t care if students learned the material in their classes. I can say that with some confidence; my father, grandmother, and aunt were all university professors.

At this point, I think the burden of proof is on assessment, not on its detractors. If assessment is so terrific, why have no faculty, anywhere, defended it? Why isn’t the literature replete with anecdotal evidence, with ‘I thought I was a good teacher, but assessment opened my eyes’ testimonials? Why do I have this sneaking suspicion that administrators and compliance officers are keeping assessment’s desiccated remains breathing because doing so justifies their bloated salaries and ever-expanding numbers? Why do I see this as analogous to primary education, with its preposterous proliferation of standardized tests for second-graders?

I will say this: many faculty fake the numbers. We learn the assessment jargon, and we pretend that assessment matters and that we’re doing it properly. I did, and so did every colleague with whom I interacted across campus. Assessment is just the latest edu-babble fad, super-attenuated but essentially worthless. It does no good, and never has, because it doesn’t actually serve the students. It is–here’s that phrase again–ridiculous nonsense. Time for it to go bye-bye.

11 thoughts on “American fraud: University Assessment”

  1. queuno

    I’m really glad one of my BYU majors was in a field where the best assessment was (a) did the code compile? (b) did it produce the desired output? (c) did I meet some stylistic criteria? and ultimately, (d) did I get a job upon graduation? For my other major, we had a final conversation assessment before graduation to determine if the student could speak the language.

    Simple rubrics.

  2. juliathepoet

    As someone who didn’t ever consider BYU, I can state emphatically that my lack of desire to go there was due to the faking of assessment criteria.

    Yeah right!

  3. Julie Saunders

    So those assessments from Playwriting class that I definitely knew were happening and cared a lot about were faked?!? I feel so betrayed.

  4. CameronH

    This is my first year at BYU, and my first full-time faculty position. A good third of the way-too-long faculty meeting this week was dedicated to assessment. I have yet to even begin to understand the issues you’re complaining about, and already I hate it.

  5. Nikki

    You realize that without assessment, BYU (or any other university) wouldn’t be accredited. Without accreditation you don’t have a college/university. Without the university you would have never had a job. Do you know how much your little assessment fraud could cost BYU? If someone from the accreditation organization read this, it could still cost BYU. What you’ve proven here is not that assessment is a joke. You’ve proven that BYU apparently has a character problem and needs to find a better way to weed out the dishonest. They should also probably explain to the faculty why assessment is needed. They are apparently completely ignorant.

    1. admin (post author)

      Nonsense. My moral obligation was to my students. Assessment harms the ability of faculty to teach effectively. I engaged in a harmless act of principled civil disobedience. As did essentially every other good teacher I knew.
      As for accreditation, I met with university accreditors. They were fully aware of what a joke assessment is. As long as we paid it sufficient lip service, they could not have cared less.
      It’s time to end this fraud forever.

  6. Randy

    The problem is not BYU—it’s the whole assessment culture that creates this double-minded hoax. Powers that Be expect results; they want them reduced to numbers; colleges construct a rubric that tries to describe what they already do; they do it, and generate good numbers; Powers that Be can tell constituencies, “We have high numbers!”

  7. Darise

    I currently have a VP of Instruction whose favorite phrase is: there are no opinions, only data. Well, bar graphs may be colorful and easy to interpret, but they say nothing about student learning, particularly in the Fine and Performing Arts. At graduation, I was seized upon and bear-hugged by a beautiful young woman in her cap and gown who told me, “I couldn’t have done this without you.” I took a student to a touring company musical, and we talked about it all the way home – staging, directing, set, performances, and musicals vs. straight plays in general, with the student leading the discussion. Stick THAT in a pie chart!!
