At the end of every semester there is an important ritual that I participate in. In my department, this ritual is initiated by Margaret, our indispensable Assistant to the Chair. She sends out a message requesting that all untenured professors schedule a ten-minute period in each of their classes during which students can complete course evaluations. I always try to push this date as late as I can, because I want my students to have experienced as much of my class as possible; it would make the most sense from an accuracy standpoint to have students do these evaluations after the class is over, but practically it is impossible to track down students once the semester is done.
Generally course evaluations are administered by one of our work study students, which is a little weird given that their peers will be completing the evaluations. Before the work study student arrives with envelope in hand, I always give the same short speech to my students. I tell them that course evaluations are important for a variety of reasons. First, they are about the only institutionalized means by which students can provide feedback on the education they are receiving. Second, they serve as an important means of assessing the quality of their instructors. Third, they provide invaluable information on how courses can be improved, so they represent a means of improving Pratt for future generations (I point out that although there is nothing that can be done to improve the course they have already taken, a lot of the things that they liked about their courses probably were shaped by past students who took the time to provide meaningful feedback). After explaining the importance of the process, I point out all the ways in which the course evaluation process is completely anonymous, which I hope will empower students to be completely honest on these forms. After giving this short speech (sermon?) I leave and do not come back until the work study student gives me the signal that all students are done completing the evaluations.
Once the course evaluations are collected, they go back to our department where another work study student (?!) will enter each of them into an Excel spreadsheet. There are numerical ratings as well as open-ended questions, so the spreadsheet fills up with not just numbers but also words. Each student’s answers are labeled anonymously (student 1, student 2, etc.) so that it is possible to reconstruct what a particular student had to say but not who that student was. It is important that the spreadsheet is the final repository for the evaluations, because it completely anonymizes the data; the forms themselves contain handwriting, and in many of my classes I see enough student handwriting to identify students by their written text. I believe that both the actual forms and the aggregated spreadsheet then go to Human Resources, where they are retained for important decisions (more on that later).
I also get a copy of these spreadsheets, one per class section, about a month or two after the semester is over. By this time any personal impressions I have of particular classes are fading, but I still open each Excel file with a mix of apprehension and anticipation. No doubt about it: being evaluated is challenging. I try to look at the evaluations once and then let their emotional impact — good or bad — dissipate for a few days. Then, I set to the task of trying to make sense of what students have written. I make a sincere effort to construct meaning out of both my ratings and the comments that students have made. I am looking for trends: did the majority of students have the same complaint? Are there things that students really like that I need to make sure to retain? Are there things that most or many students feel are substandard, and do their misgivings about these elements of the course suggest a change that might make the course better? I try to answer these questions.
The ratings have a nice quantitative quality, and I usually can make some sense out of what students have scored for different categories. When there are lower ratings, they generally have to do with workload (too much) and the fairness with which I grade their work (too rigorous). I have tried to react to these ratings, but it is hard to judge how much they indicate a shortcoming of my courses; after all, most students would prefer less work and easier grading.
Beyond the ratings, I also get comments. Some are laudatory. Some are comical. Some are nonsensical. Some are critical. Of this scattered data I need to make some sense, so again I am looking for trends. But there also can be insight in a particular student comment, so I look for those gems that shed unprecedented light on my course.
After thoroughly reviewing these evaluations, I create a set of notes for improving each course the next time it is offered. In some cases student evaluations push me to offer a course more often, either because it is already well-appreciated or because I see new potential to improve it based on student feedback. In some cases student evaluations push me to shelve a course, because it is not accomplishing what I or my students expect it to.
Once I review my course evaluations, there is only one other time that I know that they are used: for reappointment, promotion, and tenure. Without going deeply into the process, at Pratt full-time professors periodically must be reappointed until they are promoted and earn tenure. This sounds routine and bureaucratic — and it is — but it is also very important. The process by which faculty are reviewed is critical to retaining excellent faculty and getting rid of those who are not serving our students. Teaching is not the only criterion on which faculty are judged, but at least in my department it is the most important one; we are, after all, a teaching department in a teaching institution. I have been on both sides of this review process, and course evaluations are taken very seriously. It is in some ways too bad that students do not get a window into the great importance of their feedback on the courses they take.
This fall I will be up for promotion and tenure. If I succeed, I will become an Associate Professor and have job security for the rest of my career at Pratt. If I fail, I will have one more year of employment and then will have to look elsewhere for work. Course evaluations matter a lot for me right now.
But if I do get promoted and tenured, what purpose will future course evaluations serve? In my department we have the practice of making course evaluations optional for tenured faculty; I will certainly keep requesting them, but under current policies I do not have to. And even if I do, the course evaluations will have reduced influence: if they were really atrocious for several semesters they could cause me to lose my job, and if they continued to be good they might aid a bid to be promoted to Full Professor.
One thing is clear about course evaluations: once they are completed by students, they never directly serve students again. This fact bothers me: it is great that students indirectly benefit from past evaluations (if their professors use these evaluations to improve courses), but shouldn’t they also benefit more directly from these evaluations? Shouldn’t student course choice be in part driven by what past students have had to say about their courses?
That is the premise of Rate My Professors, the (in)famous site where students can numerically rate and leave comments about their professors and the courses they offer. In many ways Rate My Professors is easy to dismiss as juvenile: two of the four major rating criteria are “easiness” (as opposed to “difficulty” or “rigor”?) and “hotness” (yes, literally a measure of whether a professor is easy on the eyes or — I suppose — ears). But the endeavor of Rate My Professors has an underlying ambition that I appreciate: it is trying to democratize the process of evaluating professors. Interestingly there does not seem to be much of a culture of using Rate My Professors at Pratt: after twelve semesters of teaching nine credits per semester, I have only nine ratings. A closer look at my profile on Rate My Professors also points out another big problem: it is not just that few students use Rate My Professors, it is that particular kinds of students are attracted to it. Generally I see either very laudatory or very critical/snarky reviews, but not much in the middle. And this makes sense: only students who are particularly motivated (either by extreme satisfaction or extreme frustration) are going to bother going to this site.
I want to hold myself publicly accountable for how well I teach, and I want my prospective future students to have information they can use to decide whether my courses are right for them, so I have decided to make some components of my course evaluations public. In doing so, I hope to elevate the use of these evaluations, to better honor the work that students have put into providing feedback on my courses. Under my For Students section of this site I now have a page on Course Evaluations.
At present I have done what is relatively easy, which is to post information on the overall ratings for each of my courses during each semester. This data allows an assessment of what courses students like best, where my courses have been heading in terms of student impressions, and how my overall teaching ratings are changing over time. Missing from this initial presentation is a lot of granular data: I report only anecdotally on how different rating categories influence my overall scores, and there is no data on non-numerical ratings. Eventually I will figure out how to present the smaller-scale ratings data, but how to present student comments is a bit more vexing. What I have posted for now is a start, the beginning of providing information to prospective students, the beginning of holding myself more accountable.
Although posting these invites a kind of valuation of me and my courses — and that valuation is institutionalized through the faculty review process — I would like to move away from a value-based use of course evaluations. If we look at course evaluations solely as a means of “rating” our professors, we lose sight of the overall goal of evaluation, which is to improve the quality of education. So I am trying to approach my evaluations from an outcomes-based perspective, using them to answer the question “am I achieving my goals as a teacher?” By posting my evaluations to the site, I hope to ritualize my self-assessment and make transparent what I do and do not do well as a teacher.