
I administered the University evaluation as well as my own evaluation each time I taught (2003-2005). The University evaluation may serve some purpose, but it is not designed to provide the specific input I am looking for. University evaluations typically answer questions of the form "do the students think they are learning?" or "do the students think you are a good teacher?": summative input that is not useful for making concrete changes that might lead to improved summative evaluations in future courses. One document linked to this section ("Commitment to student input") is a list of questions I developed for a department (Biostatistics, Biomathematics and Bioinformatics); each question targets particular input that instructors might be seeking, input that is specific and actionable for their teaching.


The linked evaluation summaries represent my own evaluations, which differ from the University evaluation in two key ways. First, I designed my course evaluations to elicit specific input on what I could be doing better, so the attached summaries reflect all of the questions I asked; this input cannot be sensibly summarized with a single value. Second, my evaluations ask students questions of the form "what can the instructor do better?", and an 'overall rating of what I could do better' would not make sense. The institution, however, does need to be able to describe (summarize) how effective the teaching is. Rather than rely on a single-value summary of the whole evaluation, I incorporated one 'most important question' (an overall rating) that serves this particular purpose. Accordingly, my one-on-one teaching evaluations (students, peers) are summarized below using that overall rating.


Summary of quantitative data from student one-on-one evaluations (available on request)

Course: One-on-one consultations with 1 grad student in each of 2009 and 2011; one postdoc (at GU, 2010-2011); and one undergrad student at UNC (2009-2010)

Course no.: NA

Rating: Overall: Excellent (5/5)

Rating scale: 5-level rating scale ranging from “not acceptable” to “excellent”

Summary of peer evaluations of one-on-one teaching (available on request)

Peer evaluations have been obtained following faculty consultations: 6 interactions in 2008; 3 in 2009 (including one from the University of Kentucky); 2 in 2010; and 6 in 2011, all using my one-on-one peer evaluation form. In addition, I received a peer evaluation of a workshop that I gave to help the Pathology Department revisit their residency evaluation program (2010; see the "Recent Scholarship of Teaching and Learning" tab, "Evaluations: Pathology", for the workshop content).


The 2011 peer reviews include two from Medical Education Research Certificate (MERC) program scholars from Washington Health Center, one from a MERC scholar at National Rehabilitation Hospital, and two from GU Hospital MERC scholars with whom I consulted on their education projects. All overall ratings on the peer reviews are “excellent” (5/5), and the specific domain ratings average 4.8 out of 5 on the same 5-level scale (“not acceptable” to “excellent”) used for the one-on-one student evaluations. Most reviews also include specific comments.


To summarize the University's data: the official evaluations were administered at the end of each semester, of which I taught only the first 8 weeks. For the question "how much have you learned in this class?", the percentage of respondents giving a rating of 4 or 5 (with 5 highest) was 55% in Fall 2003, 56% in Spring 2004, and 57% in Fall 2004. For the overall evaluation of the instructor, 78% gave a 4 or 5 in Fall 2003, 76% in Spring 2004, and 82% in Fall 2004. The average values are summarized under the tabs for the specific semesters.
