
Teaching Interests/Teaching Philosophy

Rochelle Tractenberg


            I wrote my first statement of teaching interests and philosophy during my first PhD program, in 1994. At that time I had been selected, for the second of three times, to train and mentor the incoming Teaching Assistants in the School of Social Sciences, and I was also a Teaching Assistant myself for large undergraduate courses. Since completing my first PhD (in Cognitive Sciences, 1997), however, I have been funded by research grants, and so have had relatively few opportunities to teach and almost no interaction with undergraduates. Since 2006 I have focused mainly on curriculum development for higher (graduate and post-graduate) education, while also consulting with faculty on course and assessment design and test interpretation. This work includes three intramural one-year grants I received for research projects to improve the medical school curriculum. I am very interested in returning to a formal teaching role. Although I have not taught since 2006, I developed the materials (lectures, homework, and tests) for a conceptual introduction to biostatistics (BIST 500-A), and I co-developed the lectures and selected the textbook for a course on considerations in clinical research design (BIST 500-B). These two courses, which I directed, were offered every year (taught by others) to the 2007-2010 cohorts completing the DC Clinical Research Training Consortium, which was funded by an NIH K30 grant and for which I was Director of the Curriculum (2005-2010). I also outlined the syllabi for courses on outcomes (BIST 504) and statistical literacy (BIST 509), and created the first graduate-level independent study course in biostatistics and "advanced" methods at Georgetown (BIST 901). BIST 901 has been taught twice, once to a PhD student and once to a post-doctoral scholar.

            I have updated my teaching philosophy statement periodically since 1994 (every 2-4 years), and my goal as an instructor has been essentially unchanged for over 15 years: not to facilitate the memorization of material, but to support the development of pathways to its retrieval, understanding, re-creation, or realization. I have always perceived, and treated, students as responsible agents in an information exchange and as agents of their own development. My philosophy is that I have an obligation to help people appreciate these responsibilities and to give them some tools for, and direction in, accepting them. In this manner I hope to achieve one of my primary objectives as an academic: to help people become self-directed, lifelong learners. That is, I see my job as helping individuals both access and apply information, a combination of skills that perpetuates learning. I am also motivated by the belief that any course, even one that is not of primary interest to the student, can serve the student by broadening their perspective, leading to the discovery of previously unknown or unappreciated skills, or opening an avenue of exploration that might serve them in the future.

            As the instructor of an introductory graduate-level course in biostatistics (2003-4), a field that most people come to unwillingly and/or without much background or preparation, I felt it was critical to establish connections between novel ideas and theories, such as probability and statistical inference, and everyday experience. It has never been my intention that students memorize formulae or decision-making processes, but rather that they internalize a way of thinking about questions and how they can best be answered. This approach derived from my teaching of undergraduates (1992-1996) and was bolstered by two semester-long College of Education courses that I took in 2005 at the University of Maryland while completing my second PhD there (in Measurement, Statistics and Evaluation, 2009). These two courses focused on assessment, and on the kinds of evidence that instructors should seek in order to support their assessment-based claims about what learners have learned. I have embraced the evidence-centered design approach in my course and curriculum development endeavors, which led me to develop the construct of a Mastery Rubric (Tractenberg et al., 2010; Tractenberg & FitzGerald, 2011; Tractenberg & Weinfeld, in review). Since 2005 I have also used a construct-centered approach to valid assessment, asking Messick's three key questions of lectures, courses, and curricula (whether I am creating, consulting on, or supporting their creation):

A. What complex of knowledge, skills, or other attributes (constructs) were targeted, or should be assessed?

B. What behaviors or performances should reveal those constructs?

C. What tasks or situations should elicit those behaviors?

(Messick, 1994, p. 16; http://www.jstor.org/stable/1176219)

            In my teaching I emphasize critical thinking and contextualization over memorization. I also try to model logical thinking and problem solving for each question that is asked, demonstrating the propagation of knowledge via access ("What do you know?") and application ("How can you use what you know to get to the next step?"). I am challenged to devise methods to evaluate student performance when my main motivation is to improve their thinking, not their test scores. This is especially difficult when students have been trained to focus on homework and test scores, and the course grade, instead of on what they themselves derived from the course. I believe this attitude comes from our higher-education system, which downplays interaction with information, and its critical consumption, and instead emphasizes memorization and the idea that there is always a 'right' answer. As a statistician and scientist I am faced with the fact that we only "know" things that have not yet been disproved; however, in an introductory course there is a fundamental base of 'knowledge' that students should master. I must balance this need for 'learning', or access in my framing, with my goal that applying new information matters even more than simply being able to produce it on demand for a test.

            In the role of instructor, I strive for a clear presentation of the material, a comfortable environment in which learners can come to a better understanding of it, and a sense of empowerment that comes from interacting with, not memorizing, information. As self-directed learners, my students should feel empowered to use their newly discovered abilities in other classes and situations. I hope to enable students to uncover the importance of being aware of both the strong and weak points in their understanding, leading to better formulated and more directed questions when they ask for help or collaboration, as the case may be. In short, I am interested in fostering metacognitive development. My emphasis in class is on both focused questions and reflection, that is, on becoming aware of one's metacognitive abilities; this interest and worldview led to my joining a new (2011) Task Force on Reflection in the School of Medicine curriculum.

            I am explicit about this objective and have tried to model this sort of thinking in both large and small class settings. I try not to just "lecture" or "give the answer". My teaching evaluations reflect this; one typical comment is that I won't "just give the answer". This is framed as both a positive and a negative within any particular set of evaluations. Although some students find it frustrating to be continually asked to work through their own, and their peers', questions, when I taught in multiple-section courses as a graduate student (1991-1996), students in my discussion sections tended to perform better, on average, than the class as a whole.

            As an instructor I also encourage people to attend to their note-taking and thought processes, and to articulate what is unclear. In class, workshops, or consultations I ask people, individually or in groups, to try to answer questions themselves, and to identify what makes the most satisfactory answer. By sharing responsibility for the 'right' answers with students and others with whom I consult, I invite them to think about the relationship between the material and their own knowledge, and to search for flaws in their logic or their understanding of concepts. In the classroom, we share information by exchanging and answering each other's questions. I have used 3x5 cards, lists, small groups and pairs, and in-class activities to inject variety and to maximize both reflection and participation in lecture and discussion settings.

            The Mastery Rubric (MR) curriculum building tool outlines a developmental trajectory that both learners and instructors can use to determine what the next training opportunity should focus on, and what sort of evidence (assessment) should be used to document that specific knowledge, skills, and/or abilities have been learned. The MR encompasses my emphasis on metacognition, and I have been heartened by the adoption of the framework so far by experts and instructors in diverse fields (clinical training, ethical reasoning, and evidence-based medicine). Having an MR for a topic or curriculum will enable reflection and metacognitive processing, whereas students typically have only a list of courses to pass, or topics to "study". I recently submitted an NSF grant proposal to study whether, and specifically how, a Mastery Rubric (the newest one, which focuses on statistical literacy) achieves this purpose.

            I give my students and consultees as much information about my approach and my teaching style as I can from the outset, making my teaching goals and expectations explicit. For some students, this is the first time any aspect of the educational enterprise beyond "high scores = good grades" has been made explicit for them. Some people resent it, and others are moved to engage and explore their metacognitive abilities. Still others remain focused on getting the grade they need to move on. I invite them all to be as explicit about their expectations of me as they can, and I request frequent, concrete, and specific feedback from them. I designed mid-term evaluations that I administered in my introductory biostatistics class (2003-2004), and have had lectures and discussion sections videotaped for consultation. I have collected peer and one-on-one teaching evaluations since the Faculty Development Subcommittee on Teaching and Pedagogy, which I co-chaired (2006-2009), created these forms. I depend on these tools, together with an ongoing, open dialogue about how students feel they are doing, how they believe the lectures or discussion sections are organized, and their office hour attendance, to determine the degree to which my approach is working, and to identify the aspects that are not working as well as they could, and why not.

            I am interested in returning to teaching after more than 10 years focused on research and consultation, with a particular emphasis on faculty and curriculum development since 2006. I am interested in developing and adapting new ways to interact with students and material, and in adapting new technologies, perhaps audience response systems in particular, to instruction. I would also like to explore the integration of statistical literacy throughout the undergraduate and/or graduate curricula, which I have proposed in my 5-year grant application to NSF. That application focuses on the challenges posed by, and explores solutions for, real or imagined barriers to critical thinking and numeracy. The NSF project evolved from a 2010 fellowship application to the National Academy of Education, for which I was a finalist (of 170 applications) but which was not selected for funding.

            I would like to develop graduate-level courses in theory-building for scientists and in the scholarship of teaching and learning, in the context of a preparing-future-faculty program. I also plan to write or edit a book about the Mastery Rubric curriculum building and evaluation tool, which will encompass the three completed Mastery Rubrics (for clinical research, responsible conduct of research, and evidence-based medicine), the Mastery Rubric for Statistical Literacy that I am currently developing (the topic of my NSF grant proposal), and a qualitative study I completed in 2010 that identified the elements needed to build any Mastery Rubric. I am in the process of transforming the annotated notes and materials for BIST 500-A into a book, an open-access self-guided course, or both. With my collaborator on the Mastery Rubric for the Responsible Conduct of Research (RCR), I have applied for funding from both NSF and NIH to study this training paradigm, and we are pursuing options for creating a certification process for RCR training mentorship, possibly to be managed by the Office of Research Integrity.


References:

Messick, S. (1994). The interplay of evidence and consequences in the validation of performance assessments. Educational Researcher, 23(2), 13-23.

Tractenberg, R. E., & FitzGerald, K. (2011). A Mastery Rubric for the design and evaluation of an institutional curriculum in the responsible conduct of research. Assessment and Evaluation in Higher Education.

Tractenberg, R. E., McCarter, R. J., & Umans, J. (2010). A Mastery Rubric for clinical research training: Guiding curriculum design, admissions, and development of course objectives. Assessment and Evaluation in Higher Education, 35(1), 15-32.

Tractenberg, R. E., & Weinfeld, J. (in review). Bloom's taxonomy, a developmental trajectory, and instruction throughout medical education: A Mastery Rubric for Evidence-Based Medicine.
