DRAFT

NOTE: This latest draft reflects the results of our phone conversation with Bob Froh last Monday, April 7th.

To: Academic Council

From: Ad Hoc Committee for the Evaluation of Teaching

Date: August 28, 2003; amended October 2, 2003 (amendments are emphasized in color)

Re: Recommendations to Improve the Evaluation of Teaching at Elon

 

            Charged with the task of examining current methods of evaluation of teaching at Elon and making suggestions for improvement, this Committee has solicited input from Elon’s academic community.  We have conducted interviews with department chairs, worked with Dr. Robert Froh of the New England Association of Schools and Colleges (who led a Numen Lumen session on March 7, 2003), and incorporated feedback from faculty in our deliberations.  The modest amount of comment that we received from faculty suggests no widespread dissatisfaction with the evaluation of teaching at Elon.  Nevertheless, the committee wishes to take a proactive leadership role, and our recommendations seek to promote improvements in the current system.

 

            Several issues have emerged from our own deliberations and from our conversations with faculty and department chairs; we have been asked to address the following:

 

We found that different groups of faculty feel differently about how teaching is evaluated at Elon.  To a degree, there is a perception that hiring, retention, and promotion decisions are made on the basis of a narrow spectrum of information.  Junior faculty, especially those coming up for promotion or tenure review, focus their concern on:  1) the weight placed on Student Evaluations of Teaching (SETs); and 2) the dominant role department chairs play in observing and judging a faculty member’s ability in the classroom.  With respect to the criteria for effective teaching in the Faculty Handbook (see Appendix), the current SET form effectively assesses criteria 1, 2, 6, 7 and 8.  A single form, however, does not lend itself well to the assessment of non-traditional “courses,” such as internships, laboratories, undergraduate research, and the like.

 

            In addition to an evaluation form specifically designed for Elon 101, another form to provide student feedback to advisors has been developed by the Ad Hoc Committee on Advising (Becky Olive-Taylor, Chair).  It is being tested by several departments, and an online version is being considered.

 

            Criteria 3, 4, 5 and 9-11 are assessed primarily by department chairs and by individual faculty themselves.  Contrary to the perception mentioned above, department chairs have clearly demonstrated that they use (in addition to the SET data) a variety of sources for their evaluation of faculty, including direct observation of faculty in their daily routine, evaluation of faculty annual reports (Unit I’s), observations from other colleagues in the department, conversations and interviews with students, and classroom observations.  They have expressed concern over the pressure to compile information and make summative decisions rapidly, and they generally want the evaluation system to run more smoothly and swiftly than it does.  Recognizing the limits of what they can do as direct observers of faculty teaching, they nonetheless note with concern the time that would be required to do more.  Both chairs and deans have commented on the inconsistency of Unit I self-evaluations by faculty, which vary considerably in length and quality.

 

            Some senior faculty feel more distanced from parts of the evaluation process.  This is not surprising, since they have been rigorously evaluated to earn tenure or promotion, and they are a "known quantity" to the University; there is less demand for additional evaluation.  Regardless of rank and seniority, most faculty and department chairs have expressed interest in an evaluation system that is geared not only to summative judgment but also to formative development of teaching skills.

 

            A notable development is that, as a condition of accreditation by the AACSB, the Love School of Business (LSB) is establishing a more extensive system for evaluation of teaching, both summative and formative, which includes provisions for peer review:  a three-member committee of peers is selected from those LSB faculty who have tenure or professional status.  They review the faculty member’s performance as part of the mid-point evaluation and submit a letter to the Dean.  For promotion/tenure evaluations, the letter is submitted to both the Chair and the Dean.  The 2003-04 academic year will be the "pilot" year for implementing this new system, which is designed to complement the current one.  The LSB experience will serve as a valuable source of ideas and information for the University.  There is also substantial diversity in the length and content of faculty Unit I’s, which range from very brief summaries (1-3 pages with little more than a listing of classes taught during the year) to voluminous reports (over 30 pages).  The longer reports can be a substantial burden for faculty to write and for chairs and deans to read.


The current form used for student evaluation of teaching asks students to assess the effectiveness of teaching in accordance with selected criteria for effective teaching specified in the Faculty Handbook:

Command of the subject matter

Use of current and relevant materials

Ability to communicate effectively with students

Use of appropriate and varied methods and strategies of teaching, assessing, and grading

 

However, the current form does not assess learning activities that do not fit the traditional classroom setting (internships, undergraduate research, and activities such as musical performances or laboratory classes in the physical, life and social sciences).  Nor does it assess the effectiveness of advising.  Evaluation of the advising of undeclared majors has occurred in Elon 101 for years; what is missing is feedback from sophomores as they declare their majors.  A web-based “advisor feedback” form is being evaluated and “pilot” tested with the biology department and the School of Education.  The return rate has not yet been determined, nor has any analysis of the data been performed.  Council recommended that specific data from this form go to the individual faculty member and summary data to the chair; faculty would have the option to include these data in their Unit I’s.  Questions on the Elon 101 evaluations do not closely parallel the advisor feedback form, but the two instruments were originally developed for separate purposes.

 

 

The administration of SETs in the classroom lacks consistency; a significant number of evaluations are done at a late date and/or in a hasty manner.  Combined with the fact that SETs are given only for fall classes, this practice builds a perception among students that SETs are not taken seriously.  Many students also lack a clear understanding of how SETs contribute to the overall evaluation of faculty.

 


 

 

            Student opinions of the faculty evaluation process are focused mainly on the SET forms and their administration.  Students perceive that evaluations are not important when they are administered in a hasty or casual manner and/or very late in the semester, especially on the last day of classes.  The lack of evaluations in spring term classes reinforces this perception.  As a result, students are less motivated to provide substantive feedback to the instructor.  Finally, a small percentage of students refrain from providing feedback, worried that if their handwriting is distinctive, their comments are not actually anonymous.

 

 

Recommendations

 

            Our committee recommends the following goals to improve the evaluation of teaching at Elon University.  We list them under three headings, which indicate the types of problems we became aware of and the concerns we have heard.  Bulleted items represent suggestions for ways in which these goals can be implemented:

 

  1. Ways to make evaluation of teaching more complete and consistent

 

  1a. Broaden the base of information used to evaluate faculty performance in teaching and advising:
    • Develop alternative forms for student evaluation of non-traditional courses, such as internships, undergraduate research, and study abroad courses;
    • Implement regular evaluation of advising using the recently developed form; distribute the form during a “prime time” class period in a designated week;
    • Monitor the progress of the LSB peer review system; consider wider implementation of this system throughout the university.

 

  1b. Make the administration of SETs more consistent, and speed up data processing so that both faculty and department chairs may use the data in a timely manner:
    • Designate a one- to two-week period for the administration of SETs;
    • Allow sufficient class time for completion of the forms; include the anticipated date for evaluation in each class syllabus;
    • Encourage more strict adherence to the existing protocol for instructing students when the forms are distributed;
    • Administer the SET for all classes in both semesters, so that faculty have access to as much formative data as possible.  If classes are not evaluated in both semesters, make a better effort to explain to students why evaluations in every class are not essential (e.g., sufficient data are acquired in one semester, faculty have greater opportunity to be innovative in their teaching during the spring, etc.);
    • Subject to accreditation requirements, allow faculty to select data from two-thirds of their annual teaching load for inclusion in their summative permanent files; however, ensure that over a two- or three-year period no single class is completely exempt from summative evaluation.

 

  1c. Ensure anonymity of students’ comments on SET forms:
    • Allow students to submit typed comments to Alamance 118 (within 48 hours of administration of the SET) for inclusion in the packet of SETs.

 

  1d. Promote consistency in the interpretation of data used for summative evaluations:
    • Provide statistical information on the distribution (standard deviation or error bars) of numerical scores for each item on the form; if necessary, train department chairs in the interpretation of the numbers;
    • Ask department chairs to comment in their Unit IIIs on how typical the results are for individual faculty or particular courses so that faculty are not (for example) penalized merely because they happen to be teaching a difficult or unpopular course;
    • Provide more specific guidelines on the length and content of faculty Unit I’s to promote consistency across the University;
    • Encourage faculty to supplement their Unit I’s and SET data with additional information of their choice; the total package can evolve over several years into a complete teaching portfolio.

 

 

  2. Ways to ensure that students and faculty are well informed about the entire process whereby teaching is evaluated

 

    • Provide information through Elon 101 sections about the nature and significance of the SETs. SGA members might be willing to visit Elon 101 sections for this purpose, or it might be done by Elon 101 TAs and/or instructors;
    • Place more emphasis on the criteria for teaching evaluation stated in the Faculty Handbook during orientation sessions for new faculty; mention assessment of teaching criteria more consistently during the opening-year department or divisional meetings;
    • Educate faculty more thoroughly about the role of SETs in evaluation and how the data is complemented by other information that chairs acquire;
    • Brief newly-appointed department chairs more thoroughly as to their responsibilities as evaluators of teaching.

 

  3. Ways to help faculty improve their teaching through various kinds of evaluation

 

  3a. Establish departments as the focal points for formative, collegial (peer) evaluation: activities should help faculty reflect on and develop their teaching. Instead of adopting a University-wide system, each department will develop the formative evaluation activities that best suit its own needs; for example:
    • create departmental standards for teaching portfolios;
    • develop voluntary “teaching circles” or “teaching partnerships”;
    • promote classroom observation as a mechanism for formative assessment of teaching effectiveness;
    • allow individual faculty to volunteer to have selected classes videotaped for subsequent review;
    • plan departmental seminars where faculty volunteer to present a teaching activity or method for discussion and analysis;
    • experiment with online formative evaluations (developed by individual instructors or departments during the semester) using Blackboard software.

Chairs will include a description of their departments’ activities in their Annual Reports.

 

 

  3b. Educate faculty and department chairs about the value of the Unit Is and Unit IIIs as places for reflection on, analysis of, and evaluation of teaching:
    • Have faculty list in their Unit Is the goals for each course and comment on the extent to which those goals were achieved; in this way, evaluation of teaching is linked to course assessment.
    • Ask department chairs to emphasize the nature and significance of the Unit Is at their November department meetings, before the annual review process begins.

 

  3c. Emphasize to faculty approaching promotion and/or tenure decisions that their promotion dossier can have not only summative but also formative value with respect to both teaching and other professional activities:
    • Ask the chair of the PP&T Committee to include this information in the annual information sessions for faculty;
    • Include this information in new faculty orientation sessions.

 

A minor brainstorm (Rosemary):  Maybe we can present these recommendations at the May faculty meeting and schedule time for comments and feedback at a future faculty meeting (as we do with by-laws changes).  Gene:  if there’s space on the agenda, yes, but May meetings are notoriously packed and wild!

 

Implications of these Recommendations

 

            Some of the recommendations above can be implemented with little time and expense.  However, administration of SETs in both semesters will have a substantial budget impact.  Quicker turnaround of SET results might require an earlier administration date, coupled with a commitment to process results more quickly.  It might be necessary to use more personnel to process, analyze, and disseminate SET data.

 

            Activities to make the system more transparent to both faculty and students clearly involve decisions about when such messages will be conveyed and by whom. For example, if students are to learn more about SETs in Elon 101 classes, training of Elon 101 faculty may need to be adjusted accordingly and requisite class time may need to be set aside. Department chairs would need to set aside appropriate time in meetings for discussion of teaching evaluation.

 

            The biggest commitment of time and money clearly comes in the establishment of peer review activities.  Visiting classes, discussing teaching, and engaging in reflection and analysis all require substantial amounts of time.  When and where can this time be found in the life of a department?  The leadership of department chairs will be crucial in motivating faculty and in establishing and then facilitating peer review activity.  Members of peer review committees must enjoy the same (or a comparable) level of support and faculty trust as members of the PP&T Committee.

 

            More problematic, perhaps, is the impact on the morale or emotional temper of a department when faculty review one another’s teaching.  If the review is summative, having an impact on promotion, tenure, or even retention of faculty, the question of who reviews whom will be one for fairly anxious deliberation.

 

 

Appendix 1:  Excerpt from Elon’s Faculty Handbook

 

First Level Criterion – Teaching

 

Effective teaching is activity which promotes the intellectual vitality of the university community. While the primary focus of this activity is transmission of knowledge and the development of new skills, insights, and sensitivities within the classroom, teaching is not limited to that setting. It also includes the advising, supervising and mentoring of students, the sharing of personal and professional growth with others, and the presentation of intellectual and moral concerns within the university community. Some indications of effective teaching are:

 

 

1.) Command of the subject matter

2.) Use of current and relevant materials

3.) Attention to academic advising

4.) Availability to students

5.) Concern for the wholeness and well-being of students

6.) Ability to communicate effectively with students

7.) Use of technology

8.) Use of appropriate and varied methods and strategies of teaching, assessing, and grading

9.) Participation in general studies

10.) Participation in workshops which develop teaching skills

11.) Mentoring undergraduate research