As term test and assignment marks start rolling out and finals loom on the horizon, the issue of mark adjustment gets brought up more and more often. Given the range of student experiences and instructor practices, mark adjustment can be a very confusing part of the undergraduate experience in Engineering at the University of Toronto.
The confusion perhaps starts with the term “bell curving,” frequently used to refer to any form of mark adjustment despite the (true) claim that bell curving is forbidden. This does not mean, however, that instructors are violating university policy by adjusting course marks: “bell curving” refers only to assigning grades based on students’ performance relative to one another so that the final marks form a Gaussian distribution with desired properties, such as a particular average or a fixed percentage of the class receiving each letter grade.
This specific type of mark adjustment is prohibited by the Governing Council’s University Assessment and Grading Practices Policy, which states that the “distribution of grades in any course, examination[,] or other academic assessment must not be predetermined by any system of quotas that specifies the number or percentage of grades allowable at any grade level.” Other than this, however, adjustment is permitted by the policy even if it is based on reference to past statistics such as averages (as long as this is not the only factor considered). This can be done by instructors before they “recommend” final grades to the department Chair and the Dean or Dean’s designate for approval, but the Chair, Dean, and Dean’s designate can also adjust the final grades in consultation with the instructor if they feel that the current grades are “injurious to the standards of the University, or […] not in keeping with divisional grading guidelines.”
The Faculty of Applied Science and Engineering’s Information for Undergraduate Instructors booklet is not much more detailed. It mainly quotes the Calendar in suggesting that grades are “an expression of the instructor’s best judgment of each student’s overall performance,” reiterates the prohibition of limiting the number of students that can get a given grade, and discourages instructors who “are calibrating or adjusting marks” from referring to the practice as “‘belling’ or ‘curving.’”
However, it also outlines the way the Faculty applies the two-stage process for mark adjustment laid out by the Governing Council. Instructors first submit provisional marks, which they may have adjusted, to the Examinations Committee through the Registrar’s office. This Committee includes six instructors, two undergraduate students chosen through the EngSoc VP Academic, and four ex officio administrators: Dean Cristina Amon, Vice-Dean Undergraduate Thomas Coyle, First Year Chair Micah Stickel, and Registrar Don MacMillan. Its mandate involves ensuring “that students in all undergraduate academic programs and courses are fairly evaluated,” and it is the body that approves the marks. It can request that the instructor adjust the marks, though it “has the final responsibility for assigning the official course grade.” Grades are only posted as official on ACORN once approved by the committee. The Academic Regulations in the Calendar note that chairs of departments or the Division of Engineering Science may also convene departmental marks review committees to make recommendations to the Examinations Committee for courses offered by the department/division.
In practice, this framework plays out with a number of moving parts. Prof. Jun Nogami, Chair of MSE, explains that in his experience “most instructors would prefer not to adjust the grades at all,” but would check whether the grade distribution is reasonable and take steps if the grades are affected by “a particular difficult exam” or some other issue before submitting them. As a guideline, instructors are also provided with a chart of historical mark data aggregated over all the courses at each year level, without any mention of individual courses. This chart includes the mean course average, the mean percentage of students with As, Bs, Cs, and “below” over all courses, the range of averages covered by “80% of courses with 10 or more students,” and the range of percentages of students in each letter grade category covered by the same 80% of courses. In general, the later years tend to have higher grades overall.
According to Prof. Jim Davis (UTIAS), Chair of the Examinations Committee, who provided this chart to The Cannon, the committee is “particularly interested in the percentage of students at or above the honours level in a course because of its relevance in competitions for scholarships and graduate fellowships.” Prof. Davis also explained that “there is some expectation” that the mean and distribution of grades in a course do not “vary significantly” from year to year. According to Phuong Huynh, Course Administrator for Engineering Strategies and Practice (ESP), this is mostly so that no cohort is advantaged or disadvantaged by fluctuations in assignment difficulty and marking practices between years. Thus, according to Prof. Davis, there are “some cases” where the committee asks the instructor for justification, and a “small number of cases each term” where the committee requests changes to the grades, either up or down. Prof. Nogami notes, however, that “all grade adjustments are done with the agreement of the instructor,” while Prof. Davis emphasized that the committee only “rarely requests that an instructor alter submitted grades” and that “it is only done after careful review.”
This all means that instructors do not really have a sense of the adjustment that will be needed until the course is done: ESP II, for example, had marks adjusted so that more students would get As in 2016 but not in 2015, since the raw marks in 2016 would have produced an unusually low number of As. This explains why some instructors said, before the end of the 2016 offering of the course, that there would be no adjustment. Similarly, grades were adjusted upwards for the first time in APS104 in its final offering in 2016, when “the final exam was judged to be too difficult.”
Prof. Nogami explains that when adjusting the APS104 marks, he “paid attention to the percentage of students who failed, and made sure that both the course average, and the grade distribution was not too far from historical averages.” This is but one example of the many factors instructors consider when adjusting grades. Prof. Guerzhoy tries to “give every student the appropriate grade” by using the descriptions of each letter grade from the Academic Calendar, and does not aim for a particular average. While this has led to small deviations from preceding years’ grades, the Faculty accepted the grades he submitted when he was able to justify the deviations.
In addition to calibrating marking schemes so that exams showing understanding characteristic of “A students” would get A-range marks, exams showing understanding roughly deserving of a pass would get low passing grades, and so on, Prof. Guerzhoy tries to choose adjustment techniques that reflect the situation at hand. For example, he would multiply all grades by the same factor in situations like an excessively long exam, which impacts students proportionally to their ability, but add the same number of marks to all students if there were too few easy questions.
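For the quantitatively inclined, the difference between these two techniques can be sketched in a few lines of Python. The factor and offset values here are invented for illustration and are not taken from any actual course:

```python
# Two adjustment styles: multiplying every mark by a factor (gains are
# proportional to the raw mark) versus adding a flat offset (the same
# gain for everyone). All numbers below are purely illustrative.

def scale_adjust(raw_marks, factor):
    """Multiply every mark by the same factor, capping at 100."""
    return [min(100.0, m * factor) for m in raw_marks]

def shift_adjust(raw_marks, offset):
    """Add the same number of marks to every student, capping at 100."""
    return [min(100.0, m + offset) for m in raw_marks]

marks = [40.0, 60.0, 80.0]
print(scale_adjust(marks, 1.1))  # stronger students gain more raw marks
print(shift_adjust(marks, 5.0))  # every student gains the same 5 marks
```

Note that the multiplicative version widens the gap between weak and strong students, while the additive version preserves it, which is why the choice depends on how the assessment went wrong.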
Many other adjustment techniques exist. Lu Chen (Indy1T8) mentions having encountered both bonus questions and bonus marks she describes as “karma points,” while Susanna Rumsey (EngSci1T4+PEY, ECE MEng 1T6, MASc 1T8) recalls an instructor who raised each original mark to a fractional power and multiplied it by a constant, so that those with 100% stayed at 100% and those with 0% stayed at 0%, but everyone else gained some marks. Other attested practices include reweighting assessments so that tests students struggled on counted for less, boosting marks until at least one student reached 100%, purposefully making exams difficult and planning for adjustments, and flat-out refusal to adjust marks.
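The exponent-style adjustment Rumsey describes can be made concrete. Assuming marks out of 100 and a fractional exponent between 0 and 1 (the value 0.9 below is an invented example, not the one her instructor used), picking the constant as 100 raised to one minus the exponent pins both endpoints in place while lifting every mark in between:

```python
# Sketch of an exponent-style adjustment: adjusted = c * raw**p with
# 0 < p < 1, where c = 100**(1 - p) is chosen so that 0 -> 0 and
# 100 -> 100. The exponent p = 0.9 is an invented illustrative value.

def exponent_adjust(raw, p=0.9):
    c = 100 ** (1 - p)  # pins the endpoints of the 0-100 scale
    return c * raw ** p

print(exponent_adjust(0))    # 0% stays at 0%
print(exponent_adjust(100))  # 100% stays (up to rounding) at 100%
print(exponent_adjust(64))   # marks in between are lifted
```

Because the curve is concave for fractional exponents, lower marks gain proportionally more than higher ones, which matches the description of everyone between the extremes gaining some marks.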
Several upper-year students felt that the amount of curving they received decreased as they moved up in year: Chris Rockx (EngSci1T7+PEY) suggested that in Engineering Science this could be due to students transferring out. Dale Gottlieb (MSE1T8), who spent a year in EngSci, also thought that there was more adjustment in EngSci in general (compared to MSE), possibly because the smaller class size in MSE allows professors to “have a better idea of the level of the class,” so that tests better meet the needs of the single-section class. Overall, Lu also suggested that “there is a general expectation that courses are going to be adjusted,” and felt that her marks reflect her “knowledge of the material relative to everyone else’s knowledge” rather than purely her own knowledge.
Of course, all these are individual anecdotes, and, as Prof. Leslie Sinclair (MIE) says, “studying and practice will ensure better results than relying on mark adjustments.” Nonetheless, the petition and academic appeals process is an option in situations where courses don’t seem to have been assessed fairly. As well, class reps, discipline clubs, EngSoc, and relevant faculty offices are all resources in the case of questions, concerns, or suggestions.