The question of assessment
From the beginning the FOSC realised that assessment would be the ‘tricky’ issue, and so it proved. At the second FOSC meeting the overwhelming majority of members expressed a preference for a form of assessment other than external examination. However, all were mindful of Professor Ken Ruthven’s statement, ‘What we come up with will achieve credibility by means of its assessment system, because rightly or wrongly the community expects schools to assess and rank students.’
The Design developed a bipartite system of assessment involving work requirements and Common Assessment Tasks (CATs). A student who met the work requirements satisfactorily completed the four units of work and received recognition of satisfactory completion in English; in addition, the Common Assessment Tasks in Units 3 and 4 were graded A+ to E as a determinant for tertiary selection.
The most popular of the work requirements proved to be the Communication Project.
The S/N satisfactory completion model relied on a definition of ‘finished work’ determined by consultation between student and teacher.
The four CATs represented a judicious mix of school-based and external assessment, weighted 75% teacher-assessed and/or consensus-moderated and 25% externally assessed. The Oral CAT was teacher/school assessed. Presentation of an Issue and the Writing Folio were teacher/school assessed and then consensus moderated at statewide school cluster/regional verification meetings, and the Text Response was assessed by a two-hour external examination.
As perhaps foretold in Ruthven’s warning, the assessment proved the area of the Design where teachers’ professional judgments were least respected. The University of Melbourne in particular was implacable in its refusal to accept the Design’s assessment processes and threatened to set its own selection exam. In the 1992 review of the Study Design most of its concerns about assessment were accepted: 50% external assessment, the introduction of an external Writing Task CAT, the Oral CAT downgraded to a work requirement, and the Text Response CAT, with its prompt questions, recast to make it more in line with the kinds of propositional questions found in Group 1 English exams.
The massive logistical problems, and the expense, of the verification process proved prohibitive. Even though it was provisionally supported by all three reviews of the Design (Northfield, McGaw and Eyers), it was replaced by a General Achievement Test (GAT) against which school-based assessments were statistically moderated.
The tabloid media was particularly obsessed with the susceptibility to ‘cheating’ of anything that involved teacher assessment.
Suggested reading list (items available in VATE office):
- Moderation and verification
- Foster, Brian, ‘Towards a revised policy for the HSC: consensus moderation’ in Viewprints, No. 3, April 1985, Victorian Institute of Secondary Education (VISE): ISSN 0813 5150
- Ryrie-Jones, Alma (1984), Making moderation work. The history of a successful consensus moderation scheme, Preston College of TAFE: ISBN 0 9589654 0 4
- VCE Verification Manual 1991 – English
- Maher, Janet (State Verification Chairperson, English 1991), Report to the Manager, English Field of Study, Victorian Curriculum and Assessment Board, on English Verification 1991
- Video: V.C.E. A Fair Measure, plus excerpts
- Text response
- Withers, G. and Gill, M. (1991), Assessing Text Response. The 1990 pilot CAT. A review for teachers, VCAB (Victorian Curriculum and Assessment Board): ISBN 0 7241 9822 9
- Victorian Certificate of Education VCE (HSC) Core Examination 1989 English Group 1, VCAB (Victorian Curriculum and Assessment Board)
- Victorian Certificate of Education VCE English 1991 Common Assessment Task 4: Text Response, VCAB (Victorian Curriculum and Assessment Board)
- Withers, Graeme, Research Issue No. 1. Planning, drafting and ‘prepared’ answers, 2 October, 1990
- ‘Framing…’, Idiom, Vol. 29, No. 3, November 1994