Thursday 27 March 2014

Conceptualizing the New Assessment Culture in Schools




Menucha Birenbaum (In press) uses a complex-systems framework to conceptualize assessment in schools in a chapter titled, ‘Conceptualizing assessment culture in school.’

Acknowledging the recursive interactions among students, educators, schools, and systems, and the ways each of them learns, she asks readers to consider both classroom learning and teacher professional learning. Making thoughtful connections, Birenbaum (In press) proposes a way to conceptualize the new assessment culture that is emerging in schools. This well-written chapter offers an insightful analysis of current research and thinking.

Part of the chapter considers the assessment culture mindset from the perspective of seven indicators:

1. ‘It's all about learning’ (learning-centered paradigm)
2. Assessment drives teaching and learning
3. Assessment means dialogue/interaction with the learner
4. Assessment empowers the learner
5. Diversity is desirable
6. I/We can do it!
7. Modesty (in assessment) is required

Each indicator is illustrated by quotes from students, teachers, and principals in schools with an assessment culture. For example, in the section ‘Assessment drives teaching and learning,’ a middle school teacher is quoted:

“I know if the students understood according to their questions, according to how they work and learn, I can say that for me the students are like a mirror [of my instruction] … so I have to look at the mirror and examine their understanding.”

This is a thought-provoking chapter that will have teachers and leaders alike rethinking the assessment culture in their classrooms and schools.


Reference:

Birenbaum, M. (In press). Conceptualizing assessment culture in school. To appear in C. Wyatt-Smith, V. Klenowski & P. Colbert (Eds.), Designing Assessment for Quality Learning: The Enabling Power of Assessment Series, Volume 1 (Ch. 18, pp. 285-302). Netherlands: Springer. http://www.springer.com/education+%26+language/book/978-94-007-5901-5

Tagboard: https://tagboard.com/AforLConversation/160953

Wednesday 26 March 2014

Self-regulation, Co-regulation, and Learning to Dance


We’ve been hearing and reading more and more about self-regulation and its role in learning, executive functioning, and so on. Self-regulation is an essential part of assessment research because it describes what learners do as they self-monitor their way to the next steps in their learning. Researchers with an assessment lens – including international delegates coming to Fredericton, NB, in April 2014 such as Linda Allal (2007), Heidi Andrade (2013), Menucha Birenbaum (1995), Ernesto Panadero (2013), and Dylan Wiliam (2007), to name but a few – have researched and written about assessment and self-regulation.

Co-regulation is another term I’m beginning to use – and those of you who read the French-language research literature may already be familiar with it. I’ve reflected on the notion of co-regulation as I’ve observed teachers and students engaged in ‘assessment in the service of learning.’ As I’ve done so, I’ve come to appreciate the way the term invites us to consider and reflect upon the intricate nature of teaching, learning, and assessment. Or, as Willis & Cowie (In press) put it, “…the generative dance.”

Linda Allal (2011) discusses the difference between self-regulation and co-regulation. She writes, “The expression ‘co-regulation of learning’ refers to the joint influence of student self-regulation and of regulation from other sources (teachers, peers, curriculum materials, assessment instruments, etc.) on student learning (Allal 2007). One could also define it as processes of learning and of teaching that produce learning. The focus is thus on learning as the outcome of education and teaching is subsumed within the ‘co’ of ‘co-regulation’” (p. 332).

Consider this: as teachers engage students in self-assessment, goal setting, and self-monitoring of their own learning in relation to co-constructed criteria, and as students apply their growing understanding of quality and process over time, they, as Allal (2011) points out, “…activate the processes of metacognitive regulation” (p. 332).

As teachers go further and engage students in collecting evidence of their own learning, in student-teacher conferences, and in parent-student conferences, students become more independent – they move from co-regulation to self-regulation (Allal, 2011). This isn’t a process that students undertake without support or as the result of some kind of scheduled ‘activity.’ Allal (2011) concludes by noting that, with the support of teachers and within an interactive classroom environment, a powerful relationship emerges – “a process of co-regulation that entails interdependency between self-regulation and socially mediated forms of regulation” (p. 333).

As I was reading a piece submitted by Dylan Wiliam (2007), I was struck by this description of a mathematics classroom:

 “These moments of contingency—points in the instructional sequence when the instruction can proceed in different directions according to the responses of the student— are at the heart of the regulation of learning. These moments arise continuously in whole-class teaching, where teachers constantly have to make sense of students’ responses, interpreting them in terms of learning needs and making appropriate responses. But they also arise when the teacher circulates around the classroom, looking at individual students’ work, observing the extent to which the students are on track. In most teaching of mathematics, the regulation of learning will be relatively tight, so that the teacher will attempt to “bring into line” all learners who are not heading towards the particular goal sought by the teacher—in these topics, the goal of learning is generally both highly specific and common to all the students in a class. In contrast, when the class is doing an investigation, the regulation will be much looser. Rather than a single goal, there is likely to be a broad horizon of appropriate goals (Marshall, 2004), all of which are acceptable, and the teacher will intervene to bring the learners into line only when the trajectory of the learner is radically different from that intended by the teacher.” (p. 1088-89).

Doesn’t this sound like co-regulation? Students were working independently from the teacher, yet the teacher was there – present – ready to bring emerging issues and questions back to the group to inform the learning of all and, in doing so, providing a demonstration of ‘self-regulation’ (one could also use the lenses of ‘scaffolding’ or ‘social construction of knowledge’). It reminds me of a chapter written by Sandra Herbst (2013) that includes the transcript of a Grade 12 applied mathematics class taught by Rob Hadeth. Rob very clearly knows ‘the dance’ and how to help students become self-regulated, successful learners. It is interesting to me how research follows practice – I suppose it must, given that students and teachers together are continually breaking new ground. Researchers are the ones who come along to help educators understand the magic being created, or that could be created.

In this post I’ve only touched on a few of the articles related to self-regulation and co-regulation submitted by the international delegates. I encourage you to pursue this topic further; the references below are a great starting point.

References

Allal, L. (2011). Pedagogy, didactics and the co-regulation of learning: A perspective from the French-language world of educational research. Research Papers in Education, 26(3), 329-336. http://dx.doi.org/10.1080/02671522.2011.595542

Andrade, H. (2013). Classroom assessment in the context of learning theory and research. In J. H. McMillan (Ed.), SAGE handbook of research on classroom assessment (pp. 17-34). New York: SAGE.

Birenbaum, M. (1995). Assessment 2000: Towards a pluralistic approach to assessment. In M. Birenbaum & F. J. R. Dochy (Eds.), Alternatives in Assessment of Achievements, Learning Processes and Prior Knowledge (pp. 3-30). Boston, MA: Kluwer. http://link.springer.com/chapter/10.1007%2F978-94-011-0657-3_1

Herbst, S. (2013). Assess to success in mathematics. In A. Davies, S. Herbst & K. Busick (Eds.), Quality Assessment in High School: Accounts from Teachers. Courtenay, BC: Connections Publishing and Bloomington, IN: Solution Tree Press.

Panadero, E. & Alonso-Tapia, J. (2013). Self-assessment: Theoretical and practical connotations. When it happens, how is it acquired and what to do to develop it in our students. Electronic Journal of Research in Educational Psychology, 11(2), 551-576. http://dx.doi.org/10.14204/ejrep.30.12200

Wiliam, D. (2007). Keeping learning on track: Formative assessment and the regulation of learning. In F. K. Lester Jr. (Ed.), Second Handbook of Mathematics Teaching and Learning. Greenwich, CT: Information Age Publishing.

Willis, J. & Cowie, B. (In press). Assessment as a generative dance: Connecting teaching, learning and curriculum. In C. Wyatt-Smith, V. Klenowski & P. Colbert (Eds.), Designing Assessment for Quality Learning: The Enabling Power of Assessment Series, Volume 1. Netherlands: Springer.



Thursday 20 March 2014

Teaching Learners to See


Recently I wrote a blog post about quality and moderation. I used the example of a kindergarten teacher working with 5- and 6-year-old students as they looked at samples of writing. Through this process the teacher is deliberately teaching about quality, deliberately teaching students the language of assessment, and deliberately teaching students how to self-monitor their way to success. In our work with teachers at all levels (K-12) we have engaged in similar processes in mathematics, science, social studies, English, the arts, and so on. To put it simply, it is a way to teach students how to give themselves incredibly powerful, specific feedback to guide their own next steps. And, because we do this work with all students, they can give powerful peer feedback about quality that supports learning (not marks, grades, or scores that get in the way of learning).

After reading my post, Royce Sadler shared another article he wrote describing a similar process with post-secondary students. In this powerful article, Sadler (2013) writes:
“Feedback is often regarded as the most critical element in enabling learning from an assessment event. In practice, it often seems to have no or minimal effect. Whenever creating good feedback is resource intensive, this produces a low return on investment. What can be done? Merely improving the quality of feedback and how it is communicated to learners may not be enough. The proposition argued in Sadler (2010) is that the major problem with feedback is that it has been to date, and is virtually by definition, largely about telling.
Research into human learning shows there is only so much a person typically learns purely from being told. Most parents know that already. Put bluntly, too much contemporary assessment practice is focused on communicating better with students.
Teaching by telling is commonly dubbed the transmission model of teaching. It portrays teachers as repositories of knowledge, the act of teaching being to dispense, communicate or 'impart' knowledge for students to learn. Consistent with that was an early conception of feedback as 'knowledge of results' – simply telling students whether their responses to test items were correct or not. Telling is limited in what it can accomplish unless certain key conditions (touched upon later) are satisfied. By itself, it is inadequate for complex learning. Being able to use, apply, and adapt knowledge, or to use it to create new knowledge, requires more than merely absorbing information and reproducing it on demand.” (2013, p. 55)

Royce Sadler goes on to describe the process he used with a group of students and concludes with this statement,
“Much more than we give credit for, students can recognize, or learn to recognize, both big picture quality and individual features that contribute to or detract from it. They can decompose judgements and provide (generally) sound reasons for them. That is the foundation platform for learning from an assessment event, not the assumption that students learn best from being told. They need to learn to discover what quality looks and feels like situationally. They need to understand what constitutes quality generally, and specifically for particular works. Equally, students need to be able to detect aspects that affect overall quality, whether large or small, and understand how and why they interact. Students need a vocabulary for expressing and communicating both what they find and how they judge, at the least for that part of their evaluative knowledge they can express in words. Only after students have acquired a sufficient basis of appropriate tacit knowledge can they understand the content and implications of a marker's feedback. At that point, feedback can be effective as learners become more discerning, more intuitive, more analytical, and generally more able to create, independently, productions of high quality on demand.” (2013, p. 62)
I strongly recommend that you find and enjoy this very readable paper. Royce Sadler and I sat beside each other at the International Symposium in Chester in 2001. Although he is not a member of the Australian team for the 2014 International Symposium in Fredericton, NB, you can meet him through his writing. His work continues to inform the research and the practical, in-classroom work of all of us interested in classroom assessment.

Reference:
Sadler, D. R. (2013). Opening up feedback: Teaching learners to see. In S. Merry, M. Price, D. Carless & M. Taras (Eds.), Reconceptualising Feedback in Higher Education: Developing Dialogue with Students (Ch. 5, pp. 54-63). London: Routledge.
Abstract. Higher education teachers are often frustrated by the modest impact feedback has in improving learning. The status of feedback deserves to be challenged on the grounds that it is essentially about telling. For students to become self-sustaining producers of high quality intellectual and professional goods, they must be equipped to take control of their own learning and performance. How can students become better at monitoring the emerging quality of their work during actual production? Opening up the assessment agenda and liberating the making of judgments from the strictures of preset criteria provide better prospects for developing mature independence in learning.

Wednesday 19 March 2014

Quality, Moderation, Professional Judgment, and Self-Regulation




When teachers use work samples with students to help them understand quality, they are inviting them to engage in a kind of social moderation process – a guided experience of analyzing student work. During this process, teachers guide students to understand the key attributes of quality in the work samples. [See Chapter 4 in Making Classroom Assessment Work (Davies, 2011) for a detailed description for teachers or Chapter 4 in Leading the Way to Assessment for Learning: A Practical Guide (Davies, Herbst & Parrott Reynolds, 2012) for leaders].

Over time this process helps students develop the language of assessment so they can self-monitor, self-assess, and engage in peer assessment. As students continue to engage in social moderation as part of the classroom assessment and instructional cycle, they learn to articulate what they’ve learned and share proof of learning with others – for example, through parent-student-teacher conferences.

This kind of classroom assessment and instruction is incredibly powerful. It deliberately teaches students how to self-monitor – to self-regulate – which, over time, leads to the development of powerful executive functioning skills. It is also a way to deliberately teach 21st-century skills such as analysis, synthesis, and critical thinking, to name a few.

Involving students in a social moderation process is more and more common as teachers come to understand the role of moderation in their own learning. For example, a teacher of 5- and 6-year-old children has a series of samples of writing and drawing from early development up to examples that are beyond where the most able writer in the class is currently working.

In small groups, students gather to compare their day’s writing to the samples and talk about what they are currently doing that is similar to each sample and what is different. During this conversation of embedded instruction, ideas for ‘next steps’ in subsequent work become clear.

Historically, moderation was a process used in large-scale assessment. There are different ways of going about the moderation process but, in general, it involves looking at student work with others, co-constructing criteria, developing a scoring guide, and selecting samples to illustrate quality. Raters then score the student work and check with one another to ensure similar findings – a process of checking for inter-rater reliability.
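As an aside for readers who like to see the arithmetic behind that last step: none of the papers discussed here prescribe a particular statistic, but one common way to quantify inter-rater agreement is Cohen's kappa, which corrects raw percent agreement for the agreement two raters would reach by chance. Here is a minimal, illustrative Python sketch – the function name, the two teachers' scores, and the choice of kappa itself are my own assumptions for illustration, not drawn from the sources cited in this post:

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    # Observed agreement: proportion of samples where the two raters match.
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: expected matches if each rater scored independently.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    # Kappa: how far observed agreement sits above chance, scaled to [<=0, 1].
    return (observed - expected) / (1 - expected)

# Hypothetical example: two teachers score the same ten writing samples
# on a 4-point scale after a moderation session.
teacher_1 = [3, 2, 4, 4, 1, 2, 3, 3, 4, 2]
teacher_2 = [3, 2, 4, 3, 1, 2, 3, 4, 4, 2]
print(round(cohens_kappa(teacher_1, teacher_2), 2))  # prints 0.72; 1.0 = perfect agreement

A kappa near 1 suggests the raters share an understanding of quality; values near 0 mean the agreement is little better than chance – which, in moderation terms, would signal that more conversation about criteria and samples is needed.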

Being involved in a process of social moderation has been shown to result in adults “learning to produce valid and reliable judgments that are consistent with one another and with stated achievement standards” (Adie, Klenowski & Wyatt-Smith, 2012). It is also part of what led the Assessment Reform Group, in a publication titled The role of teachers in the assessment of learning (2006), to argue that teachers' professional judgement is more reliable and valid than external tests when teachers are engaged in looking at student work, co-constructing criteria, creating a scoring guide, scoring the work, and checking for inter-rater reliability. Teachers, even with students as young as 5 and 6 years of age, are experiencing the same kinds of results with a classroom version of social moderation (Davies, 2012).

As part of my pre-reading for the International Symposium in April 2014, I’ve been enjoying articles by Linda Allal (2013) (written about in an earlier post) and by Lenore Adie (2013) and her colleagues, Val Klenowski and Claire Wyatt-Smith (2012). I’ve also revisited work by Graham Maxwell, who will be in Fredericton, NB, in April 2014, as well as by a former International Symposium (2001) member, Royce Sadler, including a more recent article of his also focused on moderation.

These researchers are focused on what happens during moderation and on what happens when systems deliberately engage teachers in learning about quality and gaining ‘informed professional judgment’ through that process. If this area is of interest to you, I encourage you to read their research and writing. Here is a selection of readings to get you started. And, of course, if you check the reference lists, you will find the “shoulders upon which they stand.”

Recommended Reading: 

Adie, L. E., Klenowski, V. & Wyatt-Smith, C. (2012). Towards an understanding of teacher judgement in the context of social moderation. Educational Review, 64(2).

Adie, L., Lloyd, M. & Beutel, D. (2013). Identifying discourses of moderation in higher education. Assessment and Evaluation in Higher Education, 38(8), 968. http://eprints.qut.edu.au/view/types/article/2013.html

Davies, A., Herbst, S. & Parrott Reynolds, B. (2012). Transforming Schools and Systems Using Assessment: A Practical Guide. Courtenay, BC: Connections Publishing.

Hayward, L. & Hutchison, C. (2013). 'Exactly what do you mean by consistency?' Exploring concepts of consistency and standards in Curriculum for Excellence in Scotland. Assessment in Education: Principles, Policy & Practice, 20(1), 53-68.

Maxwell, G. (2010). Moderation of student work by teachers. In P. Peterson, E. Baker & B. McGaw (Eds.), International Encyclopedia of Education (Vol. 3, pp. 457-463). Oxford: Elsevier.

Sadler, D. R. (2013). Assuring academic achievement standards: From moderation to calibration. Assessment in Education: Principles, Policy and Practice, 20, 5-19.

And the entire issue of:

Assessment in Education: Principles, Policy & Practice, 20(1), 2013. Special Issue: Moderation Practice and Teacher Judgement, which includes articles by Val Klenowski, D. Royce Sadler, Linda Allal, Claire Wyatt-Smith & Val Klenowski, Lenore Adie, Susan M. Brookhart, and others.





Monday 17 March 2014

But... Thou Shall NOT… Use Formative Assessments...


... as Part of the Summative Grade

A few weeks ago I was working with a group of secondary teachers on ideas connected to our latest book, A Fresh Look at Grading and Reporting in High Schools. At one point there was a murmur in the group. When I inquired about it, some of the teachers said, “But, we were told, ‘Thou shall NOT use formative assessments as part of the summative grade.’”

The notion of evidence of learning – and what makes good evidence – is key when it comes to summative assessment. After all, everything a student does, says, or creates is potentially evidence of learning. What counts? It is all about purpose. Are you considering the evidence of learning in a formative way – to inform instruction? Or, are you considering the evidence of learning to determine how well and how much a student has learned? It is about purpose.

The evidence – observations, conversations, and products – that is used to determine the summative grade depends on the teacher. It is a professional decision. In A Fresh Look at Grading and Reporting in High Schools, Sandra Herbst and I write about the entire grading process but in this post I want to focus on one aspect of that decision-making process.

Should evidence of learning be excluded simply because it has been used to inform instruction during the learning? That is, should formative assessment information only be used for formative purposes? And, should summative assessment information be used only for summative purposes? Or, as the secondary teachers put it, “We understood that we were not to use formative assessments as part of the summative grade.”

Information is information. And, what we do with information depends on purpose. The Assessment Reform Group (one member, Gordon Stobart, is one of our International delegates) put it this way in a 2009 document titled “Fit for Purpose”: “It should be noted that assessments can often be used for both formative and summative purposes. ‘Formative’ and ‘summative’ are not labels for different types or forms of assessment but describe how assessments are used.” (2009, p. 9)

One of the pre-reading documents, by the New Zealand Ministry of Education and submitted by the New Zealand team members for the International Symposium, puts it this way: “Sometimes assessment is referred to as being either ‘formative’ or ‘summative.’ The formative use of assessment information is an important part of everyday practice. It is a diagnostic process concerned with identifying achievement and progress and strengths and weaknesses in order to decide what action is needed to improve learning on a day-by-day basis. The summative use of assessment is concerned with ‘summing up’ achievement at a specific point of time. However, these summations can be used not only to ascertain the level of achievement at a specific point of time but also to look back and consider what progress has been made over a period of time compared with expected progress.” (Ministry of Education, 2011, p. 14)

In summary, one needs to consider all the evidence of learning – the student’s entire learning journey – in order to better understand what and how much has been learned. When teachers make a summative assessment of a student’s learning, they engage in making an informed professional judgment. And, to do so, they use all the information available to determine how well students have learned what they needed to learn, can do what they need to be able to do, and can articulate what they need to articulate in relation to the standards or outcomes for the course or subject area.

Statements such as, “Thou shall not use formative assessment as part of the summative grade” are not helpful when ‘informed professional judgment’ is at work.

References:

Mansell, W., James, M. & the Assessment Reform Group (2009). Assessment in schools. Fit for purpose? A Commentary by the Teaching and Learning Research Programme. London: Economic and Social Research Council, Teaching and Learning Research Programme.

Ministry of Education (2011). Ministry of Education Position Paper: Assessment [Schooling Sector] – KO TE WHĀRANGI TAKOTORANGA ĀRUNGA, Ā TE TĀHUHU O TE MĀTAURANGA, TE MATEKITENGA. Retrieved March 17, 2014, from http://www.minedu.govt.nz/~/media/MinEdu/Files/TheMinistry/MOEAssessmentPositionPaper_October11.pdf

Tagboard: https://tagboard.com/AforLConversation/160953

Saturday 15 March 2014

The Same Beast in an Updated Outfit? Distorted and Subverted AforL


Sue Swaffield (2011), in this well-written and interesting paper, reviews the emergence of assessment for learning in terms of its research roots and two major implementation projects. She then documents how this work was represented differently when used as the basis for policy changes in England – changes which, like those in other jurisdictions, appear to be significant yet are really, in my words, ‘the same beast in an updated outfit,’ or, as Sue puts it, "distorted and subverted."

Those of us who are interested in how implementation initiatives can go ‘off track’ will find this paper informative, both in terms of assessment for learning itself and in terms of how it can be co-opted for other purposes.



To cite this article: Swaffield, Sue. (2011). Getting to the heart of authentic Assessment for Learning. Assessment in Education: Principles, Policy & Practice, 18:4, 433-449, DOI: 10.1080/0969594X.2011.582838.

Friday 14 March 2014

Co-constructing Criteria - Research Supports Practice!


Co-constructing Criteria - Assessment Research Supports Powerful Classroom Assessment Practices 

Since I co-authored my first assessment book, Together is Better: Collaborative Assessment, Evaluation and Reporting (1992) based on our school-based research, I’ve been writing about the power of co-constructing criteria in support of student learning and achievement.

Now, as I am reading the more than 100 papers submitted as pre-reading by delegates to the International Symposium, I am enjoying the number of papers that provide research support for high-quality assessment practices such as the deep involvement of students in the assessment process.

One paper in particular, by Andrade (2013), gathers together the research in support of co-constructing criteria. For example, when students are engaged in examining samples, using work samples, or “co-creating success criteria, and monitoring their own progress through self- and/or peer assessment,” they learn more (Andrade, Du, & Mycek, 2010; Andrade, Du, & Wang, 2009; Ross & Starling, 2008).

Andrade (2013) goes on to write, “The quality of the success criteria makes a difference, of course. Citing Moreland and Jones (2000), Brookhart (2007) notes that formative assessment and the instructional decisions based on it can actually thwart the learning process if the success criteria are trivial (e.g., focused on social and managerial issues such as taking turns at the computer) rather than substantive (e.g., focused on procedural and conceptual matters related to learning about computers).”
I’m going to continue reading – a quick scan ahead shows that Andrade also addresses self-assessment, goal setting, feedback, and so on. You can hear Heidi Andrade yourself at Assessment for Learning: Canada in Conversation with the World in Fredericton, April 11 & 12, 2014. You can register here.

References
Andrade, H. (2013). Classroom assessment in the context of learning theory and research. In J. H. McMillan (Ed.), SAGE handbook of research on classroom assessment (pp. 17-34). New York: SAGE.
Andrade, H. (2010). Students as the definitive source of formative assessment: Academic self-assessment and the self-regulation of learning. In H. Andrade & G. Cizek, Handbook of formative assessment. (pp. 90-105). New York: Routledge.
Andrade, H., Du, Y. & Mycek, K. (2010). Rubric-referenced self-assessment and middle school students’ writing. Assessment in Education, 17(2), 199-214.
Andrade, H., Du, Y. & Wang, X. (2009). Putting rubrics to the test: The effect of a model, criteria generation, and rubric-referenced self-assessment on elementary school students’ writing. Educational Measurement: Issues and Practice, 27(2), 3-13.
Andrade, H. G. (2001). The effects of instructional rubrics on learning to write. Current Issues in Education, 4(4). Retrieved from http://cie.ed.asu.edu/volume4/number4