Monday, 7 April 2014

Assessment for Learning: Canada in Conversation with the World


It has been a delight to read the more than 100 papers submitted by the international delegates who are convening in Fredericton, NB, from April 8 – 12, 2014. As I’ve been reading, I’ve also been posting my reflections, connections, and summaries. I’ve written about a variety of topics regarding assessment for learning. Here are the topics of postings you might want to explore:



I look forward to reporting back to you on the deliberations of our symposium. Remember, you can keep in touch via:


Walk the Talk – Assessment Leadership in Action



It is important for leaders to know what quality assessment practice looks like, as well as to understand the ways in which to support teachers to do this work. “As you consider supporting teachers in refining and renewing their classroom assessment practices, don’t be deceived by how simple it appears to be to involve students in assessment for learning. The ideas themselves are simple, but implementing them in today’s busy classrooms will take some time. One of your roles [as a leader] is to assure teachers that the time spent in improving classroom assessment will be well worthwhile in terms of student learning and achievement” (Davies, Herbst, Parrott Reynolds, 2012a, p. x). Recent research has demonstrated the importance of working intentionally with school leaders to increase their own assessment literacy (James et al., 2008; Moss, Brookhart & Long, 2013; Smith & Engelsen, 2013; Swaffield, 2013).
  
And yet in our work with schools and systems, we are reminded time and time again that leaders need to model assessment in the service of learning. As leaders we need to be prepared to demonstrate assessment for learning in action: “...[we] ‘walk the talk’ along with the classroom practitioners. Remember that alignment builds confidence, commitment, ownership, and ‘buy-in’” (Davies, Herbst, Parrott Reynolds, 2012b, p. 26).
  
As we’ve engaged in this work with many school systems across Canada, North America, and internationally this has been a key lesson for leaders if they are going to be successful in this work. “It isn’t good enough to say one thing and do another. As leaders, we need to be the change we want to see – including being a learner, a collaborative team member, a critical thinker, an effective communicator and a good person. Acting with integrity – that is, being aligned in words and actions – is difficult but essential. We need to be mindful at all times, modeling focus and dedication in order to lead others.” (Davies, Herbst, Parrott Reynolds, 2012b, p. 3)

It is also important to remember that when leaders put the responsibility for this work in the hands of those responsible for professional learning and ‘support from the side’ alone, the initiative will likely fail over time. That is, when the initiative hits the inevitable ‘implementation dip’ (Fullan, 2001), the change initiative will be at significant risk. The implementation dip is only bridged if practice (on the part of everyone involved) actually changes. That can only happen when ongoing feedback is continually received from those in supervisory roles (Davies, Herbst, Parrott Reynolds, 2012b, p. 79).


References:


Davies, A., Herbst, S., & Parrott Reynolds, B. (2012a). Leading the Way to Assessment for Learning: A Practical Guide, 2nd Ed. Bloomington, IN: Solution Tree Press. Available in Canada through Connections Publishing: www.connect2learning.com

Davies, A., Herbst, S., & Parrott Reynolds, B. (2012b). Transforming Schools and Systems Using Assessment: A Practical Guide, 2nd Ed. Bloomington, IN: Solution Tree Press. Available in Canada through Connections Publishing: www.connect2learning.com

James, M., McCormick, R., Black, P., Carmichael, P., Drummond, M. J., Fox, A., MacBeath, J., Marshall, B., Pedder, D., Procter, R., Swaffield, S., Swann, J. & Wiliam, D. (2008). Improving Learning How to Learn – Classrooms, Schools and Networks. London, UK: Routledge.

Moss, C., Brookhart, S. & Long, B. (2013). Administrators’ roles in helping teachers use formative assessment information. Applied Measurement in Education, 26 (3): 205-218.

Smith, K. & Engelsen, K. (2013). Developing an assessment for learning (AfL) culture in school: the voice of the principals. International Journal of Leadership in Education: Theory and Practice 16 (1): 106-125.

Swaffield, S. (2013). Support and challenge for school leaders: Headteachers’ perceptions of school improvement partners. Educational Management Administration & Leadership, (October 1, 2013), 1-16.

 


Sunday, 6 April 2014

Reliability, Validity, and Evidence of Learning the Canadian Way – Classroom Assessment Basics


In Canada, for the past thirty years, provincial documents have treated evidence of student learning collected in classrooms by teachers very differently from evidence of learning collected for external purposes such as school- or system-based data collection. For example, in 1989, the Primary Program in British Columbia used the social science research perspective as the model for collecting evidence of student learning. That is, teachers and students together collect products, conversations (records of student thinking), and observations of process. This way of looking at evidence of learning has been embedded in my work (for example, Davies, 1992; Davies, 2012), and in provincial curriculum documents across Canada such as B.C.’s Primary Program (1989; 1990; 2000; 2010), Manitoba’s assessment policy (2010), Ontario’s Growing Success (2010), and Nova Scotia’s “Back iNSchool Again!” guide for teachers (2013). It is based on the work of Lincoln and Guba (1985).

Given this powerful and long-term perspective on evidence of learning at the classroom level across Canada, it was interesting to read the pre-readings connected to this area of classroom assessment, particularly where they address reliability, validity, and evidence of student learning.

Reliability and Validity

The research by the ARG (2007) and the findings of researchers in Scotland (2011) and Alberta (Burger et al., 2009) are worth examining, as all found that ‘teachers' professional judgment is more reliable and valid than external testing...’ Parkes (2013), writing in the United States team’s collection of pre-readings, and Maxwell (2009), writing from an Australian perspective, take different positions from one another. And yet, neither appears to have considered this emerging body of research.

If one considers reliability from a social sciences perspective, then one addresses issues related to reliability – repeatable, replicable – by looking at the evidence of student learning collected from multiple sources over time (Lincoln and Guba, 1985). Maxwell and Cumming (2011), delegates from Australia, come close when they state, “Concerning reliability, continuous assessment would lead to more stable judgments of student achievement (through collection of more extensive information over time and consultative judgements among teachers)” (p. 206).

Evidence of Learning

From a social sciences perspective, collecting evidence of learning is a qualitative task – and a messy one at that – because teachers are, potentially, collecting evidence of everything a student says, does, or creates. As teachers have deconstructed curriculum expectations/outcomes/standards, they have learned to be strategic about what they collect as evidence of student learning. They are also strategic about what students collect as evidence of their own learning in relation to what needs to be learned. This process of triangulation of data (Lincoln and Guba, 1985) supports classroom teachers as they design next instructional steps and, later, when they are called upon to make a summative judgement. Heritage (2013) discusses how teachers generate evidence of student learning, the range of sources of evidence, and the quality of evidence, as well as evidence of learning in the context of learning progressions. The paper reviews the variety of purposes for collecting evidence of learning (informing students’ and teachers’ ‘next steps,’ making learning visible along the way and over time, and helping teachers respond to student learning during the learning itself).

Heritage (2013) notes that Patrick Griffin (2007) argues that humans can only provide evidence of learning “through four observable actions: what they say, write, make, or do” (p. 9). This is the definition of triangulation – everything a student says, does, or creates (B.C. Primary Program draft, 1989; Davies, 2000; 2012). Heritage (2013) goes on to discuss a variety of researchers who try, in different ways, to do exactly the same thing – that is, account for the vast possibilities of evidence of student learning.

In the end, I think everyone interested in classroom-based evidence of learning will find it more helpful to acknowledge that the ways students show their learning cannot be contained in definitions; rather, everything a student creates, says, or does is potentially evidence of learning.

Theoretical papers discussing reliability, validity, and what counts as evidence of student learning in classrooms need to be revisited given:

1.    Classroom assessment is not a ‘mini’ version of large-scale assessment. Reliability and validity begin to be addressed when teachers plan assessment and instruction with the learning expectations in mind and plan to collect evidence of learning in relation to those expectations while attending to triangulation of evidence of student learning (products, conversations, and observations of process).
2.    Moderation isn’t only for large-scale assessment. When professionals are involved in both formal and informal processes of moderation with the purpose of coming to agreement about quality of student evidence, their professional judgement is more reliable and valid than external tests and measures (ARG, 2007; Burger, 2009; Hutchison, 2011).
3.    Evidence of learning is messy. The collection of student evidence of learning from multiple sources including products, conversations, and observations – triangulation (Davies, 2012) – not only prepares teachers to design instruction minute-by-minute but also provides the evidence of student learning needed to support summative judgements about student learning in relation to curriculum expectations/outcomes/standards for reporting purposes.

As I reflect upon the definition of triangulation of evidence of student learning – collecting products, conversations over time, and observations of process – embedded in numerous curriculum and assessment documents across Canada, I think the way Parkes (2013) and Heritage (2013) consider reliability, validity, and evidence of student learning is not helpful in our Canadian context. 

When one considers the sheer number of Canadian classrooms and jurisdictions where teachers are expected to exercise their professional judgement for both formative and summative purposes, it is obvious that Parkes' (2013) and Heritage's (2013) research summaries reflect Canadian education’s past, not our present or our future.

References:

BC Ministry of Education. (1989; 1990; 2000; 2010). The Primary Program: A Framework for Teaching. Victoria, BC: Queen's Printer.
Davies, A. (2011). Making Classroom Assessment Work, 3rd Ed. Courtenay, BC: Connections Publishing and Bloomington, IN: Solution Tree Press.
Davies, A. (2000). Making Classroom Assessment Work. Courtenay, BC: Connections Publishing.
Heritage, M. (2013). Gathering evidence of student understanding. In J. H. McMillan (Ed.) SAGE Handbook of Research on Classroom Assessment, pp. 179-196. New York: SAGE.

Lincoln, Y. S. & Guba, E. G. (1985). Naturalistic Inquiry. Beverly Hills, CA: Sage Publications.
Manitoba Ministry of Education. (2010). Provincial Assessment Policy Kindergarten to Grade 12: Academic Responsibility, Honesty, and Promotion/Retention. Winnipeg, MB: Manitoba Education.
Maxwell, G. (2009). Dealing with inconsistency and uncertainty in assessment. Paper delivered at the 35th Annual Conference of the International Association of Educational Assessment, Brisbane (2009, September).
Maxwell, G. S. & Cumming, J. J. (2011). Managing without public examinations: Successful and sustained curriculum and assessment reform in Queensland. In L. Yates, C. Collins and K. O’Connor (Eds.) Australia’s Curriculum Dilemmas: State Cultures and the Big Issues. Chapter 11, 202-222.
Nova Scotia Department of Education. (2013). Back iNSchool Again! Halifax, NS: Department of Education. https://www.ednet.ns.ca/
Parkes, J. (2013). Reliability in Classroom Assessment. In J. H. McMillan (Ed.), SAGE Handbook of Research on Classroom Assessment, Chapter 7, 107-124. New York: SAGE.

Remember, if you want to keep track of some of the conversations connect via these links:

https://tagboard.com/AforLConversation/160953
Twitter: #AforLConversation
Twitter: @Anne_DaviesPhD



Saturday, 5 April 2014

Higher Education and Assessment for Learning - Research to Consider


I’ve been corresponding lately with Royce Sadler. Part of our back-and-forth has been related to assessment in higher education/post-secondary contexts. I wrote a post about one of his recent papers because it paralleled powerful practices found in K-12 classrooms – students moderating quality. He explained that while a huge amount of work has been undertaken related to formative assessment, very little has focused on summative assessment and grading in colleges and universities, and he notes that this is a critical need.

The International Symposium on Assessment for Learning in Fredericton, NB has at least two delegates submitting papers from a higher education context. Vince Geiger and his colleagues are working in the area of Mathematics in higher education. Harm Tillema and his colleagues have an ongoing research interest in peer assessment. I have included a selection of readings from all three of these writers/researchers below.

If you want to keep track of some of the conversations, connect via these links:

https://tagboard.com/AforLConversation/160953
Twitter: #AforLConversation
Twitter: @Anne_DaviesPhD

Authored or co-authored by Vince Geiger – member of the Australian Team

Geiger, V., Jacobs, R., Lamb, J. & Mulholland, J. (2009). An approach to student-lecturer collaboration in the design of assessment criteria and standards schemes. In J. Milton, C. Hall, J. Land, G. Allan and M. Nomikoudis (Eds.) ATN Assessment Conference 2009: Assessment in Different Dimensions (Proceedings of a conference on teaching and learning in tertiary education, pp. 137-145). RMIT University, Melbourne.

Goos, M. & Geiger, V. (2012). Connecting social perspectives on mathematics teacher education in online environments. ZDM: The International Journal on Mathematics Education 44 (6): 705-715.

Authored or co-authored by Harm Tillema – member of the Continental Europe Team

Pat-El, R., Tillema, H., Segers, M. & Vedder, P. (2013). Validation of assessment for learning questionnaires for teachers and students. British Journal of Educational Psychology 83 (1): 98-113. (First published online 6 December 2011, DOI: 10.1111/j.2044-8279.2011.02057.x).
Tanilon, J., Vedder, P., Segers, M., & Tillema, H. (2011, April). Incremental validity of a performance-based test over and above conventional academic predictors. Learning and Individual Differences, 21(2): 223-226.
van Gennip, N., Segers, M. & Tillema, H. (2009). Peer assessment for learning from a social perspective: The influence of interpersonal variables and structural features. Educational Research Review 4 (1): 41-54.

Authored or co-authored by Royce Sadler – former member of the Australian Team

Sadler, D. R. (In press, due July 2014). Learning from assessment events: The role of goal knowledge. In C. Kreber, C. Anderson, N. Entwistle & J. McArthur (Eds.) Advances and Innovations in University Assessment and Feedback. Edinburgh: Edinburgh University Press.
Sadler, D. R. (2014). The futility of attempting to codify academic achievement standards. Higher Education, 67(3): 273-288.
Sadler, D. R. (2013). Opening up feedback: Teaching learners to see. In S. Merry, M. Price, D. Carless & M. Taras (Eds.) Reconceptualising Feedback in Higher Education: Developing Dialogue with Students. (pp. 54‑63). London: Routledge.
Sadler, D. R. (2010). Beyond feedback: Developing student capability in complex appraisal. Assessment & Evaluation in Higher Education, 35 (5): 535‑550.
Sadler, D. R. (2002). Ah! … So that’s ‘quality’. In P. Schwartz & G. Webb (Eds.) Assessment: Case Studies, Experience and Practice from Higher Education (Chap. 16, 130-136). London: Kogan Page.




Feedback for Learning in Science, Technology, Engineering, and Mathematics


Maria Ruiz-Primo and Min Li (2013) examined formative assessment from the perspective of research in science, technology, engineering, and mathematics (STEM) education. Their chapter is based on a meta-analysis in which an initial pool of roughly 9,000 research papers was narrowed to the 238 that were appropriate for inclusion. This focus allowed the researchers to examine the state of research into formative feedback in the classroom context.
The statement “We argue that feedback that is not used by students to move their learning forward is not formative feedback” (p. 219) is a helpful one for teachers reflecting on their practice. It is similar to a statement made by Dylan Wiliam and Paul Black (2009), although they were examining different work. Ruiz-Primo and Li (2013) give a detailed description of what formative feedback looks like in the classroom context, elaborating on the following points:

1.    Be seen as a process strongly guided by the learning goal(s) that the teacher and students work toward.

2.    Actively involve students in the feedback process by engaging them in (1) defining the evidence of learning and/or success criteria (or goal or reference level) being targeted, (2) comparing the current or actual level of performance with the evidence or the criteria for success, and (3) using assessment information to improve their own learning to reduce the gap.

3.    Be considered as an instructional scaffold that goes beyond written or oral comments.

4.    Be specifically intended to improve learning outcomes (e.g., deepening conceptual understanding) and processes (e.g., reflecting on one’s learning and learning strategies or making new connections to what has been learned…).

5.    Ensure its usefulness by making feedback accessible and practical. Appropriate feedback helps students develop sufficient insights into their own learning and become self-critical, self-reflective, and self-directed…

6.    Consider different sources of information about students’ learning and understanding, from highly formal (e.g., taking tests or quizzes, filling in a handout, or completing an investigation report) to very informal (e.g., questions, comments, observations, and evaluative and interpretative listening).

7.    Demonstrate, over time, alignment with a learning trajectory at least within a unit or module. That is, students will take actions based on feedback aligned to the trajectory used to design and map the instructional and assessment activities…

And, as they do so, they carefully cite the research that supports their statements. They explain that the definition highlights two characteristics:

1.    Students are essential players in the feedback process. They need to be partners in clarifying learning goals or success criteria, comparing current performance with success criteria, and using the information to formulate an action plan, as well as acting as feedback providers and developers of the actions to be followed.

2.    Feedback that supports learning is far more than oral or written comments on students’ performance. It involves looking for evidence of students’ learning in ongoing interactions and continually engaging students in discussions about learning and quality.

This chapter is well worth reading. I appreciated the thoughtful proposal of a framework for research in this area, along with the framework’s implications for classroom practice. It is important that feedback be studied in the context of classroom learning, and framing powerful research questions is an important part of that work.

As I read this chapter, I especially appreciated the emphasis on the role of the student – the learner. For example, “If [formative assessment] is exclusively in the hands of the teachers, it can be argued that students cannot be entrusted with managing their own learning, and self-regulation skills thus are less likely to be developed (Andrade, 2010)” (p. 224).

This chapter is a ‘must read’ if you are working with educators and promoting high quality feedback to support student learning. You might also enjoy Dylan Wiliam’s (2007) chapter focused on feedback from an instructional perspective in mathematics classrooms.

Those of you attending Assessment for Learning: Canada in Conversation with the World in Fredericton, NB, will enjoy listening to and learning from Maria Araceli Ruiz-Primo. Keep track of the conversations at: https://tagboard.com/AforLConversation/160953
Or, check in on Twitter @Anne_DaviesPhD or #AforLConversation

References

Andrade, H. (2010). Students as the definitive source of formative assessment: Academic self-assessment and the self-regulation of learning. In H. Andrade & G. Cizek (Eds.), Handbook of Formative Assessment (pp. 90-105). New York: Routledge.

Black, P. & Wiliam, D. (2009). Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability, 21(1): 5-31.

Ruiz-Primo, M. & Li, M. (2013). Examining formative feedback in the classroom context: New research perspectives. In J. H. McMillan (Ed.), SAGE Handbook of Research on Classroom Assessment (pp. 215-232). New York: SAGE Publications.

Wiliam, D. (2007). Keeping learning on track: Classroom assessment and the regulation of learning. In F. K. Lester Jr. (Ed.), Second Handbook of Mathematics Teaching and Learning (pp. 1053–1098). Greenwich, CT: Information Age Publishing.





Assess to Learn - Impressive gains in student learning and achievement


Are you seeking more information and support for powerful professional learning using assessment for learning in support of numeracy and literacy?

Do you want to document impressive gains in student learning and achievement, and have teachers and schools report positive sustainable changes in teaching, learning, and assessment processes, practices and systems?
 
The report on the New Zealand Ministry of Education’s Assess to Learn professional development project documents the project’s impact on teachers, students, and schools in New Zealand. The key outcomes of the project were to:

- improve student learning and achievement
- shift teachers’ knowledge and assessment practice
- develop coherence between assessment processes, practices, and systems in classrooms and in schools so that they promote better learning
- demonstrate a culture of continuous school improvement

This project has been underway since 2002. Since 2003, it has been evaluated on an ongoing basis by national evaluators Dr. Jenny Poskitt (Massey University) and Kerry Taylor (Education Group Limited).

They have documented impressive gains in student learning and achievement, and teachers and schools report positive sustainable changes in teaching, learning, and assessment processes, practices, and systems. The full report, published in 2008, describes the project, summarizes their approach to professional learning, and documents the changes that emerged as a result of the focus on assessment for learning.

This report is of interest to those seeking more information and support for powerful professional learning using assessment for learning in support of numeracy and literacy.

Jenny Poskitt is presenting at Assessment for Learning: Canada in Conversation with the World.

Find out more about the conversation at https://tagboard.com/AforLConversation/160953


Exploring Professional Judgement, New Zealand Style




There has been a lot of interest in the notion of “informed professional judgement” as many Canadian educators experience a change in the balance between large-scale external assessments across a province and classroom assessment. As I’ve been reading the work of the various international experts in the area of assessment for learning, it has been fascinating to see how the notion of professional judgement is emerging differently in different jurisdictions.

As educators, we can find ourselves “adopting a good idea” without having taken the time to examine the underlying assumptions and the context in which the idea first took root. For example, if one quickly reads the well-written chapter by Susan Brookhart (2013) titled Grading, you might miss that she clearly notes the findings are pertinent to the United States. You might assume that means they would be the same elsewhere. You would likely be wrong. The current assessment, teaching, and learning conditions in the United States are unique and not necessarily transferable to countries with different contexts.

In a recent post, I shared the work of some of the Australian team members in the area of moderation of student work. Australia has, during the past decade, begun to implement national testing. 

This work is fascinating and of immediate interest to educators in Canada, where many provincial/territorial jurisdictions also have external examinations. Here in Canada, many jurisdictions use the process of moderation with students, with teachers, across schools, and across school systems. In the 1990s, British Columbia engaged in moderation processes as part of developing reference sets – descriptions of student learning over time – in a variety of areas such as reading, writing, numeracy, group communication, and technology. In the late 1990s, Ontario, British Columbia, and many individual school districts collected student work samples in subject areas such as writing, numeracy, social responsibility, and so on, and engaged in a variety of moderation processes. The process itself varies, and yet it is typically described as incredibly powerful professional development.

Recently I’ve been reflecting on two papers focused on “overall professional judgement” submitted by New Zealand team members Charles Darr and Jenny Poskitt. While the process they describe also involves looking at student work, it is evolving in a very different context.

In the 1990s, New Zealand developed a powerful and extensive set of assessment exemplars that illustrated learning over time in a variety of subject areas and disciplines. During the past decade, New Zealand has moved to articulated national standards; however, there is no national testing. Instead of relying on national testing, they are seeking to continue the work towards “overall professional judgement.”

In their article titled New Zealand teachers’ overall teacher judgements (OTJs): Equivocal or unequivocal?, Poskitt and Mitchell (2012) describe some of the work being done as teachers develop their understandings of National Standards and work towards common understandings of quality in relation to the standards. They write, “Teacher capacity to judge current and future performance is important. With multiple opportunities to gather pertinent information, teachers are best placed to make valid (unequivocal) judgements on student achievement when they have shared understandings of standards. Because standards are comprised of multiple criteria, not all of which are evident in samples of student achievement, teacher understanding of standards develops through professional conversations and moderation processes.”

In this article, after examining the international context, the history of this work, and the challenges that arise, they detail the research methods – case study – that they used. I appreciated this concluding statement:

“In essence there is tension between teachers’ tacit knowledge (gut feeling), intra and inter professional judgements, and explicit knowledge. Although having clarity about the composition of OTJs and standards, appropriate evidence to underpin those judgements and using exemplars and verbal descriptors to support the judgements is likely to lead to greater consistency (Sadler, 1998; Gardiner et al., 2000), it is only through moderation processes that teachers will reach deeper understanding of standards (Wyatt-Smith et al., 2010). Deep or meaningful change in teacher beliefs and professional practice takes time, generally considerably longer than anticipated (Timperley, Wilson, Barrar, & Fung, 2007).” (Poskitt and Mitchell, 2012, p. 73)

A second paper, by Charles Darr (2014), describes the development of an online tool to support teacher judgements. Darr (2014) outlines the framework and the challenges of supporting teachers’ professional judgement with an online tool. As I read Darr’s paper, I was reminded of Malcolm Gladwell’s book, Blink: The Power of Thinking Without Thinking (2005), as well as the Assessment Reform Group’s 2006 report, The role of teachers in the assessment of learning.

The work focused on professional judgement has relevance for Canadian educators: it supports some of the work already being done and offers fresh ways of thinking about it as professional judgement moves forward in Canada – in particular, in the many projects across Canada exploring alternative ways to communicate with parents and communities about student learning.

To read more about ‘overall professional judgement’ from a New Zealand perspective, you might want to read the papers by these New Zealand researchers; I’ve included them in the reference list below.

In closing, we have much to learn from our international colleagues from New Zealand and elsewhere. I am heartened that these ongoing international conversations support all of us to be critical and thoughtful consumers of research and ideas imported from other contexts. 

We will post glimpses of our deliberations at: https://tagboard.com/AforLConversation/160953

References

Assessment Reform Group. (2006). The role of teachers in the assessment of learning. UK: ARG.

Brookhart, S. (2013). Grading. In J. H. McMillan (Ed.), SAGE Handbook of Research on Classroom Assessment (Chapter 15). Thousand Oaks, CA: SAGE Publications.

Darr, C. (2014). The development of an online tool to support teacher judgements. Auckland: New Zealand Council for Educational Research (NZCER).

Gladwell, M. (2005). Blink: The Power of Thinking Without Thinking. New York: Little, Brown and Co.