Wednesday, 20 March 2013

Preparing Diagnostic Assessments

Recently, Melissa was on our members' site and asked me to respond to the following question: "Can you explain the best method of preparing a math diagnostic assessment?"

When one considers the best method of preparing a math diagnostic assessment, it is important to ask a few questions:

1. What is the purpose?
2. Who is the audience?
3. What kind of evidence (e.g., data or information) is needed?
4. How can you ensure reliability and validity?
5. How can the findings be reported?

From a classroom assessment perspective, we think about diagnostic assessment as being about the gathering of “baseline data” through engaging learners in the tasks they are going to be learning more about. The purpose of diagnostic assessment from the classroom perspective is to understand what students know and what they need to know so instructional plans can be made with specific student needs in mind. Two examples:

Task: Students engage in representing their learning in relation to specific learning expectations/intentions. These tasks might be done by individuals, by small groups, or by the entire class. There are many possibilities for performance tasks based on grade-level curriculum: building patterns using manipulatives, solving problems and representing mathematical thinking in a variety of ways (words, symbols, graphs, equations, and so on), or any other tasks that involve applying mathematical concepts. Powerful performance tasks result not only in a product but also in an opportunity to observe students and ask them to articulate their understandings. Collecting evidence from multiple sources over time (baseline tasks repeated more than once) provides for reliability and validity.

Test: Other times teachers take an end-of-unit test or quiz and ask students to do as much as they can, noting which questions are easy, which are not too bad, and which are really difficult. It is helpful to ask students to use a common set of visual symbols (e.g., a target) or colours (e.g., easy is green, moderate is yellow, difficult is red) to code the test items. Teachers explain to students that they don't expect them to know everything because this is an END-of-learning test. Students are being asked to do the test so teachers will know more about what needs to be taught.

Other times the purpose of the diagnostic assessment is to identify trends and patterns across a large group of students so programs can be designed, or to identify learning difficulties. These standardized diagnostic tests and tasks have their own quality standards. If these are the kind of diagnostic assessments you are interested in, you might want to read a column by Jim Popham titled "Diagnosing the Diagnostic Test."

Whatever you decide to do, think carefully about your purpose and ask yourself, “Do my planned next steps in terms of the diagnostic assessment support student learning?” If you can respond with a “Yes!” then proceed. If not, revise your plans. After all, diagnostic assessments are about supporting student learning first. Fulfilling the information needs of adults is a distant second purpose.

As teachers plan their classroom assessment in support of student learning, they find it helpful to build an assessment plan. You might want to use the end-of-chapter activities in Making Classroom Assessment Work to build your own assessment plan. This third edition will help you figure out which tasks could be a source of important baseline data for you and your students. I recommend you pay particular attention to the end-of-chapter activities for Chapters 3, 4, 5, and 9.

All my best,

PS Consider attending one of our summer Institutes in Canmore, AB or Fredericton, NB to find out more about diagnostic assessments and building an assessment plan.

Wednesday, 13 March 2013

Report Card Planning

Sarah is working on a project related to reporting. (A while ago I tweeted about an interesting blog by Andrew Campbell that you may also want to read.)

As I reflected on the questions she posed, I invited her to get in touch so we could have a longer conversation. And, I posted the following quick comment...

I'm happy to talk some more about reporting... especially when we conceive of reporting as a process rather than an event and when we think about how children can be involved in communicating evidence of their own learning. Technology is beginning to make it possible for students to take control of communicating the evidence of learning and for teachers to communicate their professional judgement in relation to grade level expectations... two thoughts come to mind...

1. The person working the hardest is learning the most...why shouldn't students be working harder (and smarter) when it comes to reporting?

2. Teachers' professional judgement is more reliable and valid than external tests when they have been engaged in co-constructing criteria, looking at samples of student work, scoring that work, checking for inter-rater reliability, and so on....

What happens when we help students understand quality, learn the language of assessment, and self-monitor their way to success? Even young children can do this! We have documentation. We have research evidence. Why not have students deeply engaged in collecting and sharing evidence of their learning?

Why don't you post your thoughts also? Here is the link again.


PS This is a topic we focus on during our summer Institutes and is often what people ask us to focus on as part of our sessions with schools and districts. Get in touch with me via Twitter (on this blog page) or through Kathy Burns at our office 1.800.603.9888/250.703.2920 if you want to find out more.