Tuesday, September 10, 2013

Some more DDI... (text-heavy post... and not so much about art :-/ )

I can honestly say that I can't wait for September to be over!  There is just SO much paperwork that needs to be done and turned in, and so much thinking to do to plan ahead for DDI, SLOs, LLOs, etc., that I think my brain is already fried, and it's only Tuesday!!!  (Boy, I'm sure missing having day 6 off at the moment... being a full-time teacher is going to wear me out! ;) )

Monday I stayed for a faculty meeting about DDI, data-driven instruction.  The presenter came from our local BOCES and has been working with other districts to begin implementing DDI as well.  The information she presented makes sense.  The first year of implementation is going to be a little more work because of the need to build test banks and such, but the concept behind DDI is sound.  I figured I would share a bit in this post about what was presented and how I think I'll use it with my high school art classes.

Data-driven instruction is built upon four steps.  The concept comes from Paul Bambrick-Santoyo and his book Driven by Data.

1.  Assessment:  Teachers should use interim assessments throughout the school year (my district is requiring us to give a test at the 10-, 20-, 30- and 40-week marks... the 40-week test being the final exam and, in my case, part of my post-assessment for my APPR) to see what students have a good understanding of and what they are retaining.  Tickets out the door (or exit slips) and bell ringers are also forms of assessment that should be used to check what students are understanding.

2.  Analysis:  The second step is to analyze the data from the tests.  Ideally, if you're in a large district, you'd have a team of teachers who teach the same subject area to sit down with; in a district like mine, my principal and I will be the "team" analyzing my data.  As you look at the data, you should be asking, "Why did 70% of the students choose this wrong answer over the correct answer?"  Maybe it was a poorly worded question, or maybe the concept needs to be retaught.  On the other hand, you would hope to find that 90% or more of students are getting the correct answers.  (This is where eDoctrina, the program I have to use, will come in handy.  Even though it's a lot of work to input questions, the analysis becomes easier because the program automatically calculates the percentage of students choosing each answer for each question.  It even creates a bar graph for those who need to see it visually!)
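(Just to make the arithmetic concrete for myself, here's a rough sketch of the kind of per-question tally a program like eDoctrina is presumably doing behind the scenes.  This is my own little Python mock-up with made-up answer sheets and question labels, not anything pulled from the actual software: it counts how many students picked each answer choice and flags any wrong answer that most of the class chose.)

    from collections import Counter

    # Hypothetical answer sheets: one dict per student, question -> chosen answer.
    answer_sheets = [
        {"Q1": "B", "Q2": "C", "Q3": "A"},
        {"Q1": "B", "Q2": "D", "Q3": "A"},
        {"Q1": "C", "Q2": "D", "Q3": "A"},
        {"Q1": "B", "Q2": "D", "Q3": "B"},
    ]
    answer_key = {"Q1": "B", "Q2": "C", "Q3": "A"}  # hypothetical correct answers

    for question, correct in answer_key.items():
        counts = Counter(sheet[question] for sheet in answer_sheets)
        total = sum(counts.values())
        print(f"{question} (correct answer: {correct})")
        for choice, n in sorted(counts.items()):
            # Flag a wrong answer that most of the class picked -- worth a closer look.
            marker = "  <-- majority wrong answer" if choice != correct and n / total >= 0.5 else ""
            print(f"  {choice}: {n}/{total} ({n / total:.0%}){marker}")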

The presenter also gave an example of how teachers in another district, who implemented DDI last year, analyze the data.  After a test has been corrected, they assign the following as homework:  "Go home, and for each answer you got wrong, please write down why you chose that answer over the correct one."  This essentially helps you understand how the students heard the information you taught.

3.  Action:  The third step is to take action.  Meaning, what and when are you going to reteach, and how are you going to reteach it so that students grasp the material better?  Basically, you might need to change your teaching strategy.  This is often the hardest step, especially for those under a time constraint because of state testing.  I think this part is a lot easier for us special-area teachers because we generally don't have a state curriculum we are mandated to follow.

4.  Culture:  The final step has to do with changing the culture of student learning.  Basically, under DDI, you "own" your lessons for the first 10 weeks of school.  After that 1st interim assessment, the students "own" your lessons and drive the teaching and learning process...they essentially take charge of their own learning.


So, here's the first question this brings up... isn't DDI suggesting that we "teach to the test," which is what everyone hates doing?  Not necessarily.  Those interim assessments aren't supposed to be held against the students.  But then how are we supposed to get the students to take them seriously and actually try on these tests?  That's up to the teacher.  Our district suggested counting the interim tests as a homework grade instead of a test grade, so students are still held accountable but the tests won't make or break their report card averages.

Next, how should we be building our assessments, especially if we have extremely high achievers and low achievers in the same class?  The presenter suggested building the assessment for the bulk of the students.  (Hmmm, easier said than done?)

How many questions should be on these interim assessments?  The presenter suggested 3-5 questions for every standard or sub-standard you have covered up until that point.  (In eDoctrina, when I input my questions into the question bank, I can also attach a standard/sub-standard to each question.  Again, when it's time to analyze my data, it will automatically tell me the percentage of students doing well/poorly on each standard.  I plan on inputting all my questions from each unit test I give so that I can randomly pick a few from each unit for each interim assessment.)
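(Again, just to picture for myself how that tagging pays off at analysis time, here's another little Python sketch of rolling question results up into a per-standard percentage.  The standard codes, the question-to-standard mapping, and the scores are all invented for illustration; I have no idea how eDoctrina actually does it under the hood.)

    from collections import defaultdict

    # Hypothetical tagging: each question mapped to the standard it assesses.
    question_standards = {"Q1": "VA.1", "Q2": "VA.1", "Q3": "VA.2", "Q4": "VA.2", "Q5": "VA.2"}

    # Hypothetical results: question -> one score per student (1 = correct, 0 = incorrect).
    results = {
        "Q1": [1, 1, 0, 1],
        "Q2": [1, 0, 0, 1],
        "Q3": [1, 1, 1, 1],
        "Q4": [0, 0, 1, 0],
        "Q5": [1, 1, 0, 1],
    }

    # Roll question-level results up to standard-level percentages.
    totals = defaultdict(lambda: [0, 0])  # standard -> [correct answers, answers given]
    for question, scores in results.items():
        standard = question_standards[question]
        totals[standard][0] += sum(scores)
        totals[standard][1] += len(scores)

    for standard, (correct, attempted) in sorted(totals.items()):
        print(f"{standard}: {correct}/{attempted} answers correct ({correct / attempted:.0%})")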

Coming from the CSE angle... how are we supposed to accommodate everyone who needs it?  My district has A LOT of students who require services from an IEP or a 504 plan... extended time, tests read aloud, etc.  On top of regular class tests, quizzes and assignments, this puts that much more strain on those services.  According to the presenter, if an accommodation can happen on the summative assessment, it should happen on the interim.  For example, in NY, a 3rd-8th grade math exam is allowed to be read aloud to students who have that requirement on their IEP, but certain sections of the ELA exam are not allowed to be read aloud.  (This is not necessarily going to be a good thing for many districts... especially districts like mine that can't afford to hire more support staff now that scores are lower on the new state tests under Common Core.  My district is going to try to entice students to stay after school for AIS services.  This would ensure they get the extra support they need without filling up the school day even more.  Our teachers would be paid extra for staying after to work with the students, which is cheaper than hiring more full-time staff.  Only problem is... students aren't mandated to take AIS, so we have to make them want to stay after.)

Wow, so all that being said, DDI is looking both good and bad to me right now.  To be honest, checking student understanding at various points using cumulative assessments seems like a no-brainer to me.  I did this last year with my 5th graders and my color theory curriculum.  Each rubric had review questions from all of the previous lessons and units, and each rubric had more and more of them, but the students were able to answer just about all of them correctly by the end of the year!  (Thus the reason they did really well on their post-assessment!)  I remember tests from high school that had review/bonus questions on them from previous units...

DDI does stink, though, because now I'm having to administer more tests in art.  I do think tests are important because I want to treat art, to an extent, like an academic subject so that students take it seriously and actually try to learn and retain the information I teach them.  But holy tests galore!  I'll have to work on getting a little creative with these interim assessments so that they have some performance-based aspect to them... but that might be too much for my plate this year; I'd want to give it serious thought!

I just want to leave you with one more thing.  (And I promise, after this post, I WILL be posting some artwork and such that is starting to get finished up at school!  I have still only seen about half of my elementary kids...!)  The presenter gave us this handout to try to explain why DDI needs to be done (it comes from Bambrick's book):

TEACHER:  Listen; this data-driven education thing seems interesting and all, but why are we doing it?

PRINCIPAL:  Do you watch basketball?

TEACHER:  Sure.

PRINCIPAL:  During a recent high school basketball playoff game, the scoreboard completely malfunctioned midway through the game.  So, the refs kept the score and time on the sidelines.  As it came close to the end of the game, the visiting team was down by two points, but they didn't realize it, nor did they know how much time was left.  The clock ran out before they took the final shot.

TEACHER:  That's not right!

PRINCIPAL:  Of course not.  If the scoreboard had been working, the entire end of the game could have been different.  So you'd agree that a working scoreboard is critical for sporting events, correct?

TEACHER:  Of course.

PRINCIPAL:  At the end of the day, data-driven instruction is like fixing the broken scoreboard.  Relying on state tests is like covering up the scoreboard at the beginning of the game and then uncovering it at the end to see if you won.  At that point, there's nothing you can do to change the outcome!  We use interim assessments to keep the scoreboard uncovered, so we can make the necessary adjustments to win the game.
