Tuesday, October 7, 2008

Is good data-driven decision-making possible?


Is data-driven decision-making possible in schools? I've long worried that it isn't, and I'm glad to hear Chris Lehmann echo this concern in his excellent presentation (excellent because I agree with almost everything he says?) at Ignite Philly:

Chris, you're singing the librarians' old songs about research and problem-based learning and presentation and authenticity. Great!

But back to data-driven decisions. Why are these nearly impossible to make well at a school level? From an earlier column, A Trick Question:

At last spring’s interviews for our new high school library media specialist, the stumper question was:

"How will you demonstrate that the library media program is having a positive impact on student achievement in the school?"

How did that nasty little question get in there with “Tell us a little about yourself” and “Describe a successful lesson you’ve taught”? Now those questions most of us could answer with one frontal lobe tied behind our cerebellums.

Given the increased emphasis on accountability and data-driven practices, it’s a question all of us, librarians and technologists alike, need to be ready to answer - even if we are not looking for a new job or don’t want to be in the position of needing to look for one.

While I would never be quick enough to have said this without knowing the question was coming, I believe the best response to the question would be another question: “How does your school measure student achievement now?”

If the answer was simply, “Our school measures student achievement by standardized or state test scores,” I would then reply, “There is an empirical way of determining whether the library program is having an impact on such scores, but I don’t think you’d really want to run such a study. Here’s why:
  • Are you willing to have a significant portion of your students (and teachers) go without library services and resources as part of a control group?
  • Are you willing to wait 3-4 years for reliable longitudinal data?
  • Are you willing to measure only those students who are here their entire educational careers?
  • Are you willing to change nothing else in the school to eliminate all other factors that might influence test scores?
  • Will the groups we analyze be large enough to be considered statistically significant?
  • Are you willing to provide the statistical and research expertise needed to make the study valid?"
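The "statistically significant" bullet above can be made concrete with a rough power calculation. The sketch below is my own illustration, not part of the original post; it uses the standard normal-approximation formula for sizing a two-sample comparison of means, and the effect size of 0.2 (a "small" effect, in Cohen's terms) is an assumption about how much a single program could plausibly move schoolwide scores:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate students needed per group to detect a standardized
    effect size (Cohen's d) in a two-sample comparison of means."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # critical value for a two-sided test
    z_beta = z(power)           # quantile corresponding to desired power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# A small effect (d = 0.2) needs on the order of 400 students per group -
# per group, meaning the control group goes without library services.
print(n_per_group(0.2))
```

Even before the longitudinal and confound problems, the raw headcount makes the control-group question untenable for most single buildings.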
I surmised then that "No school I know of has the will to run such a study."

If test scores are the sole measure of "student achievement," there are indeed some things we in schools can be excellent at doing with data. We can identify individual students who perform below established norms and we can look at groups of students with certain characteristics (ELL, FRP, SpEd) and see how they compare with the norms. We can do trend tracking of such groups.

We are good at determining which groups need help. But what comes next is the "gotcha."

Schools are unwilling and unequipped to do controlled studies on the effectiveness of any single intervention over a period of time to improve the test scores of a school or group. The typical pattern is to throw as many changes into a curriculum as possible and hope something sticks.

Let's say our SpEd population is showing low reading scores. A building may well decide to:
  1. Increase the use of differentiated instruction
  2. Try a new computerized reading program
  3. Increase the SSR program
Pretty good strategies, huh? But here is the rub: what happens if the group's scores rise? Any one of these interventions may have been effective. All of them may have contributed to the improvement. Two of them in combination may have led to it. The Hawthorne Effect might be in play, and this year's gains might not appear next year. Some interventions may be effective but take more than a year to show results. A completely extraneous variable may have been present (a new teacher or principal, perhaps).
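A quick back-of-the-envelope count (mine, not the original post's) shows how fast the attribution problem grows: with three simultaneous interventions, every non-empty subset of them is a candidate explanation for the gain, and that's before counting Hawthorne effects or extraneous variables:

```python
from itertools import combinations

interventions = [
    "differentiated instruction",
    "computerized reading program",
    "expanded SSR",
]

# Every non-empty subset of the interventions could be what moved the scores.
candidates = [
    combo
    for r in range(1, len(interventions) + 1)
    for combo in combinations(interventions, r)
]
print(len(candidates))  # 7 indistinguishable explanations from one uncontrolled rollout
```

Add one more intervention and the count nearly doubles; an uncontrolled rollout simply cannot say which subset did the work.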

Schools should not be tasked with doing research. This is what university lab schools are (or were) for. Every school doing its own research on effective educational practices makes no more sense than every hospital being a research hospital and every student being a guinea pig.

I am not sure what the answer to this problem might be, and no one to date has offered me a good solution (if they are even willing to admit a problem exists). I do believe that carefully applied, valid research can help teachers improve their instructional practices.

It just shouldn't be up to the practitioner to also be a researcher.

Your thoughts?


Reader Comments (6)

So what were the responses by the school media specialists applying for the job? Did they have similar answers to yours? As an MLIS grad student soon to be facing such terrifying interviews, I am curious!
Thanks

October 8, 2008 | Unregistered CommenterRebecca Buerkett

I like the thought about researchers doing research, but the problem is fear from schools. I found a neat intervention to use with my sped students, and a friend at the university level wanted to work with me to study the intervention and its results. Then we would write a research paper together and publish it, because I really thought other teachers needed to know how well this intervention worked. But the school put out a big memo saying all research like this had to be approved beforehand (and since we had already discussed it, it would not be approved) and mega forms would need to be filled out. Basically, it would take months to get approved, and by then all parties would be discouraged from continuing. How do you fight that battle?

October 9, 2008 | Unregistered CommenterPat

@ Hi Rebecca,

As I remember, there weren't very good ones. I do offer some alternative means of demonstrating a library's effectiveness at:

http://www.doug-johnson.com/dougwri/demonstrating-our-impact-1.html

http://www.doug-johnson.com/dougwri/demonstrating-our-impact-2.html

All the best,

Doug

@ Hi Pat,

I think the reasoning behind this is that there are some stringent guidelines for professional research involving "experimentation" on human subjects. I am guessing your district may be thinking of those.

Of course, we experiment on human subjects all the time in school. I guess we will just continue to do it on a non-academic (and less rigorous) level.

Doug

October 10, 2008 | Unregistered CommenterDoug Johnson

Doug,

Shouldn't every teacher, as part of their practice, take some time to be reflective, and ask themselves, "How do I know that what I'm doing is working?"
I agree that I wouldn't want the extensive research that you described... a three-year period of testing, a control group who doesn't get any of the good stuff, etc., but I also think that when you implement something, there should be some sound reasoning behind it, and you should have SOME way of knowing if it worked.
For example... if, instead of me doing live booktalks, I instead posted podcast booktalks from myself AND students, which homeroom teachers could access whenever they wanted, so ALL my classes could hear them, rather than just the ones coming to the library that day... I SHOULD be interested to see if that affects circulation of material.
Does it increase, decrease or keep the circulation of books the same? I can gather some data regarding that WITHOUT the serious, heavy duty data collection methods to which you referred.
I think some "data collecting" is almost necessary if you are a reflective kind of person and I think we need to be.

October 11, 2008 | Unregistered CommenterJanice Robertson

Hi Doug.

Chiming in from the halls of the ivory towers, I can confirm that the amount of paperwork and hoop-jumping to get into a school is discouraging at best, a deal-breaker at worst. I think this is a real cause of the lack of research that takes place in schools. I think one of the bottom lines is that no one wants to have a research team come in and be found lacking. And yet we all are (researchers included). Teacher librarians that I know welcome the opportunity to learn, but the administrators generally put a stop to it, unfortunately. The teacher research / action research movement is valuable, in part because it addresses the access problem head-on, as well as the issue of being "found out."

We do have to find a way to solve this problem, though. And I think site-based research that directly benefits the participating program is important, no matter who conducts it. I don't think there really is such a thing as a universal best practice - any research we do can't be holistically applied to other contexts and work the same way. Humans bring too many variables. But we can learn from specific contexts, publish, and adapt it to other contexts to see its effect.

October 12, 2008 | Unregistered CommenterAngie

Oh, if only No Child Left Behind were truly just a transportation program... In my experience, I have seen hundreds of teachers have to take their limited time for staff development and spend it in exhaustive data analysis workshops--all because we have a system in which it now seems only the numbers matter, not the students. I've seen hundreds and thousands of dollars spent on data-driven decision making seminars, workshops, and conferences. I myself have sat in on some of these until my mind is numb from thinking about SMART goals, AYP, and RIT scores.

I think most people I know went into teaching to teach, not to be data analysts. While I agree the ability to evaluate student progress from a variety of assessments and activities is central to the ability to help students achieve, I can't help but think the emphasis on data has been overdone. Wouldn't it be great if there were enough resources and staff in our schools so that some of the core data analysis could be done for the teachers? That way, teachers could focus on what is meaningful - using assessment results, combined with their personal knowledge of the students in their classrooms, so that they can spend their time on making the necessary adjustments to their instructional practice and working with the students to help them improve?

The same holds true for school library media programs. Why can we not use common sense and be safe in just assuming that a program that helps students embrace reading, teaches them how to be ethical, discerning users of information and builds student technology literacy skills is a good thing that is a worthy component of any educational program? I think most of the school library media specialists I know would be delighted by the opportunity to dedicate themselves to working with their students rather than having to invest significant amounts of time and effort gathering and analyzing data to justify their existence.

October 13, 2008 | Unregistered CommenterMary
