Open letter to Sarah Carlin Ames

2:09 pm

Note: Peter Campbell was a featured guest on today’s OPB news talk show Think Out Loud, along with Portland Public Schools PR person Sarah Carlin Ames and a US Department of Education spokesman. With his air time severely limited (OPB host Dave Miller gave disproportionate time to Ames and the DOE flack), Campbell was still able to make excellent points. The show will be rebroadcast this evening at 9 pm, or can be heard via podcast. –Ed.

Hi, Sarah. It was good to finally meet you in person today.

I wanted to follow up and challenge you a bit on your claim that what is being done at schools like Rigler and Clark is “working.”

Here’s a comment from the blog for today’s show from a Title 1 teacher:

Regarding schools improving and stepping up to the standards: I teach in an elementary Title 1 school and we have made many changes to our school math and reading curriculum and to our schedule in order to meet the state benchmarks. The way we have done it is by having students spend a big part of their school day preparing for the state tests. Students spend at least four months of the year being drilled on how to take and retake the online tests. They are pulled out of class to go and spend one-on-one time with an adult who listens to them read the test out loud or who will read the non-reading tests aloud to them. With all this help, students who are struggling in the classroom are able to pass the state test and make our school scores look good.

Drilling for the test means that there is now very little time for students to participate in art classes, science projects, or book projects. Our school scores are improving because as teachers we are getting much better at teaching to the tests and finding ways to make the students pass them. Please give me the old educational system back. This is the one where students questioned, researched, explored, created, worked on projects . . .

The teachers that I talk to in PPS tell me similar things.

One of the major reasons my wife and I elected to pull our daughter out of PPS is precisely this: a test-centric curriculum that leaves little time for things we consider essential to a well-rounded, developmentally appropriate, engaging learning experience.

I don’t necessarily blame PPS for this problem. I think you and I clearly agree that NCLB is largely to blame. But I urge you and your colleagues to take leadership positions on this issue and inform the public about what’s really going on in our schools and how we can work together to change federal policy. I urge you to take public positions on the real source of the inequity that exists in our schools — poverty — and encourage the public to lobby local, state, and federal officials to take action. Together, we can make positive change for all our kids.

But if the public keeps hearing that things are peachy from you and your colleagues, then NCLB is never going to go away. And that would be a terrible, terrible thing for our kids.

Respectfully,

Peter

P.S. – Have you read Linda Perlstein’s book Tested? If not, I highly recommend it. Perlstein is a former education reporter for the Washington Post. She chronicles the year-long experience of a school outside Baltimore in its efforts to make and maintain AYP. Although the school is “successful” and makes AYP, what happens to the students and the curriculum is heartbreaking. So much for these approaches “working” . . .

Peter Campbell is a parent, educator, and activist who served in a volunteer role for four years as the Missouri State Coordinator for FairTest before moving to Portland. He has taught multiple subjects and grade levels for over 20 years. He blogs at Transform Education.

filed under: No Child Left Behind, Standardized Testing

43 Responses

  1. Comment from Terry:

    I listened to the entire program this morning, Peter. Despite your limited air time, you made your points well. Extraordinarily well.

    I eagerly await Sarah Carlin Ames’ reply.

  2. Comment from Steve Buel:

    Nice job of nailing it, Peter. This is another area where the inequities of PPS come into play. In the upper middle class schools it doesn’t take all the test prep to have a huge percentage of kids pass. Teach them a few of the tricks and bingo they pass. Lots of time left for a more well-rounded education. (Though it wouldn’t surprise me if a few of those schools overdo it also.)

    I have taught language arts in the 7th grade, though mostly have dealt with the Washington (WASL) tests. But the idea is to teach students how to take the test, for sure. All the talk about how if you teach your curriculum, well then, the kids will all pass is just hooey. But educators mouth it and I think some of them really believe it. One of the real shames of the whole system is that testing children to see how much reading instruction they need based upon how well they are reading is just lost. The tests aren’t an accurate measure of a kid’s reading skills. Also, the dark side to the tests is not just that music, P.E., art, recess, etc., are limited by the need to get kids “test ready”, but also that the tests actually detract from the kids who really need reading help getting what they need. So much time is spent making sure kids who can read adequately (and now should be branching out in fields such as history, geography, sciences, and languages) are test ready that it takes time away from kids who genuinely can’t read.

    Gotta admit though, even with all my experience, I was blown away by the teacher commenting that they work 1 on 1 with each kid going over the test. Geez, is every school going to do that now? These things have a way of spreading like California wildfires.

    Darn poor kids anyway. If we didn’t have them to mess with, this district would be running so well.

  3. Comment from Wacky Mommy:

    I didn’t really think it was appropriate for the moderator to joke with OPB correspondent Rob Manning about “slumming it” at the end of a program where high-poverty schools were being discussed.

    NCLB needs to be completely dismantled. No “tweaking” required — just ax it. It’s class warfare.

  4. Comment from Robb Cowie, PPS Communications Director:

    Anyone who claims Sarah Ames doesn’t care about poor kids, doesn’t know Sarah.

    We all have a right to take a hard look at whether NCLB or other laws are achieving their stated goals. But it’s not appropriate for school district staff — teachers, principals, communications office staff, or even the superintendent — to express their personal views about legislation when they are speaking on behalf of the school district.

    School officials can be clear about the impact that policy choices have on our ability to educate kids and we do that frequently, on NCLB, school funding and other issues. But that doesn’t mean we get to use our official roles to encourage people to “lobby” as Peter asks. If that seems insufficiently courageous, think about how you would feel if someone from PPS went on OPB and urged NCLB’s reauthorization in its current form.

    I didn’t hear Sarah ever once say that things are “peachy” for our schools under NCLB. On the other hand, we have schools in Portland that are serving disproportionate numbers of low-income and minority students that are meeting the NCLB standards. No matter what any of us personally may think of the fairness of those standards, how our schools measure up is information that the public has a right to know.

    And our schools are not meeting benchmarks by drilling students on the tests for hours each day. Just because they may do that in Baltimore, Washington state or elsewhere, doesn’t mean that’s what’s happening in Portland. If you want to see why our schools are meeting benchmarks, come to Lane, Rigler, Clark, Cleveland and find out what strategies they’re using.

    It’s great to have a debate about NCLB and to ask hard questions about the law and the performance of our schools under the law. But let’s have the discussion based on real information, not assumptions about our schools or the people who work on behalf of them.

  5. Comment from Steve Rawley:

    Robb, thanks for taking the time to participate here. I think Peter is asking the district — through its spokesperson — to take a lead on informing the public about what’s going on, not the spokesperson alone.

    From Ames’ OPB performance, one gets the impression that the PPS position is that NCLB has “good intentions” and, despite it being “odd” in its implementation, PPS is doing just fine with it.

    What’s interesting to me is that, despite the disproportionate air time given to the DOE flack and Ames to say otherwise, every single teacher who called in or left comments on the Think Out Loud blog asserted that testing and test prep steal significant classroom instruction time.

    Every parent who pays close attention to their kids’ schooling knows this to be true, too.

    I understand that Ames can’t make personal statements in public forums, but the district certainly can take a stand. So perhaps Zarwen is correct that this should be taken up with the school board.

    But isn’t the communications department open for two-way communications?

    It would be great if PPS communications staff didn’t appear reflexively defensive (and even contrarian) in the media, and if they would act as liaisons to the community, not just PR people for the district.

  6. Comment from Steve Buel:

    Robb, everyone claims to care about poor kids, and I really think the district is finally beginning to somewhat recognize the problems, but the record of neglect by PPS in Portland’s poorest neighborhoods since Matt Prophet left as superintendent has been well documented. And no one says PPS schools spend hours a day drilling to get ready for the testing. But the role test prep plays in pulling time and money from other important areas can hardly be disputed. So, if you start with the premise that Portland has neglected its poorer neighborhoods educationally, and that test prep and test money have some very serious negative educational impacts, then the district sending forth arguments that don’t start with these two premises can only be seen as murky at best.

    This is the bulk of the concerns people on this blog and elsewhere have had relating to the equity argument.
    It is one thing to say there is an equity problem or that the testing has some harmful effects; it is another to state the actual equity problems and the harmful effects of the testing and make a clear statement that PPS plans to address these problems.

    Now you could make a case that Carole Smith is beginning to do this, but it will not get done in the end without the approval of The Portland School Foundation, Stand for Children, and The Oregonian editorial board. Sounds kind of absurd, and that is the problem: it is absurd.

  7. Comment from Sarah Carlin Ames:

    I read this post and the comments yesterday and wasn’t quite sure how to respond. So I slept on it, and my boss beat me to the punch.

    These are really important issues and I hope that we can continue to explore them without anonymous personal insults.

    Peter and I had what I thought was a candid and respectful email exchange yesterday, hours before he posted his email to me as an “open letter” on this blog.

    Since you have his original note, I might as well post the response I wrote back at noon:

    —————

    Peter, thanks.

    I know that my kids’ experience in PPS doesn’t match the drill description the teacher posted, but we have many different schools, and yes, my kids are in some of those with fewer kids from low-income homes. One of my goals this school year is to visit more schools and learn more about how the policies play out in classrooms daily. . . . Assessments (and grades, attendance, graduation, etc.) are how we keep score. But the numbers and stats can’t capture what’s really going on in classrooms — I like to hear from teachers directly (as does our superintendent).

    Poverty is a huge factor for our kids. Kids who move from rental to rental (or couch to motel to homeless shelter), who have to stay home to watch their little brothers and sisters, who have to work after school to help feed the family, who don’t have any role models of high school graduates (let alone college grads) in their lives — we know they have a tougher time in school and need more support. We need to enlist all sorts of groups and leaders, as you suggest, to really attack these issues and help these kids succeed (not just pass benchmark).

    Sarah

    ——————-

    For those of you who missed the show, you can listen to the segment on-line at:

    http://www.opb.org/thinkoutloud/

  8. Comment from Peter Campbell:

    Sarah – I decided to post my message to you as an open letter because I thought it was a good idea to make this exchange public. Thanks for writing back, and thanks for posting your response to me.

    Here is what I wrote back to you in response. When you get a chance, please reply. I appreciate the opportunity to engage in this kind of substantive dialog.

    Sarah – thanks for your candor. May I make a suggestion? How about PPS offering a parent/community workshop or public forum on NCLB some time this fall? I’d be happy to help in organizing it.

    From the district’s perspective, you can explain the sorts of things that were brought up in today’s show, e.g., why the heck is Lincoln on the federal watch list? You could explain what the watch list is and why it was created. But you could also be critical of it and say why it does not paint an accurate picture. I think the public would be interested in an open, honest accounting of the differences that exist between the low-income and affluent schools and how NCLB exacerbates these differences. Finally, you could urge people to take appropriate action and offer an out-of-the-NCLB-box vision of what PPS would look like without NCLB. For example, the money that is currently being spent on busing kids from “failing” schools to other schools in the district could be spent on things that would really make a difference for the kids and teachers at these schools. (By the way – would you be able to get that figure for me, i.e., how much does the district spend on busing kids from one school to another for not making AYP? The public needs to know these kinds of details.)

    We can still focus on the well-intentioned goals of NCLB, i.e., closing the educational achievement gap. But PPS could take a leadership position on this issue and offer ideas about what it would really mean to leave no child behind. It would mean sticking your necks out a bit. But I think our kids are worth it, no? As Carole enters Year 2 of her tenure, I think it would be appropriate for her to start talking about a vision of education in PPS that dreams beyond the constraints of NCLB. Such a vision could inspire the public to take appropriate action, e.g., approve a certain forthcoming bond measure . . . (hint, hint).

    Let me know if you’re interested in talking more about this idea. Again, I’d be happy to help in whatever way I can.

    Best wishes,

    Peter

  9. Comment from Peter Campbell:

    One of the many things I wanted to say yesterday (but did not have time to say) was in response to Sarah’s characterization of the role of assessment in instruction. I’m something of an assessment wonk, and value formative assessment as an integral part of good teaching and learning. That said, assessment is only valuable if you are measuring things that are worth measuring.

    My concern with PPS — and most other “data-driven” approaches to learning — is that you spend most of your time measuring lots of things that are not very valuable. Take, for example, what Sarah said:

    “Some of these early childhood assessments can give teachers some amazing information . . . you know . . . this child . . . doesn’t get this group of skills or . . . the whole class is having trouble with rhyming vowel sounds.”

    I agree that the assessments that are being given now can give information about discrete skill mastery, e.g., rhyming vowel sounds. But the problem is that the entire curriculum is dictated by these assessments, so teachers end up focusing on discrete skill mastery and then measuring whether or not children have mastered these skills. What gets left out are those things that cannot be measured, e.g., passion for learning, curiosity, critical thinking, etc. Oh, sure. You try to fit these things in. But Kindergarten teachers in PPS are being asked to do more and more of this kind of “data-driven instruction.” And some schools — Chief Joseph for example — are giving tests like the DIBELS (a 1-minute assessment that supposedly measures fluency but amounts to a speed reading test, complete with a stopwatch — no kidding) even though they are not required to do so. So who has time for the non-tested stuff, the stuff that doesn’t fit on a spreadsheet or a bubble form?

    So there’s this infatuation with skills and skill measurement divorced from the bigger picture of learning. As one of the teachers who called in to the show said, you end up with kids who are great at things like identifying rhyming words but who are not interested in reading books or who are incapable of thoughtful analysis.

    Ideally, we’d have confidence that our curricula were well-designed and that our kids could ace the state tests without any additional preparation. However, as it stands, we spend September through March getting kids ready to take tests. So are we assessing our schools or are we assessing our test preparation efforts? Are the curricula and the test preparation efforts the same thing? If they are the same thing, that’s troubling. If you can only assess that which can be measured, then you’re likely to teach only that which can be assessed. But as Einstein once said, “Not everything that counts can be counted, and not everything that can be counted counts.”

  10. Comment from Peter Campbell:

    Robb – thanks for your comments. I’d like to offer a different take on your comment:


    But it’s not appropriate for school district staff — teachers, principals, communications office staff, or even the superintendent — to express their personal views about legislation when they are speaking on behalf of the school district.

    There are dozens and dozens of teachers, administrators, school board members, and state education officials across the country who have taken public positions that are extremely critical of NCLB. Here are just 2 examples:

    1) A DuPage County Illinois school district — Carol Stream Elementary District 93 — considered not administering mandatory state exams to students who haven’t yet mastered English. District 93 officials said they were willing to break the law this spring to shield students from the frustration and humiliation of taking an exam not designed for them. “The board believes it’s appropriate to do that,” District 93 Superintendent Henry Gmitro said. “While there may be consequences for the adults in the organization, we shouldn’t ask kids to be tested on things they haven’t been taught.”

    2) Several school districts around the country joined with the National Education Association, the nation’s largest teachers’ union, and filed a lawsuit against the federal government. The suit rightly contends that, amongst many problems with NCLB, one of the most egregious aspects of this horrible law is that it asks states to pay for all the extra testing out of their own budgets. This was an important act of defiance. The lawsuit was initially thrown out. But in January, the U.S. Court of Appeals for the Sixth Circuit agreed with the NEA and the other plaintiffs that states and local districts simply can’t be required to spend their own money to comply with the federal law.

    I am urging you and your colleagues on the school board to take similarly courageous stands on behalf of the children of PPS.

  11. Comment from RichW:

    NCLB or not, PPS has failed miserably to achieve equity, especially as it relates to North Portland schools. Parents are voting with their feet by taking advantage of the transfer policy. This means that poorer schools get poorer as their neighborhood school populations transfer elsewhere. This results in a downward spiral where the poorest of families are stuck with underfunded schools. I just don’t understand why PPS, and especially the communications department, continues to give positive spin to PPS when this elephant has been sitting in the parlor for years now. I don’t see any positive action happening to solve this problem. You guys are comfortable fiddling with your formulas and

    Sarah and Robb, if you really want to be part of the solution, start extolling the virtues of N/NE Portland schools, including Jeff High. That too would be positive spin that doesn’t jibe with reality, but neither does the positive spin you guys are placing on NCLB.

    Sarah, especially: Peter did not attack you personally. He attacked your positions. This is one of the other bad things about PPS. You guys can’t take criticism and keep defending your positions no matter what. The problem has lessened only a bit since we got rid of Phillips, and at least PPS communications no longer makes personal attacks on its critics, at least not openly. I refer to the infamous Rieke parent of whom PPS Communications once said he “crawled out from under his rock.” But PPS needs to attack the equity problem with full force, and I think the Communications department should be leading the charge. If not, we activists will continue to rally to reconstitute the board and PPS staff with people who can solve this equity problem.

    RichW

  12. Comment from Wacky Mommy:

    Rich, I call it the death spiral. In Portland, it truly is a death spiral for the Madison, Marshall, Roosevelt and Jefferson clusters. And I am not happy about all the finger-pointing, calling out which groups, specifically (ELL students, students with IEPs, etc.) are not “testing well.” Trying to shame the “irresponsible” parents who are not getting their kids to school, thus causing the AYP scores to drop.

    Parents in our North Portland neighborhood get calls instructing them to bring their kids to school unless they’re throwing up or running high fevers. I only wish I were being sarcastic; I’m not. We get phone calls.

    We are naming by name, basically, which kids are “just not keeping up.” That is not “no child left behind.” That is shaming and ostracizing.

    It also puts pressure on the high-achieving kids. They are told by their teachers where they’ve “placed.” (“So and so tested higher than me, I was second, though.”)

    I am not happy about any of this.

  13. Comment from Zarwen:

    Wacky makes a good point. Few parents are aware that they have the legal right to “opt out” their children from all this testing! Not that it will make any difference in what they are taught in the classroom, but I know one Laurelhurst parent who opts her daughter out every year as her way of protesting this NCLB crap. She complains of endless phone calls and even being followed around the school by the principal, being pressured to change her mind. Pathetic.

  14. Comment from Peter Campbell:

    One more thing I wanted to say on assessment (sorry – I said I was an assessment wonk and I meant it . . .)

    This is a LONG post, but stick with it, OK?

    Good assessment begins with the end in mind, i.e., you start off with what you want students to know and be able to do. So let’s imagine for a moment what that might look like. Here’s a starter:

    1) we want all students to be able to read and love doing so
    2) we want all students to have a fundamental grasp of numeracy and mathematical thinking
    3) we want all students to be able to write persuasively on a variety of subjects

    You might also wish to nurture and develop certain qualities in students, e.g., curiosity, compassion, creativity, confidence.

    So now the question becomes: how do we determine that students know these things, can do these things, or have acquired these qualities? The trick is to use assessment to help students know them, do them, and acquire them. In other words, good assessment is indistinguishable from good instruction. Good assessment drives instruction because it provides rich, meaningful information that both students and teachers can use — teachers to improve their instruction and students to improve their learning.

    So here’s the litmus test for the current battery of assessments that have hijacked our curricula here in PPS: do they provide rich, meaningful information that both students and teachers can use?

    As I said in an earlier post, I would say the answer is “no” because the assessments produce information that is shallow and disjointed. But even worse, some of the assessments used produce information that is simply wrong.

    To make this case, let’s consider the DIBELS test. The Dynamic Indicators of Basic Early Literacy Skills (DIBELS) is a set of one-minute measures: recognizing initial sounds, naming the letters of the alphabet, segmenting the phonemes in a word, reading nonsense words, oral reading of a passage, retelling, and word use. The measures are used to assess phonological awareness, the alphabetic principle, accuracy and fluency in reading connected text, vocabulary and comprehension.

    I know the DIBELS is currently administered at Chief Jo. Not sure which other schools in the district use it, though similar measures are used at other Title 1 schools. Students who do not meet the expected benchmark are given the DIBELS over and over, i.e., the test becomes the exclusive means by which progress in reading is measured.

    Jay Samuels, a professor of educational psychology and of curriculum and instruction at the University of Minnesota, served as a member of the National Reading Panel and coauthored the fluency section of the panel’s report. The NRP’s report has become the gospel on how reading is to be taught in this country, so Samuels’ opinion carries some weight. Here is what he recently wrote about the DIBELS:

    –begin excerpt from Reading Research Quarterly (2007-10-01)–

    The DIBELS’s battery of tests, which are used to assess more than 1,800,000 students from kindergarten to grade 6, aim to identify students who may be at risk of reading failure, to monitor their progress, and to guide instruction. With the widespread use of DIBELS tests, a number of scholars in the field of reading have evaluated them, and not all of their evaluations have been flattering. For example, Pearson (2006, p. v) stated,

    “I have built a reputation for taking positions characterized as situated in ‘the radical middle’. Not so on DIBELS. I have decided to join that group convinced that DIBELS is the worst thing to happen to the teaching of reading since the development of flash cards.”

    Goodman (2006), who was one of the key developers of whole language, is concerned that despite warnings to the contrary, the tests have become a de facto curriculum in which the emphasis on speed convinces students that the goal in reading is to be able to read fast and that understanding is of secondary importance. Pressley, Hilden, and Shankland (2005, p. 2) studied the Oral Reading Fluency and Retelling Fluency measures that are part of DIBELS. They concluded that “DIBELS mispredicts reading performance much of the time, and at best is a measure of who reads quickly without regard to whether the reader comprehends what is read.”

    If Riedel’s conclusion that administration of subtests other than Oral Reading Fluency is not necessary for prediction of end-of-first- and second-grade comprehension, in combination with the critical evaluations of DIBELS by some of our leading scholars in reading is not enough to raise the red flag of caution about the widespread use of DIBELS instruments, I have an additional concern about the misuse of the term fluency that is attached to each of the tests. Because each of the tests is labeled as a fluency test, it is only fair game to see if that term is justified. I contend that with the exception of the Retell Fluency test, none of the DIBELS instruments are tests of fluency, only speed, and that the Retell Fluency test is so hampered by the unreliability of accurately counting the stream of words the student utters as to make that test worthless. Let us not forget that, in the absence of reliability, no test is valid.

    –end excerpt–

  15. Comment from Peter Campbell:

    The number of schools labeled as “failures” is only going to increase over the next several years.

    Check out this timeline from the OR DOE (see p. 13).

    The 2007-08 academic targets were ten points higher than the previous year, and look at the huge increase in the number of “failing” schools. There’s another huge bump of ten points in 2010-11, when 70% of all kids have to be proficient in reading and math. Then, the very next year, it goes up to 80%. Then, the next year (2012) to 90%. So from now to 2010, with the threat of even more schools being labeled as “failing,” the district will have no option but to focus on raising test scores. And raising test scores means test prep, in whatever form the district spins it. The curricula — esp. at the elementary level — are driven by skills that are tested on the state tests. Skills mastery over the year is measured in regular skills-based assessments. These assessments inform instruction. Ergo, curriculum = test prep.
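
    To make the arithmetic concrete, here’s a rough sketch (the rising targets follow the figures cited above from the ODE timeline; the ten school proficiency rates are made-up numbers, purely for illustration) of how the same set of schools gets relabeled as “failing” simply because the bar moves:

        # Illustrative only: the targets follow the figures cited above (70% proficient in
        # 2010-11, then 80%, then 90%); the school proficiency rates are hypothetical.
        targets = {"2010-11": 70, "2011-12": 80, "2012-13": 90}
        school_rates = [55, 62, 68, 71, 74, 78, 82, 85, 88, 93]  # percent proficient, made up

        for year, target in targets.items():
            below = [r for r in school_rates if r < target]
            print(f"{year}: target {target}% -> {len(below)} of {len(school_rates)} schools labeled 'failing'")

    Even in this toy example, schools teaching the exact same way go from three “failing” to eight “failing” in three years. Nothing about the teaching changed; only the target did.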

  16. Comment from clarification, please:

    Didn’t the state of Oregon create these tools? I may be wrong--I was out of state for a few years--but aren’t these tools the outgrowth of the Katz Plan that also gave birth to the ill-fated CIM and CAM? Did we adopt the existing tools for NCLB or did we begin to use another set of tests?

    My point being: should the tests be chucked in favor of others that measure the same abilities but in a way that is less damaging to classroom learning?

    If I am wrong about the origin, please let me know.

  17. Comment from Terry:

    The state creates its own tests, but they are NOT an outgrowth of the Katz plan.

    The original CIM required students to demonstrate learning with performance-based tasks, not standardized multiple-choice subject-area tests. The state legislature, in its infinite wisdom, replaced the original cross-disciplinary, performance-based (or authentic) CIM assessments with the current paper-and-pencil tests in the mid-’90s.

    Standardized testing was a staple of school accountability in Oregon long before No Child Left Behind came along.

  18. Comment from Steve Buel:

    Geez, Terry, I see them as an outgrowth of the Katz plan. The tests themselves, not the testing itself. Accountability, BABY.

    But one change that is big-time different from the testing of years ago (heck, I remember testing as a child) is that we test on grade level now. We used to test by subject. I was in 4th grade and read at a 9th grade level. Now a 4th grader’s test shows the % of 4th grade material the 4th grader knows. Huge difference. The old way rewarded moving children ahead and letting them backfill as they got older. The new way doesn’t honor backfilling. Big mistake actually. Holds a lot of kids back.

  19. Comment from clarification, please:

    I first saw the Katz plan in about 1991-1992. It was not yet implemented (passed?) and I had to request a rough-draft copy. I was appalled by it, though it was embraced by some. It had a whole list of criteria that never made it to daylight, including way more school days per year.

    On my first read-through I was disturbed by the testing, the increase in the school year, and the two-tiered system of CIM and CAM that seemed to shuttle some kids into non-academic junior and senior years.

    Again, I have not seen this document in about 16 years. It was a pretty shabby and ill-advised work, though, as I recall.

  20. Comment from Peter Campbell:

    Steve B. – RE: grade level . . .

    What many (most?) parents don’t realize is that the “your child needs to be at grade level” conversation that’s dominating all other conversations at the local neighborhood school is trumping the older, more appropriate notion of “meet the child where he/she is.” Nowadays, children are expected to be in specific places at specific times in their development. Never mind that a boatload of evidence suggests that young children develop along an extraordinarily broad continuum. If junior is not jumping through Milestone X at Time Z, then sound the alarm. It’s useful to consider the Finns in this conversation, who begin formal reading instruction at age 7 and who, according to the PISA test, produce the top-ranked readers in the world.

    As my friend and colleague Monty Neill of FairTest said, “The long history of tracking in the US also suggests that students who enter pre-K or K ‘behind’ will be assumed to be less capable of learning and thus put in ‘slower’ classes through which the gap in learning outcomes will expand. ‘Intelligence’ tests have long played that pernicious role, complemented by ‘achievement’ tests. Through these instruments, race and class effects are instrumentalized as ‘scientific’ or ‘objective.’”

  21. Comment from Peter Campbell:

    One other thing on “grade level”:

    The results of norm-based tests are “normalized,” i.e., some questions are thrown out because everyone got them wrong and others are thrown out because everyone got them right. The results are then broken into a “normal” distribution. In psychometric terms, a “normal” distribution resembles a bell curve: some people score really high, some people score really low, but most people score right in the middle. This middle part is the average or mean score. This middle part is considered “normal.”

    What’s most intriguing about this “normal” distribution is that it is not normal at all. As I said, the test questions are tweaked in such a way so that a bell curve is created. It’s like carving a duck from a piece of wood: you carve everything away that is not a duck until the duck emerges from the wood. Same thing with norm-based tests: you carve away everything that is not a bell curve until the bell curve emerges from the test data.

    We lose sight of how “normal” distribution of data points is accomplished. We then come to naturalize this process and begin to conflate “normal” bell curves with “normal” children. In so doing, we fetishize the middle quintile and believe it somehow relates to some touchable, empirical reality. But all it reflects is the result of a statistical sleight of hand.
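
    If you want to watch the carving happen, here is a minimal simulation sketch. It uses invented numbers and a simple logistic success model (an assumption for illustration only, not a description of how Oregon’s tests are actually built): it drops the items that nearly everyone gets right or wrong and then prints a rough histogram of the remaining total scores, which pile up into the familiar bell shape.

        # A toy simulation of the item-selection point above, with made-up numbers.
        import math
        import random

        random.seed(0)

        N_STUDENTS, N_ITEMS = 1000, 60
        abilities = [random.gauss(0, 1) for _ in range(N_STUDENTS)]
        difficulties = [random.uniform(-3, 3) for _ in range(N_ITEMS)]

        def p_correct(ability, difficulty):
            # Simple logistic model: the more ability exceeds difficulty, the likelier a correct answer
            return 1 / (1 + math.exp(-(ability - difficulty)))

        # responses[s][i] is 1 if student s answered item i correctly
        responses = [[1 if random.random() < p_correct(a, d) else 0 for d in difficulties]
                     for a in abilities]

        # Proportion of students answering each item correctly
        p_values = [sum(resp[i] for resp in responses) / N_STUDENTS for i in range(N_ITEMS)]

        # Throw out the items nearly everyone got right or wrong, as described above
        kept = [i for i, p in enumerate(p_values) if 0.2 < p < 0.8]

        scores = [sum(resp[i] for i in kept) for resp in responses]

        # Rough text histogram of total scores on the retained items
        lo, hi = min(scores), max(scores)
        width = max((hi - lo + 1) / 10, 1)
        for b in range(10):
            start = lo + b * width
            count = sum(start <= s < start + width for s in scores)
            print(f"{start:5.1f}+ | {'#' * (count // 10)}")

    The point isn’t that this mirrors any particular state test; it’s that the bell shape of the score distribution is partly a product of which items the test-makers keep.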

  22. Comment from Terry:

    One last comment on Katz’ visionary Education Plan for the Twenty-first Century. It’s old news now, although it shouldn’t be.

    First, Steve B., there’s testing and then there’s testing. The sorts of assessments envisioned by the Katz plan had nothing to do with “accountability”. In fact they did what good assessments should do: seamlessly mesh with instruction. Here’s one simple example, still used today: the writing sample.

    When a teacher reads a student’s essay, the teacher evaluates the student’s writing skills. But the writing of the essay itself is part and parcel of the instructional process, and so is the teacher’s feedback.

    That’s a far cry from a multiple choice test.

    Secondly, the kinds of assessments promoted by Katz demanded that students think and work across discrete curricular lines. In short, they encouraged integrated thinking and instruction.

    That’s at the heart of genuine school reform.

    I was teaching middle school when the Katz plan was laid out. I, along with my reform-minded colleagues, was excited by the possibilities.

    Unfortunately, Measure 5 was passed at the same time. It’s been downhill for schools ever since.

  23. Comment from Steve Buel:

    Terry, the tests may have been better — PPS was doing their own and had control at that time — but the high-stakes idea was there. Also, you remember that there were two parts of the plan — one was solely academic and the other was vocational. Neither was to be implemented without the money, but somehow the academic testing was implemented and not the vocational. Wonder why that was??? Let me guess. Might it have had something to do with the idea that most upper middle class parents (who pretty much run PPS) don’t give one hoot about vocational programs? Oh, that sounds so cynical. If that theory held, then Benson should be slowly deteriorating away ... wait, it is. A revelation.

  24. Comment from Steve Rawley:

    Just a friendly reminder to commenters to please read and agree to the comment policy before posting comments, particularly the part that says “Be civil….” This means no personal attacks, please.

    Thank you!

  25. Comment from SchoolMarm:

    My now 6th grader has been ‘opted out’ of testing since she started school. Her siblings have followed her. Each year I have explained to their teachers that they are welcome to assess my children in the classroom every day of the week if they feel it will help them as teachers. It is not a child taking a test that I oppose, but the baggage that comes with NCLB. We have had some amazing teachers who are heartbroken over how many dozens of classroom days are spent working AT the test, not TOWARDS the test. Weeks of computer lab time are forfeited to testing. This year I will actually count the hours my kids are not active in the classroom due to ‘test preparation’ and I will get back to you. My 3rd grader’s teacher is collaborating with me on this project since she is so heartsick over how NCLB has broken our classrooms. My guess is my 3rd grader will lose about two weeks of school due to ‘test preparation’ time.
    The worst part of all of this is that all the money spent on testing does nothing to add seats in Head Start, lower class size, provide proper nutrition, or make one single difference in how we nurture and educate children in poverty.

  26. Comment from Zarwen:

    That’s because, like so many other “school improvement” plans, its actual purpose is to channel resources into well-off schools.

  27. Comment from Terry:

    I applaud you, SchoolMarm, for refusing to subject your children to the abuses of NCLB’s testing regime.

    This question remains, however: When will we hear from the school board on the failures of NCLB?

  28. Comment from Peter Campbell:

    SchoolMarm – I think the notion of “test prep” goes beyond the 2 weeks that you mention. The curricula — esp. at the elementary level — are driven by skills that are tested on the state tests. Skills mastery over the year is measured in regular skills-based assessments. The frequency of these assessments varies with each school and with each kid, but Title 1 schools test low-income/minority kids defined as “at risk” of failure, i.e., not likely to make the AYP benchmark. So these kids get lots of tests over the school year. These tests inform instruction, i.e., the information these tests yield then determines what gets taught in the classroom. Ergo, curriculum = test prep because everything is defined by what gets tested.

  29. Comment from marcia:

    I just wanted to chime in on the early childhood assessments. We have always used assessments to determine where kindergarten kids are at. One difference now is that teachers were ordered over the past couple of years to use every assessment in the book, whether they provided useful information or not. Also, teachers must report the data to the district on a data sheet, which is also time-consuming and not useful to my instruction. Weeks and weeks of instructional time are wasted, since all these tests must be administered individually ... not an easy task when you have 25 or 30 five-year-olds in the room who are then left to monitor themselves. Chaos reigns.

  30. Comment from Steve Buel:

    Marcia, it would seem we could use a little common sense on these assessments. Why do you think the district can’t figure it out?

  31. Comment from marcia:

    I’d say we were using common sense before the district decided to intervene.

  32. Comment from Peter Campbell:

    Marcia – can you say more about the assessments that you’ve used in the past and how the current assessments differ?

  33. Comment from marcia:

    They are basically the same, but the teacher was given more autonomy in choosing which assessments were of value. Some are redundant; some are almost impossible to administer. So I would choose the ones which made sense and gave me the information I needed. Over the past couple of years we were required to administer all the tests listed in our literacy notebook and report the data (for what purpose?), and some schools also added DIBELS, which has been called the “pedagogy of the absurd.”
    After many teachers complained about the work load and loss of instructional time, the district set up a group to look at the issue last year. At the end of the year, we were allowed to administer fewer tests; however, once again, these were not the ones that I would have chosen or which would provide my first grade teachers with the information they wanted. And of course, we still needed to report the data.

  34. Comment from Peter Campbell:

    Marcia – thanks for the clarification. What’s your take on the push-down of the curriculum, i.e., the Kindergarten curriculum is now the 1st grade curriculum, and what used to be Kindergarten is now pre-K?
    I wrote about the history of the term “Kindergarten readiness” here on my blog, so I won’t repeat what I said here.

  35. Comment from Peter Campbell:

    Whoops! I meant to say the 1st grade curriculum is now the Kindergarten curriculum.

  36. Comment from marcia:

    Got your meaning. My take is ... if a child is ready to read and write, that’s wonderful ... I’ll take them there. I love to teach reading and writing. And we have a good time with it. I had a great percentage of my class reach benchmarks beyond grade level last year ... they were a pretty high-performing group. However! If they are not ready (kids develop differently ... they might just be moving more slowly, or they might have undiagnosed learning disabilities, etc., etc.), then as it stands now it shows up on our data sheet as a percentage of kids not at benchmark (accountability and all ... and we, the teachers, are just naughty, naughty!!), and that is just wrong. It didn’t used to be that way. There is no room for the individuality of the learner now. I try to celebrate whatever level the child is at. If we are sharing our writing and one kid is writing a novel, that’s GREAT! If another one figured out how to sound out “dog”, that’s GREAT, too!

  37. Comment from Peter Campbell:

    Marcia – can you say more about what happens to the kids who are not “at benchmark”? The research I’ve read on this reveals that a disproportionately large percentage of kids not “at benchmark” are low-income minorities. Based on the research I’ve done, the teachers I talk to, and my own experience in the schools I’ve been a part of, the kids not “at benchmark” — again, mostly low-income minorities — are taught in a more test-centric manner, i.e., the kids take more assessments during the year and what they are taught and how they are taught is largely determined by these assessments. Has this been the experience that you and your colleagues have had?

  38. Comment from marcia:

    Peter, I am not sure about more assessments in our grade level at our school. But maybe, depending on the kid. In the primary grades at our school, the focus is on identifying the kids who need help and trying to get them assistance. This might entail more testing, for example, screening to see if they qualify for assistance in the learning center. Kids who are struggling with reading are given additional help from the reading specialist. They would still be given the same assessments as the other kids, although I might check in with them more frequently in some areas to see how they are progressing.

  39. Comment from Peter Campbell:

    Marcia – how often is the DRA – Developmental Reading Assessment – given at your school? At others? I know that schools like Chief Jo elect to give students extra assessments like the DIBELS. And, according to Ken Goodman, a reading expert and one of the godfathers of whole language instruction, the tests have become a de facto curriculum in which the emphasis on speed convinces students that the goal in reading is to be able to read fast and that understanding is of secondary importance. Not sure exactly how this plays out in PPS as a whole and at each school in particular. Would love to hear from more teachers on this.

    The district’s Comprehension Scoring Guide has the expectation that (1) all kids are supposed to be at the same level at the same time and (2) that something as complex as comprehension can be reduced to either “adequate” or “very good” via a generic scoring guide that asks children to do things like recall events from a story, recite key details, answer literal questions, and make inferences. They are given points for each correct answer. Note also that “fluency” is defined as being able to read quickly without making mistakes. I would quibble with this definition.
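
    For what it’s worth, a “fluency” score of this kind typically boils down to words read correctly per minute, which is exactly why I quibble with the definition. A minimal sketch, with hypothetical numbers:

        def words_correct_per_minute(words_attempted: int, errors: int, seconds: int) -> float:
            """Words read correctly, scaled to a one-minute rate."""
            return (words_attempted - errors) * 60 / seconds

        # A child who races through 120 words with 5 errors in a minute "outscores" one who
        # reads 80 words carefully with 1 error, regardless of what either child understood.
        print(words_correct_per_minute(120, 5, 60))  # 115.0
        print(words_correct_per_minute(80, 1, 60))   # 79.0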

  40. Comment from clarification, please:

    By “mistake,” do they simply mean reading the words aloud correctly or identifying them, or do they require a paraphrased understanding of the statement? If a student cannot tell you the meaning of the words, or at least express a contextual understanding, then there is clearly a mistake in comprehension of the words. Reading is communication, after all.

  41. Comment from marcia:

    Peter, in kindergarten I do the DRA three times. In first and second grade they do it four times, but I believe they only report data to the district three times. At our school we use Strategies that Work and Mosaic of Thought to guide our teaching of comprehension. We start talking about inference and making connections in Kindergarten. It isn’t all about decoding ... but I think it depends on the school. Decoding is only one small part of reading. I had a little guy this year who just whizzed through a level 16, which is first grade benchmark for the DRA, but he couldn’t tell me anything about the story. So that’s where I focused my instruction with him, and encouraged his parents to do the same.

  42. Comment from Peter Campbell:

    Marcia – I love both “Strategies that Work” and “Mosaic of Thought.” In fact, I’m using both books as guides for how to teach my daughter to read and write! Your school is lucky to have the ability to use this approach and to limit testing to the level you mention. At Chief Jo (and other schools), reading and writing are much more phonics-based and worksheet heavy, thanks to the adoption of the new Scott Foresman curriculum, Reading Street. Testing is also much heavier and plays a greater role in determining the fate of kids as learners.

  43. Comment from marcia:

    Yes, I understand that. Hopefully the SF frenzy will vanish with time.