WakeEd

The WakeEd blog is devoted to discussing and answering questions about the major issues facing the Wake County school system. How will the new student assignment plan balance diversity, stability and proximity? How will Jim Merrill replace Tony Tata as the new superintendent of the state's largest district? How will voters react to an $810 million school construction bond referendum on the Oct. 8 ballot? How will this fall's school board elections impact the future of the district?

WakeEd is maintained by The News & Observer's Wake schools reporter, T. Keung Hui. While Keung posts information and analysis on the issues, keep us posted on your suggestions, questions, tips and what you're doing to cope with the changes in Wake's schools.

School board kills the Effectiveness Index

The Wake County school board voted 5-4 tonight to immediately eliminate use of the Effectiveness Index while also unanimously agreeing to make EVAAS the primary data tool for schools.

School administrators said that the Effectiveness Index would have been ended in June. But members of the fractured board majority came back together to say they wanted to kill the program now.

The original resolution on the table called for making EVAAS the primary data tool and for ending any additional allocation of resources for the Effectiveness Index.

But school board member John Tedesco proposed a friendly amendment saying that the Effectiveness Index should be ended right now instead of June. He cited errors with the Effectiveness Index.

School board member Kevin Hill questioned why you'd pull a tool out of the hands of educators, which drew applause from the audience of diversity policy supporters. The audience applauded anytime a board member said something positive about the Effectiveness Index.

Hill said they should let the Effectiveness Index die a natural death.

But Tedesco argued that the Effectiveness Index has created false positives that could have caused thousands of Wake students to be placed in remedial services they may not need.

Board member Carolyn Morrison said that the more evaluation tools you have, the better off you are.

School board vice chairwoman Debra Goldman then proposed an alternative motion to separate the Effectiveness Index from the resolution. Her proposal was to have a separate vote ending the Effectiveness Index immediately.

A prior vote on ending the index had failed 5-3 on Oct. 5, with Goldman saying she could have supported it if the wording had been different.

Tedesco seconded Goldman's motion tonight.

"Let’s end it now and get more of our people moving forward in the right direction," Tedesco said.

In the ensuing 5-4 vote, all the Republicans voted for Goldman's motion and all the Democrats voted no. As she voted no, board member Anne McLaurin said that the Effectiveness Index was a "good system but not perfect."

The board then returned to the motion on the agenda and unanimously designated EVAAS as the primary data tool.

Comments


 Poverty counts increase in select schools

 

The number of schools where more than half of the students qualify for free and reduced lunch has drifted noticeably higher this year.

 

More than half of the students in 31 of the district's 163 schools now qualify for subsidized lunches, according to figures compiled by the district. In five schools, the figure exceeds 70 percent. At the same time last year, one school reported a free and reduced lunch rate above 70 percent. The rate exceeded 50 percent in 25 schools in 2009-2010.

 

The district's overall rate this year is 32.4 percent, compared to 31.2 percent in 2009-2010.

Year      Above 70%   60-70%   50-60%   District average
2009-10   1           8        16       31.2%
2010-11   5           6        20       32.4%

(The three bands sum to the totals cited above: 1 + 8 + 16 = 25 schools over 50 percent in 2009-10, and 5 + 6 + 20 = 31 in 2010-11.)

Students qualify for the district's subsidized lunch program based on income and family size. Documentation is not required, but school officials consider the figures an accurate indicator of poverty. Students in higher grades are less likely than children in lower grades to sign up for the program.

 

The number of schools where the poverty rate was less than half the district average remained steady.

 

In Context by WEP

And this is operating under the old Diversity Policy, right?

Shouldn't be a surprise ... we are getting poorer and there are not enough affluent people to go around anymore ...  

uhm....that would be a yes.....

Until you see the schools, that might be incorrect.  For one thing, I'm pretty sure the diversity element was eliminated from Policy 6200.

Also, didn't the new board make several node changes last year?  I would think it's at least possible that those changes impacted these numbers.

And at least some of it is just the way it goes.  Carroll Middle was 49.4% F&R last year.  This year, their enrollment is down 20 kids and their F&R went up to 52.3%.  They have 20 fewer kids total, but 10 more F&R.

East Millbrook went from 49.1% to 54.4%, which may very well be the result of lifting the diversity element from magnet selection. I'm certainly not saying that's a bad thing, but it would make the coy comments about the old diversity policy incorrect. York Elementary had 171 F&R kids last year and has 220 this year, probably for the same reason. They actually grew by 49 kids, and apparently all 49 were F&R.

That's 3 of the new over-50% group, and the other two are both East Wake sub-schools. If you counted East Wake HS as one school, it would be under 49%.

Mary Phillips, Mount Vernon, and River Oaks account for 3 of the 5 >70% schools, and they have 316 kids combined.  So that's not really news either.

I agree that some of these increases are due to SES being eliminated from the magnet selection process and some are due to changes that occurred in the assignment plan.  Some of them don't make sense to me yet but I'm sure that like the Washington Elem instance, there will be a reason to explain some of them.

There isn't really a 'smoking gun' with this info at this point. 

At Risk, E.Index, subversion of diversity policy, conspiracy

People often point to the reality of cultural behaviors as justification for models that handle the education of various groups differently. The test scores of gifted children after receiving varied instruction, the achievement gap, different suspension/discipline patterns, and a hundred other things all form feedback loops in our perception, channeling the way we think we should proceed in public education.

Of course it's always of concern that INDIVIDUALS manifest traits that stand in the way of receiving their own education, and the family situations of individuals may fall far short of the best forms of support. Of course these things are always issues.

But be aware that the constant chant of ethnic/racial domination in all locations and all times runs very much like we hear here - in the channels of describing the characteristics of GROUPS as if they were individuals.  All the worst sins of humanity are carried out in this vein.

This is the big lie Wake County has drifted progressively into under the diversity policy and that the Effectiveness Index is entirely predicated upon.

Relatively minor elevations in the documented frequency of INDIVIDUALS with any particular "bad" trait within an (always) artificially defined or ascribed demographic group are what properly qualify that demographic to be statistically described as "at risk" of having that trait. This "difference" in trait frequency between populations can be as small as a couple of percentage points, and the population can still properly be defined as "at risk."

In the case of the Effectiveness Index, mere membership in a demographic group with a couple-percentage-point difference in the frequency of such INDIVIDUALS is seen as sufficient justification to add a negative adjustment to the expectations for ANYONE in that demographic. It's statistically valid. But it's procedurally wrong. Individual members of the demographic are not really at increased risk as individuals. But the Effectiveness Index goes even further. It adds negative adjustments/expectations for other groups, like those with physical disabilities. It even adds negative adjustments – altering the definition of what is acceptable/effective instruction – for ANYONE sitting in a room/school with enough other people of the suspect demographics. It is not just dangerous but simply wrong to think of people this way. Morally wrong. By itself this practice defines everyone in the suspect GROUP as TOXIC – a thing to be avoided, because it AUTOMATICALLY defines being with them in a classroom or a school or a district as INCREASED RISK. Then, by treating victims of this classification in different ways, the proof of the risk is assembled. The Effectiveness Index has no self-corrections – it is an achievement-gap maintenance machine, a black box designed to keep things the same as they were at the moment it was crafted.

Most importantly, the thing that escapes so many people is that the traits of the demographic simply belong to a few more individuals in one group than in any other. The individuals in THE REST of that same demographic are NOT going to manifest the "bad" trait at any moment. These statistical associates are absolute innocents – these are the "false positives" that Tedesco mentioned in this meeting – children with perfectly acceptable traits who are treated differently because of demographic group membership.

This misunderstanding of risk, and any resulting treatment of people as DIFFERENT is quite simply a form of social domination – it is the CREATION OF DIFFERENCE where none exists with the direct effect of using that ascribed difference for what turns out to be systemic suppression aimed at an entire demographic.

Calling something “at risk” is just a statistical expression of slight difference in frequencies.  We would be outraged to find that we were given dangerous chemotherapy or a pre-emptive surgery simply because our second cousin had cancer.  But we routinely do far worse than this to the children of Wake County because of the way we have been thinking about demographics.  Over and over, the traits of a few individuals are attributed to the entire population and the ENTIRE POPULATION of the demographic is considered somehow different from the members of all the other demographic groups at large.

The most insidious part of this mental gymnastic is the subsequent identification of ANYONE in the demographic as eligible for the treatments – eligible for dangerous remediations or damaging exclusions – remediations that should be aimed at only the few INDIVIDUALS in the demographic who qualify, exclusions from opportunities they are fully qualified to have.

Ascription to this BELIEF SYSTEM is why our dropout programs are full of young black men who are otherwise high scoring and well-behaved (but who subsequently drop out in droves after being misplaced in damaging programs). This is why our remedial instructional programs are FULL of THOUSANDS of young black and Hispanic children who are high scoring and well-behaved (but who subsequently suffer precipitous drops in their proficiency after placement in these programs). And this is why placement into rigorous courses is so extraordinarily racially and economically biased – why as many as 85% of minorities have been unable to get into classes they are fully qualified for. These things are going on TODAY in WCPSS. These things are endemic in our schools at large because we (and much of the rest of the nation) think about the things we do to INDIVIDUALS using faulty ancient cultural thought patterns - putting to work what we think are the essential traits of populations. Thus you cannot speak of the access problems of a certain demographic without hearing someone cite the "characteristics" of that group. This is the standard routine of domination.

And this is why the Effectiveness Index is poison - because people simply do not know how to handle these statistical expressions in terms of designing practices.  WCPSS has not known how to handle these statistical expressions.  Their employment of the Effectiveness Index and the Diversity Policy has been nothing short of genocidal.  These flaws stem from people’s cultural understandings.

This is all that the leadership of WCPSS were ORIGINALLY guilty of. They honestly believed that individual children who are merely members of certain demographic groups are themselves less likely to be able to learn and are thus proper targets for damaging interventions. They honestly believed that the best thing to do with certain demographic groups is to move them around, handled like a dangerous substance.

The shuffling itself had become a noxious practice. Thousands of poor children spent a year in this school, then a year in another, some attending as many as three middle schools as the district frantically moved them around to balance the increasingly sickening equations that resulted from thinking that the children, not their instruction, were the problem. Where the district ran into political trouble was when it started to move children whose parents had social power to service the same equation, because it was running out of powerless victims. Then the revolt took form. None of this has anything to do with the original laudable goals of the diversity policy.

Conspiracy. Now, because their defense of these ideas turned increasingly aggressive and illegal – because the leadership of WCPSS has become conspiratorial in its fear of exposure – they must be replaced. It's now ever more clear that both the tools and the leadership must be replaced. That is what this board meeting was about - moving toward the accountability that has so vigorously been resisted.

Taking the other side, cheering the Effectiveness Index as if it was some political icon, is strongly indicative of socially suicidal ignorance.

E&R fought entrance of EVAAS into this district every step of the way for years.  Casually, without seeming overly interested, I’ve collected  tales of careers sabotaged, evidence fabricated and planted, employees harassed and fired, and outside critics illegally slandered and directly attacked in the name of defending these practices.  Several years of internal and external attacks have continued to the present - even to this week.  The core is rotten and it is dangerous to confront these people.  It tells you something that the last ditch effort resisting accountability was to finally agree that EVAAS should be used but that the “rusty knife” should die a natural death and that depriving the “educators” of this tool was a mistake.  Follow that rhetoric back to its source and you have the heart of the conspiracy.

All this tells you that this improving situation will instantly revert if the pressure is taken off.  Bad intent persists.  It is important to press forward.

Keeping the spark warm

Thank you Lurker, for keeping the spark alive. 

Toxic labels and how children are not treated as unique blossoming individuals, those have been particular torches of mine. I appreciate your taking the time to put it in your own context.

Expert Issues Warning on Formative-Assessment Uses

As educators across the country focus attention on designing new and better ways to gauge what students are learning, they risk distorting the meaning and practice of formative assessment and squandering its potential to enhance teaching and learning, an assessment expert warned on Wednesday.

Margaret Heritage, the assistant director for professional development at the National Center for Research on Evaluation, Standards and Student Testing, or CRESST, at the University of California, Los Angeles, appeared on a panel to discuss a new paper intended as a reminder of what formative assessment should be.

Her comments were aimed directly at two groups of states  that are working to design assessment systems for new common standards that have been adopted so far by 41 states. Since the new tests stand to exert a potent influence in classrooms across the country, the type of tests they produce—and the way they purport to gauge student knowledge—is the subject of keen attention.

Ms. Heritage argued that the two consortia lack “the right mind-set” because they depict formative assessment as sets of tools, or “mini-summative” tests.

Referring to a body of work that sought to define formative assessment during the past two decades, including the influential 1998 article, “Inside the Black Box,” by Paul Black and Dylan Wiliam, she said formative assessment is not a series of quizzes or “more frequent, finer-grained” interim assessments, but a continuous process embedded in adults’ teaching and students’ learning.

A teacher uses formative assessment to guide instruction when the teacher clearly defines what students should know, periodically gauges their understanding, and gives them descriptive feedback—not simply a test score or a grade—to help them reach those goals, Ms. Heritage said. Students engage in the process by understanding how their work must evolve and developing self-assessment and peer-assessment strategies to help them get there, she said.

Ms. Heritage’s comments echo others’ concerns that the meaning of formative assessment has been hijacked as the standards movement has pressed states into large-scale testing systems. The result, Ms. Heritage said, is a “paradigm of measurement” instead of one of learning.

While summative tests can provide valuable information for decisions about programs or curriculum, she said, the most valuable assessment for instruction is the continuous, deeply engaged feedback loop of formative assessment. Channeling money into building teachers’ skills in that technique is a better investment in student achievement, she said, than paying for more test design.

Technique Misunderstood?

Michael Cohen, the president of Achieve, a Washington-based group that is the project manager for the Partnership for the Assessment of Readiness for College and Careers consortium, said his organization's primary aim is to design a better summative assessment, and that it is not creating formative assessments as part of that package. By clearly defining performance standards, however, his group's summative tests can "provide a context" for good formative-assessment practice, Mr. Cohen said during the panel discussion.

The executive director of the other consortium, called SMARTER Balanced, said his group envisions formative assessments not as a tool, but as a “way of doing business, a way of interacting with students,” so it is designing a set of resources for teachers to use in that instructional feedback loop.

“If that point wasn’t made clear in our proposal, that’s an unfortunate misunderstanding,” Joe Willhoft said in a phone interview. “One of the three legs of our plan is exactly that: professional development,” along with interim assessments and adaptive summative tests.

Other members of the panel, organized by the Council of Chief State School Officers, which is supporting states as they design new assessments, urged better training of preservice and in-service teachers in using formative assessment. Many teachers think they are using the technique, but they fundamentally misunderstand it, said Stuart Kahl, the co-founder of Measured Progress, a Dover, N.H.-based assessment designer.

“There are teachers who say, ‘Oh, I do formative; I quiz them every day,’” he joked.

Sarah McManus, the chief of testing and policy operations for the North Carolina education department, said her agency is helping teachers learn formative assessment by posting modules online. But states must devote resources to thorough, ongoing professional development to build the skills in their teachers, she said.

Mastering formative assessment carries profound implications for changing teaching from a top-down process to a more collaborative one, said Caroline Wylie, a research scientist with the Princeton, N.J.-based Educational Testing Service who also appeared on the panel.

“This is not a follow-the-pacing-guide sort of teaching,” Ms. Wylie said.

A teacher quoted at the end of Ms. Heritage’s paper captures the essence of the paradigm shift Ms. Heritage has in mind.

“I used to do a lot of explaining, but now I do a lot of questioning,” said the teacher. “I used to do a lot of talking, but now I do a lot of listening. I used to think about teaching the curriculum, but now I think about teaching the student.” 

You need to save this post for when they try to kill Blue Diamond. Blue Diamond is quizzes for formative assessment. Effectiveness Index is summative assessment.

ugh, don't even get me started on Blue Diamond......but you make a good point!

No. Don't get ME started on Blue Diamond. It is not as harmful systemically as the Effectiveness Index or the Poverty Training. But it is a huge waste of money, for a system created by the son of an E&R employee in his spare time. The report E&R then did to determine its effectiveness is nearly incomprehensible, but concludes that students who pass a Blue Diamond test have a 50% chance of passing EOG or some such nonsense that would mean it isn't very predictive. But they conclude that therefore it is predictive of whether students would pass EOG. For me it is just a huge expensive ..."what????"

Are Tests Biased Against Students Who Don't Care?

......http://www.theonion.com/video/in-the-know-are-tests-biased-against-students-who,17966/

thank you

In late 2005, the federal Department of Education wrote guidelines that state:

“The accountability model must establish high expectations for low-achieving students, while not setting expectations for annual achievement based upon student demographic characteristics or school characteristics.”  

The point being that the education system should not have different expectations for students based on race, gender, or economic status.

The effectiveness index uses both student demographic information and school characteristics and has been used to evaluate programs and teachers in Wake County.

In 2006, North Carolina was one of the first states to have its accountability model approved. Wake County did not comply and, until yesterday, continued to use a system that violated both federal and state mandates.

Thank you very much John Tedesco and Deborah Prickett for working hard to eliminate the Effectiveness Index.

Measurable Effects

A couple of people asked about the measurable effects of the Effectiveness Index. There are plenty. A major role of E&R is to do evaluation reports to identify effective practices, then share that information with the schools. They use the Effectiveness Index to identify effective practices. For example, see page 8 of www.wcpss.net/evaluation-research/reports/2009/0831ms_alg1.pdf

Nearly all their "research" on effective practices has used the E.I. as the metric for determining what is effective. So a program would be identified as effective even if kids did not grow academically, or possibly even if they were harmed, so long as they were in a high-poverty school and had any of the characteristics that adjust expectations downward.

Our best hope is that no one paid any attention to their reports.

??

Tried reading through the pdf. What I got from it was that it identified teachers who had high residuals as those with the best practices and those with the lowest residuals as the worst, then looked at characteristics of the best versus the worst. The conclusions were:

  • All middle school Algebra I teachers had a positive attitude overall toward their students and their teaching assignment; yet top middle school teachers were significantly more positive than bottom middle school teachers. Bottom teachers were distracted by the students who were misplaced and lacked the math prerequisites or the study skills necessary for highest performance. Top teachers focused on the positive qualities of each student, expecting all students to rise to their high expectations.
  • Top teachers used a variety of instructional methods. There was less lecture and more use of whole group discussion and small groups than in bottom teacher classes. Overall there was more student ownership of their learning.
  • Top teachers assumed knowledge of basic algebraic skills and taught an enrichment-filled course at an invigorated pace. Students were exposed to both curriculum and beyond-curriculum topics.

Are you saying these conclusions were inaccurate because the EI was used to initially identify high performing versus low performing?

Research

I can't answer for KLanders.  However, I think that it is impossible to make a determination about these conclusions if the Effectiveness Index is used to evaluate the teachers.  It isn't the initial identification of high performing versus low performing students that is the problem.  Likely that is based on actual past performance.  It is the identification of high versus low performing teachers.  (I wasn't sure which one you meant.)

The problem is that in establishing a goal for the students in order to determine if they have made progress, the Effectiveness Index takes the student's last test score, adjusts it for demographics, and establishes a goal for that student. The success or failure of a program is then measured by the "residual," which is the difference between the goal and the student's actual performance. But because the goals for poor students, for students in a school with a high level of poverty, and for special education students include a downward reduction, you can get "false positives." A program (or school or teacher) can appear to be doing well even if the students make little progress, because the goals have been adjusted downward. We don't know how the teachers would have been evaluated under a system that did not adjust the goals.
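To make the mechanics concrete, here is a minimal sketch in Python of how such a scheme can produce a "false positive." The adjustment weights, trait names, and scores are invented for illustration; they are not WCPSS's actual formula or coefficients.

```python
# Hypothetical sketch of a residual-based effectiveness measure.
# The adjustment weights and scores below are invented for illustration;
# they are not the district's actual coefficients or scale.

ADJUSTMENTS = {
    "low_income": -3.0,           # goal lowered for poor students
    "high_poverty_school": -2.0,  # lowered again for school-level poverty
    "special_education": -4.0,
}

def goal(last_score, traits):
    """Last year's score plus any downward demographic adjustments."""
    return last_score + sum(ADJUSTMENTS[t] for t in traits)

def residual(last_score, actual_score, traits):
    """Positive residual counts as 'effective' under this scheme."""
    return actual_score - goal(last_score, traits)

# A poor student in a high-poverty school who actually LOSES ground:
print(residual(85, 82, ["low_income", "high_poverty_school"]))  # 2.0
# Positive residual: the program looks effective despite a 3-point drop.
```

With the goal dragged down from 85 to 80, a decline to 82 still registers as success.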

A system to measure teachers without those demographic adjustments could just as easily yield "false negatives", couldn't it?

Is it better to think a teacher is doing a good job when they aren't or to think they aren't when they are?

If we cannot find a way to give teachers credit (financial, professional, a pat on the back) for taking on the hardest assignments, teaching the kids with the most need, human nature will cause many to gravitate to affluent schools with guaranteed success and constant affirmation. There may be enough information in a kid's past test scores that ED/race/gender/etc. don't matter. What matters is that teachers get credit for moving kids who normally score 50% to 70%, and don't get hammered over why those kids are at 70% while the affluent school always scores 95% whether the kids have a teacher or watch movies.

My biggest problem is not the teachers. It is the intervention programs and access to opportunities. Let's say an excellent, hardworking teacher is assigned to teach a remedial curriculum to top-scoring Level 4 kids whom EVAAS predicts will succeed, and over a year of this the kids do much worse than EVAAS predicted and become low scoring. The kids are poor, the school is high poverty. So the Effectiveness Index adjusted the expectation down, and it says the fact that the kids lost ground and are now worse off is fine. This is only fair to the teacher, because it is not her fault the kids lost ground. They got a remedial curriculum, so naturally they lost ground. The program is effective because the outcome is as the EI predicted.

Or, low income kids score lower in math. This is as expected because they do as predicted by the Effectiveness Index. If they were to have access to the best math classes, they would score higher. But, they do not need access because there is no problem. They are scoring just as we expect, as predicted by the Effectiveness Index.

If we quit using Effectiveness Index to judge teachers, but continue to put Level 4 kids in remedial classes, and keep low income kids who meet the math placement criteria out of the top math classes, the teachers are going to look bad even if they are good. So, we can either keep the Effectiveness Index so the teachers don't look bad when it is not their fault, or we can start providing interventions and opportunities based on individual student data, not subgroup membership.

Demographic effects

I think the extent to which a student's score is affected by demographics is already incorporated into his or her past score.  Therefore adjusting it downward makes no sense.

When we are evaluating programs, both false positives and false negatives are a problem.  We can't select among programs unless we really know which ones are effective.  So, no, I don't think it's better to "think a teacher is doing a good job when they aren't or to think they aren't when they are".   Both are problematic.  But the EI methodology is likely to create the former type of errors. 

But This Key Point.....

I think the extent to which a student's score is affected by demographics is already incorporated into his or her past score.  Therefore adjusting it downward makes no sense.
 
....is entirely speculation.  It's based on the very, very big - and as far as I can tell, unproven - assumption that any demographic delta between test scores remains constant over time.  I.e.,  that the RATE of change of test scores between demographic groups is identical.    If the rate of change varies due to demographic factors (or, more likely, other factors that are themselves correlated with or influenced by demographic factors) then the idea that two kids with identical past test scores should be expected to have the same future test score is mathematical nonsense.
 
It's *possible* that the assumption is true, but it seems highly unlikely to me given what we know. The gap between poor and rich students grows between the day they start kindergarten and the age where 60% of one group is dropping out while 95% of the other is graduating; the self-evident reality is that the gap gets bigger over time. So it should require really solid evidence to assume a constant gap. And we've seen absolutely zero published evidence to that effect - at least none that I've been able to find.
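A toy illustration of that point, with invented numbers: if average yearly gains differ between groups, then two kids with the same current score do not share the same expected future score, and "identical past scores imply identical expectations" only holds if the gap is constant.

```python
# Invented numbers: same current score, different average yearly gains.
current = 80
avg_gain = {"group A": 5.0, "group B": 3.0}

for group, gain in avg_gain.items():
    print(group, "expected next year:", current + gain)
# group A expected next year: 85.0
# group B expected next year: 83.0
```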
 
 
 

But the gap gets bigger in part because we don't expect as much from the lower income or minority kids as we do from their wealthier or white counterparts.  If we aren't challenging the lower income kids to achieve what they are capable of (or beyond), then they will fall further and further behind the 'wealthy' kid who is steered toward the advanced curriculum.  Kids will generally rise to the expectations that you set for them.  I don't think that most kids are driven to exceed the expectations. 

I cannot even fathom how excruciatingly boring school would be if you were a level IV student being placed in the lower track classes.   

Jenman ... I just don't think changing a computer program will make a difference ... we saw how FVMS discriminated against minorities and only placed 30% of the qualified candidates while E. Garner placed 95%  (tried to remember % ... may not be exact) ... so there is a human element working at the teacher, principal, counselor level that needs to be addressed.

It takes more

I agree.  It takes more than just a fairer way to place students.  There has to be accountability so that the people involved cannot discriminate against minority and ED kids.  By the way, I would be cautious about the numbers in that report.  Two schools are missing from the report and the number of qualified students changed from the September report (13,637) to the October report (11,314) so I don't know how reliable any of that data is.   (I think FVMS was at 56.7% but your point about the disparity between the schools was well taken.)

When I mention the disparity to my wife she thought it might be rooted in "making the numbers" ... I guess principals want a high Algebra EOC score and the best way to achieve that is by making sure they keep all the potential failures out of the class and go with 100% sure thing ... 

I might agree except that, from the numbers that they gave us, it appears that more than 2000 students whose predicted probability of success is lower than 70% were placed in these classes.  (In September, we were told that 10,313 students were enrolled while in October, we were told that 8,177 of the enrolled students were above the 70% cutoff.)  I don't have much confidence in the numbers we were provided because of the discrepancies in the total number of qualified students.  However, if there is any validity to what they have told us, many students with lower probabilities of success are being enrolled while others (more than 3000 of them), whose probabilities are higher, are not.  I don't doubt that the desire to "make the numbers" could be a factor.  But something else must be going on as well.

I really don't understand how they could have prepared a report and left off Holly Grove Middle and Mills Park Middle.  They are new schools but surely they have students who predicted above 70%.  This factor alone makes me skeptical about all of the numbers.
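For reference, a quick check of the arithmetic behind the "more than 2000" and "more than 3000" figures, using only the numbers quoted above:

```python
# Figures as quoted in the September and October reports:
enrolled = 10313            # September: students enrolled in the classes
enrolled_above_70 = 8177    # October: enrolled students above the 70% cutoff
qualified = 11314           # October: students predicted above 70%

print(enrolled - enrolled_above_70)   # 2136 enrolled despite being below 70%
print(qualified - enrolled_above_70)  # 3137 qualified but not enrolled
```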

I'm with you.  It is surely a factor, but I don't think it's the major one.  It's not as if all the kids getting left out of Alg are in the 70-75% range.  Wasn't it WF-R MS that discovered the highest scoring kid on the 5th grade math EOG wasn't recommended/placed in Advanced 6th grade math? 

Absolutely there is a human element that needs to be addressed.  That's why it's imperative that we follow up on the math placement to make sure things are going the way they should.  We need to make sure that the reasons why qualified kids aren't getting placed are valid and not based on things such as stereotypes.

I agree that changing the computer program won't automatically make things better.  But it is crucial to get rid of the computer program that builds in the prejudices.  Also, keep in mind that there were people advocating for kids to be placed properly for several years.  I think it was klanders who said that there were no objective criteria for determining who should get placed in the advanced math track.  So advocating for kids when there were no official criteria was getting nowhere--they could simply fall back on the teacher's recommendation that a particular child wasn't ready for whatever reason.  That is why there has been the push for a standard like EVAAS.  Now we can say that this kid meets the official criteria and there better be a damn good reason why he's not being placed appropriately. 

Before we didn't even have a way to see the kids who weren't being placed.  I know I'm not explaining it very well, but I think you can get the idea. 

Having objective criteria will still not change some teachers' and principals' attitudes and beliefs about how much certain kids can achieve, but it is a huge step in the right direction.  Just because it won't fix the entire problem doesn't mean that we give up on it.  We have to keep pushing and advocating for all of these kids.

Yes.

But, if a kid's demographic situation is affecting his test scores, logically I would think it also would affect his growth the following year (if all other things were equal....basically if they are sitting in the same class).

If two kids are sitting side by side, and one has demographic factors working against him and the other doesn't, the fact that they don't end up at the same point when the school year ends may not be indicative of the teacher's ability.

What people are trying to say is that teachers would look at kids through EI and not set the same goals for all kids in the class.  What I believe (and I will until I get a lot more proof than people on a message board saying it) is that teachers look at each kid and try to help that kid develop as much as they possibly can.

If two kids are sitting side by side, and one has demographic factors working against him and the other doesn't, the fact that they don't end up at the same point when the school year ends may not be indicative of the teacher's ability.
---------------------------------------------------------------------------------------
One of the problems with this is that there are more than demographic factors that can work against a student.   And I would argue that in many cases the fact that a student is poor or lives in a poor neighborhood doesn't work against him at all in terms of academic ability and success.
 
Moving often, having an alcoholic or drug addict parent, suffering the death of a parent or sibling, going through a divorce, having an abusive parent, etc. can all work against any student whether they are rich or poor.
 
So take your example above.  Two kids start out at the same point.  One is poor but has a good home environment and parents who pay attention to what is going on in school.  The other lives in Preston but his parents are going through a horribly messy divorce and spend so much time sniping at each other that they don't pay much attention to what's happening with the kids.  The poor kid doesn't gain as much ground as the 'rich' kid, but that's ok because the poor kid shouldn't be expected to because he has demographic factors working  against him.  Both teachers will be deemed effective.  Does that seem right to you?  It doesn't to me. 

Individual information

The EI is based on assumptions about the effect of demographics on a child.  It is not based on the individual child's previous rate of progress.  That's why EVAAS is a better system.  It looks at multiple years of a student's performance to make an individual projection.

I look at it this way.  Suppose a child enters the 5th grade reading two years above grade level.  If the child left 5th grade still reading two years above grade level, I would consider that the teacher did an okay job.  If the child continued to make above average progress and was now reading  three years above grade level, I'd want to evaluate the teacher as having done well.  Conversely, if the child slipped to only reading one year above grade level, I'd be concerned.  Of course, individual situations can affect a child's progress which is why EVAAS needs three years of teacher data to evaluate effectiveness.

What makes no sense to me is to expect that the child above will be reading only one year above grade level merely because he or she is poor, attends a school with a lot of F&R students, or is in special education. After all, this student has done very well in the past.  Why expect that this will not continue?   Looking at the student as an individual with a past record of performance seems to be much more sensible and now we have the tools to do it. 
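A minimal sketch of the contrast being drawn. EVAAS's actual statistical model is far more sophisticated (a multivariate growth model over multiple years and tests); this toy extrapolation, with invented numbers and weights, only shows the direction of the difference between projecting from a student's own history and adjusting a single prior score for demographics.

```python
# Toy contrast, with invented numbers and an invented adjustment weight.

def trajectory_goal(scores):
    """Extrapolate the next score from the student's own average yearly gain."""
    gains = [b - a for a, b in zip(scores, scores[1:])]
    return scores[-1] + sum(gains) / len(gains)

history = [70, 75, 80]            # three years of above-average growth
print(trajectory_goal(history))   # 85.0 -- the expectation keeps rising

demographic_adjustment = -3.0     # invented downward weight for poverty
print(history[-1] + demographic_adjustment)  # 77.0 -- below past performance
```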

Aren't EVAAS and EI both based on standardized test scores?  What relevance does reading level have with EVAAS or EI information?

I think the problem I have with this whole thing is that people are taking a tool that is intended to show generalized information (class or school level data) and criticizing it because of individual data.

I believe it is (was) supposed to be a way to measure two teachers from different schools (or two different schools altogether) against one another.  I don't believe that measurement is as simple as looking at their kids' test scores and seeing whose students did better.
 

Look at the nonsense in this report, where they use EI for individuals. This is total nonsense. www.wcpss.net/evaluation-research/reports/2008/0803teacher%20_newslet_vol1.pdf

It looks like they removed the full report, or maybe I just can't find it. The full report explains how they found students with positive residuals and identified them as the successful at-risk students.

EVAAS uses several years of data on several tests; the EI uses only the last score on one test.

How is this nonsense?

Teachers and/or families can secure tutoring or mentoring for students at school or in the community. This is particularly important for LEP students who are still learning English, given that schools’ ESL programs (especially at the elementary level) focus primarily on a specific language arts curriculum rather than on providing students with help for classwork or homework.

Parents and guardians can help children succeed in school. They can provide a place for homework, check on homework completion, limit television and video viewing, and show that they place a high value on their child’s learning.

Students with multiple academic risks clearly can achieve academically. We hope this newsletter provides ideas that help students facing multiple risks to succeed in school. By working together, parents and teachers can influence students’ personal, social, and academic skills to make a positive difference in their success in school and beyond.

Nonsense?

How insulting. First, they identified kids as "successful" despite having "multiple academic risk factors" if they had positive residuals. This is explained in the full paper, which used to be there. They could have lost ground.

Students who belong to a subgroup that is supposed to be failures, yet they are not failures, or maybe are failures but are doing better than we expect, are said to be "resilient."

What if they had said "students from upper-middle class families clearly can achieve academically." And then told how these suggestions might help.

In the full paper they explain what kind of teaching led to success for these skanky little kids who we expect nothing of. It turns out it was quality teaching. They do better when they have quality teaching. They must be resilient.

Hallelujah. Thank you, you careful and dedicated Don Quixotes, trying to do the impossible. You did it. You know who you are. You have done a heroic job here and I fully understand and appreciate it. This is one of those breathtaking background dramas that came out with a happy ending - at least, going forward. Thank God. 

The above paragraph is directed at those participants who are *NOT* in elected positions; but for those who are, who facilitated this, I appreciate the fact that they saw with clear eyes what had been presented to them. 

(edited to correct the meaning)

About time

I have never seen so much effort to cling to something so backwoods.

only in a school system with no past accountability

Certainly ignorance was used as cover, but we should never forget there were a few people who fully understood what they were doing with this intentionally skewed methodology.  It did a specific kind of work that a few people who understood it wanted done.  Only in a school system with no past accountability, one that can just spin out any story that it wants to a gullible, apathetic, or politicized base, could we let a homemade pretend statistic damage so many children for so long.

This is the beginning of a new era of accountability.  I, for one, am not going to forget who brought us here.

What are the measurable effects of this decision?   In Sept of 2011, what scorecard, pass rate, graduation rate, AP participation can I look at to see if this made a difference?

There really isn't one, is there?   Because there is no clarity about exactly how it was applied before, there won't be any measurable impact.   There may be impacts; we just won't really be able to quantify what they are.   Any increases in math scores or Algebra 1 pass rates would be attributed to implementation of EVASS (or is it EVAAS?) among a broader number of students in the 2010-2011 school year.

Hmmmmmmm

I expect an increase in math scores and a slight drop in Algebra 1 pass rates. Will see.

This took far too long but at last it is gone--that's a big step in the right direction.  Thank you Debra Goldman, Deborah Prickett, JT, RM, and Mr. Malone. 

"The audience applauded

"The audience applauded anytime a board member said something positive about the Effectiveness Index."

This level of ignorance is more stunning because it is so partisan induced.  

This audience has no idea what they are applauding. ------------------------------------------------------------------------------------

My thoughts exactly.  This whole thing makes me sick.

Vampires don't die "natural" deaths

"Hill said they should let the Effectiveness Index die a natural death."

Hill is no more than the main conspirator's mouthpiece.  Sinister, joined at the hip.  If any of these people cheering the Effectiveness Index had even the slightest idea what they were talking about in terms of effectively false assumptions, effectively evil methodology, and effectively bad intent toward minorities and low income students, they would realize that a wooden stake through the heart is as close to a natural death as the Effectiveness Index will ever have.

"School board member Kevin Hill questioned why you'd pull a tool out of the hands of educators..."    "Board member Carolyn Morrison said that the more evaluation tools you have, the better off you are."

I can hear someone else's voice coming out of these mouths.  And what thinly veiled ignorance.  All I can say about that logic is that I assume Hill and Morrison will not object if one of their future surgeons concludes the same about supplementing his sterile scalpels with rusty dull knives and adding bloody used bandages to the sterile gauze they would prefer.  More tools must be better, after all.

"The audience applauded anytime a board member said something positive about the Effectiveness Index."

This level of ignorance is more stunning because it is so partisan-induced.  The terrible things the Effectiveness Index does to students are NOT HARD to understand.  You have to not want to understand it in order to not get it.  What is hard to understand about E&R using the Effectiveness Index to generate target scores for students that are BELOW the level of content mastery they already score?  Is that "just another tool for educators?"  This audience has no idea what they are applauding.  They apparently would just as easily applaud Heinrich Himmler's plans, even if it were for their own children.  This "tool" is just as surely genocidal as his devices.

"School board vice chairwoman Debra Goldman then proposed an alternative motion to separate the Effectiveness Index from the resolution. Her proposal was to have a separate vote ending the Effectiveness Index immediately."
 Thank you, Debra. 

Which will now be referred to as the "Ineffectiveness Index".

When your goal is to "appear" to be a model school system, it is so much easier to lower expectations of those you believe to be incapable of functioning at a higher level than to come up with a plan to help them reach that level.  So much for understanding poverty.

How did the Effectiveness Index impact WCPSS' appearances?

Have they released EI results to the public at some point?

I have never met a single school employee who says that EI was used for individual student evaluation, but people here say it was as if it's a fact.  The NC School Report Cards tell you what kind of a district you have, good or bad.

I would say that if assuming that certain risk factors could impact achievement and putting those risks into a formula is bad, then completely ignoring that there are risk factors that could impact achievement and pretending they don't exist should be bad, too. 

I agree in recognizing risk as well as protective factors, but didn't Wake lower expectations of those students?  I've had students come to my office, upset, because they were told by a counselor that a certain goal of theirs was unattainable due to their lack of capability.  The student I have in mind had a high goal, but certainly not an unattainable one. 

Was that counselor using the Effectiveness Index or had that counselor simply looked at the student's performance?

If you don't know the answer to that question, you're making assumptions.

Personally, I have a hard time believing the counselor referenced the EI and decided based upon that one indicator that the kid had no shot.

Based on my understanding, the Effectiveness Index was not used in the manner you suggest.  Picking out one specific case and trying to pass it off as a reason it was a failure is not really a fair thing.  It was used to measure schools against schools and teachers against teachers.  I haven't seen a single person connected with the schools say that it was used for placement or assessment of individual students.

If you're measuring teacher against teacher and school against school, I don't think it's horrible to make adjustments based on tendencies.  There could be a million reasons why certain ethnic groups and economic groups perform at different levels.  But, the fact is that they quite often do, and that isn't just a WCPSS phenomenon. 

If you and I were each going to teach a kid basketball, and one of us had a kid who was 6'6" and the other had a kid who was 5'8", a head-to-head game might not be the best indicator of our respective coaching abilities.  The 5'8" kid could be amazing, and the 6'6" kid could be uncoordinated, but most of the time the 6'6" kid would outperform the 5'8" kid.  If I were always coaching the 5'8" kid, I would be upset if another coach was considered better than me simply because he was always coaching kids who were 6'6".

In that example, the EI would attempt to make all the kids seem 6'1".

