Daily Kos Alleges Research 2000 Fraud

This is, needless to say, some pretty big news on the polling front. You probably recall that several weeks ago (after the Arkansas runoff, but apparently motivated primarily by 538’s pollster rankings) Daily Kos severed its relationship with its pollster, Research 2000. Today, based on a study by three prominent statistics experts, Daily Kos is alleging that something is seriously amiss with Research 2000’s polling: the numbers do not appear to reflect truly random sampling. While the discrepancies seem most obvious in the weekly tracking polls rather than the state-by-state polling, Daily Kos has disavowed all numbers produced for it by Research 2000.

While the investigation didn’t look at all of Research 2000 polling conducted for us, fact is I no longer have any confidence in any of it, and neither should anyone else. I ask that all poll tracking sites remove any Research 2000 polls commissioned by us from their databases. I hereby renounce any post we’ve written based exclusively on Research 2000 polling.

The gist of it is (as you might expect) best explained by Nate Silver, who excerpts the key graphics from the prepared report. The graphics show how R2K’s weekly favorable numbers for Obama always seemed to move from week to week, usually by a small amount… which isn’t indicative of a normal distribution. By contrast, Gallup’s numbers form a very normal-looking bell curve, with a change of 0 being the modal amount of week-to-week change. The researchers who performed the study also found discrepancies in the rates at which odd and even numbers appear (shades of Nate’s takedown of Strategic Vision there).
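To see why the missing zeroes matter, here is a minimal sketch of what week-to-week changes look like under genuinely independent random sampling. The true favorability rate (55%) and weekly sample size (1,200) are hypothetical illustration values, not R2K’s actual parameters:

```python
import random
from collections import Counter

def weekly_series(true_rate=0.55, n=1200, weeks=200, seed=1):
    """Simulate weekly favorable numbers, each drawn from a fresh random sample."""
    rng = random.Random(seed)
    series = []
    for _ in range(weeks):
        favorable = sum(rng.random() < true_rate for _ in range(n))
        series.append(round(100 * favorable / n))  # reported as a whole percent
    return series

series = weekly_series()
changes = [b - a for a, b in zip(series, series[1:])]
change_counts = Counter(changes)

# Under genuine random sampling, a change of 0 sits at or near the peak
# of a bell-shaped distribution of week-to-week moves.
print(change_counts.most_common(5))
```

With real sampling noise, small moves cluster around zero and “no change” shows up at the top of the bell curve, as in the Gallup series; a reported number that almost never repeats from one week to the next is hard to square with that.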

Greg Sargent has details on the lawsuit that will be filed in short order by Daily Kos against Research 2000. For his part, Research 2000 head Del Ali tells TPM that he stands “unequivocally” behind every poll he produced, and is denying the allegations.

Needless to say, we at SSP have very much relied on the supposed quality of Research 2000’s data, and will be watching further developments in this matter with great interest.

84 thoughts on “Daily Kos Alleges Research 2000 Fraud”

  1. If the accusations are in fact true, how did R2K think they were going to get away with it? Wow, just wow. Do they think people are stupid or something? I hope this Ali guy has some really good lawyers, because he is going to need them. Daily Kos is like our sister blog, so this is very disturbing. Does anyone know if Kos is going to poll anymore? Who will they use, PPP? I hope so.  

  2. There already is a perception that pollsters are full of it.  I am sure this does not help much.

    It really does not surprise me.  The R2000 numbers have been highly erratic for months.

  3. …since statistics can often show strange results.  I’d like to believe a firm that has such a compelling interest in producing accurate results for their clients wouldn’t play with their numbers beyond the traditional massaging based on what they expect the turnout model will be.

    Kos is a pretty smart guy, so I doubt they’d make a move into the legal realm if they didn’t think they had a firm footing.  Unless it’s a paper lawsuit meant more to cover themselves and the negative publicity from having to distance themselves from their own pollster than to actually go to trial.

  4. nailed some races where other pollsters have missed.  Take AK-Pres in 2008 as an example.  So it seems that his horserace polling is honest, just shaky in general.  The evidence is only against his polling for the weekly favorable numbers, though Silver suggests that something was off with his tracking poll (even though Silver told everybody when it first came out in September 2008 that people shouldn’t be so quick to question it, even though it was more favorable to Obama/Biden at the time than other pollsters).

    It’s hard for me to believe that this guy would be honest with his horserace polling and then just cheat on the weekly tracking.  I guess you can argue that there is no accountability for weekly tracking since it cannot be tested until 2012, so he saw an easy way to cheat.

  5. Fraud is a serious allegation and Kos is a Northwestern Law school grad (not sure if he is licensed to practice law or even ever practiced law), so I’m guessing he’s thought about what has happened.

    To a newbie lawyer like me, the fraud case may be harder to prove than defamation; however, if it turns out he can’t prove scienter, can R2K turn around and sue him for defamation?

    Obviously this is a legal question, but can you imagine a court case where the evidence basically centers around comparing polls with election results?  

  6. So this is the R2K counterattack; see http://www.fivethirtyeight.com

    Copy of cease and desist letter included in the link.

    Nate says no:

    I emphatically stand behind any statements I have made about Research 2000, and will be constrained by nothing other than my common sense and my professional integrity in any comments I should elect to make about Research 2000 in the future.

  7. http://bluegrasspolitics.blogi

    Del Ali, president of Research 2000 in Olney, Md., said he could not respond to the specific allegations Tuesday and referred questions to his attorney, who did not return a call seeking comment.

    “I can tell you, we’re fine. What we’re going to reveal, that will be the end of the Daily Kos,” Ali said. “I can say, it has to do with people owing money.”

  8. I assume he will try to take down Kos, whether or not there is anything to say. This won’t end well for anyone, irrespective of fault.

    I will say that if male and female opinions of Obama start out both even or both odd in week one, the chance that they will move by the same amount when subjected to the same political forces is not the same as the chance that two randomly selected quantities will end in even or odd digits. But two hundred something out of two hundred something plus two certainly does sound “monkey-fuck ridiculous.”

    Could they have done this: step one, use one set of data to determine the overall movement of the electorate; step two, use a different set of data, or an unreliable subset of the same data, to test for gender disparity in the movement of public opinion? And then so consistently failed at step two, or so stubbornly assumed the null hypothesis against weak efforts to prove gender variation in the movement of public opinion, that they always reported “no difference” by gender?  
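The parity argument above can be checked with a quick simulation. This is an illustrative sketch, not the report’s actual methodology: two subgroups are sampled independently each week (the 55%/60% rates and subgroup size of 500 are hypothetical), and we count how often their rounded percentages share even/odd parity:

```python
import random

def parity_match_rate(weeks=1000, subgroup_n=500, seed=42):
    """Fraction of weeks in which two independently sampled subgroup
    percentages (rounded to whole numbers) share the same parity."""
    rng = random.Random(seed)
    matches = 0
    for _ in range(weeks):
        men = round(100 * sum(rng.random() < 0.55 for _ in range(subgroup_n)) / subgroup_n)
        women = round(100 * sum(rng.random() < 0.60 for _ in range(subgroup_n)) / subgroup_n)
        if men % 2 == women % 2:
            matches += 1
    return matches / weeks

rate = parity_match_rate()
```

Even if the two subgroups move together politically, the rounding of each subgroup’s own sampling noise keeps the parity match rate near a coin flip, so a run of hundreds of consecutive matches is effectively impossible under independent random samples.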

  9. I don’t even know how anything can be proven in this.  Just because something is a near statistical impossibility does not (and cannot on its own) demonstrate fraud.

    I’m also one to believe that usually when people commit fraud in their field of work, they either go for very easy stuff that’s small potatoes and not that important (think the penny-skimming scheme in the movie Office Space) or overly elaborate schemes that no one could ever detect.

    To me this seems like they took something overly easy to detect that was very important, which isn’t usually a footprint one finds with fraud.

    Unless an insider comes forward, I have no idea how you prove fraud in this instance.

  10. And to be honest with everyone, I haven’t done anything with that degree in the last 10 years.  I don’t have recent experience to make a sound judgment on whether fraud or neglect was involved in these numbers.

    At first glance, the analysis provided by Kos indicates that the numbers provided by R2K should not be relied on.  Upon further review, I feel that these numbers, especially Obama’s approval ratings, do not properly reflect normal variation over a period of time.  Specifically, the approval numbers lack the statistical noise that a genuine random sampling of the electorate’s opinion on Obama would show.  

    That being said, I’m withholding judgment on whether R2K committed fraud.  I think it would be easier to prove that there was a level of neglect (not fraud, since neglect can be deemed unintentional) than fraud.

    I’m keeping an open mind, but I’m not open to relying on R2K numbers in the past or present.

  11. This is good stuff:

    Link:  http://firstread.msnbc.msn.com

    Here’s a little secret: Good polls are expensive to do, and if you’re seeing a particular organization doing a slew of polls, you’ve got to ask: (1) how reliable are those numbers, or (2) where is the money coming from to conduct those polls? Nowadays, on the state level, we trust the polling we’re getting from campaigns and state parties (although not necessarily those polls that are made public) more than the numbers we see from some non political polling organizations.

    Our policy to you on state polling:  One policy we’re going to institute ourselves to make sure you have an idea of everything that we know is this: When we report a public poll on the state level, it will be because we think those numbers are reflecting what we know is going on in the race. We’ll let you know if a pollster has a good reputation in that state, has a good track record (because a good pollster in one state doesn’t mean they know the nuances of another).

    And, btw, First Read later approvingly cites the latest Quinnipiac OH-Sen poll and the new Reuters/Ipsos CA-Gov and CA-Sen polls, saying all those polls are consistent with private polling.

    I’ve recently been gravitating away from viewing robopolls as being as reliable as live-caller polls.  And the First Read blog post reinforces the point for me.  Chuck Todd said a while back that if robopolls were so good, then campaigns would use them, and they don’t.  I always figured the political media are privy to good private polling on both sides, and what they say about the state of a race reflects that.  It’s a given that private polling is better than public polling because parties and candidates must have the most accurate data possible.  They make their most essential campaign decisions based on their own polling, unlike campaign junkies like us who are hungry for data merely to feed a personal hobby.

  12. at Bleeding Heartland, Drew Miller (Bleeding Heartland founder, by the way) checked the results from polls R2K did for KCCI-TV in Des Moines this year and found the same even/odd pattern. On every question, the responses for men and women were either both odd numbers or both even numbers.

    Mark Blumenthal’s post on this yesterday is a must-read. He asked a forensic data guru to look at the analysis:

    [Walter] Mebane says he finds the evidence presented “convincing,” though whether the polls are “fraudulent” as Kos claims “is unclear…Could be some kind of smoothing algorithm is being used, either smoothing over time or toward some prior distribution.”

    When I asked about the specific patterns reported by Grebner et al., he replied:

       None of these imply that no new data informed the numbers reported for each poll, but if there were new data for each poll the data seems to have been combined with some other information—which is not necessarily bad practice depending on the goal of the polling—and then jittered.

    In other words, again, the strange patterns in the Research 2000 data suggest they were produced by some sort of weighting or statistical process, though it is unclear exactly what that process was.
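For intuition about what a “smoothed then jittered” process could look like, here is a purely hypothetical toy generator (not a claim about what Research 2000 actually did): each week’s number is simply last week’s number nudged by exactly one point. The resulting change distribution has no zeroes at all, the opposite signature of fresh independent samples:

```python
import random
from collections import Counter

def jittered_walk(start=55, weeks=200, seed=7):
    """Toy series in which the reported number always moves by exactly
    one point per week, rather than being resampled independently."""
    rng = random.Random(seed)
    series = [start]
    for _ in range(weeks - 1):
        series.append(series[-1] + rng.choice([-1, 1]))
    return series

walk = jittered_walk()
walk_changes = Counter(b - a for a, b in zip(walk, walk[1:]))
# Every week-to-week change is exactly -1 or +1; "no change" never occurs.
```

A real weighting or smoothing step would produce a subtler version of the same fingerprint, which is why the distribution of week-to-week changes is such a useful diagnostic.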
