Comparing ways of rating congresspeople

There are a variety of ways to rate congresspeople, and I will cover several, but I’ll spend most of my time on the method I think is best.  It’s seriously geeky, but I give a non-geeky summary, and then I give links to the geeky parts.

Many organizations rank congresspeople.  The Almanac of American Politics includes ratings from many of them.  Each of these organizations looks at votes on its particular issues and sees how each congressperson voted (for its position or against it).  I am not going to talk more about these individual organizations.

I will discuss three ways of ranking or rating congresspeople: those used by a) National Journal, b) Progressive Punch, and c) Keith Poole and his associates.  I think the last is the best.

National Journal does the following for the House, and something similar for the Senate:

House members are assigned separate scores for their roll-call votes on key economic, social and foreign-policy issues during 2008. The members are rated in each of the three issue categories on both liberal and conservative scales, with the scores on each scale given as percentiles. An economic score of 78 on the liberal scale, for example, means that the member was more liberal than 78 percent of his or her House colleagues on the key votes in that issue area during 2008. A blank in any cell in the table below means that the member missed more than half the rated votes in an issue area. Composite scores are an average of the six issue-based scores. Members with the same composite scores are tied in rank. (C) indicates a conservative score; (L) indicates a liberal score.

If you sort on “composite”, you’ll see one issue: There are a lot of ties.  The top 12 representatives are all tied.  In the Senate there are fewer ties.  But how does Bernie Sanders end up tied for only 13th most liberal, with almost the same rating as Clinton?

The details of how they rated the congresspeople are for subscribers only, but they do have this snippet:

A panel of National Journal editors and reporters initially compiled a list of 167 key congressional roll-call votes for 2008 — 79 votes for the Senate and 88 for the House — and classified them as relating to economic, …

So it seems like they averaged a bunch of votes.
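To make the arithmetic concrete, here is a little Python sketch of that kind of scoring.  The member names and votes are invented, and I use three liberal percentiles instead of National Journal’s six issue-based scores, so this is only a sketch of the idea, not their actual procedure.  It does show why averaging coarse percentiles over a few issue areas produces blocks of exactly tied composites:

# Toy illustration of National Journal-style scoring (made-up data).
# Each member gets a liberal percentile in each issue area: the share of
# colleagues with a lower raw liberal score. The composite is the average
# of the issue-area percentiles, which is why coarse percentiles over a
# handful of votes produce exact ties at the top of the scale.

def percentile_scores(raw_scores):
    """Map raw liberal scores to 'more liberal than X% of colleagues'."""
    n = len(raw_scores)
    return {
        member: 100.0 * sum(other < score for other in raw_scores.values()) / (n - 1)
        for member, score in raw_scores.items()
    }

# Hypothetical raw liberal scores (fraction of 'liberal' votes) per issue area.
economic = {"A": 1.0, "B": 1.0, "C": 0.9, "D": 0.4}
social   = {"A": 1.0, "B": 1.0, "C": 0.8, "D": 0.5}
foreign  = {"A": 1.0, "B": 1.0, "C": 0.7, "D": 0.3}

issue_percentiles = [percentile_scores(area) for area in (economic, social, foreign)]

composite = {
    member: sum(area[member] for area in issue_percentiles) / len(issue_percentiles)
    for member in economic
}
print(composite)  # A and B end up exactly tied, like the tied blocks in the NJ table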

Progressive Punch rates people on the percentage of correct votes, and it offers ranks based on all votes, crucial votes, and votes on particular issues.  It is kept up to date, which is a major plus.  This has some advantages and disadvantages.  According to their methods, the three most progressive senators are: Roland Burris, Kirsten Gillibrand, and Edward Kaufman.  Huh?  Well, all 3 have 100% ratings.  Even for senators who have been in for a while, there are anomalies: Is Sherrod Brown really as liberal as Bernie Sanders?  One problem is revealed when we see that Ted Kennedy has a very low rating for 2009-10: They don’t deal properly with missed votes.  If we look at “Crucial Votes” for “lifetime”, Jack Reed is rated as the most progressive senator among those who have been in the Senate for at least one full session.

The way they came up with scores is summarized here. Briefly, they first identified a few “hardcore progressives” in the Senate and the House.  The ‘overall’ ratings are based on votes in which a majority of those progressives voted against a majority of the Republicans.  The problem here is that all votes are weighted equally, and this isn’t right (see below).  
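To show roughly what that filter and score amount to, here is a toy Python sketch with invented roll calls.  The names (P1, R1, X) and votes are made up, and Progressive Punch’s real vote selection surely has more detail; this is just the skeleton of “qualifying votes, then percent correct, all weighted equally”:

# Toy sketch of a Progressive Punch-style "overall" score (made-up data).
# A roll call counts as ideological if a majority of a designated set of
# "hardcore progressives" voted against a majority of Republicans; a
# member's score is then the percentage of those votes on which they took
# the progressive side. Every qualifying vote weighs the same.

PROGRESSIVES = {"P1", "P2", "P3"}
REPUBLICANS = {"R1", "R2", "R3"}

def majority_vote(votes, members):
    """Return 'yea' or 'nay' for the side a majority of `members` took."""
    yeas = sum(1 for m in members if votes.get(m) == "yea")
    return "yea" if yeas > len(members) / 2 else "nay"

def qualifying(votes):
    return majority_vote(votes, PROGRESSIVES) != majority_vote(votes, REPUBLICANS)

def overall_score(member, roll_calls):
    used = [v for v in roll_calls if qualifying(v)]
    correct = sum(1 for v in used if v.get(member) == majority_vote(v, PROGRESSIVES))
    return 100.0 * correct / len(used)

# Three invented roll calls; the third is not a party-line split, so it is dropped.
roll_calls = [
    {"P1": "yea", "P2": "yea", "P3": "yea", "R1": "nay", "R2": "nay", "R3": "nay", "X": "yea"},
    {"P1": "nay", "P2": "nay", "P3": "yea", "R1": "yea", "R2": "yea", "R3": "yea", "X": "yea"},
    {"P1": "yea", "P2": "yea", "P3": "yea", "R1": "yea", "R2": "yea", "R3": "nay", "X": "nay"},
]
print(overall_score("X", roll_calls))  # X voted progressive on 1 of 2 qualifying votes -> 50.0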


The crucial votes are a subset of those, specifically:

The votes used to calculate the scores in the “Crucial Votes ’09-’10” column are a subset of the overall votes that qualify according to the Progressive Punch algorithm described above. They show the impact that even a small number of Democrats have when they defect from the progressive position. These are votes where EITHER progressives lost OR where the progressive victory was narrow and could have been changed by a small group of Democrats voting differently.

 This is better, but it’s not as good as more sophisticated methods.

Why not? Well, the good people at Progressive Punch recognize the problem: Not all votes are equal, even among those that are ideological.  Some are easy wins, some are lost by a lot.  But they dichotomize this into “crucial” and “noncrucial” when there is really a continuum.
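Here is a toy sketch of that point.  The threshold of five defectors and the particular weighting function are my own inventions, just to contrast a hard crucial/noncrucial cut with a continuous weight:

# Toy contrast between a crucial/noncrucial split and a continuous weight.
# `margin` is votes for the progressive position minus votes against it;
# negative means the progressive side lost. The threshold of 5 is an
# invented stand-in for "a small group of Democrats voting differently".

def is_crucial(margin, threshold=5):
    lost = margin < 0                        # progressives lost outright
    narrow_win = 0 <= margin <= threshold    # won, but a few defectors could have flipped it
    return lost or narrow_win

def continuous_weight(margin, scale=10.0):
    return 1.0 / (1.0 + abs(margin) / scale)  # closer votes count more

for margin in (-30, -2, 3, 8, 60):
    print(margin, is_crucial(margin), round(continuous_weight(margin), 2))
# The dichotomy treats an 8-vote win and a 60-vote blowout identically (both
# "noncrucial"), and a 2-vote loss the same as a 30-vote loss; the continuous
# weight distinguishes all of them.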

The site is great for looking into past votes of congresspeople, and it’s great that they keep it up to date, but there is a better method.

That is the method used by the people at voteview.  The software and methods are the best, but it’s not the most user-friendly site in the world.  They describe two methods of rating congresspeople: NOMINATE and Optimal Classification.  Both are based on using every vote and attempting to place legislators in a way that maximizes the ability to predict how they will vote.  Both work really well: Optimal Classification works a bit better, but takes more computer time; NOMINATE (if I understand it correctly) allows placement of issues as well as politicians.  With a single number for each congressperson, you can predict, with 95% accuracy, how they will vote on any bill.
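To illustrate what “a single number predicts the vote” means in one dimension, here is a toy sketch.  The ideal points, cutting points, and votes below are all made up by me, not taken from voteview: each roll call gets a cutting point, everyone on one side of it is predicted “yea” and everyone on the other side “nay”, and accuracy is just the share of actual votes the prediction gets right.

# Toy one-dimensional spatial voting model (made-up numbers, not voteview's code).
# Each legislator has a single ideal point; each roll call has a cutting
# point and a polarity saying which side is predicted to vote "yea".

ideal_points = {"Feingold": -0.8, "Schumer": -0.5, "Bayh": -0.1,
                "Specter": 0.1, "Coburn": 0.9}

# (cutting point, side predicted to vote yea)
roll_calls = [(-0.3, "left"), (0.0, "left"), (0.5, "right")]

def predict(ideal, cutpoint, yea_side):
    on_left = ideal < cutpoint
    return "yea" if (on_left and yea_side == "left") or (not on_left and yea_side == "right") else "nay"

# Invented observed votes to score the predictions against.
observed = {
    "Feingold": ["yea", "yea", "nay"],
    "Schumer":  ["yea", "yea", "nay"],
    "Bayh":     ["yea", "yea", "nay"],   # first vote is "off" the spatial prediction
    "Specter":  ["nay", "nay", "nay"],
    "Coburn":   ["nay", "nay", "yea"],
}

hits = total = 0
for name, votes in observed.items():
    for (cut, side), actual in zip(roll_calls, votes):
        hits += predict(ideal_points[name], cut, side) == actual
        total += 1
print(f"classified correctly: {hits}/{total}")  # 14/15 with these made-up votes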

One question is whether a single dimension (liberal to conservative) is enough to accurately classify people.  For most periods in American history, it is.  In the 1960s, a second dimension (racial attitudes) added a lot to the accuracy, but, right now, one dimension does very well.  You can see how OC works in one dimension.  It predicts 95% of the vote correctly.  Note that the things that look like fancy script L (or the old sign for pound) are supposed to be less than or equal to signs.

I am not going to duplicate the example in that link, but I’ll try to explain it a bit more (you might want to open it in another window).  The diamonds are legislators; the spades are ‘cutting points’ for nine votes, each with a different number of “ayes” and “nays”: the ace of spades is a vote with only one “aye”, the two of spades has two “ayes”, and so on.  Now, we attempt (first iteration) to place legislators correctly per the votes.  That gives the diagram shown in step 2.  Then we re-order the cutpoints, as shown in step 3, and repeat the process.
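If you want to see that alternation in code, here is a heavily simplified sketch of the idea.  This is not Poole’s actual Optimal Classification program, just the one-dimensional alternation on a toy vote matrix: fit the best cutting point for each roll call given the legislator positions, then move each legislator to the interval between cutting points that misclassifies the fewest of their votes, and repeat.

# Much-simplified one-dimensional sketch of the alternation in the voteview
# example (invented data, not Poole's code).

votes = {  # toy roll-call matrix: legislator -> list of votes
    "A": ["yea", "yea", "yea"],
    "B": ["yea", "yea", "nay"],
    "C": ["yea", "nay", "nay"],
    "D": ["nay", "nay", "nay"],
}
n_votes = 3
positions = {"A": 0.0, "B": 1.0, "C": 2.0, "D": 3.0}  # crude starting order

def errors_for_cut(j, cut, yea_side):
    err = 0
    for name, pos in positions.items():
        predicted = "yea" if ((pos < cut) == (yea_side == "left")) else "nay"
        err += predicted != votes[name][j]
    return err

def candidate_points(values):
    """Midpoints between sorted values, plus one point beyond each end."""
    vals = sorted(set(values))
    mids = [(a + b) / 2 for a, b in zip(vals, vals[1:])]
    return [vals[0] - 1] + mids + [vals[-1] + 1]

def fit_cutpoints():
    cuts = []
    for j in range(n_votes):
        best = min(
            ((c, side) for c in candidate_points(positions.values()) for side in ("left", "right")),
            key=lambda cs: errors_for_cut(j, cs[0], cs[1]),
        )
        cuts.append(best)
    return cuts

def errors_for_legislator(name, pos, cuts):
    err = 0
    for j, (cut, yea_side) in enumerate(cuts):
        predicted = "yea" if ((pos < cut) == (yea_side == "left")) else "nay"
        err += predicted != votes[name][j]
    return err

def fit_positions(cuts):
    spots = candidate_points([c for c, _ in cuts])
    for name in positions:
        positions[name] = min(spots, key=lambda p: errors_for_legislator(name, p, cuts))

for _ in range(3):  # a few alternating passes are plenty for this toy data
    cuts = fit_cutpoints()
    fit_positions(cuts)

total_errors = sum(errors_for_legislator(n, positions[n], cuts) for n in positions)
print(positions, "classification errors:", total_errors)  # 0 errors on this toy matrix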

(end geekiness)

How do these methods compare?  I am not going to compare all the senators and reps, simply because I can’t figure out an easy way to copy the data into a spreadsheet.  But let’s take 5 well-known Senators from the 110th Senate:  Feingold, Schumer, Bayh, Specter and Coburn.

               OC rank                PP lifetime     NJ 2008 composite

Feingold       most liberal           20th            37th
Schumer        16th most liberal      16th             7th
Bayh           51st most liberal      45th            51st
Specter        56th most liberal      59th            53rd
Coburn         101st most liberal     71st            92nd



(There are 102 ranks in OC because of senators getting replaced mid-term: e.g., WY had Enzi, Barrasso, and Thomas.)  I couldn’t find Progressive Punch ratings for the 110th, so I gave lifetime ratings.
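If you want a single number for how much the three rankings agree, a Spearman rank correlation over these five senators does the trick.  This little sketch assumes scipy is installed, treats “most liberal” as rank 1, and of course five senators is a tiny sample:

# Rank agreement among the three methods for the five senators in the table.
from scipy.stats import spearmanr

oc = [1, 16, 51, 56, 101]    # Optimal Classification rank ("most liberal" = 1)
pp = [20, 16, 45, 59, 71]    # Progressive Punch lifetime rank
nj = [37, 7, 51, 53, 92]     # National Journal 2008 composite rank

print("OC vs PP:", spearmanr(oc, pp).correlation)
print("OC vs NJ:", spearmanr(oc, nj).correlation)
print("PP vs NJ:", spearmanr(pp, nj).correlation)
# With these five senators: 0.9, 0.9, and 1.0.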

Which do you think is most accurate?


8 thoughts on “Comparing ways of rating congresspeople”

  1. DW Nominate may be more accurate, but it is so much more difficult to work with that I prefer Progressive Punch.  Maybe I misunderstand DW Nominate, but I do not see that they have lifetime scores for congresspeople.  From what I can make of their numbers, they also simply rank order rather than rate on an absolute scale.  Since there can be clusters, rank ordering really doesn’t tell you how much better one legislator is than another.  PP allows you to sort columns, which is very handy, and it is also completely up to date for the current Congress.  Their website is also much easier to navigate and understand.

  2. I’m most comfortable with Progressive Punch.  Scoring on missed votes does seem to skew the results for both Presidential candidates and for those who are absent due to illness.  Tim Johnson’s score took a big plummet last session.

    If predictability matters, one weakness is the voting of the few in the House who switched parties.  Scores generally are way too high for Dem to Rep switchers and too low for Rep to Dem switchers.

    I probably use Progressive Punch a minimum of three times a week.  IMO a major improvement was the addition of congressional district numbers in the present system (previously only states were listed).

    Caucus listings (Blue Dog, New Democrat, CBC, CHC, RMP, RS) would be a plus.  I waste a lot of time and paper doing separate lookups.

    Maybe if I was more comfy with DW Nominate, I’d use it more.  

  3. Here are average scores by party for several classes within the House:

    Blacks   92.83 (D),  none (R)

    Women    91.95 (D),  7.26 (R)

    Hispanic 88.45 (D),  9.51 (R)

    White M  84.77 (D),  7.44 (R)

    There is overlap.  Maxine Waters is a woman; she is black.  Ileana Ros-Lehtinen is a Hispanic woman who double dips on the Republican side.

    Asians are a small enough group that they were not listed although Asian women would be counted.

    Black and Hispanic Democratic members were taken from the CBC and CHC rosters (with those not in Congress subtracted).

    This is the sort of thing that can be done easily in a few hours using a few pieces of paper and Excel with Progressive Punch.

    One can’t get an average from DWN, only a median.

    Btw, the reason for much of the discrepancy among Democrats lies at the top of the scale and the bottom, not the middle.  White males occupy only three of the top 22 spots (14%) but about 60% of the spots overall for Democrats.  At the bottom of the chart, white males occupy 85% of the bottom 54 (ranked 201 to 254).

    About 41% of Republican women had OK scores for a Republican (from 9.64 to 15.86).  The other 59% were just awful.  Marsha Blackburn’s 1.38 beat out Virginia Foxx (2.15), Michele Bachmann (2.40) and Cynthia Lummis (2.94) for the worst.  Trailing that list, Sue Myrick (3.32), Jean Schmidt (4.41), Mary Fallin (4.59), Kay Granger (4.61), Lynn Jenkins (4.85), and Cathy McMorris Rodgers (5.14) fell into the bad category.  Jenkins is a moderate how?

  4. Averaging several of the interest group ratings in the Almanac of American Politics (ADA, 100 – ACU, etc.) produces a composite ideological rating.  The advantage of using these group ratings is the votes are selected by the organizations as being key ideological tests.
