
This dataset is one of 87 sets of ballots from the Tideman data collection, as curated by The Center for Range Voting.

This set of ballots was collected in 1988 by Nicolaus Tideman, with support from NSF grant SES86-18328. "The data are records of ballots from elections of British organizations (mostly trade unions using PR-STV or IRV voting) in which the voters ranked the candidates. The data were gathered under a stipulation that the organizations involved would remain anonymous."

The ballots were encoded in David Hill's format, and have been converted to the preference-vector format of this package. Candidates have been renamed to letters of the alphabet, for ease of comparison with Table 3 of Tideman (2000). Note: the DOI for this article is 10.1023/A:1005082925477, with an embedded colon which isn't handled by the usual DOI-to-URL conversions.

As that table shows, the final rounds of a Meek count of a53_hil are a very close race between candidates D, F, and B.

Tideman's implementation of Meek's method excludes B (on 59.02 votes), then elects D in the final round (on 88.33 votes) with a margin of 0.95 votes ahead of F (on 87.38 votes).

In SafeVote v1.0, stv(a53_hil, quota.hare = TRUE) excludes F (on 56.418 votes), then elects D in the final round (on 79.705 votes) with a winning margin of 0.747 votes ahead of B (on 78.958 votes). The elected candidates are the same, but the vote counts and winning margins differ significantly, so we conclude that stv(quota.hare = TRUE) in SafeVote v1.0 is not a reliable proxy for Tideman's implementation of Meek's algorithm.
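
The comparison can be reproduced along the following lines. This is a minimal sketch: it assumes SafeVote v1.0 is installed and that the returned count object has a summary method; details of the printed output may differ between versions.

library(SafeVote)
data(a53_hil)

# Run the SafeVote count with the Hare quota, as above.  The number of
# seats to be filled (4) is recorded in the dataset's "nseats" attribute.
result <- stv(a53_hil, quota.hare = TRUE)

# Inspect the final-round counts for comparison with Tideman's Meek figures.
summary(result)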

Future researchers may wish to adjust the quota calculation of vote.stv() so that it is no longer biased upward by a "fuzz" of 0.001, to see if this change significantly reduces the discrepancies with Tideman's implementation of Meek.
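
For illustration only, the kind of quota calculation meant is sketched below, using the 460 ballots and 4 seats of a53_hil. The exact expression used inside vote.stv() may differ, and stv_quota() is not a SafeVote function.

# Hypothetical sketch of a quota calculation with and without the additive
# "fuzz" of 0.001 that biases the quota upward.
stv_quota <- function(total.votes, nseats, hare = FALSE, fuzz = 0.001) {
  base <- if (hare) total.votes / nseats else total.votes / (nseats + 1)
  base + fuzz
}
stv_quota(460, 4, hare = TRUE)             # Hare quota with the 0.001 fuzz
stv_quota(460, 4, hare = TRUE, fuzz = 0)   # the same quota without the fuzz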

It would be unreasonable to expect exact replication of results from two different implementations of an STV method. We leave it to future researchers to develop a formal specification, so that the correctness of an implementation could be verified, and to develop a set of test cases with appropriate tolerances for the vagaries of floating-point roundoff in optimised (or even unoptimised!) compilations of the same code on different computing systems. We suggest that a53_hil be included in any such test set.
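
As a sketch of what such a test might check, final-round vote counts could be compared within an explicit tolerance rather than for exact equality. The helper below is illustrative only and is not part of SafeVote.

# Illustrative helper: treats two vote counts as equal if they agree within
# a tolerance chosen to absorb floating-point roundoff, but not substantive
# differences between implementations.
counts_agree <- function(x, y, tol = 1e-6) {
  isTRUE(all.equal(x, y, tolerance = tol))
}
counts_agree(88.33, 88.33 + 1e-9)   # TRUE: within roundoff
counts_agree(88.33, 87.38)          # FALSE: a genuine 0.95-vote margin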

We note in passing that B. A. Wichmann, in "Checking two STV programs" (Voting Matters 11, 2000), discussed the cross-validation exercise he conducted between the ERBS implementation of its voting rules and the Church of England's implementation of its voting rules. In both cases, he discovered ambiguities in the specification as well as defects in the implementation.

Usage

data(a53_hil)

Format

A data frame with attribute "nseats" = 4, consisting of 460 observations and 10 candidates.
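
Examples

A brief illustrative session; the column layout and "nseats" attribute are as described under Format above.

library(SafeVote)
data(a53_hil)
dim(a53_hil)              # 460 ballots (rows) by 10 candidates (columns)
attr(a53_hil, "nseats")   # 4 seats to be filled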