Tag Archives: automated election

HALAL Statement: Smartmatic ballot production software was not certified


Last April 30, in response to a request by candidate Joey de Venecia III, the Comelec made public a set of documents relating to the source code review of the Smartmatic software conducted by SysTest Labs Inc. One of the documents was “Certification Test Summary for AES May 2010 Rev. 1.00”, dated March 8, 2010. It summarized another SysTest report, “Final AES Certification Test Report for the Smartmatic Automated Election System (AES)”, which the COMELEC has not yet made public.

The Summary may help explain the recent spate of PCOS failures to read local votes which, Smartmatic admits, is due to their error in configuring the ballot design. Citing the problems SysTest had earlier found in its source code review, the Summary listed several “compensating controls” that were essential in mitigating the Smartmatic software problems that SysTest had identified.

In one compensating control, SysTest was very explicit: “The Ballot Production tool was not subjected to the full certification process; therefore it should not be utilized in the May 10, 2010 election process.” (Summary, p.6) Given the ballot printing problems of the COMELEC, from the misalignment of the ultraviolet security mark to the misconfiguration of the ballot design, HALAL asks the COMELEC and Smartmatic to clarify if they utilized Smartmatic’s Ballot Production tool despite the explicit warning of SysTest.

HALAL notes that the March 8 SysTest summary only gave a conditional endorsement of the Smartmatic software. HALAL further notes that the summary was submitted one month past the AES Law deadline for the legally-required certification “categorically stating that the AES … is operating properly, securely and accurately”. This was the SysTest recommendation in the summary (p.7): “Assuming the abovementioned [compensating] controls are put into practice and that the AES is properly configured, operated and supported, SysTest Labs finds the Smartmatic Automated Election System to be capable of operating properly, securely and accurately and therefore recommends the system for certification and use in the May 10, 2010 election.”

Instead of the categorical statement required by the AES Law R.A. 9369, SysTest’s conditional endorsement was premised on the crucial assumption that all “controls are put into practice”.

According to the AES Law R.A. 9369, the COMELEC Technical Evaluation Committee must “certify, through an established international certification entity, … categorically stating that the AES, including its hardware and software components, is operating properly, securely, and accurately, in accordance with the provisions of this Act based, among others, on the following documented results: 1) … ; 2) … ; 3) The successful completion of a source code review; 4) … “

Given all the problems cited in the Feb. 9 SysTest report (HALAL’s analysis of this report is attached), and the explicit warning in the Mar. 8 SysTest summary report against using Smartmatic’s ballot production tool, it is clear that no certification should have been issued to the Smartmatic software because it would put our national elections at an unacceptably high risk.

Smartmatic machines are not so smart after all

We are spending P7.2 billion to lease these “smart automatic” machines. It turns out that they are not so smart after all. In fact, they seem downright stupid.

They can’t recognize a check mark or a cross. They can’t recognize ballpen or pencil marks. They need full, dark shadings to be convinced that you want to mark an oval. Isn’t that stupid?

When the security marks were misaligned by a mere one to two millimeters, the machines had trouble finding them. They were making so many mistakes that Smartmatic decided to forget “smart automatic” and go back to manual instead. They will just give election inspectors ultraviolet lamps; the inspectors will shine the lamp on each ballot and decide after an ocular inspection if the ballot is authentic or not. Still better than a dumb machine that can’t find the security mark.

A few days before the May 10 elections, these “smart automatic” machines are supposed to be unsealed for a final test in the field by election inspectors. Reports are now flooding in that many can’t read some of the marks, and can’t count some of the votes. Read the reports:

For the sake of our elections, let us all hope and pray that these problems will be solved before May 10.

HALAL analysis of recently-released SysTest source code review

Halalang Marangal (HALAL) recently obtained a copy of the SysTest report on the source code review of the Smartmatic software, conducted from Oct. 26, 2009 to Feb. 9, 2010. This review was the basis for the Comelec’s conclusion that the Smartmatic Automated Election System will count our May 10 votes properly, securely and accurately. The SysTest report and related documents may be downloaded here.

HALAL’s conclusion, after scrutinizing the SysTest report, is that the Smartmatic software should NOT have been certified. We should not have put our national elections at risk given the clear warnings of SysTest about problems in the Smartmatic software.

You may download the HALAL analysis here.

The issue: failure of automation, not failure of election

The issue is not a failure of election, but a failure of automation.

“Failure of election” is a narrow legal term describing a rare situation. The Omnibus Election Code defines it as a situation in which “the election in any polling place has not been held on the date fixed, or had been suspended before the hour fixed by law”. The suspension may also occur “after the voting and during the preparation and the transmission of the election returns”. The definition further requires that the failure “would affect the result of the election”, or “results in a failure to elect”. It has occurred in barangays or towns but never on a national scale.

Although some have raised its possibility in 2010, they were probably using the term loosely and were not aware of its legal definition.

Thus, Chairman Melo could say, with a straight face, that failure of election was “pure fantasy”. He is using its narrow legalistic definition. If voters were able to cast their votes and Comelec proclaimed a winner, there was no failure of election.

The issue in 2010 is the high risk of a failure of automation. This is what was raised by Halalang Marangal, an election watchdog which includes former Senator Wigberto Tañada, former Comelec Commissioner Mehol Sadain, and retired General Francisco Gudani among its convenors. We had in fact estimated the probability of failure as of March 8 at 75%, and we have seen no reason to substantially revise that estimate. We still consider the risk of failure “unacceptably high”.

Let me define what we mean by a failure of automation.

Election automation is a failure if the time it takes to determine the winners in the election is not significantly shorter than the manual method, or if the fraud that has chronically attended our elections is not significantly reduced.

Let me review the basis of our assessment that the election automation had a 25% chance of success. (You can find the details in Jarius Bondoc’s April 5, 7 and 9 columns in this paper.)

A March 8 full-page ad by Smartmatic in major national dailies had claimed “a vote of confidence” on the election automation project it was implementing in the Philippines. Smartmatic had claimed substantial achievements in the five sub-systems that comprised the whole Automated Election System (AES).

But when we scrutinized carefully the Smartmatic ad, we found gaps, delays, problems and at least one glaring false claim (“successful field tests and mock elections”).

In the Hardware sub-system, Smartmatic claimed they had completed the delivery of the machines, but glaringly omitted any reference to testing. Clearly they had not tested the machines thoroughly; neither did they have the time to do so. Former Comelec commissioner Mehol Sadain tells us that in 2004, it had taken them three months to fully test 1,990 automated counting machines. If deployed, some of the partially-tested machines are bound to cause problems on election day. We also found that Smartmatic had bought 21% more memory cards than necessary. In the wrong hands, these could be loaded with false precinct results and substituted for authentic cards. Because of these and other problems, we estimated the probability of success of this sub-system at 80%.

In the Software sub-system, we noted that no local stakeholders had managed to conduct a proper review of the source code, because of the Comelec’s obstinate refusal to implement the clear intent and letter of the law. We also noted that the Comelec released no certification documents or full report that would support its Feb. 9 claim that SysTest Labs had completed its audit/review of the AES. Since time had run short for a thorough review, we estimated the probability of success of this sub-system at 70%.

For the Logistics sub-system, we cited media findings about the questionable capacity of the forwarders chosen by Smartmatic to deliver election paraphernalia throughout the Philippines. We estimated the Logistics probability of success at 80%.

For the Transmission sub-system, we cited among other things the 70% signal coverage in the Philippines, as Smartmatic itself found out. Smartmatic had transmission problems even within Metro Manila, suggesting poor quality of transmission equipment. We gave it 70%.

For the Ballot Printing sub-system, we cited the confidential Comelec memo which warned that it was impossible to finish ballot printing on time, given the rate they were printing them. We gave it 80%. The Comelec subsequently brought in a fifth ballot printer, raising its capacity by 20%, and making it possible – if no further glitches happened and the printing went on non-stop – to meet its April 25 deadline.

To get the overall probability of success of the entire AES project, the sub-system probabilities of success must be multiplied together. Yes, multiplied together, not averaged. And not just the lowest figure – the weakest link – either. Check it with your calculator: .8 x .7 x .8 x .7 x .8 = .25 or 25%. Note that we see the glass as one-fourth full, not three-fourths empty. We are optimists to a fault, not doomsayers.
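The multiplication above can be checked in a few lines of Python, using the sub-system figures estimated in the preceding paragraphs:

```python
# Overall AES success probability: sub-system estimates are multiplied,
# not averaged, because every sub-system must work for the whole to work.
subsystems = {
    "hardware": 0.80,
    "software": 0.70,
    "logistics": 0.80,
    "transmission": 0.70,
    "ballot_printing": 0.80,
}

overall = 1.0
for name, p in subsystems.items():
    overall *= p

print(f"Overall probability of success: {overall:.2%}")  # about 25%
```

Note how the product is lower than the lowest individual figure: a chain of five fairly reliable sub-systems is, taken together, quite unreliable.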

So many things can go wrong with the AES that Murphy’s Law is bound to kick in. Like a toss of two coins, Chairman Melo is betting that two heads will come up. He bet P7.2 billion of the taxpayer’s money. If you count the whole election budget, P11 billion, all in.

Chairman Melo’s bet will lose 75% of the time. That makes failure of automation the issue in 2010.

The time is very short, but we still have a possible solution. Details in subsequent posts.

PCOS machines in Philippine automated elections: failure rates, error rates

According to the news, two of the twenty PCOS machines in Hongkong stopped working for a while. That is a 10% failure rate.

Cesar Flores of Smartmatic claims they expect a PCOS failure rate of 0.3–0.5%. However, vendor claims must be taken with a grain of salt, more so if their goods were hurriedly made in China. The claim is also belied by Smartmatic’s own plans: they are deploying 8% of the total machines as backup. So, they must be expecting up to 8% of the machines to fail, which is more consistent with the failure rates in Hongkong.

The actual PCOS failure rate is, in fact, a big unknown.

First, it appears that Smartmatic, not the COMELEC, did most of the testing. Due diligence requires the COMELEC to do acceptance testing. Any buyer must double-check delivered goods before signing a receipt acknowledging that they were received in good working condition. Especially since the Smartmatic deliveries involved P7.2 billion of taxpayers’ money, the machines should have been independently tested to verify that they meet the COMELEC specifications detailed in the contract. Those that didn’t meet specs should have been returned for replacement. If deployed, these can cause trouble on election day itself, as they did in Hongkong.

The 0.3-0.5% PCOS failure rate that Smartmatic claims is not backed by properly-witnessed test stats and is contradicted by the 10% failure rate reported in Hongkong and Smartmatic’s own preparations to replace up to 8% of machines that may fail on election day.

Second, the test stats have remained inaccessible to third parties like political parties, election watchdogs and media. Transparent test stats minimize potential insider collusion (as in the ballot secrecy folder contract), which can result in overpricing or payments for sub-standard equipment. Transparency also minimizes the possibility that insiders will selectively assign machines, depending on their quality, in order to cause trouble in targeted regions or provinces in favor of one candidate or another. Just imagine if problematic machines or modems are selectively assigned to Aquino, Villar, Estrada, or Teodoro bailiwicks — whoever are disfavored.

At least five PCOS test results are so important that they should be publicly known:

  • Mean time between failures (MTBF). This is the average time a PCOS machine stays operational. Knowing the MTBF and the mean time to repair (or replace), we can determine the average failure rate. Instead of actual statistics, what we have today are media-reported field anecdotes and unsubstantiated vendor claims.
  • Average rejection rate of valid ballots. This is a specific but important case, when the PCOS stays operational but rejects a valid ballot. In Smartmatic demos, field tests and mock elections, the rejection rates were inordinately high, far above COMELEC specs.
  • Scan error rate. Just as PCOS machines fail, they make mistakes too. A PCOS scanning error can be a false positive (registering a vote that is not there) or a false negative (missing a vote that is there). When the PCOS is adjusted to read lighter shades, false positives increase because even a slight smudge may be falsely registered as a vote. When adjusted to read darker shades only, false negatives increase, because lightly or partially shaded ovals may be missed by the PCOS. Each machine has to be calibrated towards that ideal spot which minimizes the total errors from both false positives and false negatives. Based on COMELEC specs, this total should be lower than .005%, or five scanning errors for every 100,000 marks (at most one error per 1,000 ballots). Unfortunately, the calibration may change in transit or under environmental stresses like heat, humidity, or mechanical shocks. A PCOS machine that rejects valid ballots has, in effect, very high false negatives, because it misses all the shaded ovals, each representing one vote, in those rejected but valid ballots.
  • Transmission error rate. Because of ambient electrical and electronic noise, transmission is more susceptible to error than scanning, and therefore demands high quality equipment. That Smartmatic modems had transmission problems even within Metro Manila does not speak well of their quality. A poor quality modem is hopeless and should be replaced.
  • Battery backup life. COMELEC specified at least 16 hours of backup. A good quality control engineer would insist on batteries lasting up to 20 hours under test, a 25% margin for coping with unexpected operating and environmental extremes.

Smartmatic had earlier claimed it was testing 2,000 machines a day. Compare this to the three months it took COMELEC to thoroughly test some 1,900 automated counting machines in 2004. Even granting that the 2004 testing was done at a leisurely pace, the huge difference still makes one wonder how thorough the PCOS testing was.

In particular, the PCOS scan error rate is very important. If the error rate is, say, 5%, and the presidential winner’s margin is less than 5%, then we will again find ourselves in political limbo. In 2004, GMA’s supposed margin over FPJ was 3.48%. COMELEC specified .005%, which is quite low. But it doesn’t look like COMELEC actually measured each machine’s error rate. That is not possible when testing 2,000 machines a day (the necessary statistical test requires more than 1,700 test ballots per PCOS).
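One plausible way to arrive at the 1,700-ballot figure is a zero-failure acceptance test: keep scanning marks until, with 95% confidence, an undetected error rate above the COMELEC spec of .005% per mark can be ruled out. The sketch below assumes that test design and roughly 35 shaded ovals per test ballot (the per-ballot figure is our assumption, for illustration only):

```python
import math

# Zero-failure acceptance test: if n scanned marks show no error, we can
# claim with confidence c that the true per-mark error rate is below p,
# where n satisfies (1 - p)^n <= 1 - c.
p = 0.00005          # COMELEC spec: 0.005%, i.e. 5 errors per 100,000 marks
confidence = 0.95

marks_needed = math.log(1 - confidence) / math.log(1 - p)
print(f"marks needed: {marks_needed:.0f}")   # roughly 60,000 marks

# Assuming about 35 shaded ovals per test ballot (our assumption),
# this translates to the 1,700-plus test ballots per machine cited above.
marks_per_ballot = 35
ballots_needed = math.ceil(marks_needed / marks_per_ballot)
print(f"test ballots per PCOS: {ballots_needed}")
```

At 2,000 machines a day, feeding each machine over 1,700 ballots is clearly out of the question, which is the point of the paragraph above.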

Each PCOS should pass various COMELEC tests before it is accepted, paid for, and deployed to a polling place. And stakeholders should have access to all the test statistics, including the number of machines that stopped operating, the number of valid ballots rejected, and the actual number of falsely registered voter choices, to prevent insiders from accepting bad machines and selectively assigning these to targeted areas.

Without the test statistics, we can only guess which is closer to the truth, the 10% failure rate shown by the machines in Hongkong, or the 0.3-0.5% failure rate claimed by Smartmatic.

It is not too late. COMELEC can still order the release to media of these test statistics, and improve its credibility before the voting public.

May 10 election automation: can data-substitution happen during transmission?

There are several entry points for election cheats under the election automation project of the Comelec. We will focus here on the transmission phase.

Every precinct counting machine (PCOS) is supposed to transmit its electronic Election Return (e-ER) to the three upstream servers: the municipal canvassing server, the KBP-PPCRV-political parties server, and the Comelec central server, in that order.

The risk of reverse data flow (RDF)

Why does the municipal server come first, and the Comelec central server last? Here’s the risk if the Comelec central server comes first: suppose that during transmission a reverse flow of data actually occurs? That is, instead of receiving data, the central server instead sends to the PCOS, overwriting the latter’s authentic election results with fraudulent data coming from the central server. We will call this the Reverse Data Flow (RDF) risk. It can only happen if both the PCOS and the central server had earlier been programmed to do so, upon receipt of a certain command (for instance, if the PCOS receives a certain string of characters from the central server). We know that Smartmatic machines have such capability and can be commanded to accept incoming data, because it happened during the 2008 pilot in ARMM. (Those in the industry call this the Wao incident. Wao is a town in Lanao del Sur Province.)

If this RDF risk materializes, then, when the PCOS subsequently connects to the municipal and the KBP-PPCRV servers, it will now be uploading not authentic data but the fraudulent data it received from the central server. To be effective, RDF needs to occur on the first connection to the outside (presumably with the central server). Why? Suppose the authentic data from the PCOS manages to get out on the first connection to the municipal or KBP-PPCRV server. If the central server subsequently manages to load the PCOS with fraudulent data through RDF, then discrepancies will show up between the municipal and central data files that will be harder to cover up.

RDF can also occur between a PCOS and a municipal server, but this means the cheats would have to take control of many municipal servers, instead of a single central server, to achieve a similar impact. Thus, RDF through the central server is simpler and easier to cover up, if cheats were to attempt it. This is why it is extremely important for the PCOS to connect to the municipal server first, and the Comelec central server last.

How to make the PCOS connect to the central server first

The implementation of the transmission sequence contains a security flaw that cheats can exploit. In its Resolution No. 8739, the Comelec instructs the Board of Election Inspectors that if the PCOS is unable to connect with the municipal server after three tries, then the BEI should try sending to the KBP-PPCRV server instead. And if that doesn’t work either, they should try sending to the Comelec central server next. Then, back to the municipal server, in round-robin fashion. The revised general instructions (Comelec Resolution No. 8786) keep this round-robin approach.

Hence, if there’s a way to intentionally block municipal servers and the KBP-PPCRV server from receiving a PCOS transmission for a while, then the PCOS will end up connecting with the Comelec server first, setting up the conditions for the RDF problem.

Remember the 5,000 cellphone jammers reportedly imported into the country? They suit this purpose perfectly. The 5,000 are more than enough to cover the 1,631 city/municipal servers throughout the country, plus the KBP-PPCRV server. If this method of cheating were to be attempted, the cheats will probably not operate in every city and municipality but only in selected municipalities where they can achieve maximum impact with a minimum of disruption. Areas where there is no credible opposition might be good candidates for such an operation.

Another possibility is to swamp the target server with a Denial of Service (DoS) attack through the Internet, long enough to get the PCOS to try the central server first. After the RDF operation, which should take only the few minutes expected of a normal transmission (except that the data will be flowing in the opposite direction), things can go back to normal at the municipal and KBP-PPCRV servers.
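The round-robin fallback described above can be sketched as a small simulation. The server names and the three-tries rule follow the general instructions as described; the idea of a “blocked” set (jammed or DoS-ed servers) is our illustration:

```python
from itertools import cycle

# Round-robin transmission fallback per the general instructions:
# three tries to the municipal server, then KBP-PPCRV, then the Comelec
# central server, cycling until one connection succeeds.
SERVERS = ["municipal", "kbp_ppcrv", "central"]
TRIES_PER_SERVER = 3

def first_successful_server(blocked):
    """Return the first server a PCOS actually connects to, given a set
    of servers whose signal is blocked (e.g. jammed or under DoS)."""
    for server in cycle(SERVERS):
        for _ in range(TRIES_PER_SERVER):
            if server not in blocked:
                return server
    # (never reached as long as at least one server is unblocked)

# Normal conditions: the municipal server receives the data first.
print(first_successful_server(blocked=set()))

# If the municipal and KBP-PPCRV links are blocked for a while,
# the PCOS ends up connecting to the central server first,
# setting up the conditions for RDF.
print(first_successful_server(blocked={"municipal", "kbp_ppcrv"}))
```

The simulation makes the flaw plain: the fallback order, not any tampering with the PCOS itself, is what lets an attacker choose which server talks to the machine first.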

How can this method of cheating be prevented?

Several measures are necessary to prevent or detect this method of cheating:

1. The source code of the PCOS as well as the servers must be opened for scrutiny and review. RDF can only happen if there are programs in the PCOS and the server instructing them to make it happen. A thorough code review may be able to determine if such rogue programs exist, as long as they are not camouflaged or hidden very well.

2. The Comelec central server must be accessible for close observation at all times, to all stakeholders, especially political parties and non-partisan election monitors such as media and citizens’ groups. This will make it more difficult to set up the Comelec server for an RDF operation or to install new software at the last minute for doing so.

3. The BEI must be under strict instructions not to attempt connection to the Comelec central server until the data has been transmitted to their upstream municipal server and the KBP-PPCRV server.

4. Print more than 8 ERs before any transmission is attempted, to give more minority parties access to one of the pre-transmission ERs. Under current Comelec instructions, only 8 ER copies will be printed before transmission and the remaining 22 after transmission, so only the dominant majority and minority parties (the dominant minority designation is still being contested between the NP and the LP) get a copy each of the pre-transmission ER. Another copy goes to the PPCRV, which however has announced no concrete plan so far to do a parallel count, and still another gets posted in a conspicuous place at the precinct level. These first 8 copies are extremely important for detecting RDF.

5. Specifically instruct the BEIs and official watchers to ensure that the ERs printed before and after transmission are identical and to record this fact as well as any discrepancy in the BEI minutes.

Bypassing precinct election officials in the May 10 automated elections: open invitation to fraud

If there is still doubt whether or not the Philippines is heading towards a chaotic election, Comelec Resolution No. 8786 erases all doubt.

This resolution promulgated on March 4, 2010, is entitled “Revised General Instructions for the Board of Election Inspectors (BEI) on the Voting, Counting, and Transmission of Results in Connection With the 10 May 2010 National and Local Elections”. It amends or revises earlier Comelec Resolution No. 8739 (the original General Instructions for the BEIs) to “fine tune the process and address procedural gaps.”

This resolution directs the Board of Election Inspectors (BEI), the committee of three teachers who run the elections in every precinct, to press “No” when the automated counting machine asks them to “digitally sign the transmission files with a BEI signature key”.

Perhaps I should repeat that to make sure you, dear readers, don’t miss it: the BEIs have been instructed by the Comelec not to digitally sign the electronic ER before it is transmitted to higher level servers for canvassing and consolidation.

The provision is in Sec. 40 of the Revised GI, “Counting of ballots and transmission of results”, page 27.

Here is how Comelec spokesperson James Jimenez explained why the BEIs were instructed not to sign the electronic ERs: (Read the full story here.)

But Comelec spokesperson James Jimenez said the instructions did not mean that there would be no digital signatures in the transmission of the votes.

Jimenez said the instructions simply removed one step in the transmission process in order to minimize human intervention and further protect the results of the vote.

The digital signature of the machine is already encoded in the device, he said, and that the digital signature of the BEI is also entered into the machine before the voting.

Signature imbedded

“From the start, the digital signature is already in the machine … Since it is there, the minute the machine stops counting, it starts printing, it starts transmitting. The teacher does not need to enter the process,” Jimenez said.

“That minimizes the possibility of the results being tampered with,” he added.

Jimenez said that the digital signatures would be read by the machines receiving the voting results because they are already in the signal that was transmitted.

The Comelec is apparently still fixated on minimizing human intervention. They still don’t realize (or maybe, they perfectly do?) that it may be possible to minimize human intervention, but never to eliminate it completely. In any automation project, there are always points of human intervention — the design engineers, the programmers, the maintenance or repair technicians, the operators, and a few others. By minimizing human intervention, they are actually minimizing the number of people that need to be in on a conspiracy, that need to be bribed, or are potential witnesses. In fact, the more people watching what is actually happening, the harder it is to cheat.

With this Comelec resolution, the BEIs have lost control. They have been sidelined. The whole automated election process is now completely under the control of a single foreign entity, Smartmatic, and the machines we are leasing from them. They generate the passwords and digital signatures, they encode the digital signatures within the machine (or most probably in a keychain-size device, which is read by a sensor in the counting machine), they transmit the data, and they certify the correctness of the passwords and digital signatures. In a business setting, this is equivalent to merging in a single person the functions of vendor, machine operator, accountant, cashier and auditor — an open invitation to fraud.
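The separation-of-duties point can be made concrete with a minimal sketch. The keys and the HMAC scheme below are hypothetical stand-ins, not the actual AES signature mechanism; the point is only that a digital signature attests to whoever controls the signing key, nothing more:

```python
import hmac
import hashlib

def sign(key: bytes, election_return: bytes) -> str:
    """Produce a keyed signature (HMAC-SHA256) over an ER.
    Stand-in for a real digital signature scheme."""
    return hmac.new(key, election_return, hashlib.sha256).hexdigest()

er = b"precinct 0123: CANDIDATE_A=250, CANDIDATE_B=190"  # hypothetical ER

# Two hypothetical keys: one held exclusively by the three BEI teachers,
# one generated by the vendor and embedded inside the machine.
bei_key = b"key held only by the three BEI teachers"
vendor_key = b"key generated and embedded by the vendor"

# Only a signature made with a key the BEIs exclusively control actually
# attests that the BEIs approved this ER. A vendor-embedded key, applied
# automatically by the machine, proves nothing about the BEIs.
assert sign(bei_key, er) != sign(vendor_key, er)
print("distinct keys produce distinct signatures over the same ER")
```

In other words, if the vendor generates, stores, and applies the “BEI” key, the so-called BEI signature is functionally just another machine signature.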

Most election fraud is an inside job. The gaping security breach created by Comelec Resolution No. 8786 has made it much easier for a few insiders to manipulate the results of the May 10 elections.

Question: is this Comelec resolution the product of gross stupidity or malicious intent?

Automated elections in the Philippines: 25% probability of success as of March 8

On March 8, Smartmatic came out with full-page ads in several national newspapers, claiming that the Automated Election System (AES) project they are implementing for the Philippine government had the people’s vote of confidence.

We at Halalang Marangal sat down to discuss the ad and realized that all the information contained there, analyzed carefully and taken together, actually meant that as of March 8, the AES probability of success had become unacceptably low. We even tried to be generous in our assessment, and gave the company some benefit of the doubt (where it was possible to do so!), but the numbers still led to a low probability of success.

That parenthetical comment was necessary because I found it incredible that Smartmatic would claim successful field tests and mock elections when media had reported many cases of rejected ballots and transmission problems even in Metro Manila. If Smartmatic can blatantly lie about this in public, then it can lie about anything. Smartmatic has also imposed a blackout on statistics about the scanning accuracy of its machines.

Anyway, I won’t keep you in suspense. You can download the presentation now. (HALAL analysis of AES risk of failure – as of March 20).

Back to blogging: the mathematics of election audits

This is just to get into the habit again. I haven’t posted anything for some time, due to pressures of work and study. With this post, I intend to become a regular once more.

I made a presentation yesterday, Feb. 27, at the Institute of Mathematics in U.P. Diliman, before some 15 mathematicians. The presentation was arranged by Dr. Jose Ma. Balmaceda, director of the Math Institute, where I am also a lecturer. Former COMELEC Commissioner Mehol Sadain and former COMELEC IT Department head Ernie del Rosario also attended. My topic was the determination of sample sizes for a post-election audit. Halalang Marangal (HALAL), of which I am secretary-general, is recommending to the COMELEC to adopt Confidence Level Targetting (CLT) instead of a Fixed Percentage of Precincts (FPP) for setting the sample size in the random manual audit (RMA) that is required by law for the May 2010 Philippine elections. Current proposals for the sample size range from 200-plus (the number of legislative districts, which is what the law says) to 1,600-plus (the number of councilor districts). All proposals, except ours, are based on fixed audits.

The biggest problem with fixed audits is that the confidence level (one minus the significance level) of the result will be unpredictable. If the win is a landslide, the sample size may be unnecessarily large. But if it is a close contest, the sample size may be too small.

The advantage of Confidence Level Targetting is that it takes the winning margin into account when setting the sample size, so that the confidence level reaches the desired target. To give an extreme example: if the winning margin is just one vote, then even if only one precinct is left out of the sample (that is, all precincts except one are audited), the result of the audit will remain inconclusive.

How is Confidence Level Targetting done?

Here’s the basic process:

1. Adopt a target confidence level L. HALAL recommends 95% (the level typically used in establishing scientific “truths”). The American Statistical Association, on the other hand, recommends 99% (which will take longer and will also be more expensive). I can be happy with either. Let us say 95%.

2. Determine the winning margin M. This is the difference between the votes received by the winner and the nearest loser. In multi-slot contests like senatorial or council elections, use the difference between the votes received by the last among the winners and the first among the losers. The audit starts with the hypothesis that this margin is the result of cheating.

3. Estimate the average number of false votes V a presumed cheat will try to gain in a single precinct. The idea is: any gain higher than V will be too obvious. Lower than V, the results are still plausible enough. So, V is the highest false gain that a cheat will dare attain in one precinct. Of course, this will still vary from precinct to precinct, so we take V as the average target of the cheat per precinct. In my presentation, I assumed that V = 500. That is, the cheats will target a false gain of 500 votes in the precincts where they will operate.

4. From M and V, we can compute the minimum number of precincts P that the cheat must operate in, to get a total false gain of M+1 (that is, to overcome the lead of the presumed true winner):

P = (M+1) / V

or, equivalently,

P*V = M+1

That is, the cheat targets an average false gain of V in P precincts, in order to get a total false gain of M+1.
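The arithmetic of step 4 can be sketched as follows; the margin M below is an assumed figure for illustration, while V = 500 is the example value from step 3:

```python
import math

M = 149_999   # assumed winning margin, for illustration only
V = 500       # average false gain per bad precinct (example value from step 3)

# Minimum number of precincts the cheat must operate in to gain M+1 votes.
# Round up: a fractional precinct still requires operating in one more precinct.
P = math.ceil((M + 1) / V)
print(P)  # → 300
```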

This false gain may be attained in several ways. By simply adding zeroes to the votes of the favored candidate, by removing digits from the true winner, or by vote shifting. In local parlance, “dagdag-bawas” or padding-shaving.

Now, we have an estimate P of the number of bad precincts.

5. We must now compute the sample size which will give us a very high probability (95%, if a target confidence level of 0.95 is adopted) of drawing at least one of these bad precincts. The formula is:

S = [N - (P - 1)/2] * [1 - (1 - L)^(1/P)], where

S is the sample size

N is the total number of precincts (75,471 clusters for the May 2010 elections)

L is the target confidence level (and 1-L is the significance level of the test)

P is the estimated number of bad precincts

I took this formula from existing literature on the mathematics of election audits, specifically from an article by Aslam, Popa and Rivest (2007). I can explain the details if there is any interest. The whole procedure is explained by Dopp (2009).
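A minimal sketch of the formula as quoted above. N and L are the figures from this post; the value of P is an assumption carried over from the earlier example, not a real estimate:

```python
import math

def sample_size(N, P, L):
    """Aslam-Popa-Rivest (2007) approximation: number of precincts to
    audit so that, with probability L, a uniform draw without replacement
    from N precincts hits at least one of P bad precincts."""
    return (N - (P - 1) / 2) * (1 - (1 - L) ** (1 / P))

N = 75_471   # clustered precincts, May 2010 elections
L = 0.95     # target confidence level
P = 300      # assumed number of bad precincts (illustrative)

S = math.ceil(sample_size(N, P, L))
print(S)     # roughly 750 precincts to audit
```

Note that the sample size grows as the estimated number of bad precincts P shrinks: fewer bad precincts are harder to catch, so more precincts must be audited.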

6. We now have the desired sample size. If we draw this number of precincts randomly (with emphasis on randomly, that is, every precinct has an equal chance of being selected as every other precinct) from the total number of precincts, we are 95% sure we will get at least one bad precinct. The ballots in the drawn precincts are then counted manually, the votes tallied, and the results of the audit compared with the machine results for discrepancies.
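Step 6 might look like this in code. The per-precinct tallies below are fabricated stand-ins; in a real audit the manual figures come from hand-counting the physical ballots:

```python
import random

V = 500      # discrepancy threshold from step 3
S = 749      # sample size from step 5 (illustrative)

# Fabricated machine and manual tallies for one contest, keyed by precinct id.
machine = {pid: 1_000 for pid in range(75_471)}
manual = dict(machine)
manual[42] = 400   # plant one "bad" precinct: a 600-vote discrepancy

# Uniform random draw without replacement: every precinct equally likely.
sample = random.sample(sorted(machine), k=S)

# Flag sampled precincts whose machine and manual counts differ by V or more.
flagged = [pid for pid in sample if abs(machine[pid] - manual[pid]) >= V]
if flagged:
    print("audit inconclusive: discrepancy of V or more found in", flagged)
else:
    print("no outcome-changing cheating, at the 95% confidence level")
```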

7. There are two possibilities, and at this point, language becomes important. So, note carefully the words and phrases I use.

The first possibility is that we find no precinct with a discrepancy as large as or larger than V. None. Then we can conclude (note the language now!) with 95% confidence that there was no cheating significant enough to change the outcome of the election. In the parlance of statistics, we have rejected the null hypothesis (that the cheating changed the outcome of the election).

The second possibility is that we find at least one precinct with a discrepancy as large as or larger than V. In this case, the audit results are inconclusive. (Sorry if that sounds a little counter-intuitive, but that’s statistics.) Then, our conclusion will be (note the language!) that we cannot confidently assert (at the 95% confidence level, if you insist on being quantitative) that there was no cheating significant enough to change the election outcome.

In other words: we had started with the hypothesis that the cheating was significant enough to change the election outcome. If we find no bad precinct in the audit, as described above, then we can confidently conclude that the hypothesis was false, the winner is the true winner, and s/he can be proclaimed. But if we find at least one bad precinct, we cannot confidently conclude that significant cheating occurred.

8. To confidently conclude that the cheating was significant enough to change the outcome, we need a different approach. More on this later.

Despite Obama’s victory, problems with electronic voting machines should not be ignored

With Obama’s landslide victory over McCain in the 2008 U.S. presidential elections, I hope the problems of electronic voting will not be buried under the euphoria. U.S. media had been filled with all kinds of problems involving voting machines. These problems clearly indicated a trend of errors favoring McCain. There were so many reports in so many states that there seemed to be a machinery of cheating in place to make sure McCain would win.

Search the Web for “electronic voting machines in 2008 U.S. elections” and you will get these reports. Note that the search term given is completely neutral and does not include leading words like problem, error, failure and so forth. Yet, the bulk of the reports on the Internet are about problems associated with voting machines.

If we summarize the 2008 U.S. election experience from the perspective of clean and honest elections, this is how I’d put it: the threat of cheating came from those who controlled the electronic voting machines, and it was the massive turnout, the landslide for Obama, and the vigilance of U.S. election integrity activists which stopped the cheats from succeeding.

We were in a similar situation exactly ten years ago, in 1998, when the landslide victory of Joseph Estrada prevented any cheating effort by the administration party Lakas-NUCD although there were clear indications that the machinery to do so was in place.

We were not so lucky in 2004, when cheating was so rampant and brazen that President Gloria Macapagal-Arroyo herself was caught on tape micro-managing it. Yet, the whole system, including the business community, sections of the Church and even citizens’ watchdogs, colluded to cover up the cheating, probably because they thought “anyone but FPJ” would have been better.

I sure hope Philippine election authorities will get the correct lesson out of the U.S. 2008 experience.

Electronic voting, electronic cheating?

When I was awarded a six-week research fellowship by the University of Oxford’s Internet Institute, I chose to focus on electronic voting. (The term more commonly used in the Philippines is “automated elections”.) My research confirmed my initial suspicion that electronic voting and counting machines bring their own set of troubles. I realized that the COMELEC, as well as the media and the public, should therefore take extra steps to ensure the integrity of automated elections.

One of the things I did was review the experiences of countries that had earlier automated their elections. And I found well-documented cases of problems, errors and failures (download: Automated elections: voting machines have made mistakes too).

These cases included: uninitialized machines, which made ballot stuffing possible; votes not counted or lost; candidates’ votes reversed; contests not counted; ballots not counted; the wrong winner declared; voting allowed more than once; vote totals that exceeded the number of registered voters; negative votes; unauthorized software replacement; and other problems.

I traced these troubles to deep-seated causes that are inherent in complex technologies, such as: software bugs, which are always present even in high-quality software; hardware problems such as miscalibration; environmental stresses that may worsen hardware problems; poor or flawed design; human errors; and malicious tampering. Since these factors are inherent in complex technologies, we can expect the electronic machine troubles to persist.

In my research, I also found out that insoluble problems associated with direct-recording electronic (DRE) voting machines have already led to their phase-out in some states of the U.S.

I also compiled typical costs for DREs and optical scanners (download: The cost of automating elections), and found that DRE technology was much more expensive to implement than optical scanning. (However, because an increasing number of states are junking DREs, their prices are expected to go down as they are dumped into the Third World.)

Halalang Marangal (HALAL), an election monitoring group that I work with, has already submitted two specific recommendations to the COMELEC as a result of my Oxford study:

1. Use double-entry accounting methods in election tabulation (download: Double-entry accounting in election tallies) to minimize the clerical errors that plague the COMELEC’s current single-entry tabulation system; and

2. Conduct a transparent post-election audit of machine results (download: Post-election audits using statistical sampling), by manually counting ballots from a random sample of precincts to confirm if the electronic voting machines are giving us correct results.

Given the reported problems in the August 2008 ARMM elections, which seem to confirm these troubles with automated elections and voting machines, I again strongly urge the COMELEC to heed our warnings and suggestions.

Oxford Visit

I am currently at the Oxford Internet Institute (OII) in the UK doing research on election modernization (including automation). Hopefully, my research can help the Philippine government in making the right decisions as it tries to automate the August 2008 regional elections in Muslim Mindanao and the May 2010 presidential elections.

I made my first presentation last Wednesday, Apr. 23, before an audience of around 15 research fellows, a computer scientist and PhD students. I thought the reactions to my presentation, which was about the use of double-entry accounting for election tallies, were positive. More later about this.

An interesting experience I went through for this trip was getting a room to stay in. Because I was sponsored by the OII, they were going to pay for my board and lodging expenses, but I had to find a room myself. Since I got my visa only on the same day I was scheduled to leave (I got the call to pick it up the day before), everything was a mad rush on the day of my departure. I did manage to send an email the day before and reserve a room at the Exeter College Lodging House; I was relieved to get a confirmation, though only for four days.

My first four days at Oxford passed very quickly indeed, what with the presentation I was scheduled to make on the third day. As soon as the presentation was over, I started making calls (and sending emails) about rooms for rent (to let, as they say here). I didn’t realize I was very lucky to get a room for those first four days — most rooms within my budget were taken quickly and when I identified a prospect, I had to find out where it was, how far, did a bus go there, and I had to make a visit of course. I was again lucky to find two prospects, one in Rose Hill (about 360 PST per month) and a nearer one off Iffley Road at 450 PST. I visited the nearer place first, but got a feeling from one tenant that I was unwelcome because two of us would be staying in the double room (i.e., a room with a double bed) while they were only one per room. The Rose Hill one was friendlier (after an initial ‘Go away!’ when a tenant thought I was a salesman!). So, I made arrangements with the landlord (again, lots of back and forth calls, because I didn’t have the cash and the check was to be paid by OII). But it all worked out in the end.

On Monday, Apr. 28, I will move into the room I’ll be using for the rest of my stay. My wife Flora joins me May 18.

Good start.