## Rigorous LCB for the expectation of a positive r.v.

### October 27, 2008

A paper of mine was published in the open-access Electronic Journal of Statistics. It proposes a method of constructing lower confidence bounds for the expectation of a positive random variable that are guaranteed to have the nominal coverage probability for any distribution, as long as the sample points are i.i.d. draws from a distribution over the non-negative reals.

In the paper, the method is applied to analyze (a version of) the data of the second Lancet study of mortality in post-invasion Iraq.

## Dishwashing: man vs. machine

### September 24, 2008

When searching online for information comparing manual dishwashing to dishwashing machines, a University of Bonn study is the most prominent piece of empirical research that shows up (e.g., 1, 2, 3, 4). This study is usually interpreted as showing that dishwashers are more resource-efficient than hand washing – using less work time, less energy and less water to wash the same amount of dishes.

Some commenters in the Treehugger post linked above showed healthy skepticism of this all-too-convenient claim. Fortunately reports from the University of Bonn study are available online (1, 2, 3) and the researchers were kind enough to include some data in those reports, making it possible to examine the results rather than rely on media reports alone. I thus decided to have a methodical look at the study – this post presents my conclusions on this matter.

### Abstract

*Multiple weaknesses in the experimental setup make the interpretation of the study difficult. The data analysis carried out by the researchers seems tendentious. Claims that the study shows that using a dishwashing machine saves substantial amounts of energy, water and time as compared to hand washing are highly dubious. According to the study’s own findings, the most efficient handwashers used far less energy (actually, none, since these washers used no hot water) and about the same amount of water as the most efficient machines. Using no hot water had no negative impact on the cleanliness of the washed dishes.*


## Effective sample size

### July 16, 2008

The notion of “effective sample size” is useful when comparing the information gathered using one sampling method to the information gathered (or that would be gathered) with a different – reference – sampling method applied to the same population. Given a known level of uncertainty of an estimate – the one achieved with the sampling method and sample size actually used – the “effective sample size” is the size of the sample that would be needed to achieve the same level of uncertainty if the reference sampling method were used instead.
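As a rough sketch (the function name and the design-effect example are mine, not from the post), the definition can be coded for the common case where the reference method is simple random sampling, whose estimator variance at size n is the per-observation variance divided by n:

```python
def effective_sample_size(achieved_variance, reference_variance_per_unit):
    # Size n at which the reference (simple random sampling) estimator,
    # with variance reference_variance_per_unit / n, matches the variance
    # actually achieved by the sampling method under study.
    return reference_variance_per_unit / achieved_variance

# Example: a cluster sample of 1,000 people whose estimator variance is twice
# what SRS would give at n = 1,000 (a design effect of 2) carries the
# information of only ~500 SRS observations.
per_unit = 0.25                  # variance of one Bernoulli(0.5) observation
achieved = 2 * per_unit / 1000   # variance the cluster design actually achieved
print(round(effective_sample_size(achieved, per_unit)))  # 500
```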

## The MLE conjecture, the IMS bulletin and Science

### July 8, 2008

The latest issue of the IMS bulletin contains a letter in which a reader, Anirban DasGupta from Purdue University, lays out his thoughts regarding Ning-Zhong Shi’s MLE conjecture. Evidently, DasGupta’s comments were considered of higher relevance than my own letter to the bulletin’s editors regarding the conjecture, a letter which described the same counter-example to the conjecture that appears in the post linked above.

The IMS Bulletin is usually not a technical publication: it typically carries announcements about upcoming conferences, obituaries, stories about award winners, job ads and columns.

The latest issue of the IMS Bulletin, however, has an unusual item on page 4 in which Ning-Zhong Shi of the School of Mathematics and Statistics, Northeast Normal University, P.R. China lays out “A conjecture on Maximum Likelihood Estimation”. Shi conjectures that the MLE has the finite-sample property that its expected squared error decreases monotonically in the number of samples *n*, i.e., MSE(θ̂_{n+1}) ≤ MSE(θ̂_{n}), where θ̂_{n} is the MLE calculated from *n* i.i.d. samples, and MSE(θ̂_{n}) = E[(θ̂_{n} − θ)²].

This is a rather bold conjecture, since very little is established regarding finite-sample properties of MLEs. Shi notes that the conjecture is true in the special case in which the MLE is the mean of a sample of i.i.d. variables – such as when the variables are normal or Bernoulli variables parameterized by the unknown mean. It seems that despite applying the qualifier “under some regularity conditions”, Shi expects the result to hold in a much wider set of cases.
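The special case Shi cites can be checked directly. Here is a small sketch (mine, not Shi’s) verifying the monotonicity analytically for Bernoulli variables, where the MLE of the mean p is the sample mean and MSE(θ̂_n) = p(1 − p)/n:

```python
def bernoulli_mle_mse(p, n):
    # Exact MSE of the sample mean (the MLE of p) from n i.i.d. Bernoulli(p) draws.
    return p * (1 - p) / n

p = 0.3
mses = [bernoulli_mle_mse(p, n) for n in range(1, 51)]
# MSE(θ̂_{n+1}) <= MSE(θ̂_n) for every n -- in fact strictly decreasing here.
assert all(later < earlier for earlier, later in zip(mses, mses[1:]))
```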

He is, of course, wrong.

## IMS Bulletin on refereeing

### March 24, 2008

The March 2008 IMS (Institute of Mathematical Statistics) Bulletin issue has a special section discussing refereeing. The bulletin editor, Xuming He, introduces the matter on the front page of the issue while four present and past editors of statistics journals – John Marden, Michael Stein, Xiao-Li Meng and Rick Durrett – present their ideas about refereeing on the inside pages.

The discussion takes a predictable path – the writers describe and beseech what they perceive as good behavior on the part of referees (mostly focusing on promptness). Indeed, the choice of writers – established figures in the field – made it unlikely a priori that a radical examination of the matter would be undertaken. The role of refereeing as gate-keeping is never questioned, and the question of what the objective of refereeing is goes unraised by most writers (Marden is the exception here, see below). With the question of the objective absent, the fundamental questions cannot be addressed – does refereeing serve the public (the public of researchers and the public at large)? Could there be publication selection systems superior to refereeing? Instead, refereeing, essentially in its present form and function, is presented as an immutable natural phenomenon.

## IFHS – “Violent deaths”

### January 27, 2008

The main discrepancy between the findings of the IFHS paper and those of Burnham et al. is not in total excess deaths but in the specific category “violent deaths”. It is therefore of interest to examine whether the classification methods used in those papers to assign deaths to the “violent” category are identical, and whether any differences in the classifications could account for some of the different findings. I notice two points on which the papers’ classification methodologies differ. One is that Burnham et al. examined death certificates, while IFHS did not. The second is that they use different categories for injuries: Burnham et al. use two “accident” categories, one included in the “non-violent” section and the other in the “violent” section, whereas IFHS has no “violent accident” category – its two injury categories, “road accidents” and “unintentional injuries”, both count within the “non-violent” classification.

## IFHS – Accounting for under-reporting

### January 23, 2008

##### Justifying the factor used to account for under-reporting of deaths

The IFHS sample is a very low mortality group. For the pre-invasion period of about 1.25 years, the group, which contains 61,636 individuals, experienced 204 deaths. For the post-invasion period (about 3.25 years), the group experienced 1,121 deaths. These translate to 2.65 deaths per 1000 person-years and 5.6 deaths per 1000 person-years, respectively.
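The crude rates above can be reproduced directly (a verification sketch, not code from the paper):

```python
# Crude mortality rates implied by the IFHS sample counts quoted above.
persons = 61_636
pre_deaths, pre_years = 204, 1.25
post_deaths, post_years = 1_121, 3.25

pre_rate = pre_deaths / (persons * pre_years) * 1000    # deaths per 1000 person-years
post_rate = post_deaths / (persons * post_years) * 1000

print(round(pre_rate, 2), round(post_rate, 1))  # 2.65 5.6
```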

The IFHS sample was not designed to be equal-weight to begin with (i.e., some people had a higher chance of being selected than others), and biases certainly increased after some clusters were dropped from the sample because they were considered too dangerous to access. Re-weighting (in some way – it is not clear whether the adjustment procedure for total mortality is the same as that used for violent deaths) by the IFHS authors yields a significantly higher mortality rate for the pre-invasion period: 3.17 deaths per 1000 person-years (95% CI 2.70–3.75), and a somewhat higher rate for the post-invasion period: 6.01 deaths per 1000 person-years (95% CI 5.49–6.60).

The low mortality rate is probably due, to a certain extent, to a young population – I was unable to find the breakdown of the sample by age (neither in the IFHS paper nor in the report). The mortality rate is much lower, however, than that of the sample of Burnham et al. – not only for the post-invasion period but also for the pre-invasion period. Burnham et al. estimated the pre-invasion mortality as 5.5 deaths per 1000 person-years (95% CI 4.3–7.1). Thus the fact that the IFHS authors find it necessary to adjust their estimate upward seems justified.

The problem, however, is to find the appropriate adjustment factor, and to account for any uncertainty in that factor. The authors mention (p. 486), with reservation, the figure of 62% as the proportion of deaths being reported (i.e., 38% go unreported). They also mention (p. 487) modeling the proportion going unreported as a normal variable with mean 35% and “95% uncertainty range, 20 to 50”, which I take to mean that the standard deviation is (50 − 35) / 1.96 ≈ 7.7%. The ratio between the Burnham et al. and IFHS point estimates for the pre-war mortality rate is 5.5 / 3.17 = 1.74, which would correspond to 42% of the deaths going unreported.
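As a quick check of the arithmetic above (my reconstruction of the calculation, not the IFHS authors’ code):

```python
# Standard deviation implied by the stated 95% uncertainty range (20% to 50%)
# around a mean of 35%, assuming a normal model.
sd = (50 - 35) / 1.96

# Ratio of the Burnham et al. and IFHS pre-invasion point estimates, and the
# fraction of deaths going unreported that this ratio would imply.
ratio = 5.5 / 3.17
unreported = 1 - 1 / ratio

print(round(sd, 2), round(ratio, 2), round(unreported * 100))  # 7.65 1.74 42
```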

## IFHS – Effective sample size

### January 20, 2008

*An explanation of the concept of effective sample size is here. This post applies this concept to a particular study.*

Extrapolating from a sample taken in a reference area containing a relatively small number of deaths in order to estimate the number of deaths in the area containing the bulk of the deaths in Iraq is problematic even if we assume that the extrapolation factor is known precisely.

The reference area is the “three provinces that contributed more than 4% each to the total number of deaths reported for the period from March 2003 through June 2006”. By the design of the sample, there are no more than 180 clusters in these provinces (each province was sampled with 54 clusters, except for Nineveh, which was sampled with 72 clusters). Table 3 of the supplementary material of the paper shows the sample sizes in each governorate. Taking the largest 3 samples in the “High mortality governorates” section (Nineveh, Babylon and Basra) gives an upper bound on the total sample in the reference area of 14,891 people. Multiplying that bound by the mortality rate in the area (Table 2 of the supplementary material) – 0.83 deaths per 1000 person-years – gives an upper bound for the number of violent deaths in the sample in the reference area of 14,891 × 0.83 / 1000 × 3.33 ≈ 41.2.
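The bound can be reproduced from the rounded figures quoted above (a sketch; using the published rounded rate rather than unrounded inputs):

```python
# Upper bound on violent deaths in the reference-area sample.
sample_upper_bound = 14_891   # people in the three largest high-mortality samples
rate_per_1000_py = 0.83       # violent deaths per 1000 person-years (Table 2)
years = 3.33                  # length of the recall period

deaths = sample_upper_bound * rate_per_1000_py / 1000 * years
print(round(deaths, 1))  # 41.2
```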

That is, over 70% of the deaths in the IFHS estimate – those in Baghdad, Anbar and the three reference governorates – are based on a sample containing about 40 deaths. The estimate of the number of deaths in those areas is generated simply by multiplying those 40 deaths by a factor and thus any uncertainty in the number 40 is directly translated into uncertainty in the estimator of the total number of deaths in the areas.

Reference to the large total number of clusters in the IFHS (almost 1000 clusters visited) is therefore misleading. The determining factor of the estimate is the data collected in a much smaller number of clusters – generating a small number of recorded deaths. The uncertainty in the estimate is correspondingly large.
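As a rough order-of-magnitude check (my assumption, not a calculation from the post): treating the roughly 40 recorded deaths as a Poisson-like count, the relative standard error of that count alone is about 1/√40, i.e. around 16%, before any extrapolation uncertainty is added on top:

```python
import math

recorded_deaths = 40
relative_se = 1 / math.sqrt(recorded_deaths)  # relative SE of a Poisson-like count
print(round(relative_se, 3))  # 0.158
```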