1 The logic of the experiment

Randomization, the allocation of subjects to experimental conditions via a random procedure, was introduced by the eminent statistician R. A. Fisher [1]. Arguably, it has since become the most important statistical technique. In particular, statistical experiments are defined by the use of randomization [2, 3], and many applied fields, such as evidence-based medicine, draw a basic distinction between randomized and non-randomized evidence.

In order to explain randomization’s eminent role, one may refer to the logic of the experiment, largely based on J. S. Mill’s method of difference [4]: If one compares two groups of subjects (Treatment T versus Control C, say) and observes a salient contrast in the end (e.g. a marked difference in the groups’ mean outcomes), that difference must be due to the experimental manipulation—IF the groups were equivalent at the very beginning of the experiment.

In other words, since the difference between treatment and control (i.e. the experimental manipulation) is the only perceivable reason that can explain the variation in the observations, it must be the cause of the observed effect (the difference in the end). The situation is quite different, however, if the two groups already differed substantially at the beginning. Then (see Table 1 below) there are two possible explanations of an effect: the experimental manipulation, or the initial difference between the groups.

2 Comparability

Thus, for the logic of the experiment, it is of paramount importance to ensure equivalence of the groups at the beginning of the experiment. The groups, or even the individuals involved, must not be systematically different; one has to compare like with like. Alas, in the social sciences exact equality of units, e.g. human individuals, cannot be maintained. Therefore one must settle for comparable subjects or groups (T ≈ C).

2.1 Defining comparability

In practice, it is straightforward to define comparability with respect to the features or properties of the experimental units involved. In a typical experimental setup, statistical units (e.g. persons) are represented by their corresponding vectors of attributes (properties, variables) such as gender, body height, age, etc.

If the units are almost equal in as many properties as possible, they should be comparable, i.e., the remaining differences shouldn’t alter the experimental outcome substantially. However, since, in general, vectors have to be compared, there is not a single measure of similarity. Rather, there are quite a lot of measures available, depending on the kind of data at hand. An easily accessible and rather comprehensive overview may be found here: reference.wolfram.com/mathematica/guide/DistanceAndSimilarityMeasures.html

As an example, suppose a unit i is represented by a binary vector ai = (ai1, …, aim). The Hamming distance d(⋅,⋅) between two such vectors is the number of positions at which the corresponding symbols are different. In other words, it is the minimum number of substitutions required to change one vector into the other. Let a1 = (0,0,1,0), a2 = (1,1,1,0), and a3 = (1,1,1,1). Therefore d(a1,a2) = 2, d(a1,a3) = 3, d(a2,a3) = 1, and d(ai,ai) = 0. Having thus calculated a reasonable number for the “closeness” of two experimental units, one next has to consider what level of deviance from perfect equality may be tolerable.
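For concreteness, here is a minimal sketch (Python) of the Hamming-distance computation just described; the vectors a1, a2 and a3 are the ones from the example above.

```python
# Hamming distance between binary attribute vectors (a sketch of the example above).
def hamming(a, b):
    """Number of positions at which the two vectors differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

a1 = (0, 0, 1, 0)
a2 = (1, 1, 1, 0)
a3 = (1, 1, 1, 1)

print(hamming(a1, a2), hamming(a1, a3), hamming(a2, a3), hamming(a1, a1))  # 2 3 1 0
```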

Due to this, coping with similarities is a tricky business. Typically many properties (covariates) are involved and conscious (subjective) judgement seems to be inevitable. An even more serious question concerns the fact that relevant factors may not have been recorded or might be totally unknown. In the worst case, similarity with respect to some known factors has been checked, but an unnoticed nuisance variable is responsible for the difference between the outcome in the two groups.

Moreover, comparability depends on the phenomenon studied. A clearly visible difference, such as gender, is likely to be important with respect to life expectancy, and can influence some physiological and psychological variables such as height or social behaviour, but it is independent of skin color or blood type. In other words, experimental units do not need to be twins in any respect; it suffices that they be similar with respect to the outcome variable under study.

Given a unique sample it is easy to think about a reference set of other samples that are alike in all relevant respects to the one observed. However, even Fisher could not give these words a precise formal meaning [5]. Thus De Finetti [6] proposed exchangeability, i.e. “instead of judging whether two groups are similar, the investigator is instructed to imagine a hypothetical exchange of the two groups … and then judge whether the observed data under the swap would be distinguishable from the actual data” (see [7], p. 196). Barnard [8] gives some history on this idea and suggests the term permutability, “which conveys the idea of replacing one thing by another similar thing.” Nowadays, epidemiologists say that “the effect of treatment is unconfounded if the treated and untreated groups resemble each other in all relevant features” [7], p. 196.

2.2 Experimental techniques to achieve comparability

There are a number of strategies to achieve comparability. Starting with the experimental units, it is straightforward to match similar individuals, i.e., to construct pairs of individuals that are alike in many (most) respects. Looking at the group level (T and C), another straightforward strategy is to balance all relevant variables when assigning units to groups. Many approaches of this kind are discussed in [9], minimization being the most prominent among them. Treasure and MacRae [10] explain:

In our study of aspirin versus placebo … we chose age, sex, operating surgeon, number of coronary arteries affected, and left ventricular function. But in trials in other diseases those chosen might be tumour type, disease stage, joint mobility, pain score, or social class.

At the point when it is decided that a patient is definitely to enter a trial, these factors are listed. The treatment allocation is then made, not purely by chance, but by determining in which group inclusion of the patient would minimise any differences in these factors. Thus, if group A has a higher average age and a disproportionate number of smokers, other things being equal, the next elderly smoker is likely to be allocated to group B. The allocation may rely on minimisation alone, or still involve chance but ‘with the dice loaded’ in favour of the allocation which minimises the differences.

However, apart from being cumbersome and relying on the experimenter’s expertise (in particular in choosing and weighing the factors), these strategies are always open to the criticism that unknown nuisance variables may have had a substantial impact on the result. Therefore Fisher [1], pp. 18–20, advised strongly against treating every conceivable factor explicitly. Instead, he taught that “the random choice of the objects to be treated in different ways [guarantees] the validity of the test of significance … against corruption by the causes of disturbance which have not been eliminated.” More explicitly, Berger [11], pp. 9–10, explains:

The idea of randomization is to overlay a sequence of units (subjects, or patients) onto a sequence of treatment conditions. If neither sequence can influence the other, then there should be no bias in the assignment of the treatments, and the comparison groups should be comparable.

2.3 Randomization vs. comparability

Historically, Fisher’s idea proved to be a great success [12]. Randomized controlled trials (RCTs), as well as the randomized evidence they produce, became the gold standard in a number of fields, and watchwords highlighting randomization’s leading part spread, e.g. “randomization controls for all possible confounders, known and unknown.”

Nevertheless, there have always been reservations about randomization. If the basic logic of the experiment comes first, randomization is a lower-ranking tool, employed in the service of comparability. Moreover, it would be rather problematic if randomization failed to reliably yield similar groups, since non-comparable groups offer a straightforward alternative explanation of an observed effect, undermining experimental validity. To this end, Greenland [13], see Table 2, came up with “the smallest possible controlled trial”, illustrating that randomization does not prevent confounding:

That is, he flips a coin once in order to assign two patients to T and C, respectively: If heads, the first patient is assigned to T, and the second to C; if tails, the first patient is assigned to C, and the second to T. Suppose a difference between the two patients’ outcomes is observed; what is the reason for this effect? Due to the experimental design, there are two alternatives: either the treatment condition differed from the control condition, or patient P1 was not comparable to patient P2. However, as each patient is only observed under either the treatment or the control (the left-hand side or the right-hand side of the above table), one cannot distinguish between the patient’s and the treatment’s impact on the observed result. Therefore Greenland concludes that “no matter what the outcome of randomization, the study will be completely confounded.” This point has been made on many occasions; for similar remarks see [10, 14–22]. In total generality, Berger [11], p. 9, states:

While it is certainly true that randomization is used for the purpose of ensuring comparability between or among comparison groups, … it is categorically not true that this goal is achieved.

Suppose the patients are perfect twins with the exception of a single difference. Then Greenland’s example shows that randomization cannot even balance a single nuisance factor. To remedy the defect, it is straightforward to increase n. However, no quantitative advice is given here or elsewhere. Thus it should be worthwhile studying a number of explicit and straightforward models, quantifying the effects of randomization. Moreover, quite early, statisticians—in particular of the Bayesian persuasion—put forward several rather diverse arguments against randomization [23–29]. At this point, it is not necessary to delve into delicate philosophical matters or the rather violent Bayesian-Frequentist debate (however, see Section 5), since fairly elementary probabilistic arguments suffice to demonstrate that the above criticism hits its target: By its very nature a random mechanism provokes fluctuations in the composition of T and C, making these groups (rather often) non-comparable.

The subsequent reasoning has the advantage of being straightforward, mathematical, and not primarily “foundational”. Its flavour is Bayesian in the sense that we are comparing the actual groups produced by randomization which is the “posterior view” preferred by that school. At the same time its flavour is Frequentist, since we are focusing on the properties of a certain random procedure which is the “design view” preferred by this school. There are not just two, but (at least) three, competing statistical philosophies, and “in many ways the Bayesian and frequentist philosophies stand at opposite poles from each other, with Fisher’s ideas being somewhat of a compromise” [30]. Since randomization is a Fisherian proposal, a neutral quantitative analysis of his approach seems to be appropriate, acceptable to all schools, and, in a sense, long overdue. To a certain degree, philosophy is a matter of opinion. However, it is difficult to argue with mathematical facts of actual performance (see [31], p. xxii).

3 Random confounding

The overall result of the following calculations is thus [32]:

Despite randomization, imbalance in prognostic factors as a result of chance (chance imbalance) may still arise, and with small to moderate sample sizes such imbalance may be substantial.

3.1 Dichotomous factors

Suppose there is a nuisance factor X taking the value 1 if present and 0 if absent. One may think of X as a genetic aberration, a medical condition, a psychological disposition or a social habit. Assume that the factor occurs with probability p in a certain person (independent of anything else). Given this, 2n persons are randomized into two groups of equal size by a chance mechanism independent of X.

Let S1 and S2 count the number of persons with the trait in the first and the second group respectively. S1 and S2 are independent random variables, each having a binomial distribution with parameters n and p. A natural way to measure the extent of imbalance between the groups is D = S1 − S2. Obviously, ED = 0 and σ²(D) = σ²(S1) + σ²(S2) = 2np(1 − p). Iff D = 0, the two groups are perfectly balanced with respect to factor X. In the worst case ∣D∣ = n, that is, in one group all units possess the characteristic, whereas it is completely absent in the other. For fixed n, let the two groups be comparable if ∣D∣ ≤ n/i with some i ∈ {1, …, n}. Iff i = 1, the groups will always be considered comparable. However, the larger i, the smaller the number of cases we classify as comparable. In general, n/i defines a proportion of the range of ∣D∣ that seems to be acceptable. Since n/i is a positive number, and S1 = S2 ⇔ ∣D∣ = 0, the set of comparable groups is never empty.

Given some constant i (< n), the value n/i grows at a linear rate in n, whereas the standard deviation σ(D) = √(2np(1 − p)) grows much more slowly. Due to continuity, there is a single point n(i, k) where the line n/i intersects with k times the standard deviation of D. Beyond this point, i.e. for all n ≥ n(i, k), the acceptable range [0, n/i] covers at least k standard deviations of D, so that most realizations of ∣D∣ will lie within it. Straightforward algebra gives n(i, k) = 2k²i²p(1 − p).

Examples.

A typical choice could be i = 10 and k = 3, which specifies the requirement that most samples be located within a rather tight acceptable range. In this case, one has to consider the functions n/10 and 3√(2np(1 − p)). These functions of n are shown in the following figure (Fig 1):

Thus, depending on p, a certain number of subjects is needed per group (and twice this number altogether). Relaxing the criterion of comparability (i.e. choosing a smaller value of i) decreases the number of subjects necessary, and the same happens if one decreases the number of standard deviations k; the sketch below tabulates n(i, k) for some illustrative values.
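The following sketch (Python) evaluates the formula n(i, k) = 2k²i²p(1 − p) derived above; the particular values of p, i and k are illustrative choices, not the ones tabulated in the original article.

```python
import math

def n_required(p, i=10, k=3):
    """Smallest n per group such that the acceptable range [0, n/i] covers
    k standard deviations of D = S1 - S2, i.e. n(i, k) = 2 k^2 i^2 p (1 - p)."""
    return math.ceil(2 * k**2 * i**2 * p * (1 - p))

for p in (0.5, 0.3, 0.1, 0.05, 0.01):
    print(f"p = {p:<4}  i = 10, k = 3  ->  n per group: {n_required(p)}")

# Relaxing the criterion (smaller i) or lowering k reduces the requirement,
# e.g. for p = 0.5: i = 5, k = 3 needs 113 per group; i = 10, k = 2 needs 200.
print(n_required(0.5, i=5, k=3), n_required(0.5, i=10, k=2))
```

For i = 10, k = 3 and p = 1/2, for example, this already amounts to 450 subjects per group.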

This shows that randomization works, if the number of subjects ranges in the hundreds or if the probability p is rather low. (By symmetry, the same conclusion holds if p is close to one.) Otherwise there is hardly any guarantee that the two groups will be comparable. Rather, they will differ considerably due to random fluctuations.

The distribution of D is well known ([33], pp. 142–143). For d = −n, …, n, it is given by the convolution P(D = d) = Σj C(n, j) C(n, j − d) p^(2j−d) (1 − p)^(2n−2j+d), the sum running over max(0, d) ≤ j ≤ min(n, n + d).

Therefore, it is also possible to compute the probability q = q(i, n, p) that two groups, constructed by randomization, will be comparable (see the sketch below). If i = 5, i.e., if one fifth of the range of ∣D∣ is judged to be comparable, it turns out to be rather difficult to control a factor that has a probability of about 1/2 in the population. Moreover, even if the probability of occurrence is only about 1/10, one needs more than 25 people per group to have reasonable confidence that the factor has not produced a substantial imbalance.
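As a cross-check, here is a minimal sketch (Python) that evaluates q(i, n, p) directly from the convolution formula above; the values of n and p are again illustrative.

```python
from math import comb

def q(i, n, p):
    """P(|S1 - S2| <= n/i) for independent S1, S2 ~ Binomial(n, p), i.e. the
    probability that randomization yields comparable groups."""
    pmf = [comb(n, s) * p**s * (1 - p)**(n - s) for s in range(n + 1)]
    return sum(pmf[s1] * pmf[s2]
               for s1 in range(n + 1) for s2 in range(n + 1)
               if abs(s1 - s2) <= n / i)

for n in (10, 25, 50, 100, 250):
    print(f"n = {n:<4}  q(5, n, 0.5) = {q(5, n, 0.5):.3f}   q(5, n, 0.1) = {q(5, n, 0.1):.3f}")
```

With m independent factors, the probability that all of them stay balanced is the product q(i, n, p1) · … · q(i, n, pm), so the figures discussed in the next subsection follow from the same function.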

Several factors.

The situation becomes worse if one takes more than one nuisance factor into account. Given m independent binary factors, each of them occurring with probability p, the probability that the groups will be balanced with respect to all nuisance variables is qᵐ. Numerically, the above results yield the following:

Accordingly, given m independent binary factors, each occurring with probability pj (and corresponding qj = q(i, n, pj)), the probabilities closest to 1/2 will dominate 1 − q1 · … · qm, which is the probability that the two groups are not comparable due to an imbalance in at least one variable. In a typical study with 2n = 100 persons, for example, it does not matter whether there are one, two, five or even ten factors, if each of them occurs with probability 1/100. However, if some of the factors are rather common (e.g. 1/5 < pj < 4/5), this changes considerably. In a smaller study with fewer than 2n = 50 participants, a few such factors suffice to increase the probability that the groups constructed by randomization won’t be comparable to 50%. With only a few units per group, one can be reasonably sure that some undetected, but rather common, nuisance factor(s) will make the groups non-comparable. Altogether, our conclusion based on an explicit quantitative analysis coincides with the qualitative argument given by [23], p. 91 (my emphasis):

Suppose we had, say, thirty fur-bearing animals of which some were junior and some senior, some black and some brown, some fat and some thin, some of one variety and some of another, some born wild and some in captivity, some sluggish and some energetic, and some long-haired and some short-haired. It might be hard to base a convincing assay of a pelt-conditioning vitamin on an experiment with these animals, for every subset of fifteen might well contain nearly all of the animals from one side or another of one of the important dichotomies …

Thus contrary to what I think I was taught, and certainly used to believe, it does not seem possible to base a meaningful experiment on a small heterogenous group.

Interactions.

The situation deteriorates considerably if there are interactions between the variables that may also yield convincing alternative explanations for an observed effect. It is possible that all factors considered in isolation are reasonably balanced (which is often checked in practice), but that a certain combination of them affects the observed treatment effect. For the purpose of illustration (see Table 3 below), suppose four persons (being young or old, and male or female) are investigated:

Although gender and (dichotomized) age are perfectly balanced between T and C, the young woman has been allocated to the first group. Therefore a property of young women (e.g. pregnancy) may serve as an explanation for an observed difference between the groups.

Given m factors, there are m(m − 1)/2 possible interactions between just two of the factors, and C(m, ν) = m!/(ν!(m − ν)!) possible interactions between ν of them. Thus, there is a high probability that some considerable imbalance occurs in at least one of these numerous interactions, in small groups in particular. For a striking early numerical study see [34]. Detected or undetected, such imbalances provide excellent alternative explanations of an observed effect.
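A two-line check (Python) of how quickly the number of potential interactions grows, using the binomial coefficient stated above; m = 10 is an arbitrary illustrative choice.

```python
from math import comb

m = 10  # number of binary nuisance factors (illustrative)
print(comb(m, 2))                                 # 45 two-way interactions
print(sum(comb(m, v) for v in range(2, m + 1)))   # 1013 interactions of order >= 2
```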

In the light of this, one can only hope for some ‘benign’ dependence structure among the factors, i.e., a reasonable balance in one factor improving the balance in (some of) the others. Given such a tendency, a larger number of nuisance factors may be controlled, since it suffices to focus on only a few. Independent variables possess a ‘neutral’ dependence structure in that the balance in one factor does not influence the balance in others. Yet, there may be a ‘malign’ dependence structure, such that balancing one factor tends to actuate imbalances in others. We will make this argument more precise in Section 4. However, a concrete example will illustrate the idea: Given a benign dependence structure, catching one cow (balancing one factor) makes it easier to catch others. Therefore it is easy to lead a herd into an enclosure: Grab some of the animals by their horns (balance some of the factors) and the others will follow. However, in the case of a malign dependence structure the same procedure tends to stir up the animals, i.e., the more cows are caught (the more factors are being balanced), the less controllable the remaining herd becomes.

3.2 Ordered random variables

In order to show that our conclusions do not depend on some specific model, let us next consider ordered random variables. To begin with, look at four units with ranks 1 to 4. If they are split into two groups of equal size, such that the best (1) and the worst (4) are in one group, and (2) and (3) are in the other, both groups have the same rank sum and are thus comparable. However, if the best and the second best constitute one group and the third and the fourth the other group, their rank sums (3 versus 7) differ by the maximum amount possible, and they do not seem to be comparable. If the units with ranks 1 and 3 are in the first group and the units with ranks 2 and 4 are in the second one, the difference in rank sums is ∣6 − 4∣ = 2 and it seems to be a matter of personal judgement whether or not one thinks of them as comparable.

Given two groups, each having n members, the total sum of ranks is r = 2n(2n + 1)/2 = n(2n + 1). If, in total analogy to the last section, S1 and S2 are the sums of the ranks in the first and the second group, respectively, S2 = r − S1. Therefore it suffices to consider S1, which is the test statistic of Wilcoxon’s test. Again, a natural way to measure the extent of imbalance between the groups is D = S1 − S2 = 2S1 − r. Like before ED = 0, and because σ²(S1) = n²(2n + 1)/12 we have σ²(D) = 4σ²(S1) = n²(2n + 1)/3.

Moreover, n(n + 1)/2 ≤ Sj ≤ n(3n + 1)/2 (j = 1, 2) yields −n² ≤ D ≤ n². Thus, in this case, n²/i (i ∈ {1, …, n²}) determines a proportion of the range of ∣D∣ that may be used to define comparability. Given a fixed i (< n²), the quantity n²/i is growing at a quadratic rate in n, whereas σ(D) = n√((2n + 1)/3) is growing at a slower pace. Like before, there is a single point n(i, k) where n²/i = kσ(D); straightforward algebra gives n(i, k) ≈ 2i²k²/3. Again, we see that large numbers of observations are needed to ensure comparability.

As before, it is possible to work with the distribution of D explicitly. That is, given i and n, one may calculate the probability q = q(i, n) that two groups, constructed by randomization, are comparable. If ∣D∣ ≤ n²/i is considered comparable, q can be computed exactly, using the function pwilcox() in R; a simulation sketch is given below.
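The exact computation uses pwilcox() in R, as mentioned; the following Monte Carlo sketch (Python) approximates the same quantity by repeatedly randomizing the ranks 1, …, 2n, which is enough to reproduce the qualitative picture.

```python
import random

def q_ranks(i, n, reps=20_000, seed=1):
    """Monte Carlo estimate of P(|D| <= n^2/i), where D = S1 - S2 is the
    difference in rank sums after randomly splitting the ranks 1..2n
    into two groups of size n."""
    rng = random.Random(seed)
    ranks = list(range(1, 2 * n + 1))
    total = n * (2 * n + 1)            # sum of all ranks, r
    hits = 0
    for _ in range(reps):
        rng.shuffle(ranks)
        d = 2 * sum(ranks[:n]) - total # D = 2 * S1 - r
        if abs(d) <= n**2 / i:
            hits += 1
    return hits / reps

for n in (5, 10, 25, 50, 100):
    print(f"n = {n:<4}  estimated q(5, n) = {q_ranks(5, n):.3f}")
```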

These results for ordered random variables are perfectly in line with the conclusions drawn from the binary model. Moreover, the same argument as before shows that the situation becomes (considerably) worse if several factors may influence the final result.

3.3 A continuous model

Finally, we consider a continuous model. Suppose there is just one factor X ∼ N(μ, σ). One may think of X as a normally distributed personal ability, person i having individual ability xi. As before, assume that 2n persons are randomized into two groups of equal size by a chance mechanism independent of the persons’ abilities.

Suppose that also in this model S1 and S2 measure the total amount of ability in the first and the second group respectively. Obviously, S1 and S2 are independent random variables, each having a normal distribution N(nμ, √n σ). A straightforward way to measure the absolute extent of imbalance between the groups is D = S1 − S2 = Σi (X1,i − X2,i). (1) Due to independence, obviously ED = 0 and σ²(D) = 2nσ².

Let the two groups be comparable if ∣D∣ ≤ lσ, i.e., if the difference between the abilities assembled in the two groups does not differ by more than l standard deviations of the ability X in a single unit. The larger l, the more cases are classified as comparable. For every fixed l, lσ is a constant, whereas σ(D) = σ√(2n) is growing slowly. Owing to continuity, there is yet another single point n(l), where lσ = σ(D). Straightforward algebra gives n(l) = l²/2, a small number even for moderate l. In other words, the two groups become non-comparable very quickly. It is almost impossible that two groups of 500 persons each, for example, could be close to one another with respect to total (absolute) ability.

However, one may doubt whether this measure of non-comparability really makes sense. Given two teams with a hundred or more subjects, it does not seem to matter whether the total ability in the first one is within a few standard deviations of the other. Therefore it is reasonable to look at the relative advantage of group 1 with respect to group 2, i.e. Q = D/n. Why divide by n and not by some other function of n? First, due to Eq (1), exactly n comparisons X1,i − X2,i have to be made. Second, Q may be interpreted in a natural way, namely as the difference between the typical (mean) representative of group 1 (treatment) and the typical representative of group 2 (control). A straightforward calculation yields σ²(Q) = σ²(D)/n² = 2σ²/n.

Let the two groups be comparable if ∣Q∣ ≤ lσ. If one wants to be reasonably sure (three standard deviations of Q) that comparability holds, we have 3σ(Q) ≤ lσ, i.e. n ≥ 18/l². Thus, at least the following numbers of subjects are required per group: if one standard deviation is considered a large effect [35], i.e. l = 1, three dozen subjects are needed to ensure that such an effect will not be produced by chance. To avoid a small difference between the groups due to randomization (one quarter of a standard deviation, say), the number of subjects needed goes into the hundreds.

In general, if k standard deviations of Q are desired, we have n ≥ 2k²/l². Thus, for k = 1, 2 and 5, the following numbers of subjects nk are required in each group: nk ≥ 2/l², 8/l² and 50/l², respectively (see the sketch below).
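A minimal sketch (Python) of the requirement n ≥ 2k²/l² just derived; the grid of k and l values is illustrative.

```python
import math

def n_per_group(k, l):
    """Smallest n with k * sigma(Q) <= l * sigma, i.e. n >= 2 k^2 / l^2,
    where Q = D/n is the mean difference between the two groups."""
    return math.ceil(2 * k**2 / l**2)

for k in (1, 2, 3, 5):
    for l in (1.0, 0.5, 0.25):
        print(f"k = {k}, l = {l:<4} ->  n per group: {n_per_group(k, l)}")

# k = 3, l = 1 gives 18 per group (three dozen subjects in total);
# k = 3, l = 0.25 already requires 288 per group.
```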

These are just the results for one factor. As before, the situation deteriorates considerably if one sets out to control several nuisance variables by means of randomization.

3.4 Intermediate conclusions

The above models have deliberately been kept as simple as possible. Their results are straightforward and they agree: If n is small, it is almost impossible to control for a trait that occurs frequently at the individual level, or for a larger number of confounders, via randomization. It is of paramount importance to understand that random fluctuations lead to considerable differences between small or medium-sized groups, making them very often non-comparable, thus undermining the basic logic of experimentation. That is, ‘blind’ randomization does not create equivalent groups, but rather provokes imbalances and subsequent artifacts. Even in larger samples one needs considerable luck to succeed in creating equivalent groups: p close to 0 or 1, a small number of nuisance factors m, or a favourable dependence structure that balances all factors, including their relevant interactions, if only some crucial factors are to be balanced by chance.

4 Unknown factors

Had the trial not used random assignment, had it instead assigned patients one at a time to balance [some] covariates, then the balance might well have been better [for those covariates], but there would be no basis for expecting other unmeasured variables to be similarly balanced ([2], p. 21)

This is a straightforward and popular argument in favour of randomization. Since randomization treats known and unknown factors alike, it is quite an asset that one may thus infer from the observed to the unobserved without further assumptions. However, this argument backfires immediately since, for exactly the same reason, an imbalance in an observed variable cannot be judged as harmless. Quite the contrary: With random assignment there is some basis for expecting other unmeasured variables to be similarly unbalanced. An observed imbalance hints at further undetectable imbalances in unobserved variables.

Moreover, treating known and unknown factors equivalently is cold comfort compared to the considerable amount of imbalance evoked by randomization. Fisher’s favourite method always comes with the cost that it introduces additional variability, whereas a systematic schema at least balances known factors. In subject areas haunted by heterogeneity it seems intuitively right to deliberately work in favour of comparability, and rather odd to introduce further variability.

In order to sharpen these qualitative arguments, let us look at an observed factor X, an unobserved factor Y, and their dependence structure in more detail. Without loss of generality let all functions d be positive in the following. Having constructed two groups of equal size via randomization, suppose dR(X) is the observed difference between the groups with respect to variable X. Using a systematic scheme instead, i.e., distributing the units among T and C in a more balanced way, this may be reduced to dS(X).

The crucial question is how such a manipulation affects dR(Y), the balance between the groups with respect to another variable. Now there are three types of dependence structures:

A benign dependence structure may be characterized by dS(Y) < dR(Y). In other words, the effort of balancing X pays off, since the increased comparability in this variable carries over to Y. For example, given today’s occupational structures with women earning considerably less than men, balancing for gender should also even out differences in income.

If balancing in X has no effect on Y, dS(Y) ≈ dR(Y), no harm is done. For example, balancing for gender should not affect the distribution of blood type in the observed groups, since blood type is independent of gender.

Only in the pathological case when increasing the balance in X has the opposite effect on Y, one may face troubles. As an example, let there be four pairs (x1, y1) = (1, 4); (x2, y2) = (2, 2); (x3, y3) = (3, 1); and (x4, y4) = (4, 3). Putting units 1 and 4 in one group, and units 2 and 3 in another, yields a perfect balance in the first variable, but the worst imbalance possible in the second.
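A short enumeration (Python) of the example above makes the trade-off explicit: the only split that balances x perfectly is the worst possible split for y.

```python
from itertools import combinations

units = {1: (1, 4), 2: (2, 2), 3: (3, 1), 4: (4, 3)}   # unit -> (x, y)

for group1 in combinations(units, 2):
    if 1 not in group1:          # skip mirror-image splits
        continue
    group2 = tuple(u for u in units if u not in group1)
    dx = abs(sum(units[u][0] for u in group1) - sum(units[u][0] for u in group2))
    dy = abs(sum(units[u][1] for u in group1) - sum(units[u][1] for u in group2))
    print(group1, group2, "d(x) =", dx, "d(y) =", dy)
# The split {1, 4} vs {2, 3} yields d(x) = 0 but d(y) = 4, the maximum possible.
```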

However, suppose d(⋅) < c where the constant (threshold) c defines comparability. Then, in the randomized case, the groups are comparable if both dR(X) and dR(Y) are smaller than c. By construction, dS(X) ≤ dR(X) < c, i.e., the systematically composed groups are also comparable with respect to X. Given a malign dependence structure, dS(Y) increases and may exceed dR(Y). Yet dS(Y) < c may still hold, since, in this case, the “safety margin” cdR(Y) may prevent the systematically constructed groups from becoming non-comparable with respect to property Y. In large samples, cdR(⋅) is considerable for both variables. Therefore, in most cases, consciously constructed samples will (still) be comparable. Moreover, the whole argument easily extends to more than two factors.

In a nutshell, endeavouring to balance relevant variables pays off. A conscious balancing schema equates known factors better than chance and may have some positive effect on related, but unknown, variables. If the balancing schema has no effect on an unknown factor, the latter is treated as if randomization were interfering—i.e. in a completely nonsystematic, ‘neutral’ way. Only if there is a malign dependence structure, when systematically balancing some variable yields (considerable) “collateral damage”, might randomization be preferable.

This is where sample size comes in. In realistic situations with many unknown nuisance factors, randomization only works if n is (really) large. Yet if n is large, so are the “safety margins” in the variables, and even an unfortunate dependence structure won’t do any harm. If n is smaller, the above models show that systematic efforts, rather than randomization, may yield comparability. Given a small number of units, both approaches only have a chance of succeeding if there are hardly any unknown nuisance factors, or if there is a benign dependence structure, i.e., if a balance in some variable (no matter how achieved) has a positive effect on others. In particular, if the number of relevant nuisance factors and interactions is small, it pays to isolate and control for a handful of obviously influential variables, which is a crucial ingredient of experimentation in the classical natural sciences. Our overall conclusion may thus be summarized in the following table (Table 4):

5 More principled questions

Since randomization has been a core point of dispute between the major philosophical schools of statistics, it seems necessary and appropriate to address these issues here.

5.1 The Frequentist position

Possibly the most important, some would say outstanding argument in favour of randomization is the view that the major “function of randomization is to generate the sample space and hence provide the basis for estimates of error and tests of significance” [36]. It has been proposed and defended by prominent statisticians, once dominated the field of statistics, and still has a stronghold in certain quarters, in particular medical statistics, where randomized controlled trials have been the gold standard.

In a statistical experiment one controls the random mechanism, thus the experimenter knows the sample space and the distribution in question. This constructed and therefore “valid” framework keeps nuisance variables at bay, and sound reasoning within the paradigm leads to correct results. Someone following this “Frequentist” train of thought could therefore state—and several referees of this contribution have indeed done so—that the above models underline the rather well-known fact that randomization can have difficulties in constructing similar groups (achieving exchangeability/comparability, balancing covariates), in particular if n is small. However, this goal is quite subordinate to the major goal of establishing a known distribution on which quantitative statistical conclusions can be based. More precisely,

randomization in design … is supposed to provide the grounds for replacing uncertainty about the possible effects of nuisance factors with a probability statement about error ([37], p. 214, my emphasis).

In other words, because of randomization, all effects of a large (potentially infinite) number of nuisance factors can be captured by a single probability statement. How is this remarkable goal achieved?

Any analytical procedure, e.g., a statistical test, is an algorithm, transferring some numerical input into a certain output which, in the simplest case, is just a number. Given the same data, it yields exactly the same result. The procedure does not go any further: In general, there are no semantics or convincing story associated with a bare numerical result that could increase the latter’s impact. In other words, a strong interpretation needs to be based on the framework in which the calculations are embedded.

Now, since randomization treats all variables (known and unknown) alike, the analytical procedure is able to “catch” them all and their effects show up in the output. For example, a confidence interval, so the story goes, gives a quantitative estimate of all of the variables’ impact. One can thus numerically assess how strong this influence is, and one has, in a sense, achieved explicit quantitative control. In particular, if the total influence of all nuisance factors (including random fluctuations due to randomization) is numerically small, one may conclude with some confidence that a substantial difference between T and C should be due to the experimental intervention.

Following this line of argument, owing to randomization, a statistical experiment gives a valid result in the sense that it allows for far-reaching, in particular causal, conclusions. Thus, from a Frequentist point of view, one should distinguish between two very different kinds of input: (randomized) experimental data on the one hand and (non-randomized) non-experimental data on the other. Moreover, since randomization seems to be crucial—at least sufficient—for a causal conclusion, some are convinced that it is also necessary. For example, the frequently heard remark that “only randomization can break a causal link” ([38], p. 200) echoes the equally famous statement that there is “no causation without manipulation” [39].

This train of thought is supplemented by the observation that random assignment is easy to implement, and that hardly any (questionable) assumptions are needed in order to get a strong conclusion. For example, Pawitan [40] says:

A new eye drug was tested against an old one on 10 subjects. The drugs were randomly assigned to both eyes of each person. In all cases the new drug performed better than the old drug. The P-value from the observed data is 2⁻¹⁰ = 0.001, showing that what we observe is not likely due to chance alone, or that it is very likely the new drug is better than the old one … Such simplicity is difficult to beat. Given that a physical randomization was actually used, very little extra assumption is needed to produce a valid conclusion.

Finally, one finds a rather broad range of verbal arguments why randomization should be employed, e.g. “valid” conclusions, either based on the randomization distribution [41] or some normal-theory approximation [42], removal of investigator bias [43], face validity, fairness, and simple analysis [44], justification of inductive steps, in particular generalizations from the observed results to all possible results [45].

5.2 Bayesian opposition

Traditionally, criticism of the Frequentist line of argument in general, and randomization in particular, has come from the Bayesian school of statistics. While Frequentist statistics is much concerned with the way data is collected, focusing on the design of experiments, the corresponding sample space and sampling distribution, Bayesian statistics is rather concerned with the data actually obtained. Its focus is on learning from the(se) data—in particular with the help of Bayes’ theorem—and the parameter space.

In a sense, both viewpoints are perfectly natural and do not contradict one another. However, the example of randomization shows that this cannot be the final word: For the pre-data view, randomization is essential; it constitutes the difference between a real statistical experiment and any kind of quasi-experiment. For the post-data view, however, randomization does not add much to the information at hand, and is ancillary or just a nuisance.

The crucial and rather fundamental issue therefore becomes how far-reaching the conclusions of each of these styles of inference are. To cut a long story short, despite a “valid” framework and mathematically sound conclusions a Frequentist train of thought may easily miss its target or might even go astray. (The long story, containing a detailed philosophical discussion, is told in [46].) After decades of Frequentist–Bayesian comparisons, it has become obvious that in many important situations the numerical results of Frequentist and Bayesian arguments (almost) coincide. However, the two approaches are conceptually completely different, and it has also become apparent that simple calculations within the sampling framework lead to reasonable answers to post-data questions only because of “lucky” coincidences (e.g., the existence of sufficient statistics for the normal distribution). Of course, in general, such symmetries do not exist, and pre-data results cannot be transferred to post-data situations. In particular, purely Frequentist arguments fail if the sampling distribution does not belong to the “exponential family”, if there are several nuisance parameters, if there is important prior information, or if the number of parameters is much larger than the number of observations (p ≫ n).

5.3 A formal as well as an informal framework

The idea that the influence of many nuisance factors—even unknown ones—may be caught by a simple experimental device and some probability theory is a bold claim. Therefore it should come as no surprise that some Frequentist statisticians, many scientists and most Bayesians have questioned it. For example, towards the end of his article, Basu [26] writes quite categorically: “The randomization exercise cannot generate any information on its own. The outcome of the exercise is an ancillary statistic. Fisher advised us to hold the ancillary statistic fixed, did he not?” Basing our inferences on the distribution that randomization creates seems to be the exact reverse.

Even by the 1970s, members of the classical school noted that, upon using randomization and the distribution it entails, we are dealing with “the simplest hypothesis, that our treatment … has absolutely no effect in any instance”, and that “under this very tight hypothesis this calculation is obviously logically sound” ([47], my emphasis). Contemporary criticism can be found in Heckman [19] who complains that “a large statistical community” idealizes randomization, “implicitly appeal[s] to a variety of conventions rather than presenting rigorous models”, and that “crucial assumptions about sources of randomness are kept implicit.”

As for the sources of randomness, one should at least distinguish between natural variation and artificially introduced variability. A straightforward question then surely is, how inferences based on the “man-made” portion bear on the “natural” part. To this end, [26], pp. 579–581, compares a scientist, following the logic we described in Section 1, and a statistician who counts on randomization. It turns out that they are talking at cross-purposes. While the foremost goal of the scientist is to make the groups comparable, the statistician focuses on the randomization distribution. Moreover, the scientist asks repeatedly to include important information, but with his inquiry falling on deaf ears, he disputes this statistician’s analysis altogether.

Heckman’s criticism deploring a lack of explicit models and assumptions has been repeated by many (e.g. [31], [7]). In the natural sciences, mathematical arguments have always been more important than verbal reasoning. Typically, the thrust of an argument consists of formulae and their implications, with words of explanation surrounding the formal nucleus. Other fields like economics have followed suit and have learnt—often the hard way—that seemingly very convincing heuristic arguments can be wrong or misleading. Causality is no exception to that rule. In the last twenty years or so, causal graphs and causal calculus have formalized this field. And, as was to be expected, increased rigor straightforwardly demonstrated that certain “reasonable” beliefs and rather “obvious” time-honored conventions do not work as expected (see [7], in particular Chapter 6 and p. 341).

Our analysis above fits in nicely: The standard phrase “if n is not too small” is a verbal statement, implicitly appealing to the central limit theorem. Owing to the latter theorem, groups created by random assignment tend to become similar. The informal assurance, affirming that this happens fast, ranks among the most prominent conventions of traditional statistics. However, explicit numerical models underline that our intuition needs to be corrected. Rather straightforward calculations suffice to show that fluctuations cannot be dismissed easily, even if n is large.

Worse still, the crucial part of the Frequentist’s main argument in favour of randomization is informal in a rather principled way: In an experimental, as well as in a similar non-experimental situation, the core formal machinery, i.e. the data at hand, the explicit analytical procedure (e.g. a statistical test), and the final numerical result may be identical. In other words, it is just the narrative prior to the data that makes such a tremendous difference in the end. Since heuristic arguments have a certain power of persuasion which is certainly weaker than a crisp formal derivation or a strict mathematical proof, it seems to be no coincidence that opinion on this matter has remained divided. Followers of Fisher believed in his intuition and trusted randomization; critics did not. And since, sociologically speaking, the Frequentist school dominated the field for decades, so did randomization.

It is also no coincidence, but rather sheer necessity, that a narrow formal line of argument needs to be supplemented with much intuition and heuristics. So, on the one hand, an orthodox author may claim that “randomization, instrumental variables, and so forth have clear statistical definitions”; yet, on the other hand, he has to concede at once that “there is a long tradition of informal—but systematic and successful—causal inference in the medical sciences” ([7], p. 387, my emphasis). Without doubt, such a mixture is difficult to understand, to use and to criticize, and could be one of the main reasons for the reputation of statistics as an opaque subject. The narrow formal framework also partly explains why there is such a wide variety of verbal arguments in favour of randomization (see the end of Section 5.1).

5.4 Pragmatical eclecticism

From a Frequentist point of view, randomization is crucial since it “provides a known distribution for the assignment variables; statistical inferences are based on this distribution” [48]. Thus, the “known-distribution argument” is perhaps the single most important argument in favour of randomization.

How is it applied? If the result of a random allocation is extreme (e.g. all women are assigned to T, and all men to C), everybody—Fisher included—seems to be prepared to dismiss this realization: “It should in fairness be mentioned that, when randomization leads to a bad-looking experiment or sample, Fisher said that the experimenter should, with discretion and judgement, put the sample aside and draw another” [24].

The latter concession isn’t just a minor inconvenience, but runs contrary to the very principle of the Frequentist viewpoint: First, an informal correction is wide open to subjective judgement. (Already) “bad-looking” to person A may be (still) “fine-looking” to person B. Second, what’s the randomization distribution actually being used when dismissing some samples? A vague selection procedure will inevitably lead to a badly defined distribution. Third, why reject certain samples at all? If the crucial feature of randomization is to provide a “valid” distribution (on which all further inference is based), one should not give away this advantage unhesitatingly. At the very least, it is inconsistent to praise the argument of the known framework in theoretical work, and to turn a blind eye to it in practice.

As a matter of fact, in applications, the exact permutation distribution created by some particular randomization process plays a rather subordinate role. Much more frequently, randomization is used as a rationale for common statistical procedures. Here is one of them: Randomization guarantees independence, and if many small uncorrelated (and also often unknown) factors contribute to the distribution of some observable variable X, this distribution should be normal—at least approximately, if n is not too small. Therefore, in a statistical experiment, it seems to be justified to compare the group means x̄T and x̄C, using these means and the sample variances as estimators of their corresponding population parameters. Thus we have given an informal derivation of the t-test. Both the test and the numerical results it yields are supported by randomization. However, it may be noted that Student’s famous test was introduced much earlier [49] and worked quite well without randomization’s assistance.
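For completeness, a minimal sketch (Python/scipy) of the two-sample t-test just described; the group data are simulated here purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
treatment = rng.normal(loc=0.5, scale=1.0, size=30)   # simulated outcomes under T
control   = rng.normal(loc=0.0, scale=1.0, size=30)   # simulated outcomes under C

# Compare the group means, using the sample variances as estimates of the
# corresponding population parameters (Student's two-sample t-test).
t_stat, p_value = stats.ttest_ind(treatment, control)
print(t_stat, p_value)
```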

How should experimental data be analyzed? If the known distribution were of paramount importance, there should be a unanimous vote, at least by Frequentist statisticians. However, only a minority, perfectly in line with the received position, advises leaving the data as they are. Freedman [50] argues thus (for similar comments see [48], [7], p. 340, and [38], pp. 250–253):

Regression adjustments are often made to experimental data. Since randomization does not justify the models, almost anything can happen … The simulations, like the analytic results, indicate a wide range of possible behavior. For instance, adjustment may help or hurt.

Yet a majority have a different opinion (e.g. [2, 3, 45]). Tu et al. [51] explain that the first reason why they opt for an “adjustment of treatment effect for covariates in clinical trials” is to “improve the credibility of the trial results by demonstrating that any observed treatment effect is not accounted for by an imbalance in patient characteristics.”

5.5 Once again: randomization vs. comparability

Apart from the rather explicit rhetoric of a “valid framework”, there is also always the implicit logic of the experiment. Thus, although the received theory emphasizes that “actual balance has nothing to do with validity of statistical inference; it is an issue of efficiency only” [41], comparability turns out to be crucial:

Many, if not most, of those supporting randomization rush to mention that it promotes similar groups. Nowadays, only a small minority bases its inferences on the known permutation distribution created by the process of randomization; but an overwhelming majority checks for comparability. Reviewers of experimental studies routinely request that authors provide randomization checks, that is, statistical tests designed to substantiate the equivalence of T and C. At least, in almost every article a list of covariates—with their groupwise means and standard errors—can be found.

Random assignment or random placement is an experimental technique for assigning human participants or animal subjects to different groups in an experiment (e.g., a treatment group versus a control group) using randomization, such as by a chance procedure (e.g., flipping a coin) or a random number generator. This ensures that each participant or subject has an equal chance of being placed in any group. Random assignment of participants helps to ensure that any differences between and within the groups are not systematic at the outset of the experiment. Thus, any differences between groups recorded at the end of the experiment can be more confidently attributed to the experimental procedures or treatment.

Random assignment, blinding, and controlling are key aspects of the design of experiments, because they help ensure that the results are not spurious or deceptive via confounding. This is why randomized controlled trials are vital in clinical research, especially ones that can be double-blinded and placebo-controlled.

Mathematically, there are distinctions between randomization, pseudorandomization, and quasirandomization, as well as between random number generators and pseudorandom number generators. How much these differences matter in experiments (such as clinical trials) is a matter of trial design and statistical rigor, which affect evidence grading. Studies done with pseudo- or quasirandomization are usually given nearly the same weight as those with true randomization but are viewed with a bit more caution.

Benefits of random assignment

Imagine an experiment in which the participants are not randomly assigned; perhaps the first 10 people to arrive are assigned to the Experimental Group, and the last 10 people to arrive are assigned to the Control group. At the end of the experiment, the experimenter finds differences between the Experimental group and the Control group, and claims these differences are a result of the experimental procedure. However, they also may be due to some other preexisting attribute of the participants, e.g. people who arrive early versus people who arrive late.

Imagine the experimenter instead uses a coin flip to randomly assign participants. If the coin lands heads-up, the participant is assigned to the Experimental Group. If the coin lands tails-up, the participant is assigned to the Control Group. At the end of the experiment, the experimenter finds differences between the Experimental group and the Control group. Because each participant had an equal chance of being placed in any group, it is unlikely the differences could be attributable to some other preexisting attribute of the participant, e.g. those who arrived on time versus late.
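A minimal sketch (Python) of the coin-flip assignment just described; the participant list is hypothetical.

```python
import random

random.seed(42)
participants = [f"P{i:02d}" for i in range(1, 21)]     # 20 hypothetical participants

groups = {"Experimental": [], "Control": []}
for person in participants:
    coin = random.choice(["heads", "tails"])           # the coin flip
    groups["Experimental" if coin == "heads" else "Control"].append(person)

# Note that with a plain coin flip the two groups need not end up equally large.
print(len(groups["Experimental"]), len(groups["Control"]))
```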

Potential issues

Random assignment does not guarantee that the groups are matched or equivalent. The groups may still differ on some preexisting attribute due to chance. The use of random assignment cannot eliminate this possibility, but it greatly reduces it.

To express this same idea statistically: randomly assigned groups may be discovered to differ, even though they were drawn from the same pool. If a test of statistical significance is applied to randomly assigned groups, to test the difference between the sample means against the null hypothesis that they are equal to the same population mean (i.e., the population mean of differences = 0), then, given the probability distribution, the null hypothesis will sometimes be "rejected," that is, deemed not plausible. That is, the groups will be sufficiently different on the variable tested to conclude statistically that they did not come from the same population, even though, procedurally, they were assigned from the same total group. For example, random assignment may produce groups in which one group has 20 blue-eyed people and 5 brown-eyed people. This is a rare event under random assignment, but it could happen, and when it does it might add some doubt to the causal agent in the experimental hypothesis.

Random sampling

Random sampling is a related, but distinct, process.[1] Random sampling is recruiting participants in a way that they represent a larger population.[1] Because most basic statistical tests require the hypothesis of an independent, randomly sampled population, random assignment is the desired assignment method: it provides control for all attributes of the members of the samples—in contrast to matching on only one or more variables—and provides the mathematical basis for estimating the likelihood of group equivalence for characteristics one is interested in, both for pretreatment checks on equivalence and for the evaluation of post-treatment results using inferential statistics. More advanced statistical modeling can be used to adapt the inference to the sampling method.

History[edit]

Randomization was emphasized in the theory of statistical inference of Charles S. Peirce in "Illustrations of the Logic of Science" (1877–1878) and "A Theory of Probable Inference" (1883). Peirce applied randomization in the Peirce-Jastrow experiment on weight perception.

Charles S. Peirce randomly assigned volunteers to a blinded, repeated-measures design to evaluate their ability to discriminate weights.[2][3][4][5] Peirce's experiment inspired other researchers in psychology and education, and a research tradition of randomized experiments developed in laboratories and specialized textbooks in the 1800s.[2][3][4][5]

Jerzy Neyman advocated randomization in survey sampling (1934) and in experiments (1923).[6] Ronald A. Fisher advocated randomization in his book on experimental design (1935).

References

  1. http://www.socialresearchmethods.net/kb/random.php
  2. Charles Sanders Peirce and Joseph Jastrow (1885). "On Small Differences in Sensation". Memoirs of the National Academy of Sciences. 3: 73–83.
  3. Ian Hacking (September 1988). "Telepathy: Origins of Randomization in Experimental Design". Isis (A Special Issue on Artifact and Experiment). 79 (3): 427–451. doi:10.1086/354775.
  4. Stephen M. Stigler (November 1992). "A Historical View of Statistical Concepts in Psychology and Educational Research". American Journal of Education. 101 (1): 60–70. doi:10.1086/444032.
  5. Trudy Dehue (December 1997). "Deception, Efficiency, and Random Groups: Psychology and the Gradual Origination of the Random Group Design". Isis. 88 (4): 653–673. doi:10.1086/383850. PMID 9519574.
  6. Neyman, Jerzy (1990) [1923], Dabrowska, Dorota M.; Speed, Terence P., eds., "On the application of probability theory to agricultural experiments: Essay on principles (Section 9)", Statistical Science (translated from the 1923 Polish ed.), 5 (4): 465–472, doi:10.1214/ss/1177012031, MR 1092986.
  • Caliński, Tadeusz & Kageyama, Sanpei (2000). Block designs: A Randomization approach, Volume I: Analysis. Lecture Notes in Statistics. 150. New York: Springer-Verlag. ISBN 0-387-98578-6. 
  • Hinkelmann, Klaus and Kempthorne, Oscar (2008). Design and Analysis of Experiments. I and II (Second ed.). Wiley. ISBN 978-0-470-38551-7. 
  • Charles S. Peirce, "Illustrations of the Logic of Science" (1877–1878)
  • Charles S. Peirce, "A Theory of Probable Inference" (1883)
  • Charles Sanders Peirce and Joseph Jastrow (1885). "On Small Differences in Sensation". Memoirs of the National Academy of Sciences. 3: 73–83. http://psychclassics.yorku.ca/Peirce/small-diffs.htm
  • Hacking, Ian (September 1988). "Telepathy: Origins of Randomization in Experimental Design". Isis. 79 (3): 427–451. doi:10.1086/354775. JSTOR 234674. MR 1013489. 
  • Stephen M. Stigler (November 1992). "A Historical View of Statistical Concepts in Psychology and Educational Research". American Journal of Education. 101 (1): 60–70. doi:10.1086/444032. 
  • Trudy Dehue (December 1997). "Deception, Efficiency, and Random Groups: Psychology and the Gradual Origination of the Random Group Design". Isis. 88 (4): 653–673. doi:10.1086/383850. PMID 9519574. 
  • Gleitman, Fridlund, and Reisberg. Basic Psychology.
  • Shaver (1993). "What statistical testing is, and what it is not". Journal of Experimental Education. 61: 293–316.
