
# 1 Researchers frequently report results as relative effects

For example,

“Male flies from selected lines had 50% larger upwind flight ability than male flies from control lines (Control mean: 117.5 cm/s; Selected mean 176.5 cm/s).”

where a relative effect is

$$$100 \frac{\bar{y}_B - \bar{y}_A}{\bar{y}_A}$$$

If we are to follow best practices, we should present this effect with a measure of uncertainty, such as a confidence interval. The absolute effect is 59.0 cm/s and the 95% CI of this effect is (48.7, 69.3 cm/s). But if we present the result as a relative effect, or percent difference from some reference value, how do we compute a “relative CI”?
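As a quick check, the relative effect from the reported means can be computed directly (a minimal sketch in R; the means are taken from the quote above):

```r
# relative effect from the reported group means
ybar_A <- 117.5 # control mean (cm/s)
ybar_B <- 176.5 # selected mean (cm/s)

# percent difference from the reference (control) mean
100 * (ybar_B - ybar_A) / ybar_A # about 50.2%
```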

# 2 Four methods to compute the standard error and CI of a relative effect

## 2.1 Naive relative confidence intervals

$$$relative\_CI = 100\frac{absolute\_CI}{\bar{y}_A}$$$

For the fly example, the 95% naive relative CI is bounded by $$100\frac{48.7}{117.5} = 41.4\%$$ and $$100\frac{69.3}{117.5} = 59.0\%$$. These bounds are easy to compute but wrong, because the error in estimating a relative difference is a function of the error in estimating both the numerator (the absolute difference) and the denominator (the reference mean). The consequence is that naive relative confidence intervals are too narrow; they suggest more precision than the data warrant.
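A minimal sketch of the naive computation, using the absolute CI reported above:

```r
# absolute effect and its 95% CI from the fly example (cm/s)
ybar_A <- 117.5          # reference (control) mean
abs_ci <- c(48.7, 69.3)  # 95% CI of the absolute effect

# naive relative CI: rescale the absolute CI by the reference mean
100 * abs_ci / ybar_A # about 41.4% and 59.0%
```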

## 2.2 Back-transformed intervals from log-transformed data

This post is motivated by the post The percent difference fallacy and a solution: the ratio t-test from Anthony Darrouzet-Nardi at Anthony’s Science blog. Anthony suggests a clever solution for a simple, linear model with a single factor with two levels (or a “t-test” design): the relative CIs are the backtransformed CIs of the effect of the log-transformed data.

1. log transform the response variable
2. run the linear model $$log(y) = b_0 + b_1 Treatment$$
3. compute the CI of $$b_1$$
4. backtransform using $$100(\mathrm{exp}(CI) - 1)$$
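The four steps can be sketched in R. The data here are simulated stand-ins for the fly measurements (the means, standard deviation, and n are assumptions for illustration):

```r
set.seed(1)
n <- 27
dat <- data.frame(
  Treatment = factor(rep(c("CN", "AA"), each = n), levels = c("CN", "AA")),
  y = c(rnorm(n, mean = 117.5, sd = 19.2),  # control
        rnorm(n, mean = 176.5, sd = 19.2))  # selected
)

# steps 1-2: fit the linear model to the log-transformed response
fit <- lm(log(y) ~ Treatment, data = dat)

# step 3: CI of b1 (the TreatmentAA coefficient)
ci <- confint(fit)["TreatmentAA", ]

# step 4: backtransform to a percent difference
100 * (exp(ci) - 1)
```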

1. The coefficient $$b_1$$ of the linear model of log transformed $$y$$ is $$\overline{\mathrm{log}(y_B)} - \overline{\mathrm{log}(y_A)}$$
2. This is the difference in the logs of the geometric means of group B and group A, where a geometric mean is
3. $$GM=(\Pi y)^\frac{1}{n}$$ and the log of a geometric mean is
4. $$\mathrm{log}(GM) = \mathrm{log}((\Pi y)^\frac{1}{n}) = \frac{1}{n}\sum{\mathrm{log}(y)}$$ (note that the typical way to compute a geometric mean is via the log transformation, since the raw product can easily overflow to very large values)
5. And because the difference of two logs is the log of their ratio, $$b_1 = \overline{\mathrm{log}(y_B)} - \overline{\mathrm{log}(y_A)} = \mathrm{log}(\frac{GM_B}{GM_A})$$, and
6. $$\mathrm{exp}(b_1) = \frac{GM_B}{GM_A}$$
7. Backtransforming $$b_1$$ gives us the multiplier. To express the difference as a fraction of the reference, we subtract 1, therefore
8. $$\mathrm{exp}(b_1) - 1 = \frac{GM_B}{GM_A} - 1$$, which is equal to
9. $$\mathrm{exp}(b_1) - 1 = \frac{GM_B}{GM_A} - \frac{GM_A}{GM_A}$$, which is equal to
10. $$\mathrm{exp}(b_1) - 1 = \frac{GM_B - GM_A}{GM_A}$$, so, the relative difference of log-transformed $$y$$ backtransformed and converted to a percent is
$$$100(\mathrm{exp}(b_1) - 1) = 100(\frac{GM_B - GM_A}{GM_A})$$$
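The identity above is easy to verify numerically for any two positive samples (the lognormal draws here are arbitrary):

```r
set.seed(2)
y_A <- rlnorm(20, meanlog = 4.7, sdlog = 0.2)
y_B <- rlnorm(20, meanlog = 5.2, sdlog = 0.2)

# geometric mean via the log (avoids overflow of the raw product)
gm <- function(y) exp(mean(log(y)))

# b1 from the log-scale linear model is the difference of mean logs
b1 <- mean(log(y_B)) - mean(log(y_A))

# backtransformed effect equals the relative difference of geometric means
all.equal(100 * (exp(b1) - 1),
          100 * (gm(y_B) - gm(y_A)) / gm(y_A)) # TRUE
```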

In other words, the back-transformed CI of the effect of log-transformed $$y$$ is a CI of the relative difference of the geometric means, not the arithmetic means. To think about this estimate of the CI of the relative effect, it helps to remember that the geometric mean is always smaller (closer to zero) than the arithmetic mean, so both the numerator (a difference of slightly smaller values) and the denominator (a slightly smaller value) shrink. I'll return to this. A potential problem with this method is how to generalize it to more complex models, such as single-factor models with more than two levels, factorial models, or models with continuous covariates.

## 2.3 The Delta method

The relative effect $$100\frac{\bar{y}_B - \bar{y}_A}{\bar{y}_A}$$ is a non-linear transformation of the absolute effect. The standard error of a non-linear transformation $$G(X)$$ of a random variable $$X$$ can be approximated with a first-order Taylor series:

$$$\mathrm{VAR}(G(X)) \approx \nabla G(X)^\top \mathrm{COV}(X) \nabla G(X)$$$

where $$\nabla G(X)$$ is the gradient of the function $$G(X)$$, which is a vector of partial derivatives – think of this as the vector specifying the direction of the steepest ascent at some point on the surface of $$G(X)$$.

The Delta method is easily applied to any linear model, including hierarchical models and generalized linear models. And it is even easier in R thanks to the deltamethod function from the msm package.
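As a sketch of the R workflow (the simulated data are stand-ins for the fly measurements; in `deltamethod`'s formula notation, `x1` is the first coefficient, the reference mean, and `x2` is the second, the absolute effect):

```r
library(msm)

set.seed(3)
n <- 27
dat <- data.frame(
  Treatment = factor(rep(c("CN", "AA"), each = n), levels = c("CN", "AA")),
  y = c(rnorm(n, 117.5, 19.2),   # control
        rnorm(n, 176.5, 19.2))   # selected
)
fit <- lm(y ~ Treatment, data = dat)

# relative effect on the percent scale: 100 * b1 / b0
b <- coef(fit)
rel_effect <- 100 * b[2] / b[1]

# Delta-method SE of 100 * x2 / x1
se <- deltamethod(~ 100 * x2 / x1, mean = b, cov = vcov(fit))

# approximate 95% CI of the relative effect
rel_effect + c(-1, 1) * qnorm(0.975) * se
```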

## 2.4 Bootstrap standard errors and CI

The bootstrap standard error of a relative effect is simply the standard deviation of the vector of $$k$$ relative effects, with each relative effect computed from a random re-sample, with replacement, of the original data. There are several methods for computing a bootstrap CI; the simplest uses the percentiles (e.g., 0.025 and 0.975) of the vector of re-sampled relative effects.
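A minimal sketch of the percentile bootstrap for the two-group case, resampling within each group (the data are simulated stand-ins):

```r
set.seed(4)
n <- 27
y_A <- rnorm(n, 117.5, 19.2) # control
y_B <- rnorm(n, 176.5, 19.2) # selected

# k re-samples of the relative effect
k <- 2000
boot_rel <- replicate(k, {
  a <- sample(y_A, replace = TRUE)
  b <- sample(y_B, replace = TRUE)
  100 * (mean(b) - mean(a)) / mean(a)
})

sd(boot_rel)                        # bootstrap SE
quantile(boot_rel, c(0.025, 0.975)) # percentile 95% CI
```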

# 3 Comparing CIs computed using the naive, log, and Delta methods

I use a simple Monte Carlo simulation to compute the frequency of CIs that contain the true relative effect. Sorry, no comparison with a bootstrap because this is a quick report! The simulated data use parameters estimated from a full-factorial analysis of the effects of a selection treatment (levels “CN” and “AA”) and sex (levels “F” and “M”) on upwind flight ability in Drosophila melanogaster. Upwind flight ability was measured as the highest wind-tunnel speed at which a fly could fly. For more about the selection experiment, see Weber, K.E. (1996). Large genetic change at small fitness cost in large populations of Drosophila melanogaster selected for wind tunnel flight: rethinking fitness surfaces. Genetics 144, 205–213.

The Harrell plots above show the (A) absolute and (B) relative effects (upper part) and the distribution of the raw data (bottom part) for the fly data. More about Harrell plots is at https://www.biorxiv.org/content/early/2018/11/10/458182

## 3.1 Single factor with two levels (“t-test”)

library(data.table) # data.table() is used below to build the design
set.seed(1)
n <- 27 # average n over the four groups of flies
b0 <- 117.5 # control males
b1 <- 176.5 - b0 # effect of selection in males
# parameters gathered into vector
b <- c(b0, b1)
# error standard deviation
sigma <- 19.2 # residual standard error of fly data
# expected effects on percent scale
b1.p <- b1/b0*100
# make model matrix
selection_levels <- c("CN", "AA") #AA is selected
x <- data.table(Treatment=factor(rep(selection_levels, each=n), selection_levels))
X <- model.matrix(formula(