fold change vs standardized effect size

Hi all,

Just a quick question (as a newcomer to bioinformatics) about effect size in differential expression analysis.  Why does the field use fold change as its metric of effect size?  Fold change doesn't account for variability, whereas standardized effect-size measures like Cohen's d do.  So why doesn't the field report effect sizes that incorporate variability?

To illustrate with an example, say gene X has a mean of 7.20 in condition A and 7.60 in condition B.  The fold change for condition B relative to condition A is 7.60/7.20 ≈ 1.06.  Say the standard deviation estimate in condition A is 0.09, while in condition B it is 0.10.  Computing Cohen's d on this, the effect size comes out around 4.2, which is a gigantic effect.  Fold change and Cohen's d differ dramatically, so why not report effect size estimates that account for variability rather than fold change?
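For concreteness, here is a small sketch of the two calculations using the summary statistics from the example above.  It assumes equal group sizes, so the pooled SD is taken as the simple root-mean-square of the two SDs (with unequal n you would weight by the degrees of freedom instead):

```python
import math

# Summary statistics from the example above
mean_a, mean_b = 7.20, 7.60   # mean expression in conditions A and B
sd_a, sd_b = 0.09, 0.10       # standard deviation estimates

# Fold change: a ratio of means, ignores variability entirely
fold_change = mean_b / mean_a
print(f"fold change: {fold_change:.3f}")   # ~1.056

# Cohen's d: mean difference standardized by the pooled SD
# (equal-n pooling assumed here)
pooled_sd = math.sqrt((sd_a**2 + sd_b**2) / 2)
cohens_d = (mean_b - mean_a) / pooled_sd
print(f"Cohen's d:   {cohens_d:.2f}")      # ~4.20
```

The same 0.40 mean difference looks tiny as a ratio but enormous once it is scaled by the within-group spread, which is exactly the discrepancy the question is pointing at.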




