## How long does it take to get pregnant?

My girlfriend’s biological clock is ticking, and so we’ve started trying to spawn. Since I’m impatient, that has naturally led to questions like “how long will it take?”. If I were to believe everything on TV, the answer would be easy: have unprotected sex once and pregnancy is guaranteed.

A more cynical me suggests that this isn’t the case. Unfortunately, it is surprisingly difficult to find out the monthly chance of getting pregnant (technical jargon: the “monthly fecundity rate”, or MFR), given that you are having regular sex in the days leading up to ovulation. Everyone agrees that age has a big effect, with women’s peak fertility occurring somewhere around the age of 25. Beyond that point, the internet is filled with near-useless summary statistics like the chance of conceiving after one year. For example, the usually reliable NHS site says:

> Women become less fertile as they get older. For women aged 35, about 94 out of every 100 who have regular unprotected sex will get pregnant after three years of trying. However, for women aged 38, only 77 out of every 100 will do so.

I found a couple of reasonably sciencey links (George and Kamath, SoCal Fertility) that suggest that the MFR is about 25% for a woman aged 25, and 10% at age 35. The SoCal link also gives rates of 15% at age 30, 5% at age 40 and less than 1% at age 45. If the woman is too fat, too thin, a smoker, or stressed, or has hormone problems, then the rate needs reducing.
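As a sanity check, a constant MFR compounds into yearly figures close to the summary statistics above. Here's a quick sketch (the function name is mine) under the simplifying assumption that months are independent, so the chance of conceiving within n months is 1 − (1 − MFR)^n:

```r
# Chance of conceiving within n months, assuming a constant monthly
# fecundity rate and independence between months (a simplification).
p_within <- function(mfr, n_months) {
  1 - (1 - mfr) ^ n_months
}

# A 25% MFR compounds to about a 97% chance within a year,
p_within(0.25, 12)
# while a 10% MFR (age 35) gives about 72% within a year.
p_within(0.1, 12)
```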

Given the MFR, the probability of getting pregnant within a given number of months can be calculated with a negative binomial distribution (with a single success, this reduces to the geometric distribution).

```r
months <- 0:60
p_preg_per_month <- c(
  "25" = 0.25, "30" = 0.15, "35" = 0.1, "40" = 0.05, "45" = 0.01
)
p_success <- unlist(lapply(
  p_preg_per_month,
  function(p) pnbinom(months, 1, p)
))
```

Now we just create a data frame suitable for passing to ggplot2 …

```r
mfr_group <- paste(
  "MFR =", format(p_preg_per_month, digits = 2),
  "at age", names(p_preg_per_month)
)
mfr_group <- factor(mfr_group, levels = mfr_group)
preg_data <- data.frame(
  months = rep.int(months, length(mfr_group)),
  mfr_group = rep(mfr_group, each = length(months)),
  p_success = p_success
)
```

and draw the plot.

```r
library(ggplot2)
(p <- ggplot(preg_data, aes(months, p_success, colour = mfr_group)) +
  geom_point() +
  scale_x_continuous(breaks = seq.int(0, 60, 12)) +
  scale_y_continuous(breaks = seq.int(0, 1, 0.1), limits = c(0, 1)) +
  scale_colour_discrete("Monthly fecundity rate") +
  xlab("Months") +
  ylab("Probability of conception") +
  opts(panel.grid.major = theme_line(colour = "grey60"))
)
```

So almost half of the (healthy) 25 year olds get pregnant in the first two months, and after two years (the point when doctors start considering you to have fertility problems) more than 90% of 35 year olds should conceive. By contrast, just over 20% of 45 year old women will. In fact, even this statistic is over-optimistic: at this age fertility is rapidly decreasing, so a 1% MFR at age 45 will mean a much lower MFR at age 47, and the negative binomial model breaks down.
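Another way to digest these numbers: under the constant-rate (geometric) model, the expected wait until conception is simply 1/MFR months. A quick sketch:

```r
# Under a constant monthly fecundity rate, the number of months until
# conception is geometrically distributed, so the mean wait is 1 / MFR.
mfr <- c("25" = 0.25, "30" = 0.15, "35" = 0.1, "40" = 0.05, "45" = 0.01)
expected_months <- 1 / mfr
expected_months
```

So a healthy 25 year old waits four months on average, while at age 45 the average wait is over eight years.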

Of course, from a male point of view, conception is an embarrassingly parallel problem: you can dramatically reduce the time to conceive a child by sleeping with lots of women at once. (DISCLAIMER: Janette, if you’re reading this, I’m not practising or advocating this technique!)

## A Great European Bailout Infographic

Whenever there’s a financial crisis, I tend to assume that it’s Dirk Eddelbuettel’s fault, though apparently the EMU debt crisis is more complicated than that. JP Morgan have just released a white paper about the problem, including a Lego infographic of who is asking whom for money. Created, apparently, by Peter Cembalest, aged nine. Impressive stuff.

Found via The Register.

## Interactive graphics for data analysis

I got a copy of Martin Theus and Simon Urbanek’s Interactive Graphics for Data Analysis a couple of years ago, and it has sat on my bookshelf ever since. Since I’ve recently become a self-proclaimed expert on interactive graphics, I thought it was about time I read the thing. Which is exactly what I did last weekend at the Leeds Festival (in between rocking out).

It’s a book of two halves, and despite the title, the interactivity isn’t really the focus. The book is actually a guide on how to do exploratory data analysis. The first half works like an advanced chart chooser, explaining which plots are useful for which types of data, and what types of interactivity they can benefit from. For me, it was worth it for the many rare plots, like spineplots, interaction plots, mosaic plots and fluctuation diagrams. If you’re bored of barcharts, this is a great way to expand your graphical vocabulary. The second half consists entirely of case studies, where you can practise a workflow for exploring data – something that’s always worth doing.

The really big takeaway that I got is that exploratory graphics have different priorities to publication graphics. When you are in the courting stage with a dataset, just getting to know each other, you don’t really care so much about whether the Greek letters in your axis label are formatted correctly or whether the shade of pink in your dots is quite right. All you really need is to be able to generate lots and lots of plots quickly, and to be able to see the relationships between them.

It is this last point that the authors claim interactivity is most useful for. Perhaps the canonical example of this is clicking a bar on a histogram or barchart, and having corresponding points on a scatterplot highlighted. To demonstrate this, here’s an example using Simon’s `Acinonyx` package (shortly to be renamed `ix`, for “iplots Extreme”). Acinonyx isn’t yet available on CRAN; see its home page for installation details.

```r
library(Acinonyx)
library(MASS)
data(Cars93)
interactive_scatter <- with(Cars93, iplot(Horsepower, MPG.city))
interactive_histo <- with(Cars93, ihist(EngineSize))
```

Click a bar in the histogram and the corresponding points in the scatterplot are highlighted. Likewise, drag to select points in the scatterplot and fractions of the histogram are highlighted.

The equivalent static version would be to use trellising and draw each possible graph combination. Splitting a scatterplot into different groups depending upon bars of a histogram works something like this:

```r
library(ggplot2)
Cars93$EngineSizeGroup <- cut(Cars93$EngineSize, 11)
(static_trellis_scatter <- ggplot(Cars93, aes(Horsepower, MPG.city)) +
  geom_point() +
  facet_wrap(~ EngineSizeGroup)
)
```

(We don’t actually need to bother with the histograms, since they are a little boring.) The reverse operation – going from a selected region of the scatterplot to a highlighted region of the bar chart – is also possible, but trickier. In this case, we do need both graphs.

```r
Cars93 <- within(Cars93, {
  selected <- ifelse(
    Horsepower < 200 & MPG.city > 20 & MPG.city < 30,
    "selected", "unselected"
  )
})
(static_scatter_with_highlight <-
  ggplot(Cars93, aes(Horsepower, MPG.city, colour = selected)) +
  geom_point()
)
(static_histo_with_highlight <-
  ggplot(Cars93, aes(EngineSizeGroup, fill = selected)) +
  geom_histogram() +
  opts(axis.text.x = theme_text(angle = 30, hjust = 1, vjust = 1))
)
```

My conclusion from reading the book, and from my initial experimentation with `Acinonyx`, is that anything you can do interactively is also possible by drawing many static graphs, but the interaction can let you see things more quickly.

## Nomograms everywhere!

At useR!, Jonty Rougier talked about nomograms, a once popular visualisation that has fallen by the wayside with the rise of computers. I’d seen a few before, but hadn’t understood how they worked or why you’d want to use them. Anyway, since that talk I’ve been digging around in biology books from the 60s and 70s, and it seems they are full of them. So for those of you who haven’t seen the talk, here’s how they work.

A basic nomogram consists of three scales. By reading off known values from two of the scales, you can estimate a third one. Here’s an example I found in the ICRP’s reference manual.

It’s difficult to measure people’s skin surface area, but height and bodyweight are very straightforward. To use the nomogram, you place a ruler (or other straight edge) on the height* and weight scales and read off the point where the ruler crosses the surface area scale. I’m 177cm tall and weigh 72kg, so according to this, my estimated skin surface area is 1.89 m^2.

Of course, nowadays the standard way to solve this sort of problem is to write a function. Jonty suggested that the main modern use of nomograms is in fieldwork situations, where computers aren’t handily available. (His case study was Kenyan vets trying to estimate the weight of donkeys from their height and girth.)
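For the surface area example, that function is a one-liner. I'm guessing the nomogram was built from something like the classic Du Bois formula (that's an assumption on my part; the ICRP manual doesn't say), which happens to reproduce my reading above:

```r
# Estimate body surface area (m^2) from weight (kg) and height (cm)
# using the Du Bois formula. Whether the ICRP nomogram is based on
# this exact formula is an assumption on my part.
body_surface_area <- function(weight_kg, height_cm) {
  0.007184 * weight_kg ^ 0.425 * height_cm ^ 0.725
}

# 72kg and 177cm gives roughly the 1.89 m^2 read off the nomogram
body_surface_area(72, 177)
```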

Altman and Dittmer’s Respiration and Circulation has many more pretty nomograms. I was particularly impressed by those on blood pH, reproduced below for your pleasure.

Your homework is to dig out a pre-1980 textbook and hunt for more nomograms.

*Gruesomely, the fact that the scale is labelled “length” rather than “height” makes me suspect that the bodies that provided the data were in a permanent lying down position – that is, they were corpses.

## Legendary Plots

I was recently pointed in the direction of a thermal comfort model by the engineering company Arup (p27–28 of this pdf). Figure 3 at the top of p28 caught my attention.

It’s mostly a nice graph; there’s not too much junk in it. One thing that struck me was that there is an awful lot of information in the legend, and that I found it impossible to retain all that information while switching between the plot and the legend.

The best way to improve this plot, then, is to find a way to simplify the legend. Upon closer inspection, it seems that a lot of the information is repeated. For example, there are only two temperature combinations, and three levels of direct solar energy. Humidity and diffused solar energy are kept the same in all cases. That makes it really easy for us: our five legend options are

| Outdoor temp (°C) | Direct solar energy (W/m^2) |
| --- | --- |
| 32 | 700 |
| 32 | 150 |
| 32 | 500 |
| 29 | 500 |
| 29 | 150 |

Elsewhere we can explain that the mezzanine and platform temps are always 2 and 4 degrees higher than outdoors respectively, that the humidity is always 50%, and that the diffused solar energy is always 100W/m^2.

Living in Buxton, one of the coldest, rainiest towns in the UK, it amuses me to see that their “low” outdoor temperature is 29°C.

The other thing to note is that we have two variables mapped to the hue. For just five cases this is about acceptable, but it isn’t the best option and it won’t scale to many more categories. It’s generally considered best practice to work in HCL colour space when mapping variables to colours. I would be tempted to map temperature to hue – whether you pick red as hot and blue as cold or the other way around depends upon how many astronomers you have in your target audience. Then I’d map luminance (lightness) to solar energy: more sunlight = lighter line.

I don’t have the values to exactly recreate the dataset, but here are some made up numbers with the new legend. Notice the combined outdoor temp/direct solar energy variable.

```r
time_points <- 0:27
n_time_points <- length(time_points)
n_cases <- 5
comfort_data <- data.frame(
  time = rep.int(time_points, n_cases),
  comfort = jitter(rep(-2:2, each = n_time_points)),
  outdoor.temperature = rep(
    c(32, 29),
    times = c(3 * n_time_points, 2 * n_time_points)
  ),
  direct.solar.energy = rep(
    c(700, 150, 500, 500, 150),
    each = n_time_points
  )
)
comfort_data$combined <- with(comfort_data,
  factor(paste(outdoor.temperature, direct.solar.energy, sep = ", "))
)
```

We manually pick the colours to use in HCL space (using `str_detect` to examine the factor levels).

```r
library(stringr)
cols <- hcl(
  h = with(comfort_data,
    ifelse(str_detect(levels(combined), "29"), 0, 240)
  ),
  c = 100,
  l = with(comfort_data,
    ifelse(str_detect(levels(combined), "150"), 20,
      ifelse(str_detect(levels(combined), "500"), 50, 80)
    )
  )
)
```

Drawing the plot is very straightforward, it’s just a line plot.

```r
library(ggplot2)
p <- ggplot(comfort_data, aes(time, comfort, colour = combined)) +
  geom_line(size = 2) +
  scale_colour_manual(
    name = expression(paste(
      "Outdoor temp (", degree, C, "), Direct solar (", W/m^2, ")"
    )),
    values = cols
  ) +
  xlab("Time (minutes)") +
  ylab("Comfort")
p
```

Sensible people should stop here, and write the additional detail in the figure caption. There is currently no sensible way of writing annotations outside of the plot area (`annotate` only works inside panels). The following hack was devised by Baptiste Auguie; read this forum thread for other variations.

```r
library(gridExtra)
caption <- tableGrob(
  matrix(
    expression(
      paste("Mezzanine temp is 2", degree, C, " warmer than outdoor temp"),
      paste("Platform temp is 4", degree, C, " warmer than outdoor temp"),
      paste("Humidity is always 50%"),
      paste("Diffused solar energy is always 100", W/m^2)
    )
  ),
  parse = TRUE,
  theme = theme.list(
    gpar.corefill = gpar(fill = NA, col = NA),
    core.just = "center"
  )
)
grid.arrange(p, sub = caption)
```

## Presenting Immer’s barley data

Last time I talked about adapting graphs for presentations. This time I’m putting some of the concepts I discussed there into action, with a presentation of Immer’s barley dataset. This is a classic dataset, originally published in 1934; in 1993 Bill Cleveland mentioned it in his book Visualizing Data on account of how it may contain an error. Here’s the paper/screen version.

Here’s the presentation.

**Immer’s barley data** (audio presentation, hosted on SlideShare)

For the record, the presentation was created with Impress and the audio recorded with Audacity. Using these tools, it’s pretty straightforward to make and share an audio presentation.

EDIT:

I’ve corrected the map slide. Some people also asked about the code to draw the plots. First up, I recoded the factors so that site and variety appear in order of increasing yield.

```r
barley$variety <- with(barley, reorder(variety, yield))
barley$site <- with(barley, reorder(site, yield))
```

The code for the graph is straightforward.

```r
ggplot(barley, aes(yield, variety, colour = year)) +
  geom_point() +
  facet_grid(site ~ .)
```

The presentation version uses `facet_wrap(~ site)` rather than `facet_grid`, and the text size is increased with `theme_set(theme_grey(24))`.

## Adapting graphs for presentations

I’ve just finished reading slide:ology by Nancy Duarte. It contains lots of advice about how to convey meaning through aesthetics. The book has a general/business presentation focus, but it got me wondering how to apply the ideas in a scientific context. Since graphs form a big part of most scientific talks, and since that’s the bit I know best, that’s what I’m going to discuss here.

We start with a basic example using the mtcars dataset that ships with R.

```r
p_basic <- ggplot(mtcars, aes(wt, mpg)) + geom_point()
```

Something I’ve been burned by recently is overestimating the size of the projector screen. Although my graphs looked great with my face next to my monitor, the axes were hard to read when projected. A good rule of thumb for text is that your font size should be half the age of the eldest member of your audience or 30 points, whichever is bigger. For graph axes we can perhaps get away with something a little smaller. Don’t forget that by the time you’ve printed your graph to file and played with it in your presentation software, the font size you picked may bear no relation to its final value. That said, the point remains that you need to make the text bigger.

```r
old_theme <- theme_set(theme_grey(base_size = 24))
```
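The rule of thumb translates directly into a throwaway helper (the function name is mine):

```r
# Minimum font size in points for presentation text, following the rule
# of thumb: half the age of the eldest audience member, or 30 points,
# whichever is bigger.
min_font_size <- function(eldest_age) {
  max(eldest_age / 2, 30)
}

min_font_size(50)  # a younger crowd still gets 30pt
min_font_size(80)  # an older crowd needs 40pt
```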

Reading a graph can be quite an in-depth process which can overwhelm your audience. Once you’ve shown them the graph in its basic form, you should emphasise interesting features, one at a time. The one-at-a-time thing is important – if you want your audience to concentrate, then you shouldn’t distract them with many things at once. Here I’ve increased the size of the outliers in the bottom right-hand corner.

```r
mtcars <- within(mtcars, emph <- wt > 5)
p_emph <- ggplot(mtcars) +
  geom_point(aes(wt, mpg, size = emph)) +
  opts(legend.position = "none") +
  scale_size_manual(values = c(2, 5))
```

As well as emphasising values, a useful tool can be to de-emphasise other regions of the graph. Overlaying a translucent white rectangle on such regions does the trick nicely.

```r
x_lower <- floor(min(mtcars$wt))
x_upper <- ceiling(max(mtcars$wt))
y_lower <- floor(min(mtcars$mpg)) - 1
y_upper <- ceiling(max(mtcars$mpg)) + 1
p_deemph <- p_emph +
  geom_rect(
    aes(xmin = xmin, xmax = xmax, ymin = ymin, ymax = ymax),
    data = data.frame(
      xmin = c(x_lower, 5),
      xmax = c(5, x_upper),
      ymin = c(y_lower, 16),
      ymax = rep.int(y_upper, 2)
    ),
    fill = "white", alpha = 0.6
  ) +
  scale_x_continuous(limits = c(x_lower, x_upper), expand = c(0, 0)) +
  scale_y_continuous(limits = c(y_lower, y_upper), expand = c(0, 0))
```

For de-emphasising more complicated regions, I find it easier to use dedicated image manipulation software. Paint.NET is my personal zero-cost favourite, but any graphics software should suffice.

Finally, it can be useful to annotate your graph.

```r
p_annotated <- p_deemph +
  geom_text(
    aes(x = x, y = y, label = label),
    data = data.frame(x = 5, y = 13, label = "outliers!"),
    size = 14
  )
```

By now, if your audience hasn’t figured out which bit of the graph you want them to look at, you’re in trouble. Let me know in the comments if you have any other ideas for how to draw graphs for presentations.