Archive for the ‘Uncategorized’ Category

Introducing the pathological package for manipulating paths, files and directories

28th April, 2014

I was recently hunting for a function that will strip the extension from a file – changing foo.png to foo, and so forth. I was knitting a report, and wanted to replace the file extension of the input with the extension of the output file. (knitr handles this automatically in most cases, but I had some custom logic in there that meant I had to work things manually.)

Finding file extensions is such a common task that I figured that someone must have written a function to solve the problem already. A quick search using findFn("file extension") from the sos package revealed a few thousand hits. There’s a lot of noise in there, but I found a few promising candidates.

There’s removeExt in the limma package (you can find it on Bioconductor), strip_extension in Kmisc, remove_file_extension (identical copies of which live in gdalUtils and at least one other package), and extension in the raster package.

To save you the time and effort, I’ve tried them all, and unfortunately they all suck.

At a bare minimum, a file extension stripper needs to be vectorized; to cope with different file extensions within that vector; to handle multiple levels of extension (for things like “tar.gz” files); to tolerate filenames with dots other than in the extension; and to deal with missing values and with directories. OK, that’s quite a few things, but I’m picky.
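To make those requirements concrete, here is a rough regex-based sketch (illustrative only, not the pathological implementation; it still trips over directories with dots in their names, such as R-3.1.0):

```r
# Illustrative sketch: a vectorised extension stripper that copes with
# multi-level extensions (tar.gz), dots elsewhere in the filename, and NAs.
strip_ext <- function(x) {
  ifelse(
    is.na(x) | !grepl("\\.", basename(x)),
    x,  # NA, or no dot in the filename: nothing to strip
    # strip a final extension, optionally preceded by one more (e.g. .tar.gz)
    sub("\\.([[:alnum:]]+\\.)?[[:alnum:]]+$", "", x)
  )
}

strip_ext(c("somedir/foo.tgz", "another dir/bar.tar.gz", "baz", NA))
```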

Since all the existing options failed, I’ve made my own function. In fact, I went overboard and created a package of path manipulation utilities, the pathological package. It isn’t on CRAN yet, but you can install it via:
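At the time of writing, that means installing from GitHub. Something like this should work (the devtools call and the richierocks/pathological repository location are my assumptions; check the package’s README if it has moved):

```r
library(devtools)
install_github("richierocks/pathological")
```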


It’s been a while since I’ve used MATLAB, but I have fond recollections of its fileparts function that splits a path up into the directory, filename and extension.

The pathological equivalent is to decompose a path, which returns a character matrix with three columns.

x <- c(
  "somedir/foo.tgz",         # single extension
  "another dir\\bar.tar.gz", # double extension
  "baz",                     # no extension
  "quux. quuux.tbz2",        # single ext, dots in filename
  R.home(),                  # a dir
  "~",                       # another dir
  "~/quuuux.tar.xz",         # a file in a dir
  "",                        # empty 
  ".",                       # current dir
  "..",                      # parent dir
  NA_character_              # missing
)
(decomposed <- decompose_path(x))
##                          dirname                      filename      extension
## somedir/foo.tgz         "d:/workspace/somedir"       "foo"         "tgz"    
## another dir\\bar.tar.gz "d:/workspace/another dir"   "bar"         "tar.gz" 
## baz                     "d:/workspace"               "baz"         ""       
## quux. quuux.tbz2        "d:/workspace"               "quux. quuux" "tbz2"   
## C:/PROGRA~1/R/R-31~1.0  "C:/Program Files/R/R-3.1.0" ""            ""       
## ~                       "C:/Users/richie/Documents"  ""            ""       
## ~/quuuux.tar.xz         "C:/Users/richie/Documents"  "quuuux"      "tar.xz" 
## ""                           ""            ""       
## .                       "d:/workspace"               ""            ""       
## ..                      "d:/"                        ""            ""       
## <NA>                    NA                           NA            NA       
## attr(,"class")
## [1] "decomposed_path" "matrix" 

There are some shortcut functions to get at different parts of the filename:

get_extension(x)
##         somedir/foo.tgz another dir\\bar.tar.gz                     baz 
##                   "tgz"                "tar.gz"                      "" 
##        quux. quuux.tbz2  C:/PROGRA~1/R/R-31~1.0                       ~ 
##                  "tbz2"                      ""                      "" 
##         ~/quuuux.tar.xz                                               . 
##                "tar.xz"                      ""                      "" 
##                      ..                    <NA> 
##                      ""                      NA 
##  [1] "d:/workspace/somedir/foo"         "d:/workspace/another dir/bar"    
##  [3] "d:/workspace/baz"                 "d:/workspace/quux. quuux"        
##  [5] "C:/Program Files/R/R-3.1.0"       "C:/Users/richie/Documents"       
##  [7] "C:/Users/richie/Documents/quuuux" "/"                               
##  [9] "d:/workspace"                     "d:/"                             
## [11] NA 

strip_extension(x, include_dir = FALSE)
##         somedir/foo.tgz another dir\\bar.tar.gz                     baz 
##                   "foo"                   "bar"                   "baz" 
##        quux. quuux.tbz2  C:/PROGRA~1/R/R-31~1.0                       ~ 
##           "quux. quuux"                      ""                      "" 
##         ~/quuuux.tar.xz                                               . 
##                "quuuux"                      ""                      "" 
##                      ..                    <NA> 
##                      ""                      NA 

You can also get your original file location (in a standardised form) using

##  [1] "d:/workspace/somedir/foo.tgz"           
##  [2] "d:/workspace/another dir/bar.tar.gz"    
##  [3] "d:/workspace/baz"                       
##  [4] "d:/workspace/quux. quuux.tbz2"          
##  [5] "C:/Program Files/R/R-3.1.0"             
##  [6] "C:/Users/richie/Documents"              
##  [7] "C:/Users/richie/Documents/quuuux.tar.xz"
##  [8] "/"                                      
##  [9] "d:/workspace"                           
## [10] "d:/"                                    
## [11] NA 

The package also contains a few other path utilities. The standardisation I mentioned comes from standardise_path (standardize_path also available for Americans), and there’s a dir_copy function for copying directories.

It’s brand new, so after I’ve complained about other people’s code, I’m sure karma will ensure that you’ll find a bug or two, but I hope you find it useful.

A little Christmas Present for you

25th December, 2012

Here’s an excerpt from my chapter “Blood, sweat and urine” from The Bad Data Handbook. Have a lovely Christmas!

I spent six years working in the statistical modeling team at the UK’s Health and Safety
Laboratory. A large part of my job was working with the laboratory’s chemists, looking
at occupational exposure to various nasty substances to see if an industry was adhering
to safe limits. The laboratory gets sent tens of thousands of blood and urine samples
each year (and sometimes more exotic fluids like sweat or saliva), and has its own team
of occupational hygienists who visit companies and collect yet more samples.
The sample collection process is known as “biological monitoring.” This is because when
the occupational hygienists get home and their partners ask “How was your day?,” “I’ve
been biological monitoring, darling” is more respectable to say than “I spent all day
getting welders to wee into a vial.”
In 2010, I was lucky enough to be given a job swap with James, one of the chemists.
James’s parlour trick is that, after running many thousands of samples, he can tell the
level of creatinine in someone’s urine with uncanny accuracy, just by looking at it. This
skill was only revealed to me after we’d spent an hour playing “guess the creatinine level”
and James had suggested that “we make it more interesting.” I’d lost two packets of fig
rolls before I twigged that I was onto a loser.

The principle of the job swap was that I would spend a week in the lab assisting with
the experiments, and then James would come to my office to help out generating the
statistics. In the process, we’d both learn about each other’s working practices and find
ways to make future projects more efficient.
In the laboratory, I learned how to pipette (harder than it looks), and about the methods
used to ensure that the numbers spat out of the mass spectrometer were correct. So as
well as testing urine samples, within each experiment you need to test blanks (distilled
water, used to clean out the pipes, and also to check that you are correctly measuring
zero), calibrators (samples of a known concentration for calibrating the instrument),
and quality controllers (samples with a concentration in a known range, to make sure
the calibration hasn’t drifted). On top of this, each instrument needs regular maintaining
and recalibrating to ensure its accuracy.
Just knowing that these things have to be done to get sensible answers out of the
machinery was a small revelation. Before I’d gone into the job swap, I didn’t really think
about where my data came from; that was someone else’s problem. From my point of
view, if the numbers looked wrong (extreme outliers, or otherwise dubious values) they
were a mistake; otherwise they were simply “right.” Afterwards, my view is more
nuanced. Now all the numbers look like, maybe not quite a guess, but certainly only an
approximation of the truth. This measurement error is important to remember, though
for health and safety purposes, there’s a nice feature. Values can be out by an order of
magnitude at the extreme low end for some tests, but we don’t need to worry so much
about that. It’s the high exposures that cause health problems, and measurement error
is much smaller at the top end.


Radical Statistics was radical

25th February, 2012

Today I went to the Radical Statistics conference in London. RadStats was originally a sort of left wing revolutionary group for statisticians, but these days the emphasis is on exposing dubious statistics by companies and politicians.

Here’s a quick rundown of the day.

First up Roy Carr-Hill spoke about the problems with trying to collect demographic data and estimating soft measures of societal progress like wellbeing. (Household surveys exclude people not in households, like the homeless soldiers and old people in care homes; and English people claim to be 70% satisfied regardless of the question.)

Next was Val Saunders, who started with a useful debunking of some methodological flaws in schizophrenia research, then blew it by detailing her own methodologically flawed research and making overly strong claims to have found the cause of that disease.

Aubrey Blumsohn and David Healy both talked about ways that the pharmaceutical industry fudges results. The list was impressively long, leading me to suspect that far too many people have spent far too long thinking of ways to game the system. The two main recommendations that resonated with me were to extend the trials register to phase 1 trials, to avoid unfavourable studies being buried, and for raw data to be made available for transparent analysis. Pipe dreams.

After lunch, Prem Sikka pointed out that tax avoidance isn’t just shady companies trying to scam the system: accountancy firms actually pay people to dream up new wheezes and sell them to those companies.

Ann Pettifor and final speaker Howard Reed gave similar talks evangelising Keynesian stimulus (roughly, big government spending in times of recession) for the UK economy, amongst some economic myth debunking. Thought-provoking, though both speakers neglected to mention the limitations of such stimuli – you have to avoid spending on pork-barrel nonsense (see Japan in the 90s, or the buy-a-banger scheme in the UK in 2009) and you have to find a way to turn off the taps when the recession is over.

The other speaker was Allyson Pollock, who discussed debunking a dubious study by Zac Cooper claiming that patients being allowed to choose their surgeon improved success rates in treating acute myocardial infarction. Such patients are generally unconscious while having their heart attack, so it was inevitably nonsense.

Overall a great day.

A Great European Bailout Infographic

8th September, 2011

Whenever there’s a financial crisis, I tend to assume that it’s Dirk Eddelbuettel‘s fault, though apparently the EMU debt crisis is more complicated than that. JP Morgan have just released a white paper about the problem, including a Lego infographic of who is asking whom for money. Created, apparently, by Peter Cembalest, aged nine. Impressive stuff.

European bailout graph

Found via The Register.

The Stats Clinic

27th July, 2011

Stats clinic logo
Here at HSL we have a lot of smart, kinda-numerate people who have access to a lot of data. On a bad day, kinda-numerate includes myself, but in general I’m talking about scientists who have done an introductory stats course, but not much else. When all you have is a t-test, suddenly everything looks like two groups of normally distributed numbers whose means you need to test for a significant difference.

While we have a pretty good cross-disciplinary setup here, the ease of calculating a mean here or a standard deviation there means that many scientists can’t resist a piece of the number crunching action. Then suddenly there’s an Excel monstrosity that nobody understands rearing its ugly head.

Management has enlightenedly decided to fund a stats clinic, so us number nerds can help out the rest of the lab without any paperwork overhead (which was the biggest reason to put off asking for help). They didn’t like my slogan, but hey, you can’t have everything.

I’m really interested to hear how other organisations deal with this issue. Let me know in the comments.


Tufte article in Washington Monthly

16th May, 2011

The Washington Monthly magazine has a long article about graphics guru Edward Tufte. It mostly covers his work on presenting data, with a few snippets about his powerful friends (he has advised both the Bush and Obama administrations) and his current work.

I still have no idea whether his name is pronounced “Tuft” or “Tufty” or “Tooft”.

Legendary Plots

12th March, 2011

I was recently pointed in the direction of a thermal comfort model by the engineering company Arup (p27–28 of this pdf). Figure 3 at the top of p28 caught my attention.

Arup thermal comfort model, figure 3

It’s mostly a nice graph; there’s not too much junk in it. One thing that struck me was that there is an awful lot of information in the legend, and that I found it impossible to retain all that information while switching between the plot and the legend.

The best way to improve this plot, then, is to find a way to simplify the legend. Upon closer inspection, it seems that a lot of the information is repeated. For example, there are only two temperature combinations and three levels of direct solar energy, and humidity and diffused solar energy are kept the same in all cases. That makes it really easy for us: our five legend options are

Outdoor temp (deg C)   Direct solar energy (W/m^2)
32                     700
32                     150
32                     500
29                     500
29                     150

Elsewhere we can explain that the mezzanine/platform temps are always 2/4 degrees higher than outdoors, that the humidity is always 50%, and that the diffused solar energy is always 100W/m^2.

Living in Buxton, one of the coldest, rainiest towns in the UK, it amuses me to see that their “low” outdoor temperature is 29°C.

The other thing to note is that we have two variables mapped to the hue. For just five cases this is just about acceptable, but it isn’t the best option and it won’t scale to many more categories. It’s generally considered best practice to work in HCL colour space when mapping variables to colours. I would be tempted to map temperature to hue – whether you pick red as hot and blue as cold or the other way around depends upon how many astronomers you have in your target audience. Then I’d map luminance (lightness) to solar energy: more sunlight = lighter line.

I don’t have the values to exactly recreate the dataset, but here are some made up numbers with the new legend. Notice the combined outdoor temp/direct solar energy variable.

time_points <- 0:27
n_time_points <- length(time_points)
n_cases <- 5
comfort_data <- data.frame(
  time = rep(time_points, n_cases),
  comfort = jitter(rep(-2:2, each = n_time_points)),
  outdoor.temperature = rep(
    c(32, 29),
    times = c(3 * n_time_points, 2 * n_time_points)
  ), = rep(
    c(700, 150, 500, 500, 150),
    each = n_time_points
  )
)
comfort_data$combined <- with(comfort_data,
  factor(paste(outdoor.temperature,, sep = ", "))
)

We manually pick the colours to use in HCL space (using str_detect from the stringr package to examine the factor levels).

library(stringr)

cols <- hcl(
  h = with(comfort_data, ifelse(str_detect(levels(combined), "29"), 0, 240)),
  c = 100,
  l = with(comfort_data,
    ifelse(str_detect(levels(combined), "150"), 20,
    ifelse(str_detect(levels(combined), "500"), 50, 80))
  )
)

Drawing the plot is very straightforward; it’s just a line plot.

library(ggplot2)

p <- ggplot(comfort_data, aes(time, comfort, colour = combined)) +
  geom_line(size = 2) +
  scale_colour_manual(
    name = expression(paste(
      "Outdoor temp (", degree, C, "), Direct solar (", W/m^2, ")"
    )),
    values = cols
  ) +
  xlab("Time (minutes)") +
  ylab("Comfort")
p

My version of the plot, with an improved legend

Sensible people should stop here, and write the additional detail in the figure caption. There is currently no sensible way of writing annotations outside of the plot area (annotate only works inside panels). The following hack was devised by Baptiste Auguie, read this forum thread for other variations.

library(gridExtra)

caption <- tableGrob(
  c(
    'paste("Mezzanine temp is 2", degree, C, " warmer than outdoor temp")',
    'paste("Platform temp is 4", degree, C, " warmer than outdoor temp")',
    'paste("Humidity is always 50%")',
    'paste("Diffused solar energy is always 100", W/m^2)'
  ),
  parse = TRUE,
  theme = theme.list(
    gpar.corefill = gpar(fill = NA, col = NA),
    core.just = "center"
  )
)
grid.arrange(p, sub = caption)

The additional information is included in the plot's subcaption

My New Year’s Resolution: Be lazier

3rd January, 2011

I wrote this on New Year’s Eve but given the contents it seemed more appropriate to post a few days late. In many walks of life, laziness is a terrible vice but for programmers and statisticians it can be an unsung virtue. You don’t believe me? Then read on to hear my ideas for virtuous laziness.

Idea 1: Write less code
Writing less code means less code to maintain which means even less work – a virtuous circle of laziness. Of course, you can’t just stop doing your job, but you can use existing packages instead of reinventing wheels and you can write code that is reusable (write functions instead of scripts and packages instead of loose functions).

Idea 2: … but code instead of clicking
There are loads of little tasks that computers (and other machines) do better than humans. You just need to tell them to do it! If you find yourself typing the same thing over and over again, write a function, script or macro so you don’t need to bother next time. Jobs that complete themselves automatically are the best kinds of jobs!
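As a small illustration (the file name and import options here are made up), a repetitive point-and-click CSV import can be captured once as a function:

```r
# Capture the fiddly import settings once...
read_survey <- function(file) {
  read.csv(file, na.strings = c("", "NA"), stringsAsFactors = FALSE)
}

# ...then every future import is a one-liner:
# survey_2010 <- read_survey("survey_2010.csv")
```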

Idea 3: If you can’t automate, then simplify
Of course, many tasks are tricky to automate. In that case, can you simplify your problem? If you’re doing bleeding edge research, can you make your task straightforward enough that other researchers can do it? If you’re doing something more routine, can you simplify it enough that the intern could take over? If you’re the intern, can you simplify your job so that a kid could do it? Now we’re nearly at the point of automation.

Idea 4: Give the gift of laziness
I know that a lot of you readers are coders and will probably have heard something like this idea before. Out there in the wider world, I don’t think the concept of automating tasks is as common. This doesn’t always mean programming things either. Over the coming days, teach your grandma about keyboard shortcuts or take the time to introduce a less technical colleague to the idea of network shortcuts or the button that minimizes all your windows. (It’s amazing how few people seem to know about that!)

Two amigos: follow up

13th October, 2010

Brett and Jiro have announced the results of the competition to make a Bob-free image. There were five entries, two prizes and … I didn’t win either. Still, it was a fun challenge and a useful learning experience so I’m consoling myself with cliches like “it’s not the winning that’s important but the taking part”. I’m certainly not using MATLAB to construct a voodoo-doll image of Brett and Jiro.
jiro and brett with pins in their heads

%% Read in image and display
theAmigos = imread('the amigos better blur.jpg');
imshow(theAmigos)   % display the image so the pins can be drawn on top

%% Add lines
pinColour = [.5 .5 .5];

xcoords = { ...
   [130 180] ...
   [132 182] ...
   [136 184] ...
   [140 186] ...
   [148 190] ...
   [165 195] ...
   [182 200] ...
   [200 205] ...
   [215 214] ...
   [230 223] ...
   [243 228] ...
   [255 234] ...
   [270 237] ...
   [283 244] ...
   [295 247] ...
   [300 246] ...
   [303 248] ...
   [465 515] ...
   [465 516] ...
   [466 517] ...
   [469 519] ...
   [475 522] ...
   [487 526] ...
   [505 534] ...
   [528 540] ...
   [548 546] ...
   [567 551] ...
   [588 554] ...
   [606 557] ...
   [621 560] ...
   [628 563] ...
   [633 566] ...
   [633 567] ...
   [634 568]};

ycoords = { ...
   [295 300] ...
   [275 290] ...
   [260 280] ...
   [240 275] ...
   [225 274] ...
   [220 274] ...
   [215 273] ...
   [212 273] ...
   [212 273] ...
   [214 273] ...
   [217 274] ...
   [221 274] ...
   [230 275] ...
   [240 277] ...
   [250 280] ...
   [275 285] ...
   [290 292] ...
   [320 322] ...
   [305 315] ...
   [288 310] ...
   [272 304] ...
   [253 300] ...
   [240 296] ...
   [233 292] ...
   [230 291] ...
   [230 291] ...
   [232 292] ...
   [236 294] ...
   [246 297] ...
   [262 300] ...
   [280 302] ...
   [296 307] ...
   [309 312] ...
   [320 320]};

xstart = cellfun(@(x) x(1), xcoords);
ystart = cellfun(@(x) x(1), ycoords);

hold on
cellfun(@(x, y) line(x, y, 'Color', pinColour), xcoords, ycoords);
arrayfun(@(x, y) plot(x, y, '.', 'Color', pinColour), xstart, ystart);
hold off

%% Remove the extra bits created by plot calls and write to file
set(gca, 'Visible', 'off')
set(gca, 'Position', [0 0 1 1])

print(gcf, '-djpeg', 'the amigos voodoo.jpg')

The truth is stranger than fiction

27th September, 2010

It turns out that when you search Google for the phrase 4D pie chart, this blog is only the second link.  The number one spot is taken by a tutorial on how to create animated pie charts with Maxon Computer’s Cinema 4D package.  I can’t detect any hint of sarcasm in the instructions.  Hang your head in shame, Maxon Computer employee.

