
Fearsome Engines Part 3: Which one should you use?

13th October, 2013

There are lots of R engines emerging! I’ve interviewed members of each of the teams involved in these projects. In part 1 of this series, we covered the motivation of each project. Part 2 looked at the technical achievements and new features. This part tries to determine which projects are suitable for which users.

Compatibility

CXXR and pqR are forks of GNU R, and have retained full package compatibility. This means that they should work out of the box on your existing R code (though note that both are Linux-only and require you to build from source).

The other four engines are complete rebuilds, and consequently have had to recreate compatibility.

TERR currently has over 1200 completely compatible packages, and many more partially compatible ones.

Renjin also has, I think, over one thousand compatible packages. Helpfully, there’s a list of which CRAN packages work, and which don’t. (I haven’t bothered counting the blue dots, but it looks like almost half the packages are compatible.)

Riposte and FastR are at an earlier stage of development: FastR has no package support at all, and Riposte is just getting around to it.

Justin Talbot:

A couple of months back I started a big push to transition Riposte from an academic project to a full-featured drop-in replacement for R. Right now, Riposte can load the base and recommended packages with just a few errors, which is a substantial improvement over a couple months back when it couldn’t load even one.

Interestingly, one of the big headaches reported by several engine developers is that some packages make assumptions about the internals of R. For example, they assume that numeric vectors are pointers to double arrays, which makes it harder for other engines to extend the functionality. It seems that some work is needed on the R Language Definition to clarify exactly what a package should and should not be allowed to assume.

Licensing

pqR, CXXR, Renjin and FastR are all licensed under the GPL. Riposte is also open source, under the more permissive 2-clause BSD licence.

TERR, by contrast, is a closed-source product. It comes in both free and paid-for versions.

Ecosystem

Having lots of different engines is mostly great, but fragmentation is a big worry. Web development suffered for a long time (and still does, a little bit) from having to write different code for different browsers. SQL is subtly different for different databases, with vendor-specific extensions rampant.

All the engine developers have stated their intention to work towards (or retain) full compatibility with GNU R, and use that as the reference implementation. Certainly fragmentation is a problem that no-one wants.

Good intentions are all very well, but I was curious to know how much interaction there has been between the different teams, and with R-Core.

Between teams:

Tomas Kalibera previously worked at the University of Kent, where Andrew Runnalls is based. Surprisingly, they didn’t have much contact, as they were working in different areas at the time.

Jan Vitek was on Justin Talbot’s dissertation committee, so there has been some contact between the FastR and Riposte teams.

Justin has also spoken with Alex Bertram of Renjin.

Overall, there hasn’t been that much contact. I think I’d be less worried about fragmentation if the teams talked with each other more. (TERR is an exception in this; as a commercial product, a certain level of secrecy is required.)

A few R-Core names crop up in relation to several projects:

Doug Bates has been involved with CXXR.

Duncan Murdoch helped get some of pqR’s bug fixes merged into GNU R, and Radford Neal has had some contact with Luke Tierney and Robert Gentleman.

Luke Tierney has helped with the bytecode compilation in Renjin.

John Chambers, Ross Ihaka and Simon Urbanek have provided feedback on the TERR project.

Luke Tierney, Duncan Temple Lang and Robert Gentleman have provided feedback to FastR.

Luke Tierney has helped on Riposte.

So at least some members of R-Core are aware of these projects, and there is some collaboration going on. Personally, I’d quite like to see a more formal arrangement, with a language committee trying to refine the R Language Definition to aid compatibility between projects. This is probably just a legacy of my time as a civil servant.

The FastR team organized a number of workshops that featured members of the R-Core team (one on virtual execution environments for scientific computing and one on big data). After the SPLASH conference in October 2013, key developers from the Renjin and TERR engines will meet up with the FastR team, along with some R-Core members, at an NSF workshop on scalable data analytics organized by Vitek.

The future

As a commercial project, TERR lives or dies by its ability to be sold (it does have a real customer base already, predominantly in the oil & gas, life sciences and consumer goods sectors).

Renjin is supported by BeDataDriven’s consultancy work, so it also has ongoing financial support.

The other four projects are all academic, so their long term support is trickier.

Justin works for Tableau Software, and they let him develop Riposte in his 20% time, but additional developers would help the project.

Justin Talbot:

Right now the team is just me. Zach has moved on to his dissertation work, the very cool Terra project (http://terralang.org). I’d be more than happy to expand the team if anyone is interested in helping out. Right now Riposte is at a place where there is a lot of easily factorable work in supporting popular packages or R’s internal functions that people could pick up if they wanted to. As I said at the beginning, Riposte’s overriding goal is to figure out how to execute dynamically-typed vector code like R at near the performance of highly optimized C. I think this sets it apart from a number of the other R projects like CXXR, Renjin, and pqR, where better performance is nice, but not the driving design goal. If Riposte’s mission interests anyone, please get in contact!

CXXR and pqR are also solo projects and would benefit from developers.

Radford Neal:

I’m treating pqR as a fork, without any expectation that it will necessarily be merged with the version maintained by the R Core Team.

One necessary thread of work on pqR consists of fixing bugs, getting it to work in more environments (Windows, Mac, with GUIs), and adding features from later R Core releases. I’d like to have it compatible soon with R-2.15.1, and later with at least R-2.15.3.

There are still many areas in which performance can be improved without major design changes.

At some point I’d like to get back to doing statistics, so I hope that other people will also get involved with pqR.

FastR has a long term goal of being maintained by statisticians and not computer scientists!

Jan Vitek:

The hope, in an ideal world, is eventually that we could turn FastR over to R-Core.

As well as developers, of course, the biggest thing that these projects need is a userbase. Feedback and bug reporting is hugely important, so a great way for you to contribute is simply to try these projects out. Failing that, telling other R users that these projects exist is an excellent start. Get tweeting!

Fearsome Engines Part 2: Innovations and new features

13th October, 2013

There are lots of R engines emerging! I’ve interviewed members of each of the teams involved in these projects. In part 1 of this series, we covered the motivation of each project. This part looks at the technical achievements and new features.

Many of the innovations are performance improvements, reflecting the primary goal of several of the teams. pqR (the only engine that, like R, is written in C) gets a lot of benefit from simple fixes to R’s C code.

Radford Neal:

A big part of the speed-up in pqR comes just from local rewrites of C code, or relatively limited redesigns of how operations are done.

In general, however, some bigger changes have been required to make R go faster.

Garbage collection and reference counting

One performance bottleneck seems to be R’s garbage collector (the thing that frees up memory when you remove variables), with almost all the projects using a different collection strategy to try and make memory management more efficient.

Renjin and FastR get a lot of benefit for free by using the Java Virtual Machine (JVM):

Alex Bertram:

The JVM has 20 years and millions of man hours invested in the garbage collector. As revolutionary as R is, you just don’t have a group of people that are focused on garbage collection. So out of the box we see that if you have a memory-intensive benchmark then Renjin sails through it [compared to GNU R].

One of the things that we get for free is parallel garbage collection. That’s a nice extra.

In theory, if you copy a variable then R will create a completely new copy of it. For example, if you do y <- x <- 1:10, then x and y both store completely separate copies of 1:10. In practice, this is really wasteful of memory, so y will just point to the same memory location as x until you modify one of the variables to make them different. This means that you need to keep a count of how many variables point to a particular location in memory; a process known as reference counting.
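
You can watch this copy-on-modify behaviour in GNU R with the built-in tracemem() function, which prints a message whenever the marked object is duplicated (it needs a build with memory profiling enabled, which the standard CRAN binaries usually have):

x <- 1:10
y <- x         # no copy yet: y points at the same memory as x
tracemem(y)    # report whenever this object is duplicated
y[1] <- 42L    # modifying y finally triggers the real copy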

GNU R uses a simple system where variables are referenced zero, one or two-or-more times. Both pqR and TERR have tried to improve on this. pqR stores the number of references in a three-bit field, so it can count from zero to seven (seven is really “seven-or-more”). TERR goes even further, storing the count as a 16-bit integer, allowing counts up to 65535. The more accurate reference count allows some memory to be reclaimed when a reference count decreases to zero, which is cheaper than reclaiming it via the garbage collector. (GNU R and pqR always use the garbage collector for reclaiming memory.)

While the pros and cons of different methods of reference counting and garbage collection can get quite technical, the good news is that it seems to be an area where substantial performance improvement is possible. Radford Neal has written in depth about the pqR implementation, and Michael Sannella’s talk from useR2013 is available on slideshare.

Smart execution

GNU R uses what is known as “lazy evaluation” on variables. That is, when you pass variables into a function they aren’t copied or evaluated until absolutely necessary. Several of the engines have taken this concept further, with smarter ways of evaluating (or not evaluating) code in order to improve performance.

Riposte uses “fusion” operations on vectors to reduce the creation of intermediate variables.

Justin Talbot:

While R uses C code for vector operations, it only does one vector operation at a time. This leads to lots of memory traffic as R must read in each vector operand and write out each vector result. Riposte tackles this problem by performing “fusion”, an optimization where multiple vector operations are performed simultaneously in the machine’s registers. This eliminates a lot of memory traffic and can speed up long vector code. In some of our benchmarks I’ve seen 10-50x performance improvements for long vectors (100K-100s of millions of elements).

As an example, if you wanted to find which elements of a vector corresponded to males over 40, you could write something like age > 40 & gender == "male". GNU R would create several intermediate vectors for age > 40 and gender == "male" before calculating the result. An alternative that uses less memory is to loop over the vector once (in C++, where loops are fast), calculating the result of the whole operation for each element, as sketched below.
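
Here’s a small sketch of the difference. The fused version is written as an R loop purely for illustration; a real engine would run that single pass in compiled code:

age    <- c(35, 52, 41, 68)
gender <- c("male", "male", "female", "male")

# GNU R's approach: allocates two intermediate logical vectors
unfused <- age > 40 & gender == "male"

# Fused evaluation: one pass over the data, no intermediates
fused <- vapply(seq_along(age),
                function(i) age[i] > 40 && gender[i] == "male",
                logical(1))

identical(unfused, fused)  # TRUE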

Of course, R’s flexibility causes problems.

Justin Talbot:

R allows you to replace the default operators within almost any scope, by e.g.:

`+` <- function(x,y) x*y

Because of this, if you have a simple for loop:


j <- 0
for(i in 1:10000) {
j <- j+1
}

an interpreter can’t simply do an addition. Instead, it has to check each time through the loop that `+` is still really plus and not something else. If j were some long vector, this wouldn’t matter, the time to do the check would be dominated by the cost of actually doing the addition. But if j is short, as it is here, then the time to check dominates.

Riposte solves this problem by assuming that a small set of very common operators can only be replaced in very limited circumstances. Basically, if you write:

j+1

and j is a basic type (not an S3/S4 class) Riposte assumes that + must be normal addition. If you really want + to use your redefined version you can either (1) make j an object or (2) call it using names, e.g.: `+`(x=j, y=1).

Most of the other engines have similar concepts to Riposte’s fusion. Renjin calls it a “vector pipeline” and FastR calls it a “view”. All refer to executing multiple steps as a group.

Jan Vitek:

We have two implementations of a vector. One is the usual one, and one is an interface. This is a lazy proxy to the computation. So if you do binary plus on two vectors you could allocate an array and do the plus on every element of the inputs as GNU R would do it, but in our implementation you could also create a view. This is an object that just remembers that there are these two vectors that it needs to add. It only starts adding the numbers when asked for them.

The last sentence is particularly important. Somewhat non-intuitively, executing code straight away doesn’t always result in the best performance – sometimes it can be faster to wait. This “deferred execution” becomes very important when trying to parallelise code, as we’ll see in a moment. One surprising challenge for deferred execution is respecting the order of error messages and warnings produced by GNU R.
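
Base R’s delayedAssign() gives a feel for this deferred style; think of it as a toy stand-in for a view, not as how FastR actually implements them:

x <- runif(5)
y <- runif(5)
delayedAssign("z", { message("adding now"); x + y })  # nothing computed yet
z[1]  # first use forces the addition: prints "adding now", then returns a sum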

Parallelism

Much of GNU R is single threaded by default, so with four cores now standard on desktop machines, exploiting multicore capabilities is an obvious target for performance boosting. In fairness, since R 2.14.0 the parallel package has allowed multicore capabilities with a bit of code rewriting, and there are many packages for various types of parallelism. The holy grail of parallelism, however, is for it to happen automatically (“implicit parallelism”) without code rewrites.
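
For comparison, explicit parallelism with the parallel package looks something like this; mclapply() forks worker processes on Unix (on Windows you’d use makeCluster() and parLapply() instead):

library(parallel)

slow_square <- function(x) { Sys.sleep(1); x^2 }

# The programmer opts in and picks the core count; nothing is automatic
system.time(res <- mclapply(1:8, slow_square, mc.cores = 4))  # ~2s, not ~8s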

The FastR project have been pushing parallel computing in several forms.

Tomas Kalibera:

We aim to push on parallelism a lot. There are different parallelisms that you may look at: multi-threading, GPUs and possibly distributed settings.

One of the good things about an implementation from scratch is that you can build a mostly thread-safe interpreter. At least we know the points where we aren’t thread-safe. That means that we can evaluate the views in parallel. We know that they are simple operations on vectors. They don’t have side-effects. We can just put them in a thread-pool and almost magically you get parallelism.

In the long run the goal is to integrate parallelisation into the compiler. To use the compiler to analyse the code and understand where parallelism makes sense and to automatically synthesise parallelism in the programs. It should work!

Renjin’s vector pipeline provides implicit parallelism.

Alex Bertram:

With our vector pipeline, we have support for implicit parallelisation. Add the sums of two large vectors, and it will recognise that the calculation has two independent sums and calculate those on different threads. You don’t have to deal with that; it just parallelises it automatically.

Renjin will try and defer work as long as possible so that it has an opportunity to parallelise more things. You can get quite far before you have to do any work, and then Renjin will say “I’ve got all these different calculations that I have to do, a lot of them are independent. Let me distribute them across the available cores”.

pqR uses a related concept called helper threads. While individual calculations (“tasks” in pqR terminology) are single threaded, several calculations can be done in parallel, one on each thread.

Riposte has several parallelisation tricks up its sleeve. As well as vector fusion and implicit multicore behaviour, it also uses SSE and AVX processor-level instructions. (AFAIK, these were originally designed for parallel processing in media streaming, so this is quite a novel use.) Surprisingly, fusion is much more important than multicore parallelism.

Justin Talbot:

For big vector operations on which Riposte performs fusion, it will attempt to leverage the SSE or AVX instructions to get a 2-4x performance improvement on a single core. It will also distribute the work across multiple threads. This leads to nearly linear performance increases for large enough vectors. Note the orders of magnitude though: fusion itself can lead to 10-50x performance improvement. Distributing the work across 4-6 cores only leads to at most a 4-6x improvement. So there’s more to be gained by doing fusion well.

Bytecode compilation

Interpreted languages are, in general, slower than compiled languages. This means that there is scope for improving R’s performance by compiling the code before it is executed. Recent versions of GNU R include the compiler package, which converts functions to bytecode, which can then be run more quickly than pure interpreted code. Riposte and Renjin both try and improve on this.
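
You can try GNU R’s compiler yourself with compiler::cmpfun(). (On much newer R releases the JIT compiles functions automatically, so the gap was larger in 2013-era R.)

library(compiler)

count_up <- function(n) {
  total <- 0
  for (i in seq_len(n)) total <- total + i
  total
}

count_up_bc <- cmpfun(count_up)  # bytecode-compiled version

system.time(count_up(1e7))       # AST interpreter
system.time(count_up_bc(1e7))    # bytecode: typically noticeably faster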

Justin Talbot:

When I started working on R it only used the older AST walking interpreter originally developed in the early 1990s. Interpreter technology has advanced considerably since then. Riposte uses a much more modern register-based bytecode interpreter. This made it about 5x faster than R on scalar code (e.g. your typical for loop). Luke Tierney’s new stack-based bytecode interpreter speeds up R somewhat, but I think Riposte’s interpreter is still 2-3x faster in general.

Compiling R code is, in general, a very hard problem, since R code is very flexible. If you use promises, call the eval function, redefine operators, or mess about with environments, then optimising code is almost impossible.

A similar situation occurs with JavaScript, and the browser makers (who have considerably more resources than the R community) have put in a lot of effort trying to make JavaScript run faster. There, the solution seems to be “compile the bits that you can, optimise other bits where possible, and just run the hard stuff”.

Renjin uses a similar approach for R code.

Alex Bertram:

We have different interpretation modes. The baseline is this Abstract Syntax Tree (AST) interpreter that’s modelled almost exactly like GNU R. If you want to figure out how an eval statement works or system calls, we just went in and looked at how it was implemented in GNU R. The aim of that is that we have at least one AST interpreter that is capable of handling everything.

If we see that you’re dealing with large data or there’s a function that is getting called a lot, or you’ve hit a for loop of one to one million, then it breaks out of that baseline interpreter and tries to use more advanced optimisations. That will only work for a subset of the R language, so the worst case is that we get performance equal to GNU R.

If you’ve got a for loop and you’re calling eval, then Renjin will throw up its hands and say ‘alright, if that’s what you want to do then you’re not going to get any performance speedup’. But if it sees that in the for loop, all you’re doing are some computations, then it will break out [of the AST interpreter] and compile down to machine code.

Data that isn’t in memory

By default, all R variables are stored in RAM. This contrasts with, for example, SAS, where variables are stored on disk. There is a tradeoff in this – RAM can be accessed more quickly than disk, but it limits the size of the data that you can store. Some R packages allow disk-based variables, most notably the ff and bigmemory packages. The MonetDB.R package allows variables to be stored in a MonetDB database.
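
As a taste of the disk-backed approach, here’s a minimal bigmemory sketch (the file names are just illustrative):

library(bigmemory)

# A file-backed matrix: the data lives on disk and is paged in on demand
x <- big.matrix(nrow = 1e6, ncol = 3, type = "double",
                backingfile = "big.bin", descriptorfile = "big.desc")
x[1, ] <- c(1, 2, 3)  # indexed much like an ordinary in-RAM matrix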

In an ideal world, you should be allowed to store your variables anywhere: in RAM, on disk, or in a database, and R would be able to work with them in the same way. GNU R isn’t well designed for this. As an example, numeric vectors, in the underlying C code, are always assumed to be pointers to double arrays. To easily allow different data types, a level of abstraction is needed.

TERR leverages the object-oriented nature of C++ to achieve this. In TERR, each data type is an abstract C++ class that can have one or more representations.

I don’t know any details of Tibco’s implementation, but it seems that from an abstract class of NumericVector, you could have concrete classes of NumericVectorInRam and NumericVectorOnDisk to account for different data sources (not to mention specialisation for vectors with or without names and other attributes).

Data from disk and database sources isn’t yet supported in TERR, but it is planned for the future.

Both FastR and Renjin have also made some progress towards non-RAM variables.

Alex Bertram:

We have been able to introduce a layer of abstraction between the algorithms that operate on data and where that data is stored. In GNU R there is an expectation that all the data is in arrays, and that’s very difficult to overcome.

In Renjin, it’s very basic but we have an interface, so if you want the sum of something, say a vector or other object, we hand some algorithm a pointer to an interface that says ‘if you want the tenth element, then call this method; if you want the length, then call that, etc.’. So it could be in memory, backed by an array, but it could also be on disk. It could be in a database or in a buffer.

That’s the basis of everything else: there’s a layer of abstraction between your algorithms and your storage.
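
A toy R analogue of such an interface might look like the following; Renjin’s real interface is Java, so this is only meant to illustrate the idea:

# A storage-agnostic vector: just a pair of accessor functions
virtual_vector <- function(get, len) list(get = get, len = len)

sum_vv <- function(v) {
  total <- 0
  for (i in seq_len(v$len())) total <- total + v$get(i)
  total
}

arr <- runif(10)
in_ram <- virtual_vector(function(i) arr[i], function() length(arr))
sum_vv(in_ram)  # a disk- or database-backed vector would plug in the same way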

Code specialisation

CXXR and FastR both make some performance gains by providing specialised routines for particular cases.

Tomas Kalibera:

Imagine that you are indexing a vector. Every user of R knows that there are very rich semantics. You can have logical or numeric indices, you can have NAs, you can have attributes on the base or on the index. There are many different semantics. One possibility is to implement the code using a single subroutine that will cover all the general cases, but this is slow.

We implement many different selection methods. One is still a general one that can do everything, but we also have specialised ones that make some assumptions. [For example], in one we assume that a base numeric vector is a double vector and an index is a scalar integer or a scalar double within the bounds of the vector. Of course you have to have some guards – some checks that these assumptions are valid, but once you check these guards then the code is much simpler and the just-in-time Java compiler is then able to optimise the code much better because it is so much smaller than the general case. We get big speedups from this.
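
The guard idea translates into something like this sketch (my invention, not FastR’s actual code):

# A guarded fast path for x[i]: check the assumptions, then take a simple route
index_fast <- function(x, i) {
  if (is.double(x) && is.null(attributes(x)) &&
      length(i) == 1L && !is.na(i) && i >= 1 && i <= length(x)) {
    x[[i]]  # all guards passed: the simple, easily optimised path
  } else {
    x[i]    # a guard failed: fall back to the fully general semantics
  }
}

index_fast(c(1.5, 2.5, 3.5), 2)   # fast path
index_fast(c(a = 1, b = 2), "b")  # general path: named lookup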

Other features

TERR provides a C++ API to allow easy integration with C++ code. (Using Rcpp, it isn’t hard, but alternatives are always welcome.)

FastR uses some Oracle technology to improve performance.

Jan Vitek:

FastR 0.1 is a self-modifying interpreter. The interpreter is formed by a tree of executable nodes, which initially reflects the syntax tree of the program, but as the program executes, the tree rewrites itself to more optimised versions based on properties of the observed execution. Most optimisations come from restricting to specialised types of arguments (for example nodes for vector indexing or arithmetic), or specialising the contents of the arguments (for example that a vector index is within bounds). The tree can then rewrite its nodes to general versions if the applied specialisations cease to hold.

The new version of FastR, 0.2, includes some novel Oracle technologies. (Oracle is one of the sponsors of the project.) Truffle and Graal allow code to self-modify as it runs, and to generate native code, providing on-the-fly optimisation. FastR has also experimented with Oracle’s Delite which autogenerates GPU code.

CXXR has been conspicuously absent from this so far. Its single user-level new feature is provenance tracking, based on the S AUDIT feature.

Andrew Runnalls:

R itself stores (in the .Rhistory file) a record of all R commands that have been issued, so to that extent you can “recreate your current workspace”. The S AUDIT feature, however, enabled you to identify the specific sequence of commands that contributed to the current value of a particular S object: it was this selectivity that I particularly valued.

Rather than adding new features, a substantial amount of development effort has gone into refactoring R to make it more maintainable and better documented. Are these features? Hell yes! Rather than being user-level features, they are developer-level features. In this respect, I think CXXR might be useful for the developers of other engines. Rather than trying to replicate features from GNU R’s source code, it might be easier to replicate them from CXXR.

This post is far from exhaustive on the new features in the engines. But as you can tell from what I’ve included, there really are a ton of technical innovations that have been made, with more on the way.

I put it to Radford Neal that it was difficult to tell which ones were the important ones.

Radford Neal:

What is most important is very hard to say, since it depends so much on the user’s program and environment. Helper threads are of no use on a single-core system! And speeding up matrix multiplication doesn’t help programs that don’t use it. I’d encourage people who’ve tried pqR to let me know what programs have and have not been sped up a lot – so far I haven’t gotten much of that. This is most helpful, of course, if you can send me your program and data so that I can try it myself, and maybe change pqR to make your program faster.

Let me know in the comments which features seem most exciting to you, or if there is anything that you want to see in R that isn’t possible.

In the next part, I’ll have a go at figuring out which engines you might want to use for which purpose.

Fearsome Engines, Part 1

7th September, 2013

Back in June I discovered pqR, Radford Neal’s fork of R designed to improve performance. Then in July, I heard about Tibco’s TERR, a C++ rewrite of the R engine suitable for the enterprise. At this point it dawned on me that R might end up like SQL, with many different implementations of a common language suitable for different purposes.

As it turned out, the future is nearer than I thought. As well as pqR and TERR, there are four other projects: Renjin, a Java-based rewrite that makes it easy to integrate with Java software and has some performance benefits; FastR, another Java-based engine focused on performance; Riposte, a C++ rewrite that also focuses on performance; and CXXR, a set of C++ modifications to GNU R that focus on maintainability and extensibility.

I think that having a choice of R engine is a good thing. The development model of one implementation with a team acting as gatekeepers as to what goes into the codebase is, well, a bit communist. Having alternatives should introduce competition and (go capitalism!) bring improvements at a greater rate.

The fact that we now have a choice of seven different engines for running R code is amazing news, but it takes a traditional R problem to a new level. Rather than just worrying about which implementation of an algorithm to use in which package, you now have to worry about which version of R to use.

In order to try and figure out which projects might be suitable for which purposes, and what stage each one is at, I spoke to members of each project. In alphabetical order of project, they were:

Andrew Runnalls from the University of Kent, talking about CXXR.

Jan Vitek and Tomas Kalibera from Purdue University, talking about FastR.

Radford Neal of the University of Toronto, talking about pqR.

Alex Bertram from BeDataDriven, talking about Renjin.

Justin Talbot from Tableau Software, talking about Riposte.

Lou Bajuk and Michael Sannella from Tibco, talking about TERR.

My interview with Andrew Runnalls was via phone, so his responses are paraphrased or imputed from my notes; likewise my recording of the conversation with Lou Bajuk and Michael Sannella failed, so their responses are also taken from notes. Other responses are taken from emails and Skype recordings, and as such are accurate.

I started by asking about the motivation for each project.

Before pqR, Radford had explored some possibilities for speed improvement in R.

Radford Neal:

Speed issues were what first prompted me to actually look at and modify the R interpreter. This came about when I happened to notice that (in R-2.11.1) curly brackets take less time than parentheses, and a*a is faster than a^2 when a is a long vector. This indicated to me that there must be considerable scope for improving R’s implementation. Previously, I hadn’t thought much about this, just assuming that R was close to some local optimum, so that large speed gains could be achieved only by a major rewrite.

I’ve also commented on various design flaws in R, however. In the longer term, I’m interested in ways of fixing these, while retaining backwards compatibility.
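
Radford’s timing observation is easy to check for yourself. A minimal comparison (bearing in mind that the gap may have narrowed since R-2.11.1):

a <- runif(1e7)
system.time(for (k in 1:20) b <- a^2)    # power operator
system.time(for (k in 1:20) b <- a * a)  # plain multiplication: was faster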

Riposte started life as a PhD project.

Justin Talbot:

Riposte was started during my Ph.D. at Stanford. I started off creating tools for visualizing and manipulating statistical models. Most of these tools were built using R as the back end to perform data manipulation and to create statistical models. I found that I was unable to get the interactive performance I wanted out of R for my visualization tools. This led me to start exploring R’s performance. Since there were other individuals in my research group working on other programming languages, it was a semi-natural transition to start working on improving R’s performance.

The main goal of Riposte is to see if it is possible to execute a dynamically-typed vector language at near the speed of well-written optimized C.

CXXR began as a simple quest for a feature.

Andrew Runnalls:

I started CXXR in 2007 when I was researching flight trials data. One of the features I missed when moving from S-Plus was an audit feature that searches your command history to find the code that created a particular variable. This hadn’t been ported to R, so I wanted to see if I could alter R’s internals to recreate the feature. However, being a “dyed-in-the-wool” OO programmer, when I started looking at the interpreter’s C code, its structure was so foreign that I felt ‘I’d rather not start from here!’ Thus it was that the project metamorphosed into a more general enterprise to refactor the interpreter into C++, with the provenance-tracking objective then becoming secondary.

The Renjin project was born of frustration, and a ridiculous amount of optimism (or maybe naivety).

Alex Bertram:

We were with a mobile operator. We had just designed a great model to predict customer churn, and we were trying to get R to run on their servers. They had a weird version of Unix, we couldn’t get [GNU] R to build. We couldn’t get the libraries to build. We spent such a lot of time trying to get the libraries to talk to each other. Then it couldn’t handle the load from the sales team. There’s got to be a better way, and I thought ‘man, how hard can it be?’

TERR has been waiting to happen for a long time.

Lou Bajuk:

I joined Mathsoft [later Insightful Software, the previous owners of S-Plus] in 1996. Even back then, we wanted to rebuild the S-Plus engine to improve it, but Insightful was too small and we didn’t have the manpower.

Tibco’s main priority is selling enterprise software, including Spotfire, and once Tibco bought Insightful, we were better positioned to embrace R. It made sense to integrate the software so that open source R could be used as a backend for Spotfire, and then to implement TERR as an enterprise-grade platform for the R language.

My favourite reason of all for starting a project was the one given by Jan Vitek.

Jan Vitek:

My wife is a statistician, and she was complaining about something [with GNU R] and I claimed that we computer scientists could do better. “Show me”, she said.

In the next part, I tell you about the technical achievements of each project.

How R will turn into SQL

16th July, 2013

Up until very recently the only way of running R code was through the standard R distribution. Of course you could use another IDE, but somewhere underneath it all you would be running the same, standard R engine from the R-core team.

This is no longer your only option. A couple of weeks ago Radford Neal released pqR, a research project that reimplements pieces of the R engine to make it “pretty quick”. Much of pqR is likely to be folded back into the main R engine, but that isn’t the only new player in town.

A team at Tibco, including software architect Michael Sannella, have rewritten the R engine from the ground up for TERR, a commercial R distribution. This leads to some very interesting possibilities. Maybe in a few years we could see many more R engines appearing. Oracle have recently invested heavily in R, and it’s certainly imaginable that they could create a high performance R that is tightly coupled to their database products. Google are also big R investors, and I could easily see them creating an R engine that has parallelism built in.

After that, perhaps even Microsoft might notice R and fix the fact that its integration with .NET is rubbish. IronR, anyone?

This situation isn’t as far-fetched as you may think: there is a very obvious precedent in the data analysis world. There are dozens of different database vendors that use a common interface language: SQL. In the world of databases, the engine is separate from the programming language.

This is a good thing – multiple vendors provide competition, differentiation and innovation. You have a full spectrum of products from big corporate databases (Oracle again) down to miniature data stores like sqlite.

The fact that SQL has led the way means that some of the potential traps are visible before we embark down this road. My biggest bugbear with SQL is that it isn’t quite universal: different vendors have their own non-standard extensions, which means that not all SQL code is portable. A situation like HTML or C++ or Fortran, where a standards committee defines an official version of the language, would be preferable. Whether R-core or another body would set this standard is a matter to be decided. (I suspect that R-core would not welcome the additional administrative burden, and commercial vendors may want more control in defining the spec, so a separate R-standards group is more likely.)

These are interesting times for R, and I look forward to seeing how the separation of language and engine progresses.

Bootnotes:

  1. A sloppy sentence in a previous version of this post made it sound like Michael Sannella ran Tibco. He’s actually a software architect.
  2. The future is happening faster than we think. Renjin and Riposte are two other R engines.