What Happened in 2017?

Oops, 2017 went by without a blog post! What happened?

Not a lot…

  • The interactive Brat shell works again!
  • More unboxed operations on numbers
  • Single-quoted backslashes finally parse correctly
  • More tracking of types during compilation
  • Symbols (:blah) are now immutable, again
  • Create prototype objects on demand
  • This works: x.y = {}()
  • Fix nested comments and semicolons in comments
  • Updated LuaJIT

Overall, Brat is another year older, and a little bit faster!

Inlining Branches and Sharing Metatables

In this post, we continue the quest to make Brat faster (which essentially equates to making LuaJIT happier).

Inline Branches

Previously we removed inner function creation by lifting functions out into the global scope and faking closures when needed.

One place where a lot of functions show up is in conditionals. true?/false?/null? are all just functions that take a condition and two branches. In the Tak benchmark, you can see the branches are wrapped in functions:

tak = { x, y, z |
  true? y < x
    { tak tak(x - 1, y, z), tak(y - 1, z, x), tak(z - 1, x, y) }
    { z }
}

This is standard Brat style to delay execution of the branches.

Before the lifting of functions, these branch functions would be created every time the tak function was called. That’s pretty bad! Now the functions will be lifted and fake closures used instead.

However, what’s better than lifting a function? Not calling a function at all! Since the conditionals are used all the time and are core functions in the language, it makes sense to optimize them to just be regular Lua if statements.

When can we do this? Any time the branch is a liftable function. That’s convenient, since we already have the logic to figure that out.

To inline the branches, they are treated almost exactly like functions. A new scope is created, and what would have been the body of the function is output in a do...end block. Instead of a return value, the result is passed back in a variable. The condition and the branches are then put into a regular if statement, with a guard in case someone decides to override true?/false?/null? (which is possible, but unlikely). If that happens, the original, non-inlined code is used instead.
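As a rough sketch, the inlined form has roughly the following shape in Lua. The names here (builtin_true_q, current_true_q, pick) are made up for the example and are not Brat's actual generated code:

-- Stand-in for the built-in true?: call one of two branch functions.
local builtin_true_q = function(cond, a, b)
  if cond then return a() else return b() end
end

local current_true_q = builtin_true_q          -- user code could rebind this

local function pick(x, y, z)
  local result
  if current_true_q == builtin_true_q then
    -- Guard passed: the branches are inlined as a plain Lua if statement.
    if y < x then
      do
        result = x - 1   -- body of what was the first branch function
      end
    else
      do
        result = z       -- body of what was the second branch function
      end
    end
  else
    -- true? was overridden, so fall back to the original, non-inlined call.
    result = current_true_q(y < x,
                            function() return x - 1 end,
                            function() return z end)
  end
  return result
end

print(pick(3, 2, 1))  --> 2

In the common case the guard is a single comparison and LuaJIT sees only an ordinary if statement, with no function creation or calls for the branches.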

What are the results?

Tak benchmark before inline branches: 0.751 seconds

Tak benchmark after inline branches: 0.431 seconds

A nice 43% improvement!

Shared Metatables

Metatables are Lua’s way of overriding the behavior of a table. For example, you can set the method to be called when brackets ([]) are used, or the method used for conversion to a string. Brat sets up these two methods for every new object.

While wandering the web looking for nuggets of LuaJIT wisdom, I found this email from Mike Pall. In it, he notes that the code in the parent post was creating a new metatable for each new object, even though the methods were the same.

Looking at Brat’s object creation, it already factored out the methods, but a new metatable was created for each new object. It was a simple change to always use the same one, and the change had no ill side effects on existing code.
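A minimal sketch of the idea (with illustrative names, not Brat's actual generated code): build the metatable once and reuse it for every object.

-- Before: a fresh metatable per object (same methods every time).
-- After: one metatable, created once and shared by every object.

local methods = { hello = function(self) return "hi" end }

local object_index = function(tbl, key)
  return methods[key]                 -- stand-in for Brat's [] / method lookup
end

local object_tostring = function(tbl)
  return "#<object>"                  -- stand-in for Brat's string conversion
end

local shared_mt = { __index = object_index, __tostring = object_tostring }

local function new_object()
  return setmetatable({}, shared_mt)  -- every object reuses the same metatable
end

local a, b = new_object(), new_object()
assert(getmetatable(a) == getmetatable(b))
print(a:hello(), tostring(b))         --> hi    #<object>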


Tak benchmark after metatable change: 0.225 seconds

Another 48% improvement! Together, these two changes reduced the runtime of the Tak benchmark by 70%.

Similarly, the Kaprekar benchmark went from 86 seconds in our last blog post to just 21 seconds, roughly a 75% improvement. Fibonacci (king of microbenchmarks) runs in just 0.043 seconds.

For more real-world use, these two optimizations reduced parsing time of peg.brat (ironically the current largest Brat file) by 42%.

While Brat is still not (nor will ever be) particularly fast in general, it is fun to continue pushing it.

Optimizing with Lifting Functions and Faking Closures

Brat uses functions all over the place. Everything between curly braces is a function, and every function is also a closure (meaning it saves its environment).

For example, here is the standard recursive definition of the Fibonacci sequence:

fibonacci = { x |
  true? x < 2, x, { fibonacci(x - 1) + fibonacci(x - 2) }
}

fibonacci 25

Of course fibonacci itself is a function, but so is the block passed to the call to true?.

Unfortunately, this means every call to fibonacci will create a new inner function/closure. Even more unfortunately, LuaJIT does not currently compile creation of closures.

Below is the output of luatrace showing how the JIT performed.

Trace Status                 Traces       Bytecodes           Lines
------------                 ------       ---------           -----
Success                   13 ( 39%)      244 (  4%)       63 (  5%)
NYI: bytecode FNEW        18 ( 54%)     5098 ( 91%)     1080 ( 90%)
blacklisted                1 (  3%)      138 (  2%)       38 (  3%)
NYI: bytecode UCLO         1 (  3%)       62 (  1%)       11 (  0%)
------------------  --------------- --------------- ---------------
Total                     33 (100%)     5542 (100%)     1192 (100%)
==================  =============== =============== ===============

39% of attempted traces were successfully compiled to native code, but 54% were aborted because of the closure creation. The running time (average of 5 runs) was 0.894 seconds.

Naturally, it would be good to get rid of the closure creation so LuaJIT can compile more code. But how?

Faking Closures

The first step is to figure out how we can make a closure without making a closure. All we need is the function itself and somewhere to put the variables it needs to access that live outside its own scope.

This is accomplished with a simple data structure that stores a function and a table with variable names and their values.

The creation of this data structure looks like this:

local _temp13 = _lifted_call(_lifted1, {})
_temp13.arg_table['_temp1'] = _temp1
_temp13.arg_table['_temp2'] = _temp2

For reasons covered below, this is called a “lifted call”. _lifted1 is the name of the function being stored. After creating the new stored call, the variables are stored into the table. For simplicity, the keys are the same as the variable names.
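The helper itself could be as simple as the following sketch. The func field name is an assumption; only arg_table appears in the generated code above.

-- Minimal sketch of a _lifted_call-style helper: a table holding the lifted
-- function plus a table of captured variable values.
local function _lifted_call(fn, args)
  return { func = fn, arg_table = args }
end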

Now we have a package containing the function and the values it would normally capture as a closure. From the Brat side, the package can be called just like a function. Unfortunately, it is not actually a Lua function, so the compiled Brat code must check whether a variable holds a real function or one of these packaged-up calls (which otherwise just look like Lua tables) and invoke it accordingly.
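A sketch of that call-site check might look like this (again with assumed names; the real generated code differs):

-- Dispatch between a plain Lua function and a packaged-up "lifted call".
local function invoke(callable, self, ...)
  if type(callable) == "function" then
    return callable(self, ...)                        -- ordinary function
  else
    -- Lifted call: pass the captured-variable table first, then self and args.
    return callable.func(callable.arg_table, self, ...)
  end
end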

Lifting Functions

The next step is to move the function creation outside of any other functions, essentially “lifting” or “hoisting” it up and away. This is so it only gets created once.

The lifted function accepts the table of variables, self, and then any normal arguments. In our example, there are no normal arguments, so the function starts like this:

_lifted1 = function(_argtable, _self)
  local _temp1 = _argtable['_temp1']
  local _temp2 = _argtable['_temp2']

At the beginning of the function it reads the variables back out of the table and into local variables. These have the same names as before, so the function can be compiled the same as if it had not been lifted.

The Dirty Details

Not all inner functions can be lifted. In our implementation, the lifted function cannot truly access the outside variables; only their values are copied into its local scope. This means any function that sets the value of a variable outside its own scope (an “upvar”) cannot be lifted.

Unfortunately, it gets worse. If any inner function sets an upvar, none of its outer functions can be lifted, either.

Even worse, if a function at the same level or lower sets an upvar, none of the functions at the same level can be lifted.

For example:

f = {
  x = 1
  while { x < 10 } { x = x + 1 }
}

Neither of the two inner functions can be lifted. If { x < 10 } were lifted, it would get a snapshot of x, and the later assignments would not affect it.

In theory, f itself could be lifted, although as a top-level function that would not do any good.
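To see why the lifting breaks here, this small Lua sketch shows what would go wrong if the condition block were lifted anyway (using the same assumed func/arg_table layout as above):

-- If { x < 10 } were lifted, it would only ever see the copy of x made when
-- the fake closure was packaged, not the live variable being incremented.
local lifted_cond = function(argtable, _self)
  local x = argtable['x']        -- snapshot of x, taken once
  return x < 10
end

local function f()
  local x = 1
  local cond = { func = lifted_cond, arg_table = { x = x } }
  while cond.func(cond.arg_table, nil) do
    x = x + 1                    -- updates the real x, not the snapshot
    if x > 1000 then return "never terminates" end  -- safety stop for the demo
  end
  return x
end

print(f())  --> never terminates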


Going back to Fibonacci, how does the JIT trace look after lifting out the inner function?

Trace Status                         Traces       Bytecodes           Lines
------------                         ------       ---------           -----
Success                           20 ( 90%)    11541 ( 70%)     1114 ( 83%)
down-recursion, restarting         1 (  4%)     3753 ( 22%)      125 (  9%)
call unroll limit reached          1 (  4%)     1101 (  6%)       97 (  7%)
--------------------------  --------------- --------------- ---------------
Total                             22 (100%)    16395 (100%)     1336 (100%)
==========================  =============== =============== ===============

Nice! 90% of the attempted traces were compiled, and the aborted traces were only due to recursion. The average of five runs was 0.678 seconds, which is 24% faster.

In this case, even the overhead of our own fake closures was worth it to get the JIT-compiled code.

In a different example, calculating Kaprekar numbers went from 122 seconds to just 86, about 30% faster.
