Wednesday, January 31, 2007

The End is Nigh: Help Squash Rails Bugs

My friends, the end of the road is in sight for official Rails support.

Tom, Ola, and I have been working over the past week to get remaining Rails issues wrapped up. As a result of our efforts:
  • ActionPack is now "practically" 100% working, minus a test or two we can't support and a few tests that are broken or that run fine in isolation (it would be nice to know *why* those fail)
  • ActiveSupport is well above 95% passing
  • ActiveRecord is in the 90% range passing with MySQL and in the 80% range with Derby
The remaining modules (ActionWebService, ActionMailer, Railties) have yet to be worked through, but they are mostly passing at high percentages as well.

So here is a set of instructions on how YOU, dear reader, can help round out JRuby's support for Ruby on Rails.
  1. Getting a testbed set up
    1. Update JRuby trunk, build it (ant clean jar), love it
    2. Install Rake (gem install rake)
    3. Fetch Rails 1.2.1 from the Rails SVN repository

  2. Running the tests
    1. From within the module under test (like activesupport/ or actionmailer/) just run "rake". All tests should execute and a report should be provided at the end. You can usually run tests individually as well, although a few depend on the side-effects of previous tests.
    2. ActiveRecord requires some additional setup; I'll update this post with Tom's instructions and patches shortly.

  3. Reporting issues
    1. Reporting failures in Rails is good
    2. Reporting reduced test cases or clear explanations for failures in Rails is better
    3. Reporting reduced test cases and including patches for failures in Rails is BEST
    4. Don't forget to check if your issue has already been reported, and please sync up on the mailing list while you're working
    5. Patches will probably not be accepted without a reusable test case. We're trying to grow our regression suite as a result of this work.
    6. JRuby's JIRA is here:

  4. Caveats, things to watch for, things to try
    1. ActiveRecord (with the AR-JDBC adapter) could use wider DB testing. We've done quite a bit of work with MySQL and Tom has been improving Derby support, but there are lots of other databases out there. Pick your favorite database, follow Tom's instructions to get up and going, and report issues (in the JRuby JIRA at least, but also report to jruby-extras project if appropriate)
    2. Railties includes code that will never run under JRuby, like its fcgi-based dispatcher tests. You should confirm with us that they're expected, and then ignore or delete them for your future runs.
    3. Rails is a very...interesting...application to debug. Feel free to ask on-list if you simply don't get something. I've seen things in Rails code no man should have to see, so I know it can be frustrating to debug at times.
We're on the home stretch now, and Rails is getting more and more solid every day. With you all helping, we should be able to finish off the remaining failures, clean up major outstanding JRuby issues, and kick out a pretty sweet "Rails-supporting" JRuby release in the next couple weeks.

Update: A couple folks pointed out that the codebase didn't compile under Java 1.4.2. That has been corrected!

Update 2: A few folks are seeing a problem installing gems related to the %p operator to printf. We're working on that, and it's a fairly minor issue, but to avoid it there's one additional step before you install rake: set the JRUBY_HOME env var to the root of your JRuby stuff.

Tuesday, January 30, 2007

Improving Java Integration Performance

I created JRUBY-501 to track performance improvements to Java integration, since it's come to light recently that it may be one of our biggest bottlenecks now. And I found a ripe, juicy fix already.

For every call to a Java type, we call JavaUtilities.matching_method with a list of potential methods and the given argument list. matching_method compares the available methods and the types of the arguments, choosing the best option and returning it to be called. This is essentially our heuristic for choosing an overloaded method from many options, given a set of arguments.

Problem was, we didn't cache anything.

Given a list of argument types and a list of methods, there's only ever going to be one appropriate choice. Unfortunately our code was doing the search for every single call, and you can imagine how much additional overhead that added. Or perhaps you can't, and I'll show you.

Here's the numbers before my tiny change:
 38.862000   0.000000  38.862000 ( 38.861000)
 40.230000   0.000000  40.230000 ( 40.230000)
This test basically just instantiates a StringBuffer and appends the same character to it 100_000 times. It takes roughly 40 seconds to do that with the old code.

And here's with my changes:
  3.295000   0.000000   3.295000 (  3.294000)
  2.933000   0.000000   2.933000 (  2.933000)
Yes, you're reading that right. It's a 13x improvement.

And the change was trivial: given the list of methods and argument types, cache the correct method. So simple, so elegant, so effective.
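As a pure-Ruby sketch of the idea (the class and method names below are invented for illustration, not JRuby's actual code), the cache might look something like this:

```ruby
# Illustrative sketch of caching overload selection: for a fixed list of
# candidate parameter lists, the argument classes fully determine the best
# match, so the result can be memoized. Not JRuby's real implementation.
class OverloadSelector
  def initialize(candidates)
    @candidates = candidates  # e.g. [[String], [String, Integer]]
    @cache = {}
  end

  def select(args)
    key = args.map(&:class)   # argument types are the cache key
    @cache[key] ||= @candidates.find do |params|
      params.size == args.size &&
        params.zip(args).all? { |type, arg| arg.is_a?(type) }
    end
  end
end

selector = OverloadSelector.new([[Integer], [String], [String, Integer]])
selector.select(["abc"])     # => [String]
selector.select(["abc", 1])  # => [String, Integer]
```

The second call with the same argument types never runs the search again; it just hits the hash once, which is exactly the kind of overhead reduction described above.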

So does this affect regular Ruby code? You better believe it does!

I had been intrigued by the fact that some of the first methods JITed during rdoc generation were all JavaSupport methods. That told me something in rdoc was using a class we provide through Java integration, rather than natively or in pure Ruby. So I figured with this change, I'd re-run the numbers.

Before the change, a full rake install with rdoc took about 42s, or about 31s with ObjectSpace disabled. And now, the "after" numbers:
with ObjectSpace:
real 0m29.765s
user 0m28.843s
sys 0m2.169s

without ObjectSpace:
real 0m24.984s
user 0m23.559s
sys 0m1.757s
This is by far the largest increase we've seen in rdoc performance in several months. The fix should also drastically improve the performance of libraries like ActiveRecord-JDBC, which is extremely Java-integration-heavy.

Another area that's been painful was installing Rails with all docs. It used to take over an hour, but now it's under *seven minutes*.

I hope those of you who've seen or blogged about performance problems (especially with the aforementioned ActiveRecord-JDBC) will try re-running your tests. This improvement ought to have a very noticeable effect on benchmarks.

Now the only concern I have with the caching is that it's a little coarse; there may be better places to do the caching, or finer-grained items to cache against. And we could probably pre-fill the cache with some likely candidates. But an improvement like this outweighs those concerns, so it's been committed...and there are bound to be similar improvements as well.

Boy oh boy is that low-hanging fruit looking ripe.

Friday, January 19, 2007

Velocity, F3, Grizzly on Rails, JParseTree


Martin Fowler enjoys using JRuby with Velocity

Jean Lazarou creates an F3 clone with JRuby

Ashish Sahni posts a walkthrough for JRuby on Rails under Grizzly

Werner Shuster releases JParseTree, a sexp-based JRuby parse tree generator.

JRuby, JRuby, JRuby.

Thursday, January 18, 2007

JRuby Compiler: In Trunk and Ready to Play

Times they are a-changing.

I posted previously on JRuby's compiler work. There have been various iterations of the compiler, many purely prototype and never intended to be completed, and a few genuine attempts at evolving toward full Ruby support. However, I believe that in recent weeks I've settled on a design that will carry us to the JRuby compiler endgame.

For the past year, we've emphasized correctness over performance nine times out of ten. When we did focus on performance, it was solely on improving JRuby's interpreter speed, in an attempt to match Ruby's performance in this area and because we knew that JRuby could never entirely escape interpretation. Ruby's just too dynamic for that. So while compatibility with Ruby 1.8.x continued to improve by leaps and bounds, our performance was rather poor in comparison.

This past fall, things started to change. Compatibility reached a point where we could finally be confident about our set of regression tests and our understanding of "how Ruby works" across all its weirdest features. As we understood better the design of the C implementation and the quirky intricacies of the language, we started to see a path to enlightenment. We started to realize how we could support Ruby as it exists today while simultaneously evolving JRuby into a more efficient and cleaner design. And so the performance numbers started to change.

From 0.9.0 to 0.9.1, we had a clean doubling of performance across the board. Our favorite benchmark--RDoc generation--was easily twice as fast, and other simpler benchmarks like fib had similar improvements. 0.9.2 was more of a rushed release for JavaPolis, but we had a good 1/4 to 1/3 speedup even then, since the ongoing refactoring removed another large chunk of overhead from JRuby's core runtime.

From 0.9.2 to current trunk, however, has been a different matter entirely.

The first major change is that we've started to seriously alter the way JRuby does dynamic method dispatching. I did some research, read a few papers, and mocked up and benchmarked a few options. What we've settled on for the moment is a combination of STI for the core classes (STI provides a large table mapping methods and classes to actual code) and various forms of inline caching for non-core classes (basically, for pure Ruby classes; though this is yet to be implemented in trunk). STI provides an extremely fast path for dispatch on those hardest-hit methods, since it reduces calling most core methods to two array indexes and a switch, a vast improvement over the hash lookup and multiple layers of abstraction and framing we had before.

We are continuing to expand our use of STI as it is applicable, and I will soon start exploring options for interpreted-mode inline caching (polymorphic, likely, though I need to run a few trials to get numbers balanced right). So fast dynamic dispatching is well on its way, and will improve performance across the board.
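As a rough pure-Ruby model of the STI idea (the classes, methods, and table contents here are invented stand-ins; JRuby's real table maps core classes and methods to Java code), dispatch reduces to two array indexes and a call:

```ruby
# Toy model of an STI dispatch table: method lookup for "core" classes
# becomes two array indexes instead of a hash lookup per call.
CLASS_INDEX  = { Integer => 0, String => 1 }
METHOD_INDEX = { :+ => 0, :size => 1 }

TABLE = [
  [->(a, b) { a + b }, ->(a) { a.to_s.size }],  # Integer row
  [->(a, b) { a + b }, ->(a) { a.size }],       # String row
]

def sti_dispatch(receiver, name, *args)
  # two array indexes, then a direct call
  TABLE[CLASS_INDEX[receiver.class]][METHOD_INDEX[name]].call(receiver, *args)
end

sti_dispatch(2, :+, 3)        # => 5
sti_dispatch("ab", :+, "cd")  # => "abcd"
```

The real table is of course much larger and populated at boot for the hardest-hit core methods, but the shape of the fast path is the same.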

Then there's the compiler work. You have no idea how much it's irritated me to hear people talk about JRuby the past year and say "yeah, but it doesn't compile to Java bytecode." This obviously amounts to pure FUD, but beyond that it totally ignores the complexity of the problem: not a single person on this earth has managed to compile Ruby to a general-purpose VM yet. So complaining about our missing compiler is a bit like complaining that we haven't moved mountains. Honestly people, what do you expect?

Of course, there's the flip side of this statement: compiling Ruby is a hard problem, and I like hard problems. For me it's doubly hard, since I've never written a compiler before. But hell, before JRuby I'd never even worked on an interpreter or language implementation before, and that seems to have gone alright. So there it is...Mount Ruby, waiting to be climbed. And climb it I must!

The current compiler design lives in two halves: the AST-walking half and the code-generation half. I chose to split these two because it makes several things easier. For starters, it allows me to abstract all the bytecode generation logic behind a simple interface, one that presents coarse-grained operations like invokeDynamic() and retrieveLocalVariable(). The ultimate implementation of those operations can then be modified at will. It also allows us to evolve the AST independently of the compiler backend, even to the point of swapping in a completely different parser and in-memory code representation (like YARV bytecodes) without harming the evolving code generator backend. So this split helps future-proof the compiler work.
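To make the split concrete, here's a pure-Ruby sketch of an AST walker driving an abstract backend through coarse-grained operations (the node shapes, method names, and RecordingBackend are all illustrative, not JRuby's actual API):

```ruby
# AST walker (one half) drives a backend interface (the other half).
# Unsupported nodes raise, so the caller can fall back to interpretation.
class NotCompilableError < StandardError; end

# stand-in AST nodes: [:lit, value], [:lvar, name], [:call, name, *arg_nodes]
def compile_node(node, backend)
  type, *rest = node
  case type
  when :lit
    backend.push_literal(rest.first)
  when :lvar
    backend.retrieve_local_variable(rest.first)
  when :call
    name, *arg_nodes = rest
    arg_nodes.each { |a| compile_node(a, backend) }
    backend.invoke_dynamic(name, arg_nodes.size)
  else
    raise NotCompilableError, "Can't compile node: #{type}"
  end
end

# trivial backend that records the operations it is asked to emit; a real
# backend would write Java bytecode for each operation instead
class RecordingBackend
  attr_reader :ops
  def initialize
    @ops = []
  end
  def push_literal(v);             @ops << [:push, v];                   end
  def retrieve_local_variable(n);  @ops << [:getlocal, n];               end
  def invoke_dynamic(name, arity); @ops << [:invokedynamic, name, arity]; end
end

backend = RecordingBackend.new
compile_node([:call, :+, [:lvar, :n], [:lit, 1]], backend)
backend.ops  # => [[:getlocal, :n], [:push, 1], [:invokedynamic, :+, 2]]
```

Swapping the backend implementation changes what the same walk emits, which is the point of the abstraction.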

The current design also has another advantage: not all of Ruby has to compile for it to be useful. Currently, if the AST walker encounters a node it can't deal with, it simply raises an exception; compilation terminates, and the compiler's client can deal with the result as it will. This leads to a really powerful feature of this design: we can install the compiler now as a JIT, and as it evolves, more and more code will automatically get optimized. So once we're confident that a given node type is 100% compiling correctly, that node will be eligible for JIT compilation. As an example, here's the output from a gem installation with the current compiler enabled as a JIT (with my logging in place, naturally):
compiled: TarHeader.empty?
compiled: Entry.initialize
compiled: Entry.full_name
compiled: Entry.bytes_read
compiled: Entry.close
compiled: Entry.invalidate
Successfully installed rake, version 0.7.1
Installing ri documentation for rake-0.7.1...
compiled: LeveledNotifier.notify?
compiled: LeveledNotifier.<=>
compiled: RubyLex.getc
compiled: null.debug?
compiled: BufferedReader.ungetc
compiled: Token.set_text
compiled: RubyLex.line_no
compiled: RubyLex.char_no
compiled: BufferedReader.column
compiled: RubyToken.set_token_position
compiled: Token.initialize
compiled: RubyLex.get_read
compiled: RubyLex.getc_of_rests
compiled: BufferedReader.getc_already_read
compiled: BufferedReader.peek
compiled: RubyParser.peek_tk
compiled: TokenStream.add_token
compiled: TokenStream.pop_token
compiled: CodeObject.initialize
compiled: RubyParser.remove_token_listener
compiled: Context.ongoing_visibility=
compiled: PreProcess.initialize
compiled: AttrSpan.[]
compiled: null.wrap
compiled: JavaProxy.to_java_object
compiled: Line.isBlank?
compiled: Fragment.add_text
compiled: Fragment.initialize
compiled: ToFlow.convert_string
compiled: LineCollection.add
compiled: Entry_.path
compiled: Entry_.dereference?
compiled: AttrSpan.initialize
compiled: Entry_.prefix
compiled: Entry_.rel
compiled: Entry_.remove
compiled: Lines.rewind
compiled: AnyMethod.<=>
compiled: Description.serialize
compiled: AttributeManager.change_attribute
compiled: AttributeManager.attribute
compiled: ToFlow.annotate
compiled: NamedThing.initialize
compiled: ClassModule.full_name
compiled: Lines.initialize
compiled: Lines.empty?
compiled: LineCollection.normalize
compiled: ToFlow.end_accepting
compiled: Verbatim.add_text
compiled: FalseClass.to_s
compiled: TopLevel.full_name
compiled: Attr.<=>
Installing RDoc documentation for rake-0.7.1...
compiled: Context.add_attribute
compiled: Context.add_require
compiled: Context.add_class
compiled: AbstructNotifier.notify?
compiled: Context.add_module
compiled: null.instance
compiled: HtmlMethod.path
compiled: HtmlMethod.aref
compiled: ContextUser.initialize
compiled: TokenStream.token_stream
compiled: LineReader.initialize
compiled: TemplatePage.write_html_on
compiled: Context.push
compiled: Context.pop
compiled: Context.find_local_symbol
compiled: SimpleMarkup.add_special
compiled: TopLevel.find_module_named
compiled: Context.find_enclosing_module_named
compiled: HtmlMethod.<=>
compiled: ToHtml.annotate
compiled: HtmlMethod.visibility
compiled: HtmlMethod.section
compiled: HtmlMethod.document_self
compiled: LineReader.dup
compiled: Lines.unget
compiled: ToHtml.accept_paragraph
compiled: ContextUser.document_self
compiled: ToHtml.accept_heading
compiled: Heading.head_level
compiled: ToHtml.accept_list_start
compiled: ToHtml.accept_list_end
compiled: ToHtml.accept_verbatim
compiled: SimpleMarkup.initialize
compiled: AttributeManager.initialize
compiled: ToHtml.initialize
compiled: ToHtml.end_accepting
compiled: HtmlMethod.singleton
compiled: Context.modules
compiled: Context.classes
compiled: ContextUser.build_include_list
compiled: HtmlMethod.description
compiled: HtmlMethod.parent_name
compiled: HtmlMethod.aliases
compiled: HtmlClass.parent_name
compiled: ContextUser.as_href
compiled: ContextUser.url
compiled: ContextUser.aref_to
compiled: HtmlFile.<=>
compiled: HtmlClass.<=>
You can see from the output that not only are RubyGems methods getting compiled, but so are stdlib methods and our own Java integration methods. And this is with the current compiler, which doesn't support compiling class defs, blocks, case statements, ... Hopefully you get the picture; this bit-by-bit implementation of the compiler allows us to slowly grow our ability to optimize Ruby into Java bytecodes.

So then, how well does it perform? It performs just dandy, when we're able to compile. Witness the following results for a simple recursive fib algorithm running under Ruby 1.8.5 and JRuby trunk with the JIT enabled.

$ ruby test/bench/bench_fib_recursive.rb
12.760000 1.400000 14.160000 ( 14.718925)
12.660000 1.490000 14.150000 ( 14.648681)
$ JAVA_OPTS=-Djruby.jit.enabled=true jruby test/bench/bench_fib_recursive.rb
compiled: Object.fib_ruby
8.780000 0.000000 8.780000 ( 8.780000)
7.761000 0.000000 7.761000 ( 7.761000)
Yes, that's nearly double the performance of the C implementation of Ruby. And this is absolutely real.

Now JITing is great, and it's obviously carried Java a long way. The HotSpot JIT is an unbelievable piece of work, and any app that runs a long time is guaranteed to perform better and better as deeper optimizations start to take hold. But we're talking about Ruby here, which starts up at C-program speeds and runs as fast as it does immediately. So JRuby needs a way to compete for immediate execution performance, and the most straightforward way to do that is with an ahead-of-time compiler. That compiler is now also available in JRuby trunk.

The name of the command is "jrubyc", and it does just what you'd expect: it outputs a Java class file for your Ruby code. However, the mapping from Ruby code to a class file is not as straightforward as you'd expect: a Ruby script may contain many classes or no classes at all, and those classes may be opened and re-opened by the same script or other scripts at runtime. So there's no way to map directly from a Ruby class to a Java class given the strict limitations of Java's class model. But there is a much smaller unit of code that does not change over time, aside from being mercilessly juggled around: methods.

Ruby, in the end, is a creative and sometimes complicated jumble of method "objects", floating from class to class, from module to module, from namespace to namespace. Methods can be renamed, redefined, added and removed, but never can they be directly modified. And so here is where we have our immutable item to compile.

JRuby's compiler takes a given Ruby script and generates the following Java methods out of it: One Java method for the top-level, straight-through execution of the script, including class bodies and "def"s and the like (called "__file__" in the eventual Java class...thanks Ola for the idea), and a Java method for every Ruby method body and closure contained therein, named in such a way as to avoid conflicts. So for the following piece of code:
require 'foo'

def bar
  baz { puts "hello" }
end

def baz
end
There would be four Java methods generated: one for the toplevel execution of the script, two for the bar and baz methods, and one for the closure contained within bar. The resulting class file would store these as static methods, so they are accessible from any class or object as necessary, and the toplevel run-through would bind the two Ruby methods to their appropriate names in Ruby-space.

Quite simple, really!

So then an example of the precious, precious JRuby compiler:
$ cat fib_recursive.rb
def fib_ruby(n)
  if n < 2
    n
  else
    fib_ruby(n - 2) + fib_ruby(n - 1)
  end
end

puts fib_ruby(34)
$ jrubyc fib_recursive.rb
$ ls fib_recursive.*
fib_recursive.class fib_recursive.rb
$ time java -cp lib/jruby.jar:lib/asm-2.2.2.jar:. fib_recursive

real 0m8.126s
user 0m7.632s
sys 0m0.208s
$ time ruby fib_recursive.rb

real 0m14.649s
user 0m12.945s
sys 0m1.480s
Again, about twice as fast as Ruby 1.8.5 for this particular benchmark.

Now I don't want you going off and saying JRuby has a perfect compiler that will double the performance of your Rails apps. That's not true yet. The current compiler covers only about 30% of the possible code constructs in Ruby, and the remaining 60% (Update: 70%...that's what I get for late-night blogging) contains some of the biggest challenges like closures and class definitions. It's sure to be buggy right now, and the JIT isn't even enabled by default, plus it has my nasty logging message burned into it, to discourage any production use.

But it is very real. JRuby has a partial but growing compiler for Ruby to Java bytecode now.

And oh my, look at the time. Tonight I have to finish my visa application for a trip to India, nail down schedules and descriptions for several upcoming talks, and prepare some slides and notes for presentations in the coming weeks. You will see more about the Java compilation and our developing YARV/Ruby 2.0 bytecode support over the next couple months...and you can expect JavaOne to be an interesting time for Ruby on the JVM this year ;)

Friday, January 12, 2007

Ruby Compiler Fun: AOT and JIT Compilation

Who knew writing a compiler could be so much fun.

I managed to accomplish two things tonight. It's late and I have a flight home tomorrow, so I'll be brief.

jrubyc: JRuby's Ahead-Of-Time (AOT) Compiler

I have whipped together the very barest of command-line, ahead-of-time compilers, along with a simple script to invoke it.
~/NetBeansProjects/jruby $ jrubyc
Usage: jrubyc <filename> [<dest>]
It's mostly just a very thin wrapper around the existing compiler code, so it can only compile constructs it knows about. However, for really simple scripts without any unrecognized nodes, it works fine:
~/NetBeansProjects/jruby $ cat samples/fib.rb
# calculate Fibonacci(20)
# for benchmark
def fib(n)
  if n < 2
    n
  else
    fib(n - 2) + fib(n - 1)
  end
end
print(fib(20), "\n")
~/NetBeansProjects/jruby $ jrubyc samples/fib.rb tmp
~/NetBeansProjects/jruby $ ls tmp/samples
fib$MultiStub0.class fib.class
At the moment, two classes are generated; one is a class to hold the script entry points and the other is a stub class for all the actual blocks of code contained within the script (toplevel code, method code, etc). This will soon be a single class file, so pay the MultiStub no mind.

We can then execute the script like you'd expect, specifying the JRuby and ASM jar files on the classpath:
~/NetBeansProjects/jruby $ export CLASSPATH=lib/jruby.jar:lib/asm-2.2.2.jar:tmp        
~/NetBeansProjects/jruby $ java samples/fib
Huzzah! Compilation!

Now of course, as I mentioned, this only compiles scripts containing constructs it knows about. If you try to compile a script it can't handle, you'll get an error:
~/NetBeansProjects/jruby $ jrubyc lib/ruby/1.8/singleton.rb
Error -- Not compileable: Can't compile node: ModuleNode[]
The compiler currently supports only literal fixnums, strings, and arrays, simple method definitions, while loops, if/else, and calls that don't involve blocks or splatted arguments. More will come as time progresses. The benefit of building the compiler piecemeal like this becomes more apparent in the next section...

JIT Compilation

The current compiler only understands enough of Ruby to handle my experimentation and research. The compiler also does not output one-to-one Ruby-to-Java classes or even a single large method: it outputs a class containing a method for every semantically separate block of code in a given script. In Ruby's case, that means toplevel code, code found within the body of a class, and code found within the body of a method definition. By combining these two traits, we have everything necessary for a simple JIT.

A JIT, or Just-In-Time compiler, performs its compilation at runtime, usually based on some gathered information about the executing code. HotSpot, for example, has an extensive array of optimizations it can perform on running code just by watching how it executes and eliminating unnecessary overhead. My vastly simpler JIT uses a much more basic metric: the number of times a method has been invoked.

The actual compiler code is the same as that used for the AOT compiler, with one major difference. Instead of the generated code being dumped to a file for later execution, it's immediately loaded, instantiated, and snuggled away in the same location where interpreted code used to live. The logic goes like this:
  1. A method is called. We'll name it "foo"
  2. foo's code is written in Ruby, so it's just a sequence of AST nodes to be interpreted
  3. we interpret foo's nodes, but each time we increment a counter. When the counter reaches some number (currently 50), the compiler kicks in
  4. if the code can't be compiled, we continue to interpretation, but we set a flag and never try to compile again
  5. if the code can be compiled, we save the generated code and use it for all future invocations
Because the compiler can generate these small pieces of code, we're able to JIT Ruby code that was not compiled before execution began, gaining the benefits of a compiled platform without losing the flexibility of an agile script-based development model. It also means we can start benefiting from bytecode compilation even before the compiler is complete.
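The steps above can be sketched in plain Ruby (CALL_THRESHOLD, JittedMethod, and try_compile are invented names for illustration; the real JIT generates and loads a Java method where try_compile stands):

```ruby
# Sketch of counter-based JIT decision logic, per the numbered steps above.
CALL_THRESHOLD = 50

class JittedMethod
  def initialize(&interpreted)
    @interpreted = interpreted  # stands in for walking the AST
    @calls = 0
    @compiled = nil
    @failed = false
  end

  def call(*args)
    return @compiled.call(*args) if @compiled  # step 5: use compiled code
    @calls += 1                                # step 3: count invocations
    if @calls >= CALL_THRESHOLD && !@failed
      begin
        @compiled = try_compile
      rescue NotCompilableError
        @failed = true                         # step 4: never try again
      end
    end
    @interpreted.call(*args)                   # steps 2-3: keep interpreting
  end

  private

  class NotCompilableError < StandardError; end

  # in this sketch "compilation" just reuses the block; a real JIT would
  # generate bytecode, load it, and swap it into place here
  def try_compile
    @interpreted
  end
end

m = JittedMethod.new { |n| n * 2 }
100.times { m.call(3) }
m.call(5)  # => 10, now via the "compiled" path
```

The method behaves identically before and after the threshold; only the machinery underneath changes, which is what makes this safe to bolt onto a running interpreter.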

So how well does it perform? Very well, provided you don't go outside the narrow range of AST nodes the compiler supports:
~/NetBeansProjects/jruby $ cat test/bench/bench_fib_recursive.rb
require 'benchmark'

def fib_ruby(n)
  if n < 2
    n
  else
    fib_ruby(n - 2) + fib_ruby(n - 1)
  end
end

puts Benchmark.measure { fib_ruby(30) }
puts Benchmark.measure { fib_ruby(30) }
Here we have a fib benchmark script with a few nodes the compiler can't handle. For example, the blocks at the bottom of the script won't compile correctly at present. So it's a good candidate for the JIT.

Once the JRuby JIT's been wired up, we can simply run the code as normal:
~/NetBeansProjects/jruby $ jruby test/bench/bench_fib_recursive.rb
compiled: Object.fib_ruby
2.877000 0.000000 2.877000 ( 2.876000)
2.955000 0.000000 2.955000 ( 2.955000)
You will notice the "compiled" logging output I currently have in the JIT. The only method hit hard enough to be compiled during this run was the fib_ruby method defined on the toplevel Object instance. Now this performance is drastically increased over the current trunk, largely due to compilation but also due to a faster dynamic method invocation algorithm we're experimenting with. And there's still a lot of optimization left to be done at both the compiler and runtime levels. But it's already a vast improvement over JRuby from even a month ago. Things are moving very quickly now.

We also look better running under the Java 6 server VM. The "server" VM performs more aggressive optimizations of Java code than does the default "client" VM. Generally this is because the optimizations involved cause the server VM to start up a bit more slowly, since it waits longer and gathers more information before JITing. However in this case, the results are very impressive when we compare the JRuby JIT running under the Java 6 server VM against Ruby 1.8.5:
~/NetBeansProjects/jruby $ JAVA_OPTS=-server jruby test/bench/bench_fib_recursive.rb
compiled: Object.fib_ruby
1.645000 0.000000 1.645000 ( 1.645000)
1.452000 0.000000 1.452000 ( 1.453000)
~/NetBeansProjects/jruby $ ruby test/bench/bench_fib_recursive.rb
1.670000 0.000000 1.670000 ( 1.677901)
1.660000 0.000000 1.660000 ( 1.671957)
The future's looking pretty bright.

None of this code is in trunk at the moment, but it should land fairly soon. The AOT compiler may come before the JIT, since it's minimally invasive and won't affect normal interpreted-mode execution. Look for both to be available in JRuby proper within a week or two, and watch for the compiler itself to move toward completion over the coming weeks.

Saturday, January 6, 2007

Five Things About Me

Tor, you sneaky devil. You tagged me before anyone else had a chance. You grabbed the brass ring. Kudos.

So to continue the "5 Things" meme (for the record, I really hate the word "meme"), I present for you five things you probably don't know about me. Actually, some of you will know some of these facts, but I doubt any of you will know them all. I've tried to pick the most quirky or interesting bits out of my otherwise humdrum life.

  • Some time in 1998, I became the lead developer on the LiteStep project. LiteStep was a very popular replacement for the Explorer desktop shell on Windows during the late 90s. It provided a new taskbar, desktop window, NeXT-like dock, and pluggable UI and theming system. For hardcore users tired of the boring Explorer UI, it was the state of the art.

    Originally created by a fellow named Francis Gastellu, it had by 1998 grown rather quiet. At the time, the codebase was silently fading away, with none of the original developers still working on the project and few active developers interested in or able to make a large time commitment to get LiteStep going again. I discovered LiteStep and was attracted by its ability to replace the entire desktop Look & Feel of my Windows machines. I had also been an avid Win32 developer, releasing the shareware program "Hack-It" to some minimal financial success. However the LiteStep code was in really rough shape.

    Almost all the logic was packed into a single large C file that controlled the main desktop window. All the other modules were heavily dependent on this one piece of code, which ultimately crippled LiteStep's ability to incorporate certain types of UI plugins into a user's desktop. I tackled the problem in two ways:

    1. I started converting the core plugins to C++ pure virtual classes and implementations, to allow for a more componentized system
    2. And I reworked all the critical functionality from the desktop module into a central runtime, allowing all other modules to finally remove their desktop dependencies

    Over the next year, LiteStep started to grab the attention of the desktop theming community once again. "Skinning" in general really took off during this time, with the launch of new shells GeoShell, DarkStep, and others. An article published in Wired (for which I was interviewed but not quoted) detailed this new movement.

    Sadly, with the release of theming capabilities in Windows XP, the rise of Linux desktops, and the rebirth of Macintosh with OS X, LiteStep has long since fallen from grace. But to this day I still have the odd person walk up to me and thank me for my efforts during that time. LiteStep, we barely knew ye.

    I suppose an addendum to this item is that for many years I wrote at least as much Win32 C++ code as I did Java, and I still have the programming guides to prove it. How's that for diversity?

  • I do not remember a time in my life I was not in front of a computer. The first computing experience I can remember was programming and playing with BASIC on my Atari 400, writing little games and buying programming books containing short apps I could type in one finger at a time. I remember saving my programs to the Atari cassette tape drive and praying, praying, praying it would actually take. I remember dialing up to text-based information services at 300bps over an acoustic coupler. In third grade, a mentor came to my elementary school to teach me to program in Apple BASIC, though I never owned an Apple computer until my current MacBook Pro.

    Throughout grade school and high school, my primary interests lay with computers. I ran a BBS called "Terminal Nightmare" (clever, eh?), for which I toiled many hours creating ANSI graphics and advertising on more popular boards. I brought C programming manuals to school in 8th grade to read during slow periods. I wrote C and assembler code on embedded processors for my dad's electronics design ventures in 9th grade. And so on and so forth. I've been a computer geek as long as I can remember, and I've never had a problem with that.

    Toward the end of high school I started thinking about degree programs. I initially started my post-secondary education in Organic Chemistry, and completed the first two years of requirements. But I hated labs. Some time during the second year, I discovered that there was something called a "Computer Science" degree. Oh, hell yes. From then on I never performed another titration or chromatograph, and I couldn't be happier.

  • When I am not programming (which is extremely rare) I am an enthusiast of complete-information strategy games. I have spent some amount of time reading about and studying Go, which is my favorite game. I enjoy playing various Shogi variants (including Shogi, Chu Shogi, Tenjiku Shogi, and Tori Shogi), though I don't claim to be good at any of them. I will play Xiang Qi, but it's not one of my favorites, and I have not learned any particularly good strategies. I also play Chess, having been taught by my father at an early age.

    Occasionally a few local friends and I will get together and play these games until the wee hours of the morning. Some people have LAN parties; we have strategy gaming parties. We most frequently play Bughouse when we can find four people and two clocks, but we often just get together to play the above games one-on-one.

    And by "complete-information" games, I mean those in which there is no element of chance. I do not enjoy dice games, and I will play card games only if present company prefers such games. My opinion is that if I lose a game, I would much rather it be due to my own ineptitude than due to random chance.

  • I was one of the best fight-game players in local arcades in the late 1990s. Oddly enough, I was never drawn to Street Fighter, but I spent literally thousands of dollars over the years getting good at the Mortal Kombat and Killer Instinct series of games from Midway. My friend and I would generally spend most weekend nights at arcades, usually playing for minimum cost against players short on skill but long on quarters. We got quite good.

    I was also pretty heavily addicted to those games. During my first two years at the University of Minnesota, I generally skipped class to play. There was such a rush from getting a higher combo, or beating a new player who tried to represent. I also made many friends in those arcades whose names I never knew and whom I have never seen since...but there was a bond among us gamers.

    When I had the means, I began to collect arcade machines. Unfortunately, the means ran dry after only a few purchases, but I've been happy to have them. I own the following arcade machines, stowed in my basement and occasionally played:

    I also own the hollowed-out remnants of an old Gun Fight cabinet. I intended to restore it, but the side art and wood were in very poor shape. It's rotting in the garage.

    I'd love to have a Ms. Pac-Man, Q*bert, or Tron machine. Unfortunately, so would the rest of the world.

  • I write and eat left-handed, though I prefer my right hand for almost everything else. Unfortunately, like most lefties, this means I can't use writing utensils that may smear or smudge. You lefties know what I'm talking about: the dreaded "pencil hand" you get from dragging your hand through what you've just written. In junior high I finally got tired of having to wash pencil lead off my hand every day, and for several years I utilized a novel solution:

    I wrote backwards.

  • I'll tag Ola Bini, Nick Sieger, Pat Eyler, Evan Phoenix, and Jochen Theodorou to blog "5 Things" people might not know about them.

Friday, January 5, 2007

Ruby Breaks TIOBE Top Ten; Declared Language of the Year

The headline says it all, really!

The TIOBE Programming Community Index measures language popularity based on "the world-wide availability of skilled engineers, courses and third party vendors" using the major search engines. It's not a terribly scientific way to measure popularity, but I'm not sure anyone has a better index.

Ruby has been moving up every month during 2006 and for the first time has broken the top ten in January 2007. TIOBE also declared it the "Programming Language of 2006", which comes as no surprise to us Rubyists who love the language so much.

Congratulations, Ruby!

Thursday, January 4, 2007

New JRuby Compiler: Progress Updates

I've been cranking away on the new compiler. I'm a bit tired and planning to get some sleep, but I've gotten the following working:
  • all three kinds of calls
  • local variables
  • string, fixnum, array literals
  • 'def' for simple methods and arg lists
  • closures
Now that last item comes with a big caveat: I have no way to pass closures I create. The code is compiling, basically as a Closure class that you initialize with local variables and invoke. But since block management in JRuby is still heavily dependent on ThreadContext nonsense, there's no easy way to pass it to a given method. So the next step to getting closures to work in the compiler is to start passing them on the call path as a parameter, much like we do for ThreadContext.
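Concretely, the distinction is between a block consumed where it's created and a block forwarded down the call chain; a plain-Ruby sketch of the two shapes (not compiler code, just the Ruby being compiled):

```ruby
# A block consumed at its creation site -- the case the compiler now
# handles, compiled roughly as a Closure object initialized with the
# surrounding local variables and then invoked.
[1, 2, 3].each { |n| n * 2 }

# A block *forwarded* to another method -- the case that still needs
# blocks passed along the call path, much as ThreadContext is today.
def forward(&blk)
  [1, 2, 3].each(&blk)
end

forward { |n| n * 2 }
```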

I've managed to keep the compiler fairly well isolated between node walking and bytecode generation, though the bytecode generator impl I have currently is getting a little large and cumbersome. It's commented to death, but it's pushing 900 LOC. It needs some heavy refactoring. However, it's behind a fairly straightforward interface, so the node-walking code doesn't ever see the ugliness. I believe it will be much easier to maintain, and it's certainly easier to follow.

In general, things are moving along well. I'm skipping edge cases for some nodes at the moment to get bulk code compiling. There's a potential that as this fills out more and handles compiling more code, it could start to be wired in as a JIT. Since it can fail gracefully if it can't compile an AST, we'd just drop back to interpreted mode in those cases.

So that's it.


Ok, ok, here's performance numbers. Twist my arm why don't you.

(best times only)

The new method dispatch benchmark tests 100M calls to a simple no-arg method that returns 'self', in this case Fixnum#to_i. The first part of the test is a control run that just does 100M local variable lookups.
method dispatch, control (var access only):
interpreted, client VM: 1.433
interpreted, server VM: 1.429
ruby 1.8.5: 0.552
compiled, client VM: 0.093
compiled, server VM: 0.056
Much better. The compiler handles local var lookups using an array, rather than going through ThreadContext to get a DynamicScope object. Much faster, and HotSpot hits it pretty hard. At worst it takes about 0.223s, so it's faster than Ruby even before HotSpot gets ahold of it. The second part of the test just adds in the method calls.
method dispatch, test (with method calls):
interpreted, client VM: 5.109
interpreted, server VM: 3.876
ruby 1.8.5: 1.294
compiled, client VM: 3.167
compiled, server VM: 1.932
Better than interpreted, but slow method lookup and dispatch is still getting in the way. Once we find a single fast way to dynamic dispatch I think this number will improve a lot.
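The actual scripts live in test/bench/compiler; a scaled-down sketch of the two phases, using the standard Benchmark library and a smaller iteration count, looks roughly like this:

```ruby
require 'benchmark'

N = 1_000_000  # the post uses 100M iterations; scaled down here

a = 5

# Control phase: local variable lookups only.
control = Benchmark.realtime do
  i = 0
  while i < N
    a          # bare local variable access
    i += 1
  end
end

# Test phase: adds a no-arg dynamic call; Fixnum#to_i returns self.
with_calls = Benchmark.realtime do
  i = 0
  while i < N
    a.to_i
    i += 1
  end
end

puts format("control: %.3fs  with calls: %.3fs", control, with_calls)
```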

So then, on to the good old fib tests.
recursive fib:
interpreted, client VM: 6.902
interpreted, server VM: 5.426
ruby 1.8.5: 1.696
compiled, client VM: 3.721
compiled, server VM: 2.463
Looking a lot better, and showing more improvement over interpreted than the previous version of the compiler. It's not as fast as Ruby, but with the client VM it's under 2x and with the server VM it's in the 1.5x range. Our heavyweight Fixnum and method dispatch issues are to blame for the remaining performance trouble.
iterative fib:
interpreted, client VM: 17.865
interpreted, server VM: 13.284
ruby 1.8.5: 17.317
compiled, client VM: 17.549
compiled, server VM: 12.215
Finally the compiler shows some improvement over the interpreted version for this benchmark! Of course this one's been faster than Ruby in server mode for quite a while, and it's more a test of Java's BigInteger support than anything else, but it's a fun one to try.
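For reference, the two fib shapes being benchmarked (again, the real scripts are in test/bench/compiler) are roughly:

```ruby
# Recursive fib: dominated by method dispatch and Fixnum overhead.
def fib_recursive(n)
  n < 2 ? n : fib_recursive(n - 1) + fib_recursive(n - 2)
end

# Iterative fib: large n quickly overflows into Bignum, which is why
# this one is mostly a test of Java's BigInteger support in JRuby.
def fib_iterative(n)
  a, b = 0, 1
  n.times { a, b = b, a + b }
  a
end

puts fib_recursive(10)  # => 55
puts fib_iterative(10)  # => 55
```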

All the benchmarks are available in test/bench/compiler, and you can just run them directly. If you like, you can open them up and see how to use the compiler yourself; it's pretty easy. I will be continuing to work on this after I get some sleep, but any feedback is welcome.

Wednesday, January 3, 2007

InvokeDynamic: Actually Useful?

Over time I've become less convinced that hotswappable classes would be an absolute requirement for the proposed invokedynamic bytecode to be useful, and more convinced that there are a number of ways a dynamic language like Ruby or Groovy could utilize the new bytecode. This post gives a little background on invokedynamic and attempts to summarize a few ideas off the top of my head.

Many folks, myself included, have long held that the proposed invokedynamic bytecode would only be useful if coupled with hotswappable classes. Hotswapping is the mechanism by which we could alter class structure after definition and have existing instances of the class pick up those changes. It's true this would be required if we were to compile Ruby all the way to bytecode; since Ruby classes are always open, we need the ability to add and remove methods without destroying already-created instances. The argument goes that if invokedynamic requires a dynamically-invoked method to exist on a target receiver's type, then we would only ever be able to invokedynamic against compiled Ruby code if we could continue to alter those types when classes get re-opened.

I do believe that hotswapping would be useful, but it's fraught with many really difficult problems. To begin with, there's Java's security model, whereby a class that's been loaded into the system *can not* be modified in most typical security contexts. The JVM does have the ability to replace existing method definitions at runtime, but that's generally reserved for debugging purposes, and it doesn't allow adding or removing methods. It also does not currently have the ability to wholesale remove and replace a class that has live instances, and it's an open research question to even consider the ramifications of allowing such a thing.

So what are the alternatives? Gilad Bracha proposed having the ability to attach methods dynamically to a given static class at runtime. This would perhaps be similar to the CLR's "dynamic methods". One issue this idea addresses that hotswappable classes do not is that even once we compile Ruby to bytecode, it's still dynamic and duck-typed. Would all methods accept Object and return Object? Is that useful? By specifically stating that some methods are dynamic and mutable (in the case of a Ruby class, likely all methods we've compiled), you effectively create the equivalent of hotswapping without breaking existing static types and their security semantics.

But this is all research that could and perhaps should occur outside invokedynamic, and it all may or may not be related. So then, can invokedynamic be useful with these class-structure questions unanswered? What does invokedynamic mean?

To me, invokedynamic means the ability to invoke a method without statically binding to a specific type, and perhaps additionally without specifying static types for the parameter list. For those that don't know, when generating method-call bytecodes for the JVM, you must always provide two things in addition to the method name: the class within which the method you're invoking lives and the precise parameter list of the method you want to call. And there's not much wiggle room there; if you're off on the target type or if the receiver you're calling against has not yet been cast to (or been determined to match) that type, kaboom. If your parameter list doesn't match one on the target type, kaboom. If your parameters haven't been confirmed as being compatible with that signature, kaboom. Perhaps you can see, then, why writing a compiler for the JVM is such a complicated affair.

So there's potential for invokedynamic to make even static compilation easier. Without the need to specify all those types, we can defer that compile-time magic to the VM, if we so choose. We don't have to dig around for the exact signature we want or the exact target type. Given a receiver object, a method name, and a bundle of parameter objects, invokedynamic should "do the right thing."
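To see why omitting the target type matters, consider the kind of call site a Ruby compiler must emit bytecode for (a minimal illustrative example, not taken from the compiler itself):

```ruby
# One call site, many possible receivers. 'x.length' may resolve to
# String#length, Array#length, or any user-defined class -- there is
# no single owner class or parameter descriptor to bind statically,
# which is exactly the gap invokedynamic would fill.
def report(x)
  x.length
end

puts report("hello")     # => 5
puts report([1, 2, 3])   # => 3
```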

Now we start to see where this could be useful. Any dynamic language on the JVM is going to be most interesting in the context of the platform's available libraries. Ruby is great on its own, and there's certainly an entire (potentially large) market segment that's interested in JRuby purely as an alternative Ruby runtime. But the larger market, and the more intriguing application of JRuby, is as a language to tie the thousands of available Java libraries together. And that requires calling Java code from Ruby and Ruby code from Java with as little complexity and overhead as possible.

Enter invokedynamic.

Now I've only recently started to see how invokedynamic could really be useful even without dynamic methods or hotswappable classes, so this list is bound to grow. I'd love to have all three features, of course, but here's a few areas that invokedynamic alone would be useful:
  • Our native implementations of Ruby methods can't really be tied to a specific concrete class, since we have to be able to rewire them at runtime if they're redefined. If invokedynamic came along with a mechanism for doing a Java-based "method_missing", whereby we could intercept dynamic calls to a given object and dispatch in our own way, we could make use of the bytecode without having hot-swappable classes.
  • It would also aid compilation and code generation. In my work on the prototype compiler, one of the biggest stumbling blocks is making sure I'm binding method calls to the appropriate target type. I must make sure the receiver of a method has been cast to the type I intend to bind to, or Java complains about it. If there were a way to just say invokedynamic, omitting the target type, it would make compilation far simpler; and I don't believe HotSpot would have to do any additional work to make it fast, since it already has optimizations under the covers that are fairly type-agnostic.
  • To a lesser extent, invokedynamic could push the smarts of determining appropriate method signatures onto the VM. I would supply a series of parameters and a method name, and tell the VM to invokedynamic. The VM, in turn, would look at the params and name and select an appropriate method from the receiving object. This is in essence all that's needed for real duck typing to work.
This last item calls out a perhaps surprising area that invokedynamic would be very useful: invoking Java code from a dynamic language.

When calling Java code from Ruby, for example, all we really have to work with are two details: a method name and potentially an arity. We can do some inference based on the actual types of parameters, but there's a lot of magic and a number of heuristics involved. If there were a JVM-native mechanism for calling arbitrary methods on a given object, without having to statically bind to those methods, it would eliminate much of our Java integration layer.
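Ruby's own method_missing hook models the interception described above; here is a hypothetical proxy sketch (JavaObjectProxy is an illustrative name, not JRuby's actual integration class):

```ruby
# Intercept any call by name and dispatch our own way -- the same two
# details (a method name and an argument list) that JRuby has to work
# with when calling Java, here handled with Ruby's method_missing.
class JavaObjectProxy
  def initialize(target)
    @target = target
  end

  # Every call not defined on the proxy lands here with just a name
  # and arguments; we forward it dynamically to the wrapped object.
  def method_missing(name, *args)
    @target.public_send(name, *args)
  end

  def respond_to_missing?(name, include_private = false)
    @target.respond_to?(name) || super
  end
end

proxy = JavaObjectProxy.new("hello")
puts proxy.upcase   # dispatched by name at runtime, no static binding
```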

All told, I think invokedynamic would definitely be much more than a PR stunt, as some have claimed. It would eliminate one of the most difficult barriers to generating JVM bytecodes by allowing arbitrary method calls that aren't necessarily bound to specific types. I for one would vote yes, and I plan to throw my weight behind making invokedynamic do everything I need it to do...with or without hotswapping.

Tuesday, January 2, 2007

Groovy 1.0 is Released!

Congratulations to the Groovy team on their release of Groovy 1.0! Groovy is another dynamic language for the JVM inspired by features in Smalltalk, Python, and of course Ruby. It's been a long time coming, and a lot of hard work involved, but Groovy 1.0 is finally here.

See the announcement from Guillaume Laforge, one of the Groovy team members.

Here's hoping there's a bright future of cooperation between the Groovy team and the other dynamic languages for the JVM.

Monday, January 1, 2007

Welcome Nick Sieger to the JRuby Team

The team has grown again! After I asked the JRuby community to nominate a new team member, based on past code, mailing list, documentation, or other contributions, a number of folks thought Nick Sieger would be a good addition. And we agreed.

Nick is the original author of the ActiveRecord-JDBC connector, and has done a lot of work wiring JRuby up with NanoContainer. He's been an active member of the mailing lists and you've probably all read his blog at some point...if only for his excellent summary posts from RubyConf 2006. Even better, Nick hails from the Minneapolis area like Tom and me, and we attend the same Ruby user group meetings with the Ruby Users of Minnesota.

We also expect Nick will bring his familiarity with Maven 2 and his professional experience leading both Java and Ruby-based projects. He's a good developer and a good leader to add to the team.

Hopefully this will also serve as a reminder that JRuby is a true Open Source project, and anyone with Ruby and/or Java experience can easily start helping out. The team and the community continue to grow, as does Ruby's potential on the JVM.

Welcome to the team, Nick!