A Brainf*ck JIT

I recently came across an article by Josh Haberman that demonstrates how to make simple, fun JITs with the help of LuaJIT. I am still a beginner in the field of compilers and JITs, so I tried to implement a Brainf*ck JIT with the help of the GNU libjit library. It proved to be a really entertaining exercise. The results are up here on Github.

The fun is in the part where we directly emit machine instructions while reading the source. The JIT doesn’t make any optimizations to the code; it just converts it directly into machine instructions.

Wikipedia tells us what each of the BF instructions corresponds to in C. This makes it easy to write the corresponding instructions for the JIT.
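For reference, the mapping looks roughly like this (my own paraphrase of the Wikipedia table, with an assumed unsigned char *ptr into the data tape; the loop brackets are handled separately):

#include <stdio.h>

/* The classic BF-to-C mapping: run one non-loop instruction c against the
 * data pointer ptr. */
static unsigned char *bf_step(unsigned char *ptr, int c)
{
    switch (c) {
    case '>': ++ptr;            break;  /* move the data pointer right */
    case '<': --ptr;            break;  /* move the data pointer left  */
    case '+': ++*ptr;           break;  /* increment the current cell  */
    case '-': --*ptr;           break;  /* decrement the current cell  */
    case '.': putchar(*ptr);    break;  /* print the current cell      */
    case ',': *ptr = getchar(); break;  /* read a byte into the cell   */
    }
    return ptr;   /* '[' and ']' behave like C's  while (*ptr) { ... }  */
}

The entire BF program is defined as a single function that, once compiled, is run. I’ll take up sections of the code from bf-jit.c and look at what they do: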

case '>':
    tmp = jit_insn_add(function, ptr, uptr);
    jit_insn_store(function, ptr, tmp);

The > instruction increments the current pointer by one unit. The uptr value is defined as a pointer-sized constant with value one. Once we do the add, we store the new value back into ptr (since ++ptr is equivalent to ptr = ptr + 1).

case '+':
    tmp = jit_insn_load_relative(function, ptr, 0, jit_type_ubyte);
    tmp = jit_insn_add(function, tmp, ubyte);
    tmp = jit_insn_convert(function, tmp, jit_type_ubyte, 0);
    jit_insn_store_relative(function, ptr, 0, tmp);

Similarly, the + instruction loads the byte at the address stored in ptr. We add one to the byte value and cast it back to a byte to ensure that it still fits in a byte. We then store the result back at the address contained in ptr.

case '.':
    tmp = jit_insn_load_relative(function, ptr, 0, jit_type_ubyte);
    jit_insn_call_native(function, "putchar", putchar, putchar_sig, &tmp, 1, JIT_CALL_NOTHROW);

Here the . instruction prints the character pointed to by ptr: we load that byte and call the native C function putchar, which prints it on the screen.

Making the loop constructs is the most interesting part. We define a struct that holds labels for the start and end of the loop, and a pointer to the parent loop so that we can return to it when the loop ends.
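A sketch of what such a struct might look like (the field names other than stop, which appears in the snippet below, are my own guesses rather than the exact definition in bf-jit.c):

typedef struct bf_loop {
    jit_label_t start;        /* label at the top of the loop, before the test */
    jit_label_t stop;         /* label just past the matching ']' */
    struct bf_loop *parent;   /* enclosing loop, restored when this one closes */
} bf_loop;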

case '[':
    loop_start(function, &loop);
    tmp = jit_insn_load_relative(function, ptr, 0, jit_type_ubyte);
    jit_insn_branch_if_not(function, tmp, &loop->stop);

The loop condition is the value pointed to by ptr. We load that value and use it to decide whether to branch past the loop, depending on whether it is 0 or not.
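The matching ] is not shown above. One plausible way to emit it (a sketch rather than the verbatim code from bf-jit.c, assuming loop_start placed a start label at the top of the loop) would be:

case ']':
    jit_insn_branch(function, &loop->start);  /* jump back and re-test the condition */
    jit_insn_label(function, &loop->stop);    /* '[' branches here when the cell is 0 */
    loop = loop->parent;                      /* pop back to the enclosing loop */
    break;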

We then compile the JIT function and call it.
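A sketch of what that final compile-and-run step looks like with libjit (assuming the generated function takes no arguments and returns nothing):

jit_function_compile(function);
jit_function_apply(function, NULL, NULL);

The results of the benchmark from running the Mandelbrot code were really interesting: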

Implementation                      Time
----------------------------------------
Mandelbrot (C Unoptimized)       22.343s
Mandelbrot (C Optimized)          1.241s
Mandelbrot (BF Interpreter)       8.304s
Mandelbrot (BF JIT)               3.918s

The unoptimized JIT is about 2x faster than the optimizing interpreter. Though we are nowhere close to the optimized C code, the gap can be narrowed by making some optimizations to the generated code.

Future work could be done in constructing an IR of the source and running an optimization pass over it; for example, ++++ can be folded into a single +4 operation instead of four separate +1 instructions.
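A minimal sketch of how such a pass might count a run of identical operations while scanning the source (a hypothetical helper, not code from bf-jit.c):

#include <stddef.h>

/* Count how many times the operation at src[i] repeats consecutively, so that
 * a single fused instruction (e.g. one add of n) can be emitted instead of n
 * separate ones. */
static int count_run(const char *src, size_t i, size_t len)
{
    const char op = src[i];
    int n = 0;
    while (i < len && src[i] == op) {
        n++;
        i++;
    }
    return n;
}

For a run of n consecutive + characters we would then emit a single add of n and advance the source cursor by n, and similarly for -, > and <.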

Certain references really helped me along the way.

UPDATE:

I made a few optimizations to fuse multiple increment/decrement and pointer movement operations into single ones. The benchmark time for my JIT is now much closer to that of the optimized C version. The latest benchmark results are:

Implementation                      Time
----------------------------------------
Mandelbrot (C Unoptimized)       22.343s
Mandelbrot (C Optimized)          1.241s
Mandelbrot (BF Interpreter)       8.304s
Mandelbrot (BF JIT)               1.609s

Maybe I should look at implementing these optimizations next.


Fun with morse code

A few days back I sat down and cooked up something: an esoteric language made only of the Morse code characters . and _. I won’t call it a full programming language per se, but it can be used to make some fun stuff (even if the code makes no sense just by looking at it).

Mind you, don’t try to develop anything useful with it (it just has basic stack manipulation and arithmetic operators on integers). If you can make something with it, I’ll send you a cookie. 🙂

The docs are up on the website and source code is on GitHub.

In the meantime here is a morse “Hello” program.

_.. __.____
_.. __.__..
_.. __.__..
_.. __.._._
_.. _.._...
___.
___
___.
___
___.
___
___.
___
___.

GSoC: The final wrap up

This is the final round-up of the work I did for my Google Summer of Code project – “Autosuggestion of Search Engines” for Mozilla Firefox. It was a great experience.

The module starts with the launch of the browser and raises a notification when it finds that you have been submitting search forms. The GUI part has not been implemented yet, as it was one of the non-goals of the project. The algorithm that I had proposed has been completely implemented. The code has test coverage for the operation of the algorithm and for the proper functioning of the disk I/O.

The raised notification can now be used by any of the modules in the Firefox code to do something with it. I have put the documentation of the module on the Mozilla Wiki.

It was a great experience, and I plan to work on all the follow-up bugs that come up for the module. My patch is up for review at Bug 335448 and will be merged to the source tree after that.

I would like to thank my mentor, Matthew Noorenberghe of Mozilla, for giving me all the advice and support for the project, and for tips on improving performance and code quality.

GSoC: Towards the first deadline

Today is the soft pencils-down date for GSoC 2013. My work is almost done. Since the last update, I have made the following changes:

  1. Removed a lot of unnecessary methods and clubbed many of the functions into a single one, so the code is now more manageable.
  2. Added a few browser preferences to toggle the Autosuggest Search Engine feature on and off.
  3. Saved a lot on the I/O front. The module now saves to disk only once every 5 minutes, to avoid excessive disk I/O. It also saves the data just before the browser is closed.
  4. Better tests – there are now tests to check that the disk I/O functions behave properly and that the most likely search field element is predicted correctly.
  5. Also got my patch reviewed by Gregory Szorc [:gps] to get feedback on the build front. A lot of boilerplate in the Makefile has thus been reduced.

GSoC Project Update

It has been a long time since the last update! The project has shown a lot of progress since then.

  • The algorithm for detecting form submissions and heuristically deciding whether a form is a search form now works perfectly (according to what I had proposed initially).
  • I struggled for some time with integrating the frecency metric for a website from the Places API. But now this is done, and the module checks the frecency score of a website before marking it as a positive result. Simply put, we now measure how frequently you have visited the website, and that is also used to decide whether the form is relevant enough to be a search form.
  • I had written some basic automated mochitests for the mid-term evaluation. Now the tests are better, and the pains of testing the algorithm are over. Yay!

Progress can be tracked in Bug 335448.

At the moment, when a positive result is detected, a notification is raised in the browser. Any consumer can listen to the notification to get information about the site that has been detected to have a search engine. An ideal use case would be to implement a GUI that allows the user to add the website as a search engine to the browser. This was a non-focus goal for the project, so it will be tracked by the follow-up Bug 879155.

ARM Development Environment on Linux

This post is intended for the Systems Science students taking the Embedded Systems course at IIT Jodhpur.

The Keil IDE is inefficient for normal programming on the ARM platform, and setting up a virtual machine with an ARM processor is too hectic. But it turns out QEMU can run ARM binaries on the host OS itself, so it’s dead simple to set up an ARM environment on your Linux system.

I will be using the Ubuntu distro for the same.

Install the following packages from the Terminal.

$ sudo apt-get install gcc-4.6-arm-linux-gnueabi libc6-dev-armel-cross qemu-user-static

Write some assembly in your text editor. There are some differences from the syntax taught in class.

There are no AREA and ENTRY directives. Instead, to define an entry point for your program, use the label _start. Then at the top add

.globl _start

to signify that it is the global start of your program. Also, any label needs to be followed by a :.

Here is a sample program that calculates the factorial of 10.

[gist 343ccc4f48020f656c14]

To assemble this execute:

$ arm-linux-gnueabi-as -o factorial.o factorial.s

To create an executable file:

$ arm-linux-gnueabi-ld -o factorial factorial.o

Just to verify that the files you created are for the ARM platform, you can do:

$ file factorial
factorial: ELF 32-bit LSB executable, ARM, version 1 (SYSV), statically linked, not stripped

Finally to execute:

$ ./factorial

Hence, we have a fully functional ARM build environment on our host Linux system itself.

GSoC Project Report Update

I know, this update is coming after a long time. Here is a quick list of what works and what needs to be worked on:

  • The basic form handling and score calculation work. It is still not the exact algorithm that I had proposed, but we will get there.
  • Right now I have been doing some manual testing. I will write automated test cases once the detection stabilizes.
  • I haven’t started working on the UI front. I guess that will happen once the above two are done.

You can track the progress of the work on Bug 335448.

Shoes, Socks & Presentations

After listening to Steve Klabnik‘s talk at RubyConf India, I really wanted to try Ruby and Shoes as well. After messing with it for a few hours, I came up with Socks.

Socks is a Shoes app that takes in data in JSON format and renders it on screen. It’s designed to be used as slides for presentations, so it supports only text and images.

The best part is that you define your GUI in a declarative language in the JSON format. This is very similar to CSS styles, so it’s really easy to get started. Moreover, your slides are just your data, so you can write software on any platform to read them. (Data freedom? Yeah!)

This produces a window like this:

[image: socks2]

Or you could write:

[image: socks1]

You can put all this together and Socks will create a nice presentation for you.

It renders just text and images for now. Go give it a try; it’s on GitHub. Any help with Socks is appreciated, so just fork it and send me a pull request!

Grite: Blogging for Hackers

Last night I sat down and built a simple blogging platform that lets you write posts as GitHub Gists. I think most blogging platforms are overkill when you ‘just’ want to write (yes, including this blog of mine). There has been a recent rise of static blog/site generators like Octopress, Hyde, etc., but these need to rebuild the static site whenever new content comes in.

GitHub Gists offer a clean way of sharing text snippets, formatted with Markdown if you wish. So why not have something that aggregates your gists in the form of a blog?

Grite does exactly that. It’s a clean, minimalist and responsive blogging tool, fully powered by client-side JS. You just write your gists and update the configuration file directly on GitHub online, without cloning the repository on your machine. Boom! Your blog is updated, with nothing to rebuild. Check out the example site.

You hardly need to do anything to set it up either. Just fork it on GitHub, write your gists, and edit the configuration file. Give it a try, and let me know!

Simple chatrooms with Redis

Redis, the in-memory key-value store, provides very efficient storage of data structures and is amazingly fast. Its pub/sub system can be used to create a simple chat room or message broadcasting system for your server, with zero backend code to handle the messages. I’ll be using Node.js to build a simple chat room (which can obviously be modified into a message broadcasting system for your server).

Here we have a channel, and anybody can join the channel as a publisher or a subscriber. A publisher pushes messages into the channel, which are then sent to all the subscribers. In our chat client we will create two connections, one as a publisher and the other as a subscriber, because we need to both send and receive messages.

First make sure you have the Redis server installed & running. You can download it for free from redis.io.

Next we will need the Redis client for Node.js. To install it execute:

npm install redis

We will be creating two connections to the server: one to publish messages and another to receive incoming messages. To create a client we have:

var redis = require('redis'),
    client = redis.createClient();

We will just need to handle the events from now on. In case of error, we handle it gracefully by:

client.on('error', function(err) {
    console.log("Error: " + err);
});

When we receive an incoming message, we get a message event:

client.on('message', function(channel, message) {
    // we are printing on screen, you can do whatever you want
    console.log(message);
});

Once the client has subscribed, the subscribe event is raised:

client1.on('subscribe', function(channel, count) {
    // client1 is the subscriber connection; once it has joined the channel,
    // pipe anything typed on stdin into the channel through the publisher
    // connection, client2
    process.stdin.on('data', function(text) {
        client2.publish(channel, text);
    });
});

Just make sure you subscribe the client to the channel.

client.subscribe(channel);

Combining all these and a little bit more, I have written a minimal chat client.

With this we can build a high-performance message broadcasting system for our servers, with very little code to handle the transfer of data in the channels.

Take care, it’s Floating Point Arithmetic

Yesterday night I was just idling on IRC and happened to come across an interesting discussion taking place there. The original issue was that a Taylor approximation used to give the identity for very small numbers, and it gradually turned into an enlightening discussion. Thanks to Boris Zbarsky for clearing my doubts on this topic.

This post is a note to self (and others as well) so that I can keep these things in mind in the future.

For example, we want to calculate e^x - 1. By the normal Taylor Series expansion of e^x, we would get:

e^x - 1 = x + \frac{x^2}{2!} + \frac{x^3}{3!} + ...

If we add the terms in a computer in that order, as x + \frac{x^2}{2!} + \frac{x^3}{3!} + ..., we would probably get a correct answer. But if you were to do the same for a very small x, you would probably end up with exactly x, which is not quite correct: the smaller terms are rounded away as soon as they are added to x.

The correct way out of the problem would be to add the terms in the opposite order, something like \frac{x^3}{3!} + \frac{x^2}{2!} + x. The reason is that for a very small x the correct answer is “basically x”, and the tiny correction terms need a chance to accumulate before they are added to x. So for this we would need:

  • Next term error estimation to decide how many terms we would want
  • Add the terms from smallest to largest

This gives us some important insight into floating point arithmetic. For example, suppose we want to add the following floats: 1, and 2^{25} instances of 2^{-25}. So what is 1 + 2^{-25}?

You would say almost 1 (even I thought so). But to the computer that is exactly 1. This is where floating point arithmetic turns evil: single precision floats have only 23 mantissa bits, so the tiny term is rounded off and you get exactly 1.
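A quick way to see this on an IEEE-754 machine (a minimal C sketch, not from the original discussion):

#include <stdio.h>

int main(void)
{
    float one = 1.0f;
    float tiny = 1.0f / (1 << 25);   /* 2^-25, exactly representable */
    float sum = one + tiny;

    /* 2^-25 is less than half an ulp of 1.0f, so the addition rounds
     * straight back to 1.0f. */
    printf("%d\n", sum == one);      /* prints 1 */
    return 0;
}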

But if you start off by adding the smaller guys first, they can actually accumulate and give the correct result. So if we add 2^{-25}, 2^{25} times, and then add 1, we would get 2, quite the opposite of the previous result.

Floating point arithmetic is not the arithmetic we learnt in school: it is neither associative nor distributive. Because these properties are not satisfied, compilers cannot freely reorder or optimize floating point operations either. That is the same reason gcc has -ffast-math, which tells it to just do your floating point operations fast and not care about the correct answer at all.

Comments on Hacker News.

Trends in Developers

I tried to measure the relative trends in different developer communities: say, for example, whether Ruby people are more active than Java developers, how they compare with each other on different metrics, and whether we can deduce any important conclusions from that.

Before I begin, I would like to make an important confession: I am in no way a data scientist, nor do I have much education in statistics. I have just tried to analyze the trends using standard tools and techniques. Maybe my inspiration was:

The best way to get started in data science is to DO data science!

First, data scientists do three fundamentally different things: math, code (and engineer systems), and communicate. Figure out which one of these you’re weakest at, and do a project that enhances your capabilities. Then figure out which one of these you’re best at, and pick a project which shows off your abilities.

Getting Started with Data Science by Hilary Mason

I wanted to try some real-world data collected by me, so I turned to the Github APIs and collected the following metrics about some 2,600 random projects for the languages Java, JavaScript, Ruby and Python:

  • Watchers –  People who are following the project, but not contributing code to it
  • Forks – People who have created a fork and making their own contributions to the project

Both of my assumptions describe the ideal case: I assume the watchers are following the project, while people who fork are contributing code (though in many cases, people may not make any changes to their forks).

I first wrote a Python script to pull data from Github on some projects for each language and store it in CSV files. The Search API came in handy here.

This creates datasets for the languages with columns in the order: name of project, owner of the project, number of watchers, number of forks, and whether the repository is itself a fork. Since Github doesn’t list projects by language without a search keyword, we iterate over all the letters of the alphabet as keywords to get a list of 2,600 repositories.

Next, to compare by the number of forks for Java vs. JavaScript, I sort the datasets for Java and JavaScript by their 4th column, i.e. the number of forks, and then plot one against the other. I fit a linear model to the data with a zero intercept to find the best-fit line. These computations were done using R.

Java vs. JavaScript

[plot: Java vs. JavaScript by Forks]

It yielded a linear model:

javascript = 3.609 * java

Thus we can see that the tendency to fork is higher among JavaScript developers than among Java developers. JavaScript folks like to take a project and make changes or contribute to it more readily than Java folks.

I did the same computation for the number of watchers by sorting the datasets by column 3 of the original data and making the appropriate changes to the code.

[plot: Java vs. JavaScript by Watchers]

This yielded a linear model:

javascript = 7.22 * java

This result confirms that there are a lot more JavaScript developers who follow other JavaScript projects compared to Java developers.

Java vs. Python

After doing the same computation by the number of forks for Java vs. Python, here is what I got:

[plot: Java vs. Python by Forks]

The linear model came out as:

python = 0.7672 * java

Thus we can see the trend-line is more oriented towards Java, leading to the conclusion that Java developers fork repositories more than Python devs and try to make changes or contribute.

On doing the same analysis for watchers,

[plot: Java vs. Python by Watchers]

This result was a bit surprising; the linear model came out as:

python = 1.423 * java

Thus in the Python community there are more watchers of other projects compared to the Java community. So Python people love to follow a project more, while Java devs prefer forking it and contributing to it.

Python vs. Ruby

The results for the number of forks for Python vs. Ruby are:

[plot: Python vs. Ruby by Forks]

This gives a linear model:

ruby = 2.743 * python

The trend-line is more oriented towards the y-axis, i.e. Ruby, leading to the conclusion that Ruby devs fork a repo much more compared to a Python developer. Rubyists like to jump in and make changes right away.

For the number of watchers:

[plot: Python vs. Ruby by Watchers]

This gave us:

ruby = 2.329 * python

I don’t need to say much: Rubyists follow others’ projects more than the corresponding Python devs.

Conclusion

We can figure out important trends between different programming language communities by comparing different aspects of them, and deduce certain facts: Java developers like to fork more than Python developers, while Python developers like to follow others’ projects more; Rubyists fork and follow projects more than Python people; and so on.

It’s a nice way to learn about and analyze trends in different developer communities. More can be done; this just scratches the surface.

My little build notifier

Building the Mozilla source tree takes quite some time on my laptop, and I’ll tell the truth: I love to have my fair dose of Facebook, Twitter and other social networking sites in the meantime. But switching to the Terminal window from time to time just to check whether the build is complete is disruptive!

I run a Linux system and I have libnotify installed, so why not put it to good use? I had written this side project called Coocoo – a tool to send notifications to different systems over the network (read: the wifi in our college). I left it incomplete and moved on to other things; things like an Android app to receive the notifications on a phone never got built. But one thing it could do is send notifications from one Linux box to another via libnotify.

So let’s put it to some good use and have my system notify me when the build is done, instead of me checking every time. I wrote a small shell script, mozbuild.sh, that runs the build and then fires off a notification when it finishes.

I started the notifier on my system using:

python notifier/coocoo-linux.py

and executed mozbuild.sh and went on to do something else. Voila! The notification was there when the build completed.

You can find Coocoo on Github at https://github.com/sankha93/coocoo

The end of my ‘tweets’ Repository

I had written about getting pretty cool statistics about your tweeting habits by transforming your tweets into a Git repository here. A simple Python script that I wrote did the job fantastically well. And I could see the graphs and other analytics on my Github tweets repo in the Graphs section.

That was until Github started sending out notification emails for mentions in commits. This turned out to be a disaster, because most developers have the same handles on Twitter as on Github, and when the tweets were turned into commits, Github fired off tons of emails to the people mentioned. (And I still don’t know how to put a stop to these mails.)

So I have decided to stop pushing updates to the Github repo from now on. I’ll still maintain a copy on my local machine, though I doubt it will ever go online in the repo. It’s at least better than pestering people with annoying emails from mentions in a Git repo they are not even associated with.

The story of an Online Judge

It was fun organising a Coding Marathon in our college this year, and a sense of happiness resurfaces from time to time when I look at the success of the event. It is interesting to watch students from all the years of study come together to solve algorithmic programming problems. But the actual twist was the online judge that was used to hold the competition online and to judge all the submissions automatically.

The motivation behind building the online judge came from the event format last year. Participants had to submit solutions to the problems via email, and then each one had to be checked manually by compiling it and testing its output against the input data. This process was not good; it was a lot of extra work for the person checking the solutions. So this summer I started building an online judge as a leisure project to automate the whole process.

There were challenges. The programs would be compiled on a server, and participants needed to submit programs through a web interface. Security was also a big challenge: running a server meant I could not allow people to hack into the server and tamper with the data or the rest of the system by submitting malicious programs, nor could I let them overload the server by running programs that take up too much time. Additionally, I wanted a universally accessible site that would also fit well on tablets and mobile devices, so I decided to go forward with the Twitter Bootstrap framework. I also open-sourced the code on Github. Things went well, the next semester started, and it was time to put this thing to the test.

I installed it on one of the computers of our Computer Center and ran it for a trial period. As expected, people came and hacked on it. Quite a few problems and security flaws were found and fixed. I iterated on this process till those flaws were eliminated. Then came the time to put the thing to a real test.

I organised a contest under the Programming Club: a six-hour coding marathon to solve algorithmic programming problems, with all solutions submitted on the online judge, where they would be run and tested to give the results immediately. Amidst all this, the biggest dilemma was whether the system would be able to bear the load when multiple submissions came in and were compiled and run at the same time. But all went well; it didn’t disappoint. It ran successfully through all the solutions, running and testing each within its time limit.

This was a kind of achievement: the first product of the Programming Club of IIT Rajasthan was used by the Programming Club itself to hold a constructive event. I give my deepest heartfelt thanks to the seniors who helped me figure out the security flaws in the system (Shashank and Chintan), deploy it as a server (Ashayam), and, above all, to those who so enthusiastically helped organise the competition by providing the questions and test cases on a short notice of only 12 hours (Rohit and Nishchay).

Congratulations to the winners!

The judge lives on, open-sourced as Codejudge on Github. It even has a homepage for easy access at http://sankha93.github.com/codejudge. I am also thankful to the people who gave their valuable suggestions for improving Codejudge after the event. Those features will be coming soon to Codejudge.