So what makes Tracker work? I'll be writing a series of technical blog posts to explain that, starting with a deep dive into the client-side architecture. In future blog posts I will highlight Tracker's server-side architecture and hosting environment, the external services Tracker uses, and the major architecture changes we're considering.
The client-side architecture is the heart and soul of Tracker and is largely based on two user experience principles. First, users should not have to "reload" the page or perform any action to see changes made by other users. Second, users should see the effects of their changes immediately, without having to wait for the server to respond.
The client is kept up to date by periodically polling the server for changes. Changes are returned in the form of commands, which are executed on the polling client exactly as they were executed on the originating client. Executing a command updates domain models, which notify the views, which in turn create, update, and remove the HTML elements you see on the page. This client polling approach results in about 2,000 server requests per second at peak, with the majority of the traffic handled by memcached processes sitting behind our nginx servers. Other than a few glitches here and there, polling has worked well for us, but with ever-increasing traffic we are getting closer and closer to its limits, and we are considering moving to a WebSocket/Socket.IO-based push approach. I will dive into a deeper discussion of what we're considering in future blog posts.
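To make the polling flow concrete, here is a minimal sketch of the idea: the client remembers the version of the last command it applied, asks the server for anything newer, and executes each incoming command against its domain models. The names (`poll`, `executeCommand`, the shape of a command object) are illustrative assumptions, not Tracker's actual API.

```javascript
// Each client remembers the version of the last command it has applied.
var lastVersion = 0;

// Commands from the server are executed the same way they were on the
// originating client: they update domain models, which notify the views.
function executeCommand(command, models) {
  var model = models[command.id] || (models[command.id] = {});
  // Apply the command's attribute changes to the domain model.
  for (var key in command.changes) {
    model[key] = command.changes[key];
  }
  lastVersion = command.version;
}

// One polling cycle: fetch commands newer than the last one we applied,
// then execute them in order. fetchCommands stands in for the HTTP call.
function poll(fetchCommands, models) {
  fetchCommands(lastVersion).forEach(function (command) {
    executeCommand(command, models);
  });
}
```

In the real system the fetch would be an asynchronous HTTP request fired on a timer, and the views would subscribe to model changes rather than read the models directly.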
It's important to merge changes from other people on the client without losing what a given user is in the middle of working on. This can be tricky, but here is how it works in Tracker. First, changes from other people are merged in as soon as a client receives them, paying careful attention not to override the user's local changes. Tracker keeps track of all of the user's local changes so it can determine which of the incoming changes can be safely merged in. Second, Tracker ensures that all commands are executed in order. If the client was up to date when the change was made, the user's changes are saved to the database, a command representing the change is inserted into the database, and a successful response is returned. If the client wasn't up to date, the user's changes are not saved, and a list of "stale" commands is returned to the client. The client's command queue then rolls back the command initiated on the client, executes the stale commands from the server, re-executes the original command, and sends it back to the server. Rolling back a command entails undoing any domain changes that the command's execution made.
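The rollback-and-replay flow above can be sketched as follows. This is a simplified illustration under assumed names (`CommandQueue`, `execute`/`undo` on commands, a `server` function returning either `{ ok: true }` or `{ stale: [...] }`); Tracker's real command queue is more involved.

```javascript
// A command queue that applies commands optimistically and reconciles
// with the server: on a stale response it rolls back, catches up, and resends.
function CommandQueue(server) {
  this.server = server; // sends a command; returns { ok } or { stale: [...] }
}

CommandQueue.prototype.send = function (command) {
  command.execute();                 // apply locally first, for instant feedback
  var response = this.server(command);
  if (response.ok) return;           // client was up to date; server saved it
  command.undo();                    // roll back the local domain changes
  response.stale.forEach(function (staleCommand) {
    staleCommand.execute();          // execute the commands this client missed
  });
  this.send(command);                // re-execute the original and resend it
};
```

The key property is that stale commands are always applied before the user's own command is re-executed, so every client ends up executing the same commands in the same order.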
As many of you have noticed, Tracker currently uses the Prototype, YUI, and jQuery libraries for various utilities and event handling, but we are in the process of moving completely to jQuery. We are also looking into the feasibility of replacing our proprietary MVC architecture with Backbone.js, and taking a close look at Socket.IO and node.JS. It’s possible that the next generation of Tracker will be built using a combination of Backbone.js, Socket.IO, and node.JS.
Stay tuned for more blog posts on the technical details behind Pivotal Tracker.
P.S. If you’d like to help us take this architecture to the next level, we’re hiring!