Sunday, April 3, 2011

JavaScript Loaders and Dynamic Optimization

If you are using a JavaScript framework to develop browser-based applications then the chances are you are using a JavaScript loader that the framework provides. The advantage of using a loader is that you can write your JavaScript code as separate modules and declare dependencies on other modules. When your application is loaded the framework's loader will ensure that the modules and their dependencies are loaded in the correct order.

For most of the JavaScript work I have done I have used Dojo as a framework. Dojo provides two core APIs that enable modules to be loaded and to declare dependencies:
  • dojo.provide() - used to declare the id of your module
  • dojo.require() - used to declare a dependency on another module
When dojo.require() is called the Dojo framework checks whether the module has already been loaded and, if not, loads it via an XHR request.
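As a sketch, a sync-loader module might look like the snippet below. The module ids are hypothetical, and `dojo.provide`/`dojo.require` are replaced with trivial recording stubs so the snippet stands alone:

```javascript
// Minimal stand-ins for the two loader APIs, just enough to record the calls.
var provided = [];
var required = [];
var dojo = {
  provide: function (id) { provided.push(id); },
  require: function (id) { required.push(id); }
};

// A hypothetical module: declare its own id, then its dependencies.
dojo.provide("app.Calendar");
dojo.require("dijit._Widget");
dojo.require("app.util.dates");
```

In a real page the framework's require() would load `dijit/_Widget.js` and `app/util/dates.js` over XHR before the rest of this module runs.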

This works well while developing your application; however, when it comes time to deploy to a production environment you do not want it making potentially hundreds of HTTP requests to load all its modules. Performance will suffer, especially in a high-latency environment.

To handle this issue most frameworks provide a "build" tool that allows you to package your application and all its dependencies into a single resource, or multiple resources, that can be loaded via a small set of script tags. Dojo provides such a "build" tool.

If you are like me though and don't particularly care for having to run static builds then using a dynamic optimizer is more appealing. Also, if your application is one that supports extensibility then using a static build may not even be an option unless you are willing to customize the framework's build tool. One example of this is Jazz, whose Web UI is extensible and which provides a dynamic optimizer that supports code contributions via extensions. (I should note that I am the writer of the original Jazz Web UI optimizer.)

Concatenating modules and their dependencies into an easily loadable resource is only one step towards well-performing JavaScript loading. Bill Higgins, a former colleague of mine from Jazz, wrote an excellent blog post that details some core optimization techniques. With this in mind I decided to write an optimizer that would support these objectives:
  1. Given a set of module ids enable the loading of these modules + all their dependencies in a single HTTP request.
  2. Use both validation based caching and expiration based caching when possible.
  3. Load any localization modules that are required using the locale of the client to determine the message files written into the response.
  4. Support a "debug" flag that, when set to true, ensures each module and its dependencies can be written into the loading HTML response as individual <script> tags, thus enabling easy debugging via browser tooling.
  5. Allow the JavaScript to be compressed as part of writing the HTTP response. Use both gzip and JavaScript-specific compressors (ShrinkSafe, UglifyJS, etc).
  6. Support a variety of server-side environments written in Java and JavaScript. For example JEE Servlet based (OSGi, JEE WebContainers) and CommonJS based environments such as NodeJS.
So far what I have talked about covers the old-style Dojo sync loader. Dojo is now moving to adopt Asynchronous Module Definition (AMD) for its loader architecture (1.6 in its source form is AMD compliant in the dojo and dijit namespaces, and 1.7 should use an AMD loader by default). This affects how 1) and 4) above are implemented.

1) Given a set of module ids enable the loading of these modules + all their dependencies in a single HTTP request.

This requires performing dependency analysis on the set of modules that make up the application. The result is an ordered set of modules that can be used to build a stream of content written into the response to the HTTP request.
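The ordering step amounts to a depth-first walk of the dependency graph, emitting each module after its dependencies. Here is a minimal sketch (the module map is hypothetical, not taken from any real application):

```javascript
// moduleDeps maps a module id to the ids it requires.
function orderModules(ids, moduleDeps) {
  var ordered = [];
  var seen = {};
  function visit(id) {
    if (seen[id]) { return; }
    seen[id] = true;
    (moduleDeps[id] || []).forEach(visit); // dependencies first
    ordered.push(id);                      // then the module itself
  }
  ids.forEach(visit);
  return ordered;
}

var deps = {
  "app/Calendar": ["app/util", "dijit/Widget"],
  "app/util": ["dojo/date"]
};
var ordered = orderModules(["app/Calendar"], deps);
// ordered: ["dojo/date", "app/util", "dijit/Widget", "app/Calendar"]
```

Writing the modules' source in this order guarantees every dojo.provide() runs before any dojo.require() that needs it.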

For a Dojo sync loader based system this has typically meant using the Dojo bootstrap process with replaced versions of dojo.provide() and dojo.require() that record the ids/dependencies. For each module that is loaded a regular expression is applied to the source code to obtain the dojo.provide() details and then, for each dojo.require() call, the dependencies. Regular expressions work quite well in this scenario as the APIs are very simple in structure. This is how the Dojo build tool builds its optimized versions, and also how the Jazz Web UI Framework works.
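A rough sketch of the regular-expression approach follows. The patterns here are simplified for illustration and are not the ones the Dojo build tool actually uses:

```javascript
// Scan sync-loader source for dojo.provide()/dojo.require() calls.
function scanSyncModule(source) {
  var provideRe = /dojo\.provide\(\s*["']([^"']+)["']\s*\)/g;
  var requireRe = /dojo\.require\(\s*["']([^"']+)["']\s*\)/g;
  var result = { provides: [], requires: [] };
  var m;
  while ((m = provideRe.exec(source)) !== null) { result.provides.push(m[1]); }
  while ((m = requireRe.exec(source)) !== null) { result.requires.push(m[1]); }
  return result;
}

var scanned = scanSyncModule(
  'dojo.provide("app.Calendar");\n' +
  'dojo.require("dijit._Widget");\n' +
  'dojo.require("app.util");\n'
);
```

Because the sync-loader API is just flat calls with string literals, even a simple pattern like this recovers the full id/dependency picture.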

For an AMD loader based system, using regular expressions, while certainly possible, is not what I would consider the best option, as the AMD API is more complex in structure. In this case using a real JavaScript language lexer/parser is a much better solution. As I wanted the optimizer to run in environments such as NodeJS, I needed a JavaScript lexer/parser written in JavaScript itself. Luckily the excellent Uglify-JS provides one. Similar to how the Dojo sync loader analyzer works, each module is parsed and scanned for "require" and "define" API calls, the results of which are recorded to obtain the ordered list of dependencies. One downside to using a true lexer/parser over regular expressions is that performance suffers somewhat; however, other sorts of optimizations can now be better supported, as the information available from the parser is far richer in detail than what a regular expression can provide. For example, Dojo is considering using has.js to sniff out features. The parser can be used to remove features identified by the "has" statements, making the returned code better tailored to the environment that is loading it.
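To give a flavour of the parser-based approach: Uglify-JS's parser turns source into a tree that can be walked for define() calls. To keep this snippet self-contained it walks a hand-built tree in the nested-array style that early Uglify-JS produced (the exact node shapes are an assumption here, but the walking technique is the point):

```javascript
// Recursively walk a nested-array AST collecting the dependency ids
// passed to define([...], factory) calls.
function collectAmdDeps(node, deps) {
  if (!Array.isArray(node)) { return deps; }
  if (node[0] === "call" &&
      Array.isArray(node[1]) && node[1][0] === "name" && node[1][1] === "define") {
    (node[2] || []).forEach(function (arg) {
      if (Array.isArray(arg) && arg[0] === "array") {
        arg[1].forEach(function (el) {
          if (el[0] === "string") { deps.push(el[1]); }
        });
      }
    });
  }
  node.forEach(function (child) { collectAmdDeps(child, deps); });
  return deps;
}

// Hand-built tree fragment for: define(["app/util", "dijit/Calendar"], fn)
var ast =
  ["toplevel", [["stat",
    ["call", ["name", "define"],
      [["array", [["string", "app/util"], ["string", "dijit/Calendar"]]],
       ["name", "fn"]]]]]];

var amdDeps = collectAmdDeps(ast, []);
```

Unlike a regex, the walker knows exactly which strings are dependency ids and which are ordinary literals, and the same tree can drive richer transforms such as has.js feature stripping.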

2) Use both validation based caching and expiration based caching when possible.

While returning a single stream of content versus each individual module is a significant performance improvement, the size of the content can still take considerable time to retrieve from the server. Using both validation and expiration based caching helps significantly here. Both techniques require some form of unique identifier that represents the content being delivered. For my optimization work I decided to use an MD5 checksum calculated from the JavaScript contents itself.

Validation based caching makes use of HTTP caching headers. When the JavaScript response is returned, an "ETag" header is added that contains the checksum value. When requests are received for the JavaScript content, the request is searched for an "If-None-Match" header. If one is found and the value matches the checksum for the modules being returned, an HTTP status code of SC_NOT_MODIFIED (304) is set and no response is written. This indicates to the browser that the contents should be loaded from its local cache.
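The handler logic reduces to a simple comparison. A sketch (real handlers would stream the body rather than return it, and the function name is mine, not Zazl's):

```javascript
// Compare the client's If-None-Match header with the current checksum
// and return the status/headers an HTTP handler would write.
function validateCache(requestHeaders, currentChecksum, jsContent) {
  if (requestHeaders["if-none-match"] === currentChecksum) {
    return { status: 304 }; // SC_NOT_MODIFIED: browser uses its local cache
  }
  return {
    status: 200,
    headers: { "ETag": currentChecksum, "Content-Type": "text/javascript" },
    body: jsContent
  };
}

var hit = validateCache({ "if-none-match": "abc123" }, "abc123", "var x = 1;");
var miss = validateCache({}, "abc123", "var x = 1;");
```

On a hit the browser still pays for the round trip, but not for the response body, which for a large concatenated bundle is most of the cost.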

Expiration based caching takes things one stage further. If it is possible for the HTML page loading the optimized JavaScript content to include a URL that contains the checksum, then the HTTP handler that returns the JavaScript content can also set an "Expires" HTTP header that sets the expiration to some time in the future. For the optimizer I have written, the HTTP handler looks for a "version" query parameter and, if it matches the relevant checksum value, sets the "Expires" header one year in advance. This works better than validation based caching because the browser will not even make an HTTP request for the JavaScript content, instead loading it from its local cache. To use this type of caching you must have some form of HTML generation that is able to access the optimizer to obtain the checksum values for the set of modules. This ensures that a URL with the correct checksum value is specified in the script tag loading the JavaScript content. Here is an example of a script tag URL that the optimizer I have written uses. Note it also contains a locale parameter that ensures applications using i18n modules can use the caching too.

/_javascript?modules=app/Calendar,&namespaces=&version=63ffd833cbe0a7ded0b92a124abd437a&locale=en_US
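The version-parameter check can be sketched like this (function name and one-year window as described above; the exact header-writing code is an assumption):

```javascript
// If the "version" query parameter matches the checksum, the content at
// this URL is effectively immutable, so a far-future Expires is safe:
// any change to the content produces a different URL.
function expirationHeaders(versionParam, currentChecksum, now) {
  if (versionParam === currentChecksum) {
    var oneYear = new Date(now.getTime() + 365 * 24 * 60 * 60 * 1000);
    return { "Expires": oneYear.toUTCString() };
  }
  return {}; // fall back to validation based caching only
}

var headers = expirationHeaders("63ffd8", "63ffd8", new Date(0));
```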

3) Load any localization modules that are required using the locale of the client to determine the message files written into the response.

The Dojo framework allows developers to use i18n modules for their messages. This means that simply changing the browser's configured locale will show the language-specific messages.

Dojo sync loader based applications use the dojo.requireLocalization() API to identify dependencies on i18n modules. Likewise, in an AMD environment quite a few of the implementations provide an i18n plugin that allows developers to prefix i18n dependency specifiers with "i18n!". Using techniques similar to those used for obtaining the JavaScript module dependencies, the optimizer I have written gathers these i18n dependencies too. The HTTP JS handlers I have written look for a locale query parameter and use that value to determine which language-specific i18n modules to write into the response. This means that returning i18n modules for all locales can be avoided, although it does require that the mechanism responsible for writing the application's HTML containing the script tags be able to obtain the locale information from the HTTP request (via the "accept-language" HTTP header).
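Selecting the bundle to write for a given locale follows Dojo's usual fallback chain, from the most specific variant down to the root bundle. A sketch (bundle contents are hypothetical):

```javascript
// Pick the best available message bundle for a locale, falling back
// from e.g. "en_US" to "en" to the root bundle.
function pickBundle(locale, bundles) {
  var candidates = [locale, locale.split("_")[0], "root"];
  for (var i = 0; i < candidates.length; i++) {
    if (bundles[candidates[i]]) { return bundles[candidates[i]]; }
  }
  return null;
}

var bundles = {
  root: { hello: "hello" },
  fr: { hello: "bonjour" }
};
var fr = pickBundle("fr_FR", bundles);
var de = pickBundle("de_DE", bundles);
```

Only the winning bundle is written into the response, so a client asking for "fr_FR" never downloads the German, Japanese, or other translations.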

4) Support a "debug" flag that, when set to true, ensures each module and its dependencies can be written into the loading HTML response as individual <script> tags, thus enabling easy debugging via browser tooling.

What has been described so far is great for obtaining quick-loading JavaScript; however, it is not particularly friendly for developers. Debugging code that has been concatenated and potentially compressed with a JavaScript compression tool is not fun at all.

Dealing with this in an AMD based environment is actually very simple, as one of the goals of AMD is to enable modules to be loaded individually. This means the debug flag is used just to ensure that neither JavaScript compression nor concatenation of modules is applied.

For Dojo sync loader environments my optimizer allows HTML generation mechanisms to obtain the list of dependencies and write a script tag for each into the generated HTML. This means that debugging tools will see each module independently.
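The tag-writing step can be sketched as a single switch on the flag (the URL shapes mirror the example earlier; the function name and `/js/` path are illustrative, not Zazl's actual layout):

```javascript
// With debug=true, emit one script tag per module so browser tooling
// sees each file; otherwise emit a single optimized request.
function scriptTags(orderedModules, debug, version) {
  if (debug) {
    return orderedModules.map(function (id) {
      return '<script src="/js/' + id + '.js"></script>';
    });
  }
  return ['<script src="/_javascript?modules=' +
          orderedModules.join(",") + '&version=' + version + '"></script>'];
}

var debugTags = scriptTags(["app/util", "app/Calendar"], true, "abc");
var optimizedTags = scriptTags(["app/util", "app/Calendar"], false, "abc");
```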

5) Allow the JavaScript to be compressed as part of writing the HTTP response. Use both gzip and JavaScript-specific compressors (ShrinkSafe, UglifyJS, etc).

If your HTTP handler's environment supports gzip then it is a very simple way to significantly reduce the size of the JavaScript content written back to the browser. This typically involves writing the content into a gzip stream of some form and then writing it out in the returned HTTP response. Browsers that support gzip send the HTTP header "Accept-Encoding" with a value that includes "gzip".

In addition, a JavaScript compression tool can reduce the content size significantly too. Tools such as ShrinkSafe, UglifyJS and Google Closure all provide good compression results. The optimizer I have written enables different compression tools to be plugged into the resource loading step of the process. The HTTP handler responsible for writing the JavaScript content uses these JS-compression-enabled resource loaders.
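The plug-in point amounts to the resource loader accepting a compressor function, so ShrinkSafe, UglifyJS, or Closure can be swapped in. A sketch, with a deliberately trivial blank-line stripper standing in for a real compressor:

```javascript
// A resource loader parameterized by a pluggable compressor function.
function makeResourceLoader(compress) {
  return function load(source) {
    return compress ? compress(source) : source;
  };
}

// Stand-in compressor: strip blank lines (a real deployment would plug
// in UglifyJS, ShrinkSafe, or Closure here).
function stripBlankLines(src) {
  return src.split("\n").filter(function (line) {
    return line.trim().length > 0;
  }).join("\n");
}

var loader = makeResourceLoader(stripBlankLines);
var compressed = loader("var a = 1;\n\n\nvar b = 2;");
```

Keeping compression behind this interface means the HTTP handler never needs to know which tool, if any, is in use.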

6) Support a variety of server-side environments written in Java and JavaScript. For example JEE Servlet based (OSGi, JEE WebContainers) and CommonJS based environments such as NodeJS.

I'm not going to go into too much detail here as I will probably write more about this in future blog posts, but briefly I will say that where possible the optimizer I wrote uses JavaScript to do any optimization work. I have written Java bindings and also NodeJS bindings. Both sets of bindings use a common set of JavaScript modules. All the Java modules can be used in both OSGi and POJO environments.

Seeing the optimizers in action

The optimizers I have written are available in the Dojo Zazl project. The easiest way to see them in action is via the samples that use the Zazl DTL templating engine for HTML generation. You can use the README for setup help, and this site too.

For a Dojo sync loader example run the "personGrid" sample in the zazlsamples WAR file, or from the zazlnodejs package using this command:

node zazlserver.js ./samples ./dojo15

For an AMD loader example using RequireJS, run the zazlamdsamples WAR file or run the zazlnodejs package using this command:

node zazlserver.js ./amdsamples ./dojo16 requirejs.json

Use a tool like Firebug to observe the JavaScript loading. You should see subsequent requests load from the cache and complete significantly faster.

Also, you can see the i18n support in action by selecting a different locale. In the sync loader "personGrid" sample you will see console messages displayed in the currently selected language. In the AMD samples you should observe the calendar widget change.
