Sunday, October 27, 2013

A Simple Chromecast Media Streamer

The Google Chromecast became available over the summer to great anticipation and excitement.

I decided to buy one for a couple of reasons:
  1. I needed a way to play Netflix on one of my TVs that didn't currently have that ability.
  2. I wanted a way to cheaply and easily play videos that I had sitting on a NAS.
For 2. I had actually made some attempts via alternative methods using a Raspberry Pi running a number of different versions of XBMC. While initially things would look promising, when it actually came to using it in anger things fell apart. After reading up on the Chromecast SDK it seemed like I could write my own media streamer fairly simply.

So what follows are the details of my attempt. Overall it was an interesting investigation into what can be achieved with the Chromecast with a small amount of code. Some things work well and others not so much; in particular, the media status listener support seemed somewhat flaky.

The source code for the application is available here. The majority of the Chromecast interaction can be found in chromecast.js.

To try it out you must:
  1. Check out the app from GitHub and run npm install
  2. Get your URL(s) whitelisted
  3. Modify receiver.html and chromecast.js, setting your application id
  4. Point your whitelisted URL at your modified receiver.html
  5. Add the domain you plan to run the sender web application on to the Chrome browser Chromecast settings. See the "Whitelisting Chrome Apps" section here
  6. Run the server via "node lib/server.js [port] [path to media]"
  7. Point your Chromecast enabled browser at http://[host]:[port] to load the Media List Page
I decided (at least initially) to go the Google Chrome route for the application.


For the receiver I ended up using the sample one from cast-chrome-sender-helloworld. I just had to hook it up to my whitelisted URL and host it somewhere. In the end I put it on a Jetty instance running on the Raspberry Pi. To make it available I set up a port forward in my home network's router pointing at the Raspberry Pi Jetty instance.


The sender part of the application runs in the Chrome browser.
  • It provides access to the list of media available on the NAS (actually this can be any filesystem directory accessible to the web application).
  • It enables the media to be started, stopped and paused. It also enables the play position to be set.
I wrote the sender in JavaScript running on Node.js. It's hosted within my home network on my Raspberry Pi. This part did not have to be visible outside of my home, and that included the URLs for the media. I was happy to find that I could pass internal URLs pointing to the Node.js server to the receiver and they would load fine.

The backend part of the sender application simply provided a REST handler to:
  1. Get a list of available media
  2. Get and set the currently playing media
  3. Get and set the current activity id so that page reloads of the sender application can pick up on what the receiver is currently doing.
It also used a Connect static registration to enable loading the media from the NAS/filesystem.

For the frontend I decided to use a mobile friendly setup in the hope that the Chrome browser on iOS and Android will eventually get cast support (neither currently does). It uses jQuery, jQuery Mobile, Backbone.js and Underscore.js, with all modules in AMD format. My Zazl Optimizer is also used to provide dynamically built and optimized JavaScript to the frontend.

The media list page looks like this:

You can select one of the listed media files to play or navigate to the Playing page.

The playing page looks like this:
You can select the Chromecast Receiver to play on, set the current position via the slider, and start, pause and stop the media itself.

Wednesday, August 28, 2013

Zazl AMD Optimizer and Node.js

When I first started writing the Zazl Optimizer I focused on providing a Java HTTP layer that could be used to support dynamic analysis and optimization of AMD based web applications. One of the core analyzers within the Zazl AMD Optimizer is written in JavaScript, so with that in mind it made sense to also provide an HTTP layer written to run in Node.js.
Node.js has a number of static file serving libraries available. If one of those is combined with the Zazl AMD Optimizer you have an environment where you can write your AMD based application and serve up an optimized (concatenated and compressed) version of it to your browser based clients.

One of the most well known and used of these libraries is Connect. It's actually used within a large number of Node.js based middleware libraries such as Express. For the Zazl Optimizer's purposes Connect is used to serve up any static resource that is not handled by the optimizer. The packaging of the Zazl Optimizer for Node.js provides a Connect based server frontend. The section that starts up the HTTP server looks like this (from the file found here):

    var connectOptimizer = zazloptimizer.createConnectOptimizer(appdir, compress);

    var app = connect()
        .use("/_javascript", connectOptimizer)
        .use(connect.static(appdir)) // static handler for the application directory
        .listen(port);


When creating the optimizer you give it the path to where the JavaScript resources reside and also whether to turn on compression. You can see above that the typical approach is taken to initialize the Connect environment. The path is first checked for "_javascript" and, if matched, directed to the optimizer to be handled. Otherwise a Connect static handler for the specified application directory is searched, along with one to handle finding Zazl's AMD loader that the application code references. To take advantage of the AMD loader handler simply reference it as follows in the HTML file:

    <script type="text/javascript" src="loader/amd/zazl.js"></script>

That's more or less all there is to setting it up. You can see more in two sample GitHub repositories, one with Dojo samples and one with jQuery samples. Also, both are hosted here and here.

Note: The hosting site (Heroku) puts both apps to sleep after being idle for 1 hour. Don't be surprised if the first load(s) take some time. Subsequent loads will demonstrate the full potential. Alternatively you can download the source from the repositories and run them yourself.

Friday, December 7, 2012

Inlining HTML5 WebWorker source content

I have been playing with HTML5 WebWorkers in an attempt to solve a performance problem and came up with a simple way to inline the WebWorker's source code. The Basics of Web Workers tutorial demonstrates inlining via script tags marked as 'type="javascript/worker"', however this means placing the WebWorker source within the HTML page. I use AMD for my module loading and would like to avoid this approach. With that in mind I came up with the following:

    var webWorkerFunc = function() {
        onmessage = function(e) {
            // worker logic goes here (elided in the original post)
            postMessage(e.data);
        };
    };

    var content = webWorkerFunc.toString();
    content = content.substring("function () {".length+1);
    content = content.substring(0, content.lastIndexOf("}"));

    var URL = window.URL || window.webkitURL;
    var blob = new Blob([content]);
    var blobURL = URL.createObjectURL(blob);
    var webWorker = new Worker(blobURL);
    webWorker.onmessage = function(e) {
        // handle messages posted back from the worker
    };

This approach is similar to the one described in the HTML5 Rocks tutorial except that Function.toString() is used to obtain the WebWorker source code. The function wrapper is stripped and the result is passed into the Blob constructor.
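The wrapper-stripping step can be pulled out into a small reusable helper. This is a sketch of the same technique; the helper name is my own, and using indexOf("{") rather than a hard-coded prefix length is my choice to cope with engines that stringify the function header differently:

```javascript
// Hypothetical helper: extract the body of a function via toString() so it
// can be handed to a Blob-backed Worker, as described above.
function extractFunctionBody(fn) {
    var src = fn.toString();
    // strip everything up to and including the first "{" and the final "}"
    return src.substring(src.indexOf("{") + 1, src.lastIndexOf("}"));
}

var body = extractFunctionBody(function () {
    onmessage = function (e) { postMessage(e.data); };
});
// body now contains only the worker source, ready for new Blob([body])
```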

Sunday, September 23, 2012

Adventures in source map land

JavaScript source maps are a great solution to the age-old JavaScript problem where you have minified your source code but now you need to debug it. Attempting to step through the minified code can push a developer over the edge :-)

Within the Zazl Optimizer there is support to minify the JavaScript responses that are generated. The compressor interface it provides allows different compression implementations to be configured. Although I do not have a Google Closure Compiler implementation available with the Optimizer codebase, I have been experimenting with providing one.

With this in mind I decided to see if I could make the minified JavaScript responses Zazl generates include the "//@ sourceMappingURL=" comments and load the source maps when requested. To begin with, the source maps themselves have to be generated. When running the Closure Compiler, simply setting a non-null value on the sourceMapOutputPath property of the CompilerOptions object will trigger the source map generation. The JSON string representation of the source map can then be obtained from the sourceMap property of the Result object.

  CompilerOptions options = new CompilerOptions();
  options.sourceMapOutputPath = "";

  Result result = compiler.compile(extern, input, options);
  String compressedSrc = compiler.toSource();
  compressedSrc += "\n//@ sourceMappingURL=_javascript?sourcemap=" + path + ".map\n";
  StringBuffer sb = new StringBuffer();
  result.sourceMap.appendTo(sb, "sourceMap");

Note in the code above the attached URL points to the Zazl HTTP handler to obtain the source map for a given module when the HTTP request contains a "sourcemap" parameter.

At this point I should indicate that for performance purposes the Zazl Optimizer does not run the compressor for each JavaScript response it generates. Individual modules are compressed and cached so that when a response is generated it is simply a matter of concatenating the required modules. This results in a single stream of JavaScript containing multiple modules and also multiple "//@ sourceMappingURL=" comments separating them.
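A rough sketch, in JavaScript, of the cache-and-concatenate approach just described. The names and the stand-in "compressor" are illustrative only, not Zazl's actual code:

```javascript
var cache = {};

// Stand-in for a real compressor pass; each module is compressed once
// and the result is cached under the module id.
function getCompressed(id, src) {
    if (!cache[id]) {
        cache[id] = src.replace(/\s+/g, " ").trim();
    }
    return cache[id];
}

// A response is just the cached modules concatenated, each followed by
// its own sourceMappingURL comment pointing back at the HTTP handler.
function buildResponse(modules) {
    return modules.map(function (m) {
        return getCompressed(m.id, m.src) +
            "\n//@ sourceMappingURL=_javascript?sourcemap=" + m.id + ".map\n";
    }).join("");
}
```

It is exactly this per-module comment pattern that runs into trouble below.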

And this is where things fall apart with this approach. It appears that Chrome's source map implementation cannot deal with a single JavaScript resource containing multiple modules and multiple sourceMappingURL comments. When run in Chrome, I see the debugger hook up the first module it finds in the resource and then ignore the rest.

At the moment the only solution I can see is for Zazl to stop compressing individual modules and just compress the single JavaScript response generated. This will result in a single resource listed in the debugger, not individual modules, but the minified code will be hooked up correctly to the unminified source. I really don't want to do this as the performance hit will be substantial. A to-do for me is to find out if there is any way I can get Chrome to handle the multiple modules within the single resource. I'll update the post if I find out more.

Update 9/27/2012:
After posting a message on the Chrome DevTools Google group I was pointed to the part of the source map specification that describes sections. This is exactly what I needed. Instead of writing multiple sourceMappingURL comments, the optimizer writes one URL that gets directed to the optimizer's JavaScript servlet with an identifying key for the contents of the response. When the JavaScript servlet receives the request for the map it generates a JSON object containing the required sections for each module.
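For reference, an "index map" with sections looks roughly like this per the source map specification. The file names and offsets here are purely illustrative:

```javascript
// Shape of an index map: one top-level map whose sections each point at a
// per-module source map together with a line/column offset into the
// concatenated resource.
var indexMap = {
    version: 3,
    file: "combined.js",
    sections: [
        { offset: { line: 0, column: 0 }, url: "moduleA.js.map" },
        { offset: { line: 120, column: 0 }, url: "moduleB.js.map" }
    ]
};
```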

The good news is with these changes in place the Chrome debugger now shows and links to all source files correctly. The bad news is that other debug tasks, such as setting breakpoints, do not work.

Thursday, July 26, 2012

Zazl Optimizer integrated into Maqetta

The Maqetta project is a great new tool for building HTML5 based user interfaces. One of its features is a "preview" option that allows developers to view the pages they have assembled. This functionality runs as an AMD based webpage loading all of its AMD modules individually. The load time of the preview can be significantly affected when running the preview in a high latency environment as each module load is an individual HTTP request. Typically the fix for this is to run some form of build tool that will concatenate all the modules together so that only one HTTP request is required. However, as the pages are assembled dynamically in Maqetta performing a static build is not really a viable option.

This is where Zazl can help. The Zazl AMD Optimizer supports dynamic optimizations such as module concatenation and can typically be integrated with minimal coding. Maqetta is OSGi based, so Zazl must run in its OSGi mode as a set of OSGi bundles.

One of my main goals for the integration was to be as unobtrusive as possible in regard to the Maqetta source modifications. Only two core modifications were required:
  1. Modify the generated preview URL to include a "zazl=true" parameter when Zazl is required to handle the preview.
  2. Ensure that a raw version of Dojo was available for Zazl to use. Zazl requires that the AMD modules it analyzes have not been built with another build tool. Unfortunately the Dojo that Maqetta uses for preview has already been run through the Dojo build tool. Maqetta uses an ajaxLibrary Eclipse extension point to register paths to different libraries. A new extension instance for the raw Dojo code was added so that it did not interfere with the existing ajaxLibrary extension for the built version of Dojo.
With the Maqetta modifications in place, some bootstrap code is required to set up the Zazl runtime so that it can intercept the preview URL requests and ensure that the Zazl AMD loader is used to load the AMD modules. You can see all of the bootstrap code here.

Modifications have to be made to the preview's HTML page to ensure that the Zazl AMD loader is configured and loaded. A JEE Filter is a great tool for intercepting HTTP requests and responses. A Filter was written and configured within Maqetta to catch the preview requests and look for the "zazl=true" URL parameter. If it matches, an HTML parser (written using a Java library called NekoHTML) is used to parse the HTML looking for the Dojo script tag. The parser switches the script tag with one that loads the Zazl AMD loader and also sets up the configuration.

In addition to creating the JEE Filter for the preview, the bootstrap code has to ensure that the Zazl JavaScript servlet is configured and running, and also that Zazl resource loading requests can find resources within the Maqetta environment. Both the JEE Filter and the Zazl JavaScript servlet are registered in an OSGi Activator run within a bootstrap OSGi bundle called maqetta.zazl. This Activator also creates an instance of a custom Zazl Resource Loader that understands how to obtain resources from the Maqetta environment. Maqetta provides its own virtual directory API that can be used by this custom Resource Loader to obtain URLs to the resources.

The bootstrap code includes one other component. When the preview webpage is loaded it now has a reference to the Zazl AMD loader. The Maqetta environment must be able to find this resource, which resides in one of the Zazl Optimizer's bundles. To achieve this the Zazl Optimizer bundle has to register an ajaxLibrary Eclipse plugin extension. I didn't want to contaminate the Zazl code with Maqetta specific references, so an OSGi fragment bundle was created to add the required Eclipse metadata. You can see this fragment bundle here.

This integration also had to handle how the Zazl OSGi bundles would be integrated into the Maqetta git repository. The Maqetta git repository uses submodules to reference its third-party dependencies. Providing direct submodule links to the Zazl git repositories on GitHub would not work well, as Zazl itself has a build step that has to be run. I decided the best way to handle this was to provide Zazl release git repositories hosted on GitHub.

There are two staging repositories:
  1. One contains the build output of Zazl, with tags marking specific versions.
  2. The other contains the binary dependencies that Zazl requires to run.
This provides a nicely controlled way for Maqetta to be upgraded to new versions of the Zazl Optimizer.

You can try all of this out by loading Maqetta. Developer setup details can be found here. The Preview7 Release, when available, will contain Zazl. It will be found here.

Saturday, February 11, 2012

AMD, jQuery and the Zazl Optimizer

Having got my dynamic optimizer running with Dojo 1.7, I decided to take a look at how jQuery works in the AMD world. With the release of jQuery 1.7 it became possible to load and reference the core jQuery code as an AMD module. Also, after looking at the jQuery Mobile library (note this is currently only available in the 1.1 version that has not yet been released) I found that it too was AMD enabled. So it seemed like the perfect time to get familiar with the jQuery world. I decided I would write a jQuery Mobile based frontend to my Music Server application. It already uses the Zazl Optimizer to load a Dojo 1.7 based desktop and mobile frontend.

The first step was to obtain jQuery 1.7.1 and jQuery Mobile 1.1. jQuery 1.7.1 can be downloaded from here and jQuery Mobile 1.1 can be obtained by building it from its GitHub repository. I should note that in both cases the uncompressed versions are used, as the compression is handled by the Zazl Optimizer itself.

Once downloaded, I placed jQuery 1.7.1 in a directory path of "lib/jquery/jquery-1.7.1.js" and jQuery Mobile 1.1 in a directory path of "lib/jquery-mobile/" within my web application. I also had to obtain the required CSS file for jQuery Mobile (along with the images it references). This I placed in a directory path of "css/jquery-mobile/" and referenced it in the HTML:

    <link rel="stylesheet" href="css/jquery-mobile/" />

The HTML front-end code could then simply reference the modules via a call to the Zazl Optimizer's entry point, "zazl". The actual script tag that loads the jQuery code is inserted by an HTML Filter as described here.

    <script type="text/javascript">
        zazl({
            paths : {
                jquery: "lib/jquery/jquery-1.7.1",
                jquerymobile: "lib/jquery-mobile/"
            }
        },
        ["jquery", "jquerymobile", "app/jqmobile"],
        function($) {
        });
    </script>

Above you can see the main AMD module that handles the application logic, called "app/jqmobile". Within that, the jQuery core is referenced as follows:

    define(['jquery'], function ($) {

        $(document).ready(function() {
        });

    });

That's about all there is to it. jQuery is used just as it is normally.

Sunday, January 15, 2012

Using an HTML Filter to insert JavaScript tags

The code I have produced for my Zazl JavaScript Optimizer works by generating URLs for HTML script tags. Because of this, the developer must use some form of server-side support to generate the HTML resource so the script tags can be inserted. As the code is written in Java, the obvious choice for the server-side technology is JSPs. If you are comfortable with writing JSPs, and perhaps also plan to use them to insert other dynamic content into the returned HTML resource, then they are a good solution; but if you are only interested in using the Optimizer then writing plain HTML is simpler.

I decided to write an HTML Filter to make adoption of the Optimizer easier. It can be used to insert the required script tag into the HTML resource before it is returned to the requester. All the developer is required to do is add the JavaScript that references the Optimizer's AMD loader entry point. This can be via an embedded script within the HTML or via a "main" JavaScript resource referenced by the HTML via a script tag with a "src" attribute.

Writing the HTML Filter was made fairly straightforward by the great third party open source libraries that are available: NekoHTML and UglifyJS. The HTML Filter itself is written as a JEE Filter. Filters allow HTTP requests and responses to be modified before and after the HTTP servlet serving the HTML is executed. In this particular case the HTTP response is obtained by the Filter and analyzed before being returned to the requester.

NekoHTML is an HTML parser written in Java. The Zazl Optimizer HTML Filter uses it to parse the HTML response returned from the web container. The parser allows the filter to find and scan embedded JavaScript within the HTML. It also allows the filter to identify "main" JavaScript resources attached to the HTML that might contain the Optimizer's AMD loader entry point.

Once these JavaScript snippets have been obtained, the filter uses the JavaScript parser provided by UglifyJS to parse the code and locate the Optimizer AMD loader entry point. The entry point should provide the ids of the AMD modules that can be considered the top level modules for the page. The Optimizer's analyzer is then used to analyze them and generate a script tag URL that can be inserted into the HTML response.

In a nutshell, that's about all there is to it. I should also mention that the Filter attempts to ensure the HTML resource does not get cached, because if any JavaScript resources required for the page are modified, a new script tag URL has to be generated. If the HTML resource were cached, those JavaScript changes would never be picked up by the requester. The Filter does this by stripping out any request headers that the web container might use to indicate caching is possible.

You can see this in action in the Zazl AMD Optimizer sample WAR; a wiki page found here provides some more details. Alternatively, I have a Music Server application found here that also demonstrates the Filter and Optimizer in action.