Climbing the JavaScript Client Stack Part 1

December 9, 2011 at 5:40 AM | Mark Falto


JavaScript (JS) is fast expanding from a client-side script to a platform for full-featured web applications. The turning point seems to have been the availability of powerful JS libraries and the maturity of patterns that produce maintainable JS code. In a sense, libraries like jQuery almost changed the fate of JavaScript. Here I discuss some of the problems you have to face putting together an HTML5/JS app, and the JS libraries that solve them.

jQuery or… uhh, something else

I start by listing jQuery on its own because, well, that's the decision: to jQuery or not to jQuery. jQuery is almost ubiquitous in the world of JavaScript web programming. There is no simpler way to handle HTML DOM events, DOM manipulations, and basic Ajax requests. Once you learn the syntax of selectors it's extremely easy to select what you want from the view and bend it to your will. The other important point is that the jQuery community is ENORMOUS. There is great documentation, many many plugins, and utility libraries built on top of jQuery, most of which piggyback on jQuery's selector functionality, providing beautifully clean syntax. Some of the libraries I mention in this article depend explicitly on jQuery, and I for one am completely OK with that.

Interesting libraries that are not jQuery include Prototype.js, Dojo, and Ext.


Prototype.js, for example, is the go-to JavaScript library for Rails applications. It's called Prototype because of the way it prefers to inject behavior into objects using JS prototypes. This enables some neat syntax that a Ruby developer might love, but it can be problematic in JS. Modifying an object's prototype to inject new functions or change the behavior of standard DOM functions can interact badly with other JS libraries. That's not playing nice. Dojo and Ext are full toolkits that overlap with jQuery's features, but Ext, for example, has a licensing fee. Dojo doesn't, but both toolkits encourage you to use them exclusively for features like MVC, widgets, and visualization. Don't get me wrong, these toolkits are incredible; Dojo has a particularly large community. Ask yourself how much control you want. I prefer to stick to jQuery and piece together a stack I find useful.

Modularity & Performance

When you first transition from a structured, C-style object-oriented language to JS, you may not immediately see how a JS application could ever be as readable or maintainable as what you're used to. Dependencies between sections of JavaScript seem almost hidden, and sequencing script tags in the head section of your HTML page feels clumsy and slow. Until recently, browsers did not even download script files in parallel. Through the use of script loaders you can encourage clean JS patterns and asynchronously load your JavaScript, with a readable, declarative syntax.

Interesting script loaders include require.js, as well as the loaders built into the YUI and Dojo toolkits.


I particularly like require.js because it encourages the use of a pattern called Asynchronous Module Definition (AMD) and very modular JS. Consider the following syntax:


    //Explicitly defines the "foo/title" module:

    define("foo/title", ["my/cart", "my/inventory"], function (cart, inventory) {

        //Define and return the foo/title object in here.

    });


Here a module called "foo/title" is defined. It has explicit dependencies on my/cart.js and my/inventory.js. Whenever "foo/title" is referenced somewhere else, require.js downloads this file, detects the dependencies, downloads them in parallel, and executes all the scripts in the correct sequence. Essentially it builds a dependency tree, downloads everything necessary, and executes the scripts in the tree from the bottom up.
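To make that dependency-tree behavior concrete, here is a toy, synchronous AMD-style registry in plain JavaScript. It's an illustration of the pattern only, not how require.js is actually implemented (require.js fetches the scripts asynchronously over the network):

```javascript
// A toy AMD-style registry: define() records a module's dependencies,
// and resolve() walks the dependency tree bottom-up, executing each
// factory exactly once and caching the result.
var registry = {};
var cache = {};

function define(name, deps, factory) {
  registry[name] = { deps: deps, factory: factory };
}

function resolve(name) {
  if (cache[name]) return cache[name];
  var mod = registry[name];
  var args = mod.deps.map(function (dep) { return resolve(dep); }); // deps first
  cache[name] = mod.factory.apply(null, args);
  return cache[name];
}

// The same shape as the require.js example above:
define("my/cart", [], function () { return { items: 2 }; });
define("my/inventory", [], function () { return { count: 10 }; });
define("foo/title", ["my/cart", "my/inventory"], function (cart, inventory) {
  return { summary: cart.items + " of " + inventory.count };
});

console.log(resolve("foo/title").summary); // "2 of 10"
```

The real loader does the same bottom-up walk, just with network fetches in place of the registry lookups.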

AMD has now gained traction in the jQuery community, as jQuery now registers itself with AMD script loaders if you're using one. Ultimately, if you fall back on heavy use of JavaScript approximations of OO concepts like classes and inheritance, you're missing the point. JavaScript is a world rich with its own patterns and practices, rooted in functional programming concepts.

Patterns and Utilities

MVC & Templating

It's very likely you're going to want to pick up an MVC framework to support a UI with clean separation of concerns. It's much more difficult here to suggest a clear winner, as each of the available solutions takes a distinct view of MVC and has a learning curve. The easiest to get up and running with is probably Knockout.js, though I've had the most experience with Backbone.

Interesting JavaScript MVC libraries include Backbone.js (with Underscore.js), Knockout.js (with JsRender), and JavaScriptMVC.



These libraries represent three distinct solutions. Backbone and Underscore work together, while Knockout and JsRender do the same: Backbone uses Underscore for utility functions and templating, while Knockout uses JsRender for its templating solution. Of the three, JavaScriptMVC takes the most architecturally pure view of MVC. The strength of JavaScriptMVC is its powerful controllers, which use jQuery to bind to view elements, directly mapping the jQuery events you know and love to controller behaviors. JavaScriptMVC also has a nice library of prewired widgets and plugins, though again I'd rather piece it together myself. Knockout is much closer to the MVVM pattern you use in WPF development. View models contain observable fields which can be bound to view elements directly in the HTML. The interesting bit is that the bindings can be two-way (something of an eventing super feat in JavaScript). If you're a WPF developer this will feel pretty natural. Backbone is the lightest of the three. Traditional "view" functionality is spread over HTML and a View object. Check out this basic Backbone View:

var DocumentRow = Backbone.View.extend({

  tagName: "li",

  className: "document-row",

  events: {
    "click .icon":          "open",
    "click .button.edit":   "openEditDialog",
    "click .button.delete": "destroy"
  },

  render: function () {
    // Render this row's content into this.el here.
    return this;
  }

});
Notice the Backbone view "class" has a tagName. It represents a list item element on the view. In the events section, events which are relevant to elements in the list item are handled. So here the view object blends a bit of traditional "view" and "controller" functionality.
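The two-way observable binding Knockout relies on can be sketched in plain JavaScript. This is a toy illustration of the idea only; the names and API below are mine, not Knockout's:

```javascript
// A minimal observable: calling it with no arguments reads the value,
// calling it with an argument writes the value and notifies every
// subscriber -- the core trick behind two-way binding.
function observable(initial) {
  var value = initial;
  var subscribers = [];
  function obs(newValue) {
    if (arguments.length === 0) return value;          // read
    value = newValue;                                  // write
    subscribers.forEach(function (fn) { fn(value); }); // notify
  }
  obs.subscribe = function (fn) { subscribers.push(fn); };
  return obs;
}

var firstName = observable("Mark");
var log = [];
firstName.subscribe(function (v) { log.push(v); }); // e.g. update a DOM element

firstName("Marcus");
console.log(firstName()); // "Marcus"
console.log(log);         // ["Marcus"]
```

In Knockout proper, the HTML binding layer subscribes to the observable for you (view model to view) and writes back into it on DOM events (view to view model), which is where the "two-way" part comes from.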

Feature Detection

Feature detection is something you'll have to consider once you start delving into HTML5 features. The three browser heavy hitters, IE (Trident-based), Firefox (Gecko-based), and Chrome (WebKit-based), each implement their own subset of these features, so to ensure a cross-browser, future-proof app you'll have to detect individual features and work around them when they're not available. I say "feature detection" and not browser detection for a good reason: new browser versions are coming out regularly, while features are part of a spec the browsers are building towards. You won't need to change your code to support a new browser if you're feature detecting.
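The core technique is simply probing the environment for an object or method before using it. Here is a minimal sketch in plain JavaScript; the particular probes and the detectFeatures name are my own illustration, not the API of has.js or Modernizr:

```javascript
// Feature detection: ask the environment what it supports,
// never guess from the user-agent string.
function detectFeatures() {
  return {
    // Is the web worker constructor present?
    webWorkers: typeof Worker !== "undefined",
    // Is a native JSON parser present?
    json: typeof JSON !== "undefined" && typeof JSON.parse === "function",
    // Can we actually get a drawing context from a canvas element?
    canvas: typeof document !== "undefined" &&
            !!document.createElement("canvas").getContext
  };
}

var features = detectFeatures();

// Branch on the feature, not the browser:
if (features.webWorkers) {
  // safe to use new Worker(...) here
}
```

Libraries like has.js and Modernizr package up dozens of probes like these, many far more subtle than a typeof check, which is why you'd use them instead of rolling your own.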

Interesting JavaScript libraries include has.js and Modernizr.


has.js and Modernizr both have the same kind of "if (feature)" style syntax. Consider this bit of Modernizr code:

if (Modernizr.webworkers) {

        // Workers are supported: run the script on a background thread.
        var worker = new Worker('scripts/workerScript.js');

} else {

        // No worker support: load the same module with require.js
        // and ask it to do the work on the main thread instead.
        require(["scripts/workerScript"], function (worker) {

                // Tell the module to do the work here.

        });

}
Here I test whether the environment supports HTML5 web workers. If it does, we use them and improve the performance and responsiveness of the page. Otherwise we default to doing the work on the same thread, still asynchronously loading the script with require.js and telling the same module to do the work. Neat!

Check out Part 2 for discussion on Widgets and Visualization, Instrumentation, and Testing!

Posted in: Basics | Events | JavaScript


NYC Regional Code Camp Feb 19th – Post Event Wrap Up

February 22, 2011 at 9:34 PM | Mark Falto

So this weekend's code camp was very informative. There were MVPs, authors, and current and former MS insiders. My trek through the conference talks took me through two very good talks on the IronPython/IronRuby projects and the .NET DLR in general. Jimmy Schementi, a former MS employee (gone rogue after MS dropped internal development on the IronPython/IronRuby projects), now works for Lab49 and is one of the driving forces behind the open source versions of these projects. His in-depth knowledge of compiler and parsing behavior was pretty impressive, and he gave a little taste of how the DLR and scripting languages run on the .NET platform. Jim Wooley, coauthor of LINQ in Action, also had an excellent talk, where he primarily showed some clever usages of .NET's System.Dynamic.ExpandoObject. I'll be playing with scripting languages on .NET, but primarily in the context of testing: how easily can you use frameworks like Cucumber, or build MSTest projects with minimal lines of code? The other highlight for me was Scott Weinstein's talks, both on Rx and on config management. It looks like he's learned the hard way that example-focused talks resonate more with people. Lab49 is doing some nice work with Reactive Extensions (Rx).
The MS-sponsored tri-state events are typically posted at Peter Laudati's site. But there are also regional user groups who do smaller events.
Tri-State .NET user groups include:

It was, all in all, a very good experience. I encourage others to look into the community and attend these events. It keeps your mind open and keeps you engaged. Come out with me and others, get active, and get involved.

Posted in: Events


NYC Regional Code Camp Feb 19th

February 5, 2011 at 7:58 PM | Mark Falto

Yes, it's code camp time again, but unfortunately if you haven't signed up already you're probably out of luck. Head over to the registration site and see if you can get on the waiting list, as the event is currently SOLD OUT. Oh well. If you got in under the wire like I did, I'll see you there!

Posted in: Events


XML vs. JSON vs. YAML serialization

November 22, 2010 at 11:11 PM | Mark Falto

If you've written anything but a hello-world application, you're familiar with the concept of object serialization. If you hope to persist objects in files or pass them over the network, they must first be serialized into a stream of bytes. Obviously the fastest way to do this is through a direct binary serialization of your in-memory object. You can imagine this would be fastest since you do almost no work to produce the stream of bytes: you only access and read the appropriate memory locations for the raw object in sequence. Of course, binary representations of objects are not all that useful. In .NET they specifically tie you to the .NET platform and version, though you can pass them to Mono projects running on Linux. So if I want the objects to be human readable, or I want flexibility in what language consumes the objects, then streams of raw binary objects just won't fit the bill. Any enterprise SOA or web architecture is probably going to rely heavily on language-independent object serialization. With that in mind, here I compare several options available in .NET. I order the comparisons from most to least verbose specification: XML object representations tend to be inefficient in terms of bytes used to represent the object, JSON is more efficient, and YAML even more so. To be fair, .NET natively supports only XML and JSON. The current .NET YAML implementation is available from the community on CodePlex, and it is very immature.
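For intuition, here is roughly how the same small record renders in each of the three formats. These fragments are illustrative only; they are not the exact payload measured below:

```text
<!-- XML: element markup repeated for every field -->
<ABusinessEntity>
  <FirstName>Mark</FirstName>
  <LastName>Falto</LastName>
</ABusinessEntity>

// JSON: punctuation only, each field name quoted once
{ "FirstName": "Mark", "LastName": "Falto" }

# YAML: indentation carries the structure, minimal punctuation
FirstName: Mark
LastName: Falto
```

The byte counts at the end of this post follow directly from how much markup each format repeats per field.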

For starters, let's make a simple business entity which might represent data I'm eventually going to serialize. I keep the collections very short: 2 objects. Here are my objects:


    public class ABusinessEntity
    {
        public String FirstName;
        public String LastName;
        public DateTime EventTimestamp;
        public Int32 NumberOfLegs;
        public List<AChildBuisnessEntity> MyChildren;
    }

    public class AChildBuisnessEntity
    {
        public String ChildsFirstName;
        public String ChildsLastName;
        public String ChildSomethingElse;
    }
Expectations: given an optimal implementation, I would expect the less verbose serializations to be faster. Also, on first call I expect some overhead for things like JIT and each API's initialization. For example, Microsoft's .NET XML implementation has a very complex code generation scheme it uses to optimize serialization of every encountered type. In fact, you can even generate that emitted class yourself using Sgen.exe.


Initial XML serialization took 9.34308690070945 ms
Initial Json serialization took 10.3777854447982 ms
Initial Yaml serialization took 70.1607327188232 ms

Here we see the community implementation of YAML has the highest first-call overhead, by a factor of 7. Let's see what happens on a second call to serialize the exact same object.

Second XML serialization of same object took 0.0882095350107346 ms
Second Json serialization of same object took 0.0710285804480737 ms
Second Yaml serialization took 0.0887682652404146 ms

On the second call we see a dramatic increase in speed (~100 times) across all APIs in the time it took to serialize this test object (this is expected). YAML is still the slowest (I suspect due to its very immature API), with XML only slightly faster and JSON about 20% faster than XML. Let's call that one more time to see if we've really "settled" into a final time for the cost of serialization.

Third XML serialization of same object took 0.0424634974556822 ms
Third Json serialization of same object took 0.0260507969588314 ms
Third Yaml serialization took 0.0692127072016136 ms

Here we see further improvement (~2x faster), and a fourth run confirms this is the "settled" speed. Clearly YAML's immature implementation cannot compete. But interestingly, JSON is almost twice as fast. Let's throw another instance of the same object at them...

First XML serialization of new object same type took 0.0512634985731427 ms
First Json serialization of new object same type took 0.0297523847304616 ms
First Yaml serialization of new object same type took 0.0797587402868242 ms

OK, roughly the same results as before. JSON here is about 40% faster, and YAML about 45% slower, than XML. So how big are those object representations in bytes...

The Xml representation is 1367 bytes long
The Json representation is 522 bytes long
The Yaml representation is 84 bytes long

So at 522 bytes, a JSON representation is less than half the size of the equivalent XML representation. Though that difference will vary with the data, we would expect JSON to be significantly smaller on average. YAML is DRAMATICALLY smaller: it's ~16 times smaller than the equivalent XML representation. I leave it to you to figure out whether you would win back the time lost during YAML serialization in transmission time over the wire, or in writing/reading this representation to disk. I suspect with such a small footprint you probably would, and even this immature YAML implementation is worth taking a look at for any real-time application.


Posted in: .Net Internals


Nov 6th Tristate area Code Camp

November 1, 2010 at 1:38 AM | Mark Falto

Peter Laudati, one of Microsoft's local evangelists, is advertising a local area code camp. This time it's in one of the UConn buildings in Stamford, CT.

If you're interested, signup is still open and FREE. Hope to see ya there!


The Basics - Problem definition

August 22, 2010 at 3:56 PM | Mark Falto

Scenario #1: During a daily or weekly review of group activities, someone brings to the attention of the group a difficult production problem. Presented is a 2- or 3-sentence description of the problem, focusing on what users describe as the "effect" of the problem. Invariably the eyes of pointy-haired managers dart toward the key engineers, the knowledge holders, in search of some resolution to the problem. A discussion begins...

Scenario #2: The tech lead of a team proposes a meeting of engineers to discuss the correct solution for a complex technical problem. This problem is so nasty, in fact, that the heavy hitters, the specialists from two other teams, are also invited. The meeting begins and a discussion ensues...

You might expect that the second scenario would go much better than the first, yield more fruit so to speak. The reality is I've seen both these discussions degrade into the same kind of continual speculation, as one person after another tries to justify their own position. This type of behavior may seem childish, but it's in our nature, and it has to be controlled with a structured, scientific approach to problem solving. Wild speculation cannot occur when we've done our homework and DEFINED THE PROBLEM.
One of my favorite blogs, "If broken it is, fix it you should", has a spectacular article on how engineers can approach problem definition. We should be asking the who, what, where, and when type questions first, and if this is not known prior to a discussion, the discussion must be structured to reveal it at the outset. I for one constantly challenge myself to be the leader in the above situations, to bring sanity to the mayhem. I've been known to put some or all of these questions up on a board and make sure they have answers. This becomes the summary, an artifact which lives until the problem is completely resolved.
The other key point from the article is that we have to capture next steps and a loose timeline. As engineers we are paid to solve problems and, when we can't solve one quickly, to show progress toward a resolution. Not only does capturing this information structure our own problem resolution; by doing it we play the game, so to speak. Managers get what they need to defend themselves and, in turn, defend the engineers. That's what they pay us for, so let's give 'em what they want.

Posted in: Basics | Debugging


Coding Fonts!

August 14, 2010 at 3:58 PM | Mark Falto

For those of us who spend a large amount of time in their favorite IDE, the font of choice becomes a matter of comfort. Here are some of my favorites...

Consolas (True Type)

Consolas Font Pack


ProFont for Windows (Raster)


Triskweline (Raster)


Of course you'll probably want fixed-width fonts for coding, and I find ones which clearly differentiate things like 0 from O are better. You may also find that TrueType fonts, which are font-smoothed by default in Windows, can cause eye fatigue after a while. There's a good post at the Visual Studio blog which explains how to disable all the font smoothing features in Windows.

Another problem you'll find is that the bitmap-based fonts don't work in Studio 2010. For some god-awful reason they dropped support for raster fonts in all languages except Traditional Chinese. There's a very good post at Electronic Dissonance explaining how you can convert a raster font to a TrueType font that can be used in Studio 2010. Luckily the above fonts have TrueType versions available.

Posted in: Visual Studio 2008 | Visual Studio 2010


Garbage Collector behavior differences in Debug vs. Release mode

August 13, 2010 at 5:18 AM | Mark Falto

I recently came across a scenario where the differences in garbage collector behavior between Debug and Release builds became readily apparent. Consider the following code in the main function of a Windows Forms application.

bool FirstInstanceOfUI = false;
Mutex aMutex = new Mutex(false, "allowOnlyOneUIInstance", out FirstInstanceOfUI);

if (FirstInstanceOfUI)
     Application.Run( );

The intent of this code was to allow only one instance of this particular Forms application to run at a time. Interestingly, this code works when compiled in Debug mode but not in Release mode. Why? The trick lies in the fact that in Debug mode objects are artificially "rooted" for you throughout the scope of a single method. This means any objects declared in a method will not be GCed until the method goes out of scope. In Release mode they can be GCed as soon as the last line which uses them executes. When this code is built in Release mode it fails because the Mutex is GCed immediately after the line which declares it. Throughout the lifetime of the application, other instances initializing will not find that a Mutex with that name already exists, and so they will create it momentarily and start up.

The most direct way around this is to force the GC NOT to collect the mutex object. This can be done with a call to GC.KeepAlive(object). So the code works perfectly when it looks like this:

bool FirstInstanceOfUI = false;
Mutex aMutex = new Mutex(false, "allowOnlyOneUIInstance", out FirstInstanceOfUI);

if (FirstInstanceOfUI)
     Application.Run( );

// Keep the mutex rooted until the message loop exits.
GC.KeepAlive(aMutex);


Posted in: .Net Internals


My Blog Style... and Today's Perspective

August 9, 2010 at 5:24 AM | Mark Falto

After hammering my readers (hah) with an initial pretentious blog post about cognitive shift or something or other, I'll take a moment for a somewhat introductory post and put a few words down on my style and perspective.

First, politically I'd consider myself left of center. The kind of left of center which didn't like Obama's healthcare plans but couldn't help but giggle with glee while watching coverage of various financial CEOs with their feet over the proverbial fire. I give money to the United Way not for tax purposes but because it's just a good thing to do. On the technology front, I am quite skilled with, and consequently invested in, Microsoft technologies. Among many other things, I've probably read Hawking's A Brief History of Time 3 or 4 times, and I'm nearly through Brian Greene's The Elegant Universe and More Effective C# simultaneously.

I hope that my style will be mostly direct and informational, with a bit of that preachy observational stuff tossed in when I’m in the mood.

I of course expect it to evolve as I evolve, as the industry evolves, as we in fact all evolve.



The Changing Landscape of Programming and the Cognitive Shift

August 2, 2010 at 10:11 PM | Mark Falto

Recently, a colleague sent me the following link:

I find that two interesting things come out of this article...

If there's a revolution happening that's driving the explosion of programming languages, it's a cognitive one. We are learning to think about problems differently than we used to, and the languages are reflecting that. When everything down to your phone has a GHz CPU and is connected wirelessly, how will you make use of that hardware? This seems to necessitate a cognitive shift in how we dissect problems. There are doors open that weren't there before. I'd eventually like my cellphone to anonymously transmit my location and have this data feed traffic or city-planning systems. (Maybe they're doing that already?)

The other interesting thing that seems to follow is the concept of ambient-oriented programming. I have been tossing around the concept of a distributed data store for two years now. The idea was driven by the change in hardware and thinking I described above. I've had conversations with colleagues on how to solve specific aspects of this problem, and we invariably arrive at intelligent queuing/message passing and heuristic algorithms to monitor load on given nodes. It looks like Jessie Dedecker has formalized this problem domain, built a framework of preferred language features and design concepts, and built a sample distributed kernel. See his PhD dissertation here: The forward look there is that when connectivity creeps into everything from the roadways, to your air conditioner, to your clothing, these can be nodes in genetic algorithms for group problem solving. Obviously the protocols that drive these systems will have to be fault tolerant. When your shirt can inform your local hospital about your impending cardiac arrest, I wouldn't want that request to be delayed due to high network load.
