Big Ideas Behind Angular2

When I first started using AngularJS (way back in version 0.8, I believe), I was continually impressed at what it could do. The team I was working with at the time had many discussions on how this young framework could be used effectively within our project. We found ourselves experimenting with different techniques and slowly figuring out what patterns worked best. We were rewarded with many “Aha!!” moments that showed us how much insight went into the framework.

As you probably already know, a new version of AngularJS is close to being released. The Angular team took a step back and considered everything they learned building Angular1, the ideas and techniques that emerged as browser technology matured, and advances in Javascript itself (ES2015) and TypeScript, and rewrote the framework to be a better platform for creating modern web applications.

I’ve spent some time working with the new Angular2 framework and now have a better sense of how it is used. I think the Angular team has done a great job of improving the framework.

I’ve built an example application to manage a list of images. It’s not a full featured application, but I wanted to build something that had a reasonably complex user interface and also incorporated routes and UI controls (and wasn’t a To-Do application). I also wanted to explore new ways to deal with application state using immutable data and Redux.

The source code is here.

This is the first post of a series digging into the implementation details of this Angular2/Redux application. But before getting into these details, a little overview first.

Big Ideas

There’s been a lot of talk lately about “Javascript Fatigue”. There is so much rapid change, new libraries, and conflicting ideas around creating Javascript-based applications (whether they are web applications, NodeJS, Electron, etc.) that developers are overwhelmed by all the options. It’s exhausting just trying to keep up.

I can definitely understand these feelings, but I read this a little bit differently. Yes, there is a confusing mash of options available and it is hard to tell which ones are important and should receive our attention. But there are also a lot of big ideas being discussed in public, open forums about how best to build applications. Ideas are being put out there, debated, refined, and tested. To me, it’s actually pretty interesting. It’s hard to keep up, but I do think folks are gathering around several big ideas that benefit us as developers.

Functional Programming

Functional programming has been around for a while and its benefits are well known. Since React became popular, more developers have come to appreciate the functional paradigm. Creating stateless components that simply receive props makes a React component behave much like a function: data goes in, HTML markup comes out. Other frameworks, like CycleJS, make the functional nature of the framework more explicit, and languages like Elm fully embrace the functional paradigm.

Angular2 does not go down the functional path; it still uses the declarative model in HTML that we had in Angular1 (with some significant syntax changes, though). However, Angular2 has greatly improved how the user interface is organized and rendered, in ways influenced by React, especially the idea of organizing your application into components.


Components

React and Web Components have popularized the idea of organizing your application into components, specifically a tree of components, starting at a root and working through the tree to more specialized components. Larger components can be assembled from smaller components, which makes understanding, writing, and testing the pieces of your application much easier.

Components can be further categorized as well. A common distinction separates components that organize and orchestrate application logic from components that present user interface through inputs and outputs. This distinction is often called “smart” versus “dumb” components.

Container components coordinate the presentation components, application logic, and service interactions that make up your application. Presentation components can be reused and know nothing about the overall application; they take input data and emit output through events, which the container components handle.

This allows you to think about your application in more narrowly focused pieces. If a component grows too heavy (too much code), you can refactor to divide its functionality into more manageable chunks.
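To make the split concrete, here is a framework-agnostic sketch in plain Javascript. The names (imageCard, imageListContainer) are illustrative, not taken from the example application; a real Angular2 or React component would wire the event handling through its template.

```javascript
// "Dumb" presentation component: a pure function, data in, markup out.
// It knows nothing about the application; it just reports events upward.
function imageCard(image, onSelect) {
  return {
    html: '<div class="card"><img src="' + image.url + '" alt="' + image.title + '"></div>',
    onClick: function () { onSelect(image.id); }
  };
}

// "Smart" container component: owns state and application logic,
// and handles the events the presentation components emit.
function imageListContainer(images) {
  var selectedId = null;
  var cards = images.map(function (image) {
    return imageCard(image, function (id) { selectedId = id; });
  });
  return {
    cards: cards,
    getSelectedId: function () { return selectedId; }
  };
}

var container = imageListContainer([
  { id: 1, url: 'a.png', title: 'First' },
  { id: 2, url: 'b.png', title: 'Second' }
]);
container.cards[1].onClick();         // simulate the user clicking the second card
console.log(container.getSelectedId()); // → 2
```

The point of the shape: imageCard can be reused anywhere and tested with a fake onSelect, while all the application-specific logic stays in the container.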

Since components are organized into a tree, another important idea comes into play.

Unidirectional Data Flow

One of the problematic areas of Angular1 was how model changes were handled. A digest loop looked for changes to model data, but a change to one data item could trigger other changes, requiring another round of change detection. Too many iterative changes and you would see the dreaded “$digest() iterations reached. Aborting!” error.

This also made it very hard to reason about how changes really affected your application.

In Angular2, the change detection strategy is different. Changes to model data start at the top and work down through the tree of components. There is no longer a digest loop; it is a single pass through the tree. This makes the effects of model data changes much easier to understand.

Another advantage is that, while traversing down the tree, if the data associated with a component hasn’t changed, then there shouldn’t be any change in the user interface defined by that component and its children. These components do not have to be rendered again and can be skipped. This allows for some nice rendering optimizations that give better performance and responsiveness for the end user.

To ensure that data associated with a component has not changed, the idea of immutable data really helps.

Immutable Data

If your model data is an object or an array, changes can occur anywhere within it. You cannot be sure whether the model has changed unless you check throughout its data: object properties, child objects, and array elements.

This caused a problem in Angular1. For change detection, watchers were set up on data elements to determine what model data had changed, and with a large set of watchers, application performance would suffer.

With immutable data objects, change detection becomes very easy. An immutable object’s reference, properties, and children never change once set. To apply a change, you create a new object that merges the original properties with the changes; the result is a new object with a different reference. So the test for whether an immutable object has changed boils down to a simple reference-equality test.

let modelHasChanged = (myNewReference !== myPreviousReference);
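A minimal sketch of the update side in plain Javascript, using Object.assign to produce the new object (the property names are illustrative):

```javascript
// The original state never mutates (freeze makes that explicit here).
var previousState = Object.freeze({ title: 'Sunset', likes: 10 });

// Object.assign copies the old properties and applies the change,
// yielding a brand new object with a new reference.
var nextState = Object.assign({}, previousState, { likes: 11 });

console.log(nextState !== previousState); // → true: the model changed, re-render
console.log(previousState === previousState); // → true: unchanged reference, safe to skip
console.log(previousState.likes);         // → 10: the original is untouched
console.log(nextState.likes);             // → 11
```

Libraries like Immutable.js do this more efficiently with structural sharing, but the reference-equality test works the same way.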

Angular2 can use this concept to know when model data has changed so that it can determine whether a component should be rendered again (specified using the OnPush ChangeDetectionStrategy).

Another benefit of using immutable data is that it helps us to avoid unintended changes to our application state. However, we can also take it a step further and consider a more concise way to manage our application state altogether.

Application State (Redux)

An application transitions between different states based on some stimulus: a user pressing a button, a message being received, or some other event. It’s important to understand what should happen when the application transitions from one state to another. This is especially true for an application with any reasonably complex user interface. Too often, applications devolve into an unmanageable mess of message handling and data changes as they grow in complexity.

If we separate our application state from our user interface and manage it more concisely, we can better understand how our application works and more safely update and improve it. We can move the handling of application state changes to its own system to isolate it from the rest of the application.

Redux is a popular example of doing exactly this. It defines an application store, reducers, and actions that your application uses to handle its state transitions. All application state changes go through a very prescriptive pattern. The benefit is that application state is centrally located and tightly controlled. Components pass actions to the application store, where reducers make the application state changes, and then components subscribe to application state changes and render themselves based on the current application state.
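The pattern can be sketched in a few lines of plain Javascript. This is a hand-rolled miniature, not the real Redux library, though createStore, dispatch, and subscribe mirror its names; the imagesReducer is a hypothetical reducer in the spirit of the example application.

```javascript
// Minimal Redux-style store: a reducer (pure function), a dispatch
// entry point, and subscriptions that fire on every state change.
function createStore(reducer, initialState) {
  var state = initialState;
  var listeners = [];
  return {
    getState: function () { return state; },
    dispatch: function (action) {
      state = reducer(state, action);          // the reducer computes the next state
      listeners.forEach(function (l) { l(state); });
    },
    subscribe: function (listener) { listeners.push(listener); }
  };
}

// Hypothetical reducer: pure, and it never mutates the previous state.
function imagesReducer(state, action) {
  switch (action.type) {
    case 'ADD_IMAGE':
      return state.concat([action.image]);     // new array; old state untouched
    default:
      return state;
  }
}

var store = createStore(imagesReducer, []);
store.subscribe(function (state) { console.log('images:', state.length); });
store.dispatch({ type: 'ADD_IMAGE', image: { title: 'Sunset' } });
console.log(store.getState().length); // → 1
```

Components never touch the state directly: they dispatch actions and re-render from whatever state the subscription hands them.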

Redux is popular in the React community, but it can easily be used in Angular2 applications as well (as I’ve done in this example application).

Redux reducers are intended to be pure functions (no side effects). However, Angular2 applications still need to retrieve data from remote sources and invoke other kinds of asynchronous operations. One more idea revolves around how to deal with these operations.


Observables

Observables are a robust way to handle streams of asynchronous events. RxJS, a popular implementation of the observable pattern, provides a set of functions that let you orchestrate, transform, and otherwise manipulate asynchronous events from a variety of sources. RxJS describes itself as “LoDash for events” (LoDash is a popular set of general-purpose utilities for Javascript).

A good way to think about RxJS in particular is that it is a mechanism to describe a series of asynchronous operations. In other words, it allows you to declaratively define exactly how a stream of events should be handled to better understand asynchronous operations.

Angular2 makes extensive use of observables. The HTTP service, for example, now returns an observable instead of a promise.

This is a significant change, but why was it required? Because observables have a few advantages over promises.

  • Observables only activate when a subscription is created; in other words, they are “lazy”. The operation associated with a promise, by contrast, starts as soon as the promise is created.

  • Observables can be canceled. There are ways to cancel a promise, but they generally involve creating a “back door” into the operation the promise wraps. Being able to cancel, for example, an HTTP request allows for more robust applications.

  • Promises can be chained together, allowing you to transform the results of an operation in interesting ways, but observables go further: they can orchestrate asynchronous operations in ways far more diverse than simple promise chains.
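The first two properties can be illustrated with a hand-rolled observable (a simplified sketch, not the RxJS API):

```javascript
// Nothing runs until subscribe() is called (lazy), and subscribe()
// returns an unsubscribe function (cancelable).
function interval(ms) {
  return {
    subscribe: function (next) {
      var count = 0;
      var id = setInterval(function () { next(count++); }, ms);
      return function unsubscribe() { clearInterval(id); };
    }
  };
}

var ticks = interval(10);   // lazy: no timer is running yet
var seen = [];
var stop = ticks.subscribe(function (n) {
  seen.push(n);
  if (seen.length === 3) stop();   // cancelable: stop after three events
});

setTimeout(function () {
  console.log(seen); // → [ 0, 1, 2 ]
}, 100);
```

With a promise there would be no equivalent of `stop()`: the wrapped operation runs to completion whether or not anyone is still interested in the result.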

If, for some reason, you still need or want to use promises in your own application, it is a simple matter to wrap the output of an observable in a promise that can then be handed to other parts of your application.
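A sketch of such a wrapper, assuming an observable whose subscribe returns an unsubscribe function (RxJS ships real operators for this, such as toPromise; the `of` helper below is defined here just to make the example self-contained):

```javascript
// A trivial observable that emits the given values synchronously.
function of(values) {
  return {
    subscribe: function (next) {
      var cancelled = false;
      values.forEach(function (v) { if (!cancelled) next(v); });
      return function unsubscribe() { cancelled = true; };
    }
  };
}

// Resolve a promise with the first value the observable emits,
// then cancel the subscription (a promise can only deliver one value).
function firstToPromise(observable) {
  return new Promise(function (resolve) {
    var done = false;
    var stop = observable.subscribe(function (value) {
      if (done) return;
      done = true;
      resolve(value);
      if (stop) stop();         // stop is undefined if emission was synchronous
    });
    if (done && stop) stop();   // cancel now if the value arrived synchronously
  });
}

firstToPromise(of([1, 2, 3])).then(function (value) {
  console.log(value); // → 1
});
```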

Example Application

Angular2/Redux Application

The application I created, found here, allowed me to explore some of the ideas above. Subsequent posts will dive into the details of the implementation. I hope to give you some insight into the power of the Angular2 framework and what can be accomplished with it.

I am very excited about the changes in Angular2 and I think it will continue to be a robust platform for building web applications.

Angular Benchpress and Performance Tests

Ben Nadel wrote a blog post that explored the performance of rendering a large dataset using Angular (version one) and React. It was a good post and demonstrated the perceptible difference between an Angular 1 application and a React application.

The example application (found here) was intended to give a feel for this performance difference. In other words, does React feel faster than Angular?

But what if we could put actual numbers on this difference?

I’ve been playing around with the Angular2 Benchpress tool. Performance is critical to the Angular2 team. They didn’t have a tool that could collect performance information in an automated and repeatable fashion, so they built their own tool. Benchpress is the result.

We can use Benchpress to measure the actual performance of the application using Angular version 1, React, and just for good measure, Angular version 2.

Of course, Angular2 is still in early alpha, so there may be problems with the tool itself or with my code; take all of this with a healthy dose of skepticism. Buyer beware.

Benchpress runs the tests multiple times and gives back average times and variances for a number of metrics, including memory and timings. These are the raw numbers I saw on my machine (Windows 8.1, 16GB memory):

Test        forcedGcAmount  forcedGcTime  gcAmount      gcTime     majorGcTime  renderTime  scriptTime
Angular v1  7293.41 ±44%    49.62 ±33%    41525.56 ±1%  92.69 ±2%  659.66 ±1%   124.72 ±1%  752.35 ±1%
React       4661.45 ±18%    25.29 ±10%    13350.06 ±0%  16.05 ±5%  174.47 ±5%   91.14 ±2%   190.53 ±5%
Angular v2  3545.39 ±9%     53.67 ±5%     17022.78 ±0%  13.67 ±3%  155.66 ±4%   125.70 ±1%  169.37 ±4%

This test measured mounting the grid, which is essentially the time to render repeated content onto the page.

You can see the formidable gap between Angular 1 and React. No surprise there. But you can also see that Angular2 is heading in the right direction. The great thing is that we can put actual numbers on performance rather than depending on our perception of it.

The project, along with the Benchpress tests, can be found here.

Deploying an ES6, JSPM, NodeJS Application to Azure

I have a simple NodeJS application that I want to deploy to Microsoft Azure. Fortunately, detailed instructions on how to accomplish this already exist. But I want to add a couple of extra things: the Node application will be written in ES6, the latest version of Javascript, and will use JSPM, a newer-generation package manager for Javascript components.

Why do we need a new package manager? Don’t Bower and Browserify already serve this purpose well? Yes, but JSPM brings a few more useful ideas to the table:

  • JSPM pulls components directly from their source, either GitHub or NPM, rather than having to package components with a separate registration file.

  • JSPM integrates with Babel to automatically compile ES6 source. The compilation of the source can happen in the browser (so that a build step is not required) or as part of generating a web application to be deployed in a production environment.

  • JSPM (through the SystemJS universal module loader) supports different module formats, such as ES6, AMD, or CommonJS, so your code works well with existing code. SystemJS also supports a plugin system that can do things like loading CSS style sheets dynamically.

The AngularJS Application

The application to be deployed is here. It is a simple AngularJS application, but since it is written in ES6, the way it comes to life is a little different from the common AngularJS pattern of using the ng-app attribute.

This is a portion of the main HTML page:

<body ng-controller="mainCtrl as vm" class="ng-cloak">
  <p>{{vm.message}} {{vm.date | date:'fullDate'}}</p>
  <script src="jspm_packages/system.js"></script>
  <script src="config.js"></script>
  <script>
    // Load Angular and the main controller module, then bootstrap manually.
    Promise.all([
      System.import('angular'),
      System.import('js/app/mainCtrl')
    ])
    .then(function(modules) {
      var angular = modules[0];
      angular.bootstrap(document, ['mainApp']);
    })
    .catch(function(err) {
      console.log("Bootstrap error:");
      console.log(err);
    });
  </script>
</body>
You see the controller, mainCtrl, but there is no ng-app attribute. Since the application’s modules are loaded dynamically, the bootstrap process needs to wait until these source files are loaded. JSPM incorporates the SystemJS universal module loader to load modules dynamically, including the Angular code and the main controller (which, in turn, loads other dependencies, like the application’s Angular module). A call to the angular.bootstrap function then initializes and runs the application.

The main controller simply sets a couple of properties that are bound in the view. We still have to use the same module syntax for AngularJS (at least until Angular 2 is available), but we can use the ES6 import syntax to load dependencies.

import appModule from './appModule';

class MainCtrl {
  constructor() {
    this.message = "Today is";
    this.date = new Date();
  }
}

appModule.controller('mainCtrl', [MainCtrl]);

Deploying to Azure

These instructions provide the steps for deploying a Node application to Azure. Azure provides a Git repository that you can push to from your local repository or another Git repository.

If we follow these steps with our AngularJS application, it will deploy but it won’t run correctly. The application will not be able to load the Javascript code that it needs. The problem is that we need an extra step in our deployment.

Normally, a NodeJS application’s dependencies, the modules organized under the node_modules path, are not a part of the files stored in source control. Part of deploying the application is to pull down these node modules using the npm install command.

With JSPM, we follow the same workflow using the jspm install command. To do this, we’ll need to customize the script Azure uses when deploying the application.

Microsoft has provided the Azure Command Line Interface to help with this. This tool provides a cross-platform command-line interface to manage Azure assets (more details here).

The command we are interested in is:

azure site deploymentscript --node --scriptType bash

This command creates two files: .deployment and deploy.sh. The first is a deployment configuration file that points to the second, a bash script that executes the deployment steps. (We are using a bash script here, but this command can also generate a Windows .cmd batch script.)

In the script we can see where the Node package dependencies are installed:

# 3. Install npm packages
if [ -e "$DEPLOYMENT_TARGET/package.json" ]; then
  cd "$DEPLOYMENT_TARGET"
  eval $NPM_CMD install --production
  exitWithMessageOnError "npm failed"
  cd - > /dev/null
fi

We need to do a similar thing with JSPM packages by adding the JSPM install command (after the node packages are installed):

eval "node_modules/.bin/jspm" install
exitWithMessageOnError "jspm failed"

One more thing we will need to do is to tell Azure what version of Node we are interested in using. We can update our package.json file to add the following:

"engines": {
  "node": "0.12.x"
}

Now we can push our changes (via Git) up to Azure, the deployment script will install our dependencies, and the application will be live on Azure.

--> git push azure master
remote: ok Installed babel-runtime as npm:babel-runtime@^5.1.13 (5.4.2)
remote: ok Installed github:jspm/nodelibs-process@^0.1.0 (0.1.1)
remote: ok Installed babel as npm:babel-core@^5.1.13 (5.4.2)
remote: ok Installed npm:process@^0.10.0 (0.10.1)
remote: ok Installed core-js as npm:core-js@^0.9.4 (0.9.10)
remote: ok Loader files downloaded successfully
remote: Finished successfully.
remote: Deployment successful.

Production Deployment

One last step is to publish our application in “production” mode. The application as deployed above compiles Javascript from ES6 to ES5 on the fly in the browser. This is fine during development, but this extra compile step should not be part of a production release.

There are different ways that JSPM can be used to create a production deployment. We’ll use the self-executing bundle as our example.

jspm bundle-sfx js/app/mainCtrl src/js/app-bundle.js

This combines the AngularJS code and our application code into a single, pre-compiled file. Then, instead of using System.import, we include the script directly and call the Angular bootstrap function.

<script src="js/app-bundle.js"></script>
<script>
  angular.bootstrap(document, ['mainApp']);
</script>

(Note: I created a separate index page, index-with-bundle.html, so that you can see the difference in the HTML when using this bundle.)

What Javascript Framework should you be using?

What Javascript Framework should you be using? This seems to be the question everybody is asking … and everybody seems to have a different answer. I’m here at ng-conf 2015. Is AngularJS the answer?

Angular has a particular opinion about how to build a Javascript application. But other popular alternatives exist. Ember.js prescribes a set of conventions that help build large-scale web applications. React and Flux provide composable view components that can be assembled together to build a solution. And it seems like a new Javascript framework is released every few minutes that touts some amazing new feature (which, of course, has led to Framework Fatigue).

At ng-conf, however, a thought occurred to me. We saw presentations that discussed topics like:

  • how the Angular team discusses plans for Angular 2.0 and the existing 1.x versions in public, open forums with lots of input from the community.

  • how the Angular team didn’t have the right tool to measure performance, so they built their own benchmark tool and showed actual numbers for performance improvements.

  • how the Angular team, having already built unit testing (Karma) and end-to-end (Protractor) testing tools, wanted to also include accessibility testing, to ensure all users of various abilities could use the web applications that Google and others are building.

  • how the Angular team not only has thousands of tests around the source code that are automatically run in a continuous integration environment, but that when they push out releases of Angular, they also run tests for many of the internal Google applications built on Angular to ensure that changes haven’t broken functionality.

  • how the Angular team works with other members of our diverse community, like the TypeScript team or the Ember team, to see where there are opportunities to work together.

Consider how much care and craftsmanship go into Angular as an open-source project. Perhaps other projects operate at this level, but that is probably the exception, not the rule.

No one is saying that AngularJS is the right answer to every problem. But if I am an organization trying to decide which Javascript framework to use, how could I ask for more diligence from a project that will serve as a cornerstone of my own application? I don’t think you can go wrong choosing Angular, and its popularity is rightly justified.

ng-conf 2015 - Office Apps Hack-a-thon

The night before ng-conf 2015 started, there were a number of lightning talks and hack-night activities. You might be surprised to see Microsoft as part of the mix. With the lure of several Xbox Ones to give away, there ended up being 11 teams competing for the prizes. Josh Carroll and I represented “Team Wintellect”.

Why was Microsoft here? If you’ve been keeping up, the “new” Microsoft has really done a lot to be more open and to work with technology outside of Microsoft. Tonight, they were talking about building AngularJS applications as add-ons to Office 365 applications. These are Javascript-based components that can inject new functionality into Excel, Outlook, and others. (More detailed information is available at the Office Dev Center web site.)

Our task was to take one of Andrew Connell’s starter projects and create our own project demonstrating some interesting add-on functionality in one of these Office apps.

Oh … and there were only two hours to complete it!

Josh and I decided to take a D3.js visualization, specifically this Aster Plot, and have it take the data from an Excel spreadsheet rather than from a CSV file.

This is what we came up with. I apologize up front for the (lack of) quality of this code, but we were able to update data in the spreadsheet and draw the graph based on that data.

(I hope to revisit this code and clean it up. I also forgot to take a screen shot, which I’ll post later.)

The main integration code is below:

Office.initialize = function () {
  console.log(">>> Office.initialize()");
  // Prompt the user to select a range of cells to bind to
  Office.context.document.bindings.addFromPromptAsync('matrix', function (asyncResult) {
    if (asyncResult.status == Office.AsyncResultStatus.Succeeded) {
      // Pull the bound data out of the spreadsheet
      asyncResult.value.getDataAsync(function (data) {
        drawAsterPlot(data.value); // hypothetical helper that renders the D3 visualization
      });
    }
  });
};

After the Office application initializes, we prompt the user for a range of data (addFromPromptAsync) and then pull the data from the spreadsheet (getDataAsync). The rest is just drawing the D3 visualization.

Microsoft has created a nice extensibility interface for office apps and I’m interested to see what new kinds of add-ons will be created.

Also, thanks to Jeremy Thake, Andrew, and Microsoft for the event.

And for the Xbox One, too!

Modern Web Development 101

Maybe you’ve heard the term “Modern Web” before. It embodies the idea of a constantly evolving, exciting platform on which unique and powerful user experiences can be built: a platform that gains new capabilities regularly rather than waiting for long release cycles.

And maybe you’ve been asked to build a “modern web application”.

So what does that mean exactly? Perhaps you’ve already got a few HTML pages, some nice-looking CSS along with pleasing images, and some Javascript to give the pages that extra little pizzazz. What else do you need?

Let’s look at what a Modern Web Application might look like, considering only the pieces that are loaded into the browser (the client-side pieces), putting aside for now the whole story about what is running on the web server itself.

Client-Side Code

Javascript is the language the browser speaks. It is a dynamic language, meaning it is not compiled ahead of time the way a Java or C# application is. A compilation step helps identify certain kinds of errors in your code, so with Javascript you need additional help in determining whether your code is correct.

Some choose to do this with the language itself. Instead of writing Javascript, they use a language that compiles to Javascript and (perhaps) smooths out some of Javascript’s rough edges. For example, CoffeeScript defines a simpler syntax, so there is less to type and fewer chances for typing errors. TypeScript adds type annotations that help ensure variables and parameters are of the expected types. Google created Dart to offer a more structured approach to the problems of large-scale Javascript applications.

Meanwhile, the ECMAScript committee has been busy finalizing ECMAScript 6, the next version of Javascript, which brings new features (classes, arrow functions, spread operators, generator functions) and standardizes techniques already in wide use (promises, modules). Existing libraries are being (and in some cases have already been) ported to this new version of the language.
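A quick, runnable tour of the ES6 features just mentioned:

```javascript
// Classes
class Greeter {
  constructor(name) { this.name = name; }
  greet() { return 'Hello, ' + this.name; }
}

// Arrow functions
const double = n => n * 2;

// Spread operator
const parts = [2, 3];
const numbers = [1, ...parts, 4];

// Generator functions
function* countTo(limit) {
  for (let i = 1; i <= limit; i++) yield i;
}

// Promises, now part of the standard library
Promise.resolve(new Greeter('ES6').greet()).then(function (message) {
  console.log(message);                // → Hello, ES6
});

console.log(double(21));               // → 42
console.log(numbers);                  // → [ 1, 2, 3, 4 ]
console.log(Array.from(countTo(3)));   // → [ 1, 2, 3 ]
```

(Modules, the other standardized technique, need a module loader or build step, so they are omitted from this single-file snippet.)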

A next step is analyzing the code itself. In Javascript, you won’t get the full level of static analysis that you can get in a compiled language, but you can certainly check for many issues in your code related to syntax, variable declaration and scoping, and coding style. Tools like JSHint and ESLint can take a close look at your code and identify many issues that might cause your application to fail at runtime.

If you really want to track the health of your source code, a tool like Plato can be used to analyze the complexity and overall maintainability of your code. Taking snapshots of your code through this tool helps you determine if your team is heading in the right direction while maintaining and enhancing the code.

Unit Testing

The primary way to validate your code is, of course, testing. Test-Driven Development (TDD) is a lifestyle for many teams, and testing is especially important for Modern Web applications because of the heavy use of client-side Javascript. But testing a web application can be difficult if the application is not architected well.

Many applications intermix code that interacts with the browser’s DOM (Document Object Model), code that performs business logic, and code that calls web APIs. In these cases it’s very hard to test the code in isolation: there are too many overlapping concerns for tests to easily verify that the code is working correctly. Too often, in the face of this, teams abandon all but the most laborious manual testing, delegated to the unfortunate QA team.

If you do apply a bit of rigor to your development, you should be able to create isolated components. Then you can write tests using one of many testing frameworks. Jasmine, Mocha, and QUnit are all popular testing frameworks. These are generally used to create Unit Tests, which differ from end-to-end or integration tests in that the goal is to test a single component in isolation, usually mocking the dependencies that the component may have.

When you mock a dependency within a unit test, you are simulating an expected result the dependency might produce or just verifying that the component that is under test used the dependency in the expected manner. Jasmine has “spies” that can be used for this purpose, but other libraries exist that provide additional mocking functionality as well, like Sinon.JS.
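The idea can be sketched without any library at all. The hand-rolled createSpy below is a simplified stand-in for what Jasmine’s spies or Sinon.JS stubs provide; priceLabel and the tax service are purely illustrative.

```javascript
// A minimal "spy": records every call and returns a canned value.
function createSpy(returnValue) {
  function spy() {
    spy.calls.push(Array.prototype.slice.call(arguments));
    return returnValue;
  }
  spy.calls = [];
  return spy;
}

// Component under test: formats a price using an injected tax service.
function priceLabel(amount, taxService) {
  return '$' + taxService.withTax(amount).toFixed(2);
}

// The real tax service is mocked out; the test only verifies that
// priceLabel uses the dependency in the expected manner.
var taxService = { withTax: createSpy(10.80) };
var label = priceLabel(10, taxService);

console.log(label);                          // → $10.80
console.log(taxService.withTax.calls);       // one call, with arguments [10]
```

The component is tested in complete isolation: no real tax calculation, no network, just the interaction between the component and its dependency.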

Writing the tests is one task for your project, but there are also tools that help you run the tests, even automatically after you save files within your editor. A tool like Karma can watch for changes in your source code and automatically run the tests, giving you immediate feedback if your changes have inadvertently broken something. Systems like Gulp and Grunt can also be configured to watch files and run tests (and other build steps too, see below).

End-to-End / Integration Testing

Unit testing is important and can catch errors in the individual components of your application, but end-to-end and integration tests catch errors and validate functionality when these components are brought together. An end-to-end test would test from the browser page itself all the way to the back end Web APIs as if a user was sitting in front of the application. An integration test might only test portions of the stack, maybe just testing a service that calls a back-end Web API.

For this type of testing, you are testing with a real browser. You need a way to operate the browser itself and examine results. And you need to work with multiple browsers. Fortunately, Selenium WebDriver is a tool that can be used to automate user interactions on many different browsers. It can “drive” a browser, navigating to pages, interacting with user interface elements, and inspecting the results.

You can certainly write scripts to test your application with Selenium directly, but other tools sit on top of Selenium and let you write tests similar to the unit tests you might write. For example, Protractor wraps Selenium and provides a testing environment with functionality for identifying and inspecting components within AngularJS applications.

If you want to expand testing to the widest variety of browsers, including browsers that run on devices like an iPad tablet or Android phone, then there are services that provide access to physical and simulated devices that automated tests can be run against. SauceLabs and BrowserStack are two popular options.

Building Blocks

How do you build your Modern Web application? These days, you certainly don’t start from scratch. There are too many good frameworks and libraries to build on. Most are free to use within your own application due to a community that values openly sharing these foundational tools with each other.

But you do have to make some hard decisions about how you will build your application. You thought discussing politics or religion was hard? Try talking about Javascript frameworks. So, at the risk of doing a terrible injustice to everything I’m about to mention, here is a brief overview.

Every framework provides an abstraction on top of the browser’s DOM and related services (like the XMLHttpRequest object). jQuery was the first to really popularize these kinds of abstractions. Later, Backbone.js focused on the model data you might use in your application, providing ways to define your models and interact with web services. Knockout.js focused on more easily connecting application code to DOM events.

Then higher-level and more comprehensive abstractions became popular. Ember and AngularJS are frameworks that cover more of what a web application might do. They prescribe patterns and provide conventions that help make web applications more scalable and maintainable.

Other patterns were introduced as well. React is geared towards narrowly-focused user interface components, and includes a virtual DOM that optimizes updates to the browser’s real DOM (operations that have traditionally been slow). The idea of Isomorphic Javascript was implemented in frameworks like Meteor, where Javascript code is shared between client and server.

Web Components are yet another way to abstract and extend browser functionality.

You also have to consider how important Search Engine Optimization (SEO) is to your application. If the application renders all of the content dynamically with Javascript, then it is possible that search engines won’t be able to correctly catalog these pages (even though Google is making progress in cataloging dynamic content). You may need to render content on the web server to be delivered to the browser directly rather than dynamically loading or generating it. But most frameworks have techniques to address SEO, some better than others.


You also have to consider the content of your web application, including how the content is generated as well as how it will look.

You might use templates to generate the HTML. A template specifies a subset of the overall HTML that is rendered into the page, and multiple templates can be merged together to generate the final HTML. Handlebars and Jade are widely used. On the server side you might dynamically render your HTML from your application, but you can also pre-render templates and have your web server deliver the resulting pre-generated content.
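As a rough sketch of the core idea behind engines like Handlebars (this hand-rolled `render` function is not the Handlebars API, just an illustration), a template with `{{name}}` placeholders is merged with a data object to produce the final HTML:

```javascript
// Minimal template renderer: replaces {{key}} placeholders with values
// from a data object. Real engines like Handlebars add loops, helpers,
// partials, and HTML escaping; this only shows the core merge step.
function render(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, function (match, key) {
    return key in data ? String(data[key]) : match;
  });
}

var html = render('<li>{{title}} ({{year}})</li>', { title: 'Big Ideas', year: 2016 });
// html === '<li>Big Ideas (2016)</li>'
```

Placeholders with no matching key are left untouched, which real engines handle with stricter options.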

The content itself will be styled using CSS. But fortunately, you don’t have to start from scratch with design. Bootstrap and ZURB Foundation provide a great starting point for nice-looking web content. They also have the advantage of supporting “Responsive Design”, where a web site looks good in a desktop browser but also collapses and otherwise adapts itself to display well on smaller devices (like a phone).

Just as other languages can be compiled into Javascript, CSS has pre-processors that simplify the creation and reuse of CSS definitions. They provide variables and nesting that make it easier to define style sheets. LESS and SASS are the two most popular options, and SASS can be extended with Compass and other components to provide a richer framework.

When designing your application for mobile platforms, responsive design helps to ensure that your web application works well on whatever device it is used. But there are other considerations as well. For example, for images that are displayed within your application, you might want to optimize these images for best download speed.

You can also optimize the overall performance of your application by moving assets like images and Javascript files to a Content Delivery Network (CDN). For an application with broad use, a CDN distributes these static files to servers throughout the world, serving them from locations closer to users and taking some of the load off your web application servers.

Client Side Building

So you may have noticed that some of this work has to be done ahead of time, as a build step. This is a major component of a Modern Web Application: assets are generally built (pre-processed) before they become part of the deployed application.

To perform this build, you’ll almost certainly end up using NodeJS (or the recently forked version of it, io.js). All of the components for building Modern Web application assets can be found as Node modules.

There are a number of build systems implemented in Node. I already mentioned Gulp and Grunt. What do you build, then?
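To make that concrete, a minimal gulpfile chains a couple of the steps described below into a pipeline of streams. This is a build-config sketch; it assumes the `gulp`, `gulp-uglify`, and `gulp-concat` packages (real, popular modules, but not bundled here) have been installed:

```javascript
// gulpfile.js -- a Gulp build is just Javascript describing stream pipelines.
var gulp = require('gulp');
var uglify = require('gulp-uglify');
var concat = require('gulp-concat');

// Minify every script under src/js and concatenate the results
// into a single dist/app.min.js.
gulp.task('scripts', function () {
  return gulp.src('src/js/**/*.js')
    .pipe(uglify())
    .pipe(concat('app.min.js'))
    .pipe(gulp.dest('dist'));
});
```

Running `gulp scripts` from the project directory would then produce the bundled file.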

You would automate the validation of your Javascript, passing it through a lint operation and running the tests. If needed, you would generate the Javascript from Coffeescript or TypeScript files. You might decide to go ahead and start writing your Javascript in ES6 syntax, but until it has wide browser support, you’ll need a tool like 6to5 (since renamed Babel) to transform ES6 syntax into ES5, the form supported by most browsers today.
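For a sense of what that transformation involves, here is a line of ES6 next to roughly the ES5 a transpiler would emit for it (the exact output varies by tool; this pairing is just illustrative):

```javascript
// ES6 source: a `const`, an arrow function, and a template literal.
const greetES6 = (name) => `Hello, ${name}!`;

// Roughly the ES5 a transpiler would produce from the line above:
var greetES5 = function (name) {
  return 'Hello, ' + name + '!';
};

// The two are behaviorally identical:
// greetES6('web') === greetES5('web') === 'Hello, web!'
```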

CSS supports vendor prefixes, which are specific to particular browsers. The Autoprefixer tool can automatically identify the CSS properties that need these prefixes and add them to your CSS.

You’ll want to minify your Javascript and CSS files. Minification takes out extra spaces and reformats code so that it is smaller and thus loads more quickly. You might also generate source map files that correlate the minified files back to the original source files, and you might concatenate the minified files together as well.
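As a toy illustration of the effect (real minifiers like UglifyJS or the Closure Compiler parse the code, rename local variables, and remove dead code; this naive regex version would mangle string literals, so don’t use it on real code):

```javascript
// Naive minifier: drops // comments and collapses runs of whitespace.
function naiveMinify(source) {
  return source
    .replace(/\/\/[^\n]*/g, '') // strip single-line comments
    .replace(/\s+/g, ' ')       // collapse whitespace to single spaces
    .trim();
}

var src = 'function add(a, b) {\n  // add two numbers\n  return a + b;\n}';
var min = naiveMinify(src);
// min === 'function add(a, b) { return a + b; }'
```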

Google’s Closure Compiler can both minify and analyze your Javascript.

You’ll want to ensure that the correct version of your Javascript and CSS files gets to the browser. Why is this necessary? If the file name is fixed, a browser has to determine whether its cached copy is still valid. It makes that decision based on expiry information that came from the server when the file was originally requested. If the file is updated before that expiry information lapses, the browser may not request the updated version. Proxy servers that sit between the browser and the web server may also have cached the file.

To ensure that the latest copy of the files always get to the browser, a version number or other identifier is added to the file name, either as a query string parameter or by modifying the name of the file. This identifier ensures that the browser will request the file if the identifier is changed. This is called cache-busting.

If files are concatenated together or the file name is changed, then the reference to the file must be changed as well. This might be a <script> tag within an HTML file, but there may also be changes to references within Javascript or CSS files.

There are many other tasks: minifying your HTML files, analyzing your image files for optimal delivery, or setting up watches so that changed files automatically trigger a rebuild of the affected assets.

So … What Next?

If you made it this far, you might be overwhelmed by the scope of technology involved in Modern Web Applications. Fear not, though. And don’t give up. It is truly amazing what you can do with web applications these days. It’s worth the effort to learn this stuff and a great time to be working with it.