Brad Green spent some time discussing how Google as a company is fully embracing Angular as an application development framework. In fact, the goal is for all web application development within Google to use Angular by the end of 2017. He also explained that it makes sense for them to invest so many resources in Angular as an open-source project because of the additional benefits to Google itself. The primary benefit is the large ecosystem that has grown around Angular; these libraries and tools would probably not exist if Angular were simply an internal Google project. In addition, Google has built several tools internally that have been reproduced in other open-source projects, so there is a clear benefit to sharing these efforts with the community. Open source also helps with hiring within Google, since proprietary in-house tools require additional training and ramp-up for new hires. And, of course, the overall quality of the source code is improved by the great feedback received from the community through PRs, documentation, and training.
Rob Wormald then focused on the goals of the Angular project in general. He talked about three general categories of web applications: highly interactive sites (like a commerce application with a shopping cart), mostly static sites (like a blog), and mixtures of interactive and static content. Whatever the application, Google’s own analysis has shown that users expect a web page to display in less than two seconds and will most likely abandon it if it takes longer than three seconds. Rob emphasized that a goal of the Angular project is to help developers build web applications that meet these expectations.
So, improvements have been made to what was formerly called Angular Universal, which has been brought under the main core framework as the platform-server package. Functionality has been added to support these scenarios, allowing the page to be constructed server-side and rendered to more complete HTML before being delivered to the browser. (More details below.)
“AngularJS” is the name used for Angular version 1 applications; “Angular” is the name for all later versions. If you’ve already started a project in AngularJS, there is a path to upgrading the application to Angular through the NgUpgrade module. This module allows you to run both AngularJS and Angular components within the same application at the same time (you end up with two “instances” of the Angular framework running together). It allows you to share components and services between the AngularJS and Angular instances, and to share routes between them.
Victor Savkin gave some practical instructions for upgrading your application. You could approach upgrading using “vertical slicing” where overall features of the application are upgraded individually. Or approach using “horizontal slicing” where individual components and services are upgraded. There are also different ways that you might arrange the routing within the application.
Whatever approach you take, components and services can be upgraded to be used by Angular or downgraded to be used by AngularJS. This should give you a good iterative way to progressively migrate your application over to the latest version.
Jeff Cross dug more deeply into pre-rendering, the process of generating more complete HTML before the content is sent to the browser. The NodeJS-based server module (in the platform-server package) provides services for rendering an Angular application offline. It also provides functionality that can deal with HTTP requests and routing as well.
The main reasons for pre-rendering are to ensure that your application loads and becomes interactive as quickly as possible, to allow pages to be scrapeable (e.g. to display the preview pane you see in applications like Twitter or Facebook), and to allow pages to be crawlable (i.e. for Search Engine Optimization).
Jeff displayed a graph with axes of “Completeness” vs. “When Rendered”. How much pre-rendering you might do in your application depends on different factors, such as the number of pages that are included, the volume of content, localization used, the amount of user-customized content, the freshness of the data, and the frequency of deployment and rebuilding content.
An additional consideration is what to do with user-generated events that occur before the page has been made fully interactive (i.e. before the Angular application has been bootstrapped and is fully functional). If the user begins typing or clicking on the page before it is ready, what should happen? To address this, a component called “preboot” is used to record these user events and then play them back accordingly. This does require some careful consideration of exactly what happens, though.
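To make the record-and-replay idea concrete, here is a conceptual sketch of it (this is not the preboot API itself, just an illustration of the technique): events are buffered while the page is inert, then handed to the real handlers once the application has bootstrapped.

```typescript
// A recorded user event, reduced to the details needed for replay.
type RecordedEvent = { type: string; targetSelector: string };

class EventBuffer {
  private events: RecordedEvent[] = [];
  private live = false;
  private handler: (e: RecordedEvent) => void = () => {};

  // Called for every user event. Before bootstrap, events are buffered;
  // afterwards they go straight to the live handler.
  record(e: RecordedEvent): void {
    if (this.live) {
      this.handler(e);
    } else {
      this.events.push(e);
    }
  }

  // Called once the application is interactive: flush the buffer in order.
  replay(handler: (e: RecordedEvent) => void): void {
    this.live = true;
    this.handler = handler;
    for (const e of this.events) handler(e);
    this.events = [];
  }
}

const buffer = new EventBuffer();
// Events arriving before bootstrap are queued, not lost.
buffer.record({ type: 'click', targetSelector: '#buy' });
buffer.record({ type: 'keydown', targetSelector: '#search' });

const seen: string[] = [];
buffer.replay(e => seen.push(e.type));
console.log(seen); // ['click', 'keydown']
```

The careful consideration Jeff mentioned shows up in what the replay handler does: replaying a click on a button that triggers a purchase, for example, needs different treatment than replaying keystrokes into a text field.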
Right now, the documentation for the pre-rendering functionality is not yet available, but it should be included on the Angular.io site soon.
The Angular team has put together requirements and best practices for building packages that can be used as libraries within Angular applications. There are a number of things that should be included in a library package, such as TypeScript definition files (*.d.ts) and a *.metadata.json file. Common packages that are used by the library should be added as peer dependencies in the package.json file. The Angular compiler, which wraps the TypeScript compiler, can be used to build the artifacts required for the library.
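A library's package.json following these practices might look roughly like the sketch below; the package name, paths, and version ranges are hypothetical. The key points are that the typings entry points at the *.d.ts output and that Angular packages appear under peerDependencies rather than dependencies, so the consuming application supplies a single copy of the framework:

```json
{
  "name": "my-widget-lib",
  "version": "1.0.0",
  "main": "bundles/my-widget-lib.umd.js",
  "module": "index.js",
  "typings": "index.d.ts",
  "peerDependencies": {
    "@angular/core": "^4.0.0",
    "@angular/common": "^4.0.0"
  }
}
```

The *.metadata.json files produced by the Angular compiler ship alongside these entries so that consumers can run ahead-of-time compilation against the library.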
To take advantage of optimizations like tree shaking, the recommendation is to create a single NgModule for each component within the library. Having multiple (or all) components of the library within a single NgModule will pull in code that may never be used by the consumer of the library.
The Angular Language Service can be used in your code editor to help with code completion, error checking, and references between components and templates. This service is available in Visual Studio Code and WebStorm and will be available for other editors soon.
If you’ve used the Redux library in your application, you have probably also used selectors from the Reselect library. Kara Erickson demonstrated using selectors for form validations when building reactive forms. But she also talked about how reactive forms will be moving towards using observable streams for validations rather than the current implementation. This gives complete control over form validation, including debouncing frequent field changes, prioritizing asynchronous validations, and presenting errors less frequently. It also allows for push validations, i.e. validations that come from the server (for example, the highest bid for a product has changed and a current bid is no longer valid).