This is Part Four of an eight-part blog series looking into a serverless application built using Azure components. The earlier parts of the series are available here: Part One | Part Two | Part Three
Once the data and API layers were in place, the next area to focus on was the client side. This is a browser-based application, so HTML, CSS and JavaScript would be used to render the results of the API calls to the user. In fact, due to the nature of the application, the majority of the logic would be held in this layer, so it was a natural step to choose a framework that would enable quality testing and easier integration with additional modules. We do not have hard and fast rules about which framework to use, preferring to pick one that more developers on the project are comfortable with. At the moment, that tends to be Angular or React, and we find that Angular has slightly more going for it when building a full application. Again, this is not a firm call and it was a pretty tight decision, so please no flaming!
Another benefit of a framework is that the structure it imposes on the code tends to lead to better testability and smoother build and deployment, areas that Angular covers well with the Angular CLI. Running on Node.js, the CLI gives developers an easy way to serve a live preview of the app that updates as each file is changed and to run the unit tests on every change, increasing the overall stability of the application.
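As a rough illustration of the kind of test the CLI re-runs on every change, here is a minimal component spec (a sketch only – the component name and path are hypothetical, not from our codebase):

```typescript
// report-list.component.spec.ts – a minimal Jasmine spec run by the CLI's test runner
import { async, ComponentFixture, TestBed } from '@angular/core/testing';
import { ReportListComponent } from './report-list.component'; // hypothetical component

describe('ReportListComponent', () => {
  let fixture: ComponentFixture<ReportListComponent>;

  beforeEach(async(() => {
    // Compile the component and its template before each test
    TestBed.configureTestingModule({
      declarations: [ReportListComponent]
    }).compileComponents();
  }));

  beforeEach(() => {
    fixture = TestBed.createComponent(ReportListComponent);
  });

  it('should create the component', () => {
    expect(fixture.componentInstance).toBeTruthy();
  });
});
```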
How did we use Angular?
The first decision is which version to use. There are two main flavours: AngularJS (or 1.x) and Angular (2, 4, 5 or 6 at the time of writing). AngularJS is the original version; it has a very large following and is still used by many huge projects. Angular is a complete re-write that takes many of the lessons learnt from AngularJS to improve performance and add better support for deployment and the modern Node.js development stack – it was therefore the better choice for us. We took the latest version at the start of the project (5.2) and have stuck with that, despite 6 now being available, because of the large number of breaking changes involved in upgrading.
To let us move quickly and keep design costs lower, we made use of an Angular Bootstrap template, applying only small CSS changes to meet the required branding. We picked the Angular Monster Admin template: we had previously used the Bootstrap templates, and this one had all the major UI functionality we required, with the benefit of the Angular components being included for a low cost. Given the template has a default folder structure for the code, we kept to that structure, which is similar to the one generated by the Angular CLI. This separates the main app code from the assets (e.g. core CSS, images, etc.), the environment variables, the scripts (such as a JavaScript file run to fix issues with the Service Worker) and the unit tests (the latter were later removed and placed alongside the components they test).
The environment variables held here are only for the local development environment; the settings for other environments are injected using PowerShell as part of the CI/CD release process.
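For illustration, the checked-in development file might look something like this (a sketch only – the property names and values are hypothetical placeholders, not the real settings):

```typescript
// src/environments/environment.ts – local development values only.
// The equivalent values for test/production are injected by PowerShell during
// the CI/CD release, so no secrets or live URLs need to live in source control.
export const environment = {
  production: false,
  apiUrl: 'http://localhost:7071/api',                      // hypothetical local API host
  b2cClientId: '00000000-0000-0000-0000-000000000000'       // placeholder only
};
```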
https://github.com/BallardChalmers/BCServerlessDemo.Client/tree/master/src
The app folder is mostly broken down into functional areas of the application, such as dashboards, reports and admin.
The exceptions are:
@Core: While we aim to keep the data models and service logic in the same folders as the web pages that use them, some models and services are used across a large number of logical areas. To reduce confusion, these were moved to the core folder. There was no hard and fast rule about when an item should be moved to Core rather than re-used in place, and developer consensus worked well for deciding.
Shared: Functions used across all areas, such as the navigation, authentication and components like a common date picker.
Pipes: Any pipes used by the web templates for common formatting, such as converting stored enums to a user-friendly value (a minimal example follows this list).
Utils: Utility services such as logging and enums.
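As an illustration of the Pipes folder, a formatting pipe of that kind might look like the following (a minimal sketch – the enum, labels and pipe name are hypothetical examples rather than the real ones in the application):

```typescript
// journey-status.pipe.ts – maps a stored enum value to a user-friendly label
import { Pipe, PipeTransform } from '@angular/core';

// Hypothetical enum for illustration only
export enum JourneyStatus { Draft = 0, Submitted = 1, Approved = 2 }

@Pipe({ name: 'journeyStatus' })
export class JourneyStatusPipe implements PipeTransform {
  private labels: { [key: number]: string } = {
    [JourneyStatus.Draft]: 'Draft',
    [JourneyStatus.Submitted]: 'Awaiting approval',
    [JourneyStatus.Approved]: 'Approved'
  };

  transform(value: JourneyStatus): string {
    return this.labels[value] || 'Unknown';
  }
}
```

In a template it is then used as, for example, {{ journey.status | journeyStatus }}.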
Security
So how do we ensure that the application is secure? Much of this will be covered in the post on Azure B2C, so I won't be specifically covering that, or our use of MSAL to connect to it, until then. In this post, I will talk at a high level about where the security is applied and how that works within Angular itself. Overall, the security is enforced at the API level and uses Azure B2C to ensure that the current user can only access content that they are permitted to see; the roles a user is a member of are used to restrict the data that can be returned from the API.
Alongside logging in with Azure B2C, the user's token is held in the session and an HTTP interceptor is used to pass the token automatically with any API call, as outlined in the Angular documentation. This helps ensure that all calls are secured and the token is always sent alongside API requests.
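A simplified sketch of what such an interceptor looks like is below; the token retrieval is a placeholder (our real implementation reads it via MSAL and Azure B2C, covered in the next post), and the storage key is illustrative:

```typescript
// auth.interceptor.ts – attaches the bearer token to every outgoing API call.
import { Injectable } from '@angular/core';
import { HttpInterceptor, HttpRequest, HttpHandler, HttpEvent } from '@angular/common/http';
import { Observable } from 'rxjs/Observable';

@Injectable()
export class AuthInterceptor implements HttpInterceptor {
  intercept(req: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> {
    // Placeholder: however the token is held in the session in your app
    const token = sessionStorage.getItem('access_token');
    if (!token) {
      return next.handle(req);
    }
    // Clone the request and add the Authorization header
    const authReq = req.clone({
      setHeaders: { Authorization: `Bearer ${token}` }
    });
    return next.handle(authReq);
  }
}
```

It is registered in the module providers with the HTTP_INTERCEPTORS token (with multi: true) so that Angular applies it to every HttpClient request.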
For pages that should be secured (such as the admin pages), an Angular route guard is used – a single function holds the logic defining which roles are allowed on which routes. This can be combined with the left-hand navigation to control whether a link is shown as well as whether a user is allowed to access it. Where specific functionality needs restricting on a page, such as hiding a button depending on the user's role, a role-check function is used to decide whether they should be allowed to perform the action, e.g. whether a user can edit a journey.
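A sketch of a role-based guard of this kind (the role service, role names and redirect are illustrative rather than the exact ones from the application):

```typescript
// role.guard.ts – blocks navigation to routes the current user's roles don't allow
import { Injectable } from '@angular/core';
import { CanActivate, ActivatedRouteSnapshot, Router } from '@angular/router';
import { RoleService } from '../services/role.service'; // hypothetical role-check service

@Injectable()
export class RoleGuard implements CanActivate {
  constructor(private roles: RoleService, private router: Router) {}

  canActivate(route: ActivatedRouteSnapshot): boolean {
    // The allowed roles for each route are declared in its route data
    const allowed: string[] = route.data['roles'] || [];
    if (allowed.some(r => this.roles.isInRole(r))) {
      return true;
    }
    this.router.navigate(['/']); // bounce unauthorised users back to the home page
    return false;
  }
}

// Example route definition using the guard:
// { path: 'admin', component: AdminComponent, canActivate: [RoleGuard], data: { roles: ['Admin'] } }
```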
Offline Working
One of the real perks of using Angular is how easy it appears to be to make the most of what Service Workers have to offer. The Angular documentation makes it seem like adding full offline support should only take a few lines of code, and there is a certain amount of truth to that. Adding the Angular Service Worker package (@angular/pwa) creates a configuration file called ngsw-config.json that defines what should be retained when the browser goes offline; by default this includes all of the required assets. It can then be updated to include dataGroups, the sets of URLs that your services use to communicate – defining these means the service worker will cache the responses so that, when the application is offline, the cached versions are used.
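For illustration, a dataGroups entry might look something like this (a sketch – the URL pattern, cache size and ages are illustrative rather than the values we used):

```json
{
  "index": "/index.html",
  "assetGroups": [
    {
      "name": "app",
      "installMode": "prefetch",
      "resources": { "files": ["/index.html", "/*.css", "/*.js"] }
    }
  ],
  "dataGroups": [
    {
      "name": "api-data",
      "urls": ["/api/**"],
      "cacheConfig": {
        "strategy": "freshness",
        "maxSize": 100,
        "maxAge": "1d",
        "timeout": "5s"
      }
    }
  ]
}
```

The "freshness" strategy tries the network first and falls back to the cache, which suits data that changes often; "performance" serves from the cache first.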
What does this mean in practice? It means that you can continue to use the site in your browser even if you have no connection to the internet. With one small config file, you can turn a connected-only site into one that works offline – so where are the problems?
Confusion for the user: While working in the browser is great, it is often counter-intuitive for a user to open up Chrome or IE when they are not connected to the internet. It can feel strange, so quite a bit of work is needed in the UX design to make it apparent that the user is offline and to be clear about what is being cached, especially when not everything is. In our application (unfortunately not yet in the GitHub-hosted version), there were documents that were not available offline because of the storage space available.
Limited storage space: The exact amount of storage you have is defined by the browser, as Service Workers use the local storage provided by most modern browsers. At the time of writing, this is usually around 5MB, which is enough to store key metadata and values but not enough to store a large number of documents or an application that holds a lot of data. Because of this, your design needs to consider how much can be stored.
Browser support: The set of browsers that support this is changing all the time, but at the time of writing it works across all modern browsers. If you are unsure which browsers are supported, look at https://jakearchibald.github.io/isserviceworkerready/. The biggest gap for large enterprises is likely to be IE11, so you need to carefully consider whether this will impact your users. At the very least, the application should handle lack of support gracefully, i.e. if the user has an unsupported browser, it should be clear that offline will not work.
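A simple feature check along these lines (a sketch) is enough to warn the user rather than fail silently:

```typescript
// Run at start-up so unsupported browsers (e.g. IE11) know offline won't work.
if (!('serviceWorker' in navigator)) {
  console.warn('Service Workers are not supported – offline mode is unavailable.');
  // In practice, set a flag that a banner component binds to, rather than just logging.
}
```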
Posting data back – the biggest quirk not made clear when starting: This is one that wasn't obvious at all when reading most articles on Service Workers with Angular. While the service worker will cache GET responses with no more than a little configuration, the same is not true for the other HTTP methods regularly used with REST interfaces. Users who go offline can benefit from cached data, but they will not be able to make updates without further development work. For our application, we implemented a cache to local storage whenever a POST failed, which would then be replayed once the connection was restored. We also took a look at Workbox but could not easily integrate it into our application without a large amount of refactoring, which we decided was not worth the benefit at that stage of the project – I would certainly take another look at it for a greenfield project.
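A much-simplified sketch of that approach is below; the service name, storage key and error handling are illustrative rather than lifted from our code:

```typescript
// offline-queue.service.ts – queue failed POSTs in local storage and replay
// them when connectivity returns. A simplified sketch of the approach only.
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';

interface QueuedPost { url: string; body: any; }

@Injectable()
export class OfflineQueueService {
  private readonly key = 'pendingPosts'; // illustrative storage key

  constructor(private http: HttpClient) {
    // Replay anything queued as soon as the browser reports it is back online
    window.addEventListener('online', () => this.flush());
  }

  queue(url: string, body: any): void {
    const pending: QueuedPost[] = JSON.parse(localStorage.getItem(this.key) || '[]');
    pending.push({ url, body });
    localStorage.setItem(this.key, JSON.stringify(pending));
  }

  flush(): void {
    const pending: QueuedPost[] = JSON.parse(localStorage.getItem(this.key) || '[]');
    localStorage.removeItem(this.key);
    pending.forEach(p => this.http.post(p.url, p.body).subscribe(
      () => { /* success – nothing more to do */ },
      () => this.queue(p.url, p.body) // still failing – put it back in the queue
    ));
  }
}
```

Each data service then calls queue() from its error handler when a POST fails while the browser is offline.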
Conflicts on posts: Once we had posts working, this opened up another concern – what happens when someone comes back online, but someone else has posted an update while they were offline? There are many ways to handle this, such as highlighting the differences and allowing users to choose, as many source control systems do. However, we agreed a simpler approach with our client: a user can check out their data before going offline so that no one else can edit it. Once they are back online, they check their changes back in, knowing that no one else has altered them. We also implemented admin functionality to force check-in objects, which would discard the offline version if someone forgot to come back online. This loses the offline changes completely, but it was agreed this was better than the record never being editable again.
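In terms of the data model, the check-out can be as simple as a field on each editable record; a hypothetical sketch (not the actual model) of how the client-side check might look:

```typescript
// A record that supports check-out; the API is responsible for rejecting
// updates from anyone other than the user holding the check-out.
export interface Checkoutable {
  checkedOutBy: string | null; // user id holding the check-out, or null
}

// Used by the UI to decide whether to enable editing for the current user
export function canEdit(item: Checkoutable, currentUserId: string): boolean {
  return item.checkedOutBy === null || item.checkedOutBy === currentUserId;
}
```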
Working with files
The other big area added to the application was the ability to work with files, specifically uploading them easily and downloading them to view. The majority of the file uploads were put in place using NG2-FileUpload from Valor Software (open-sourced on GitHub), with the capabilities extended using my favourite-named package, Dragula, to give some additional drag-and-drop support. This allowed users to either drop files in the relevant area or click and browse to a folder on their device. In some cases, we also allowed drag and drop to update metadata such as the category of the document.
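A cut-down sketch of how that wiring looks (the endpoint and component are illustrative, and the library's FileUploadModule also needs importing into the relevant NgModule):

```typescript
// document-upload.component.ts – drop-zone and browse button backed by ng2-file-upload
import { Component } from '@angular/core';
import { FileUploader } from 'ng2-file-upload';

@Component({
  selector: 'app-document-upload',
  template: `
    <!-- Users can drop files on this area (ng2FileDrop)... -->
    <div ng2FileDrop [uploader]="uploader" class="drop-zone">Drop files here</div>
    <!-- ...or click and browse with a standard file input (ng2FileSelect) -->
    <input type="file" ng2FileSelect [uploader]="uploader" multiple />
  `
})
export class DocumentUploadComponent {
  // The URL is a placeholder for the relevant API endpoint
  uploader = new FileUploader({ url: '/api/documents' });
}
```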
One requirement that caused a little rethink was the ability to export a set of documents (images in our case) to a single zip file. This was added simply by using the JSZip package, causing very little pain for the developers!
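A sketch of what that export helper can look like; using file-saver for the download step is an assumption on my part, and the function and file names are illustrative:

```typescript
// export-zip.ts – bundle a set of image blobs into a single zip for download
import * as JSZip from 'jszip';
import { saveAs } from 'file-saver';

export async function exportImagesAsZip(images: { name: string; data: Blob }[]): Promise<void> {
  const zip = new JSZip();
  images.forEach(img => zip.file(img.name, img.data));       // add each image to the archive
  const content = await zip.generateAsync({ type: 'blob' }); // build the zip in memory
  saveAs(content, 'images.zip');                             // trigger the browser download
}
```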
Summary
Angular is a very powerful web client framework and is evolving all the time (version 7 has just been released). With the introduction of the CLI, it is now easy to set up and, for anyone who hasn't worked with Angular since the AngularJS/Angular 1.x days, it is now far better at highlighting issues in your module dependencies and setup. This allows developers to truly focus on the business logic, and with one of the Angular templates you can quickly get a good-looking application up and running.
There are always challenges, and all our developers found RxJS particularly painful at times, so I would recommend working through an initial tutorial before getting stuck in, although there are plenty of examples within the Angular base to take inspiration from. Offline working with Service Workers brings a seamless web and offline experience far closer, but it is not quite as far along as some of the Getting Started posts suggest, so I would suggest allowing more time than you would initially plan for.
If you have not used Angular before, or not for a long time, now is a great time to get back into it: you can have a full web application developed, built and deployed in a modern way, with flexible security for the end user – which brings me neatly on to the next part of this series: Azure B2C.
By Kevin McDonnell, Senior Technical Architect at Ballard Chalmers
UPDATE: The next in the series is available here: Modern Serverless Development Part 5 – Authentication with Azure Active Directory B2C
About the author
Kevin McDonnell is a respected Senior Technical Architect at Ballard Chalmers. With a Master of Engineering (MEng) degree in Engineering Science from the University of Oxford, he specialises in .NET and Azure development and has a broad understanding of the wider Microsoft stack. He listens to what clients are looking to achieve and helps identify the best platform and solution to deliver on that. Kevin regularly blogs on Digital Workplace topics and is a regular contributor to the monthly #CollabTalk discussions on Twitter.